As artificial intelligence (AI) and machine learning continue to evolve and advance, researchers have developed a new way for robots to see the world from a more human perspective. This has the potential to improve how technology (such as driverless cars and industrial or mobile robots) operates and interacts with people. This interesting information came to us from Tech Xplore in their article, “Visual semantics enable high-performance place recognition from opposing viewpoints.”

In what is believed to be a world first, Ph.D. student Sourav Garg, Dr. Niko Suenderhauf, and Professor Michael Milford from the Queensland University of Technology (QUT) Science and Engineering Faculty and the Australian Centre for Robotic Vision have used visual semantics to enable high-performance place recognition from opposing viewpoints.

“We wanted to replicate the process used by humans. Visual semantics works by not just sensing, but understanding where key objects are in the environment, and this allows for greater predictability in the actions that follow,” Professor Milford said.
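
To make the idea concrete, here is a minimal, illustrative sketch of how an object-level, semantic description of a place can be matched even when two traversals see it from opposite directions. This is not the QUT researchers' actual system: it assumes a hypothetical upstream step (such as a semantic segmentation network) has already produced object labels for each view, and the labels and function names are invented for illustration.

```python
# Illustrative sketch only, not the researchers' implementation.
# Assumes a separate perception step has labelled the key objects in each view.
from collections import Counter
from math import sqrt

def semantic_signature(labels):
    """Build a bag-of-objects signature for one view of a place."""
    return Counter(labels)

def similarity(sig_a, sig_b):
    """Cosine similarity between two semantic signatures (0 to 1)."""
    keys = set(sig_a) | set(sig_b)
    dot = sum(sig_a[k] * sig_b[k] for k in keys)
    norm_a = sqrt(sum(v * v for v in sig_a.values()))
    norm_b = sqrt(sum(v * v for v in sig_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two traversals of the same street, driven in opposite directions:
# the raw images look very different, but the key objects are shared.
forward_pass = ["traffic_light", "tree", "mailbox", "shopfront", "tree"]
reverse_pass = ["tree", "shopfront", "mailbox", "tree", "traffic_light"]
other_street = ["fence", "garage", "lamp_post", "hedge"]

sig_f = semantic_signature(forward_pass)
print(similarity(sig_f, semantic_signature(reverse_pass)))  # high score: same place
print(similarity(sig_f, semantic_signature(other_street)))  # low score: different place
```

Because the signature records which objects are present rather than how the scene looks pixel by pixel, the same place can score highly even when it is revisited from the opposing viewpoint.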

Melody K. Smith

Sponsored by Access Innovations, the world leader in taxonomies, metadata, and semantic enrichment to make your content findable.