The geniuses at the Personal Robotics Lab at Carnegie Mellon University are working to humanize the movements and behavior of a personal-assistant robot. They want their robot to learn to recognize objects all by itself, as "naturally" as humans do. Semantic Web brought this news to our attention in its article, "New Robot Has Semantic Learning Capabilities."

"A robot with this ability will be able to interact semantically with the world. It will then also be able to interact better with us because it is able to have a common semantic model of the world with us," said Siddhartha Srinivasa, director of the lab. Many hurdles have already been cleared, but more remain. Srinivasa and colleagues found that adding domain knowledge to the video input nearly tripled the number of objects the robot could discover and cut computer processing time by a factor of 190. Next up is labeling those objects.

Melody K. Smith

Sponsored by Access Innovations, the world leader in taxonomies, metadata, and semantic enrichment to make your content findable.