Semantic technology still feels foreign and futuristic to many. Understanding language in context is hard enough for humans; expecting technology to follow suit can seem overwhelming. DATAVERSITY brought this topic to our attention in its article, “Machine Learning Is Learning How to Read Lips.”

A lip-reading model developed at the University of East Anglia (UEA) in the United Kingdom has interpreted mouthed words with a greater degree of accuracy than human lip readers, using machine learning to classify the visual aspects of sounds. What makes the algorithm unique is that it does not need to know the context of a conversation to identify the words being used.

The core challenge in lip reading is that the human mouth produces fewer distinct visual cues than it does acoustic sounds, so many different sounds look alike on the lips. UEA’s visual speech model is able to distinguish more accurately between these visually similar lip shapes.
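The ambiguity described above can be illustrated with a toy sketch. This is not UEA’s model; the phoneme-to-viseme mapping below is a simplified assumption for demonstration, showing how several distinct sounds collapse to the same lip shape:

```python
# Toy illustration (assumed, simplified mapping - not UEA's model):
# several phonemes share one viseme (lip shape), so the visual
# channel alone cannot tell them apart.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",  # lips pressed together
    "f": "labiodental", "v": "labiodental",             # teeth on lower lip
    "t": "alveolar", "d": "alveolar", "n": "alveolar",  # tongue behind teeth
}

def ambiguity(viseme):
    """Return the phonemes that all produce the given lip shape."""
    return sorted(p for p, v in PHONEME_TO_VISEME.items() if v == viseme)

print(ambiguity("bilabial"))  # ['b', 'm', 'p']
```

Because “pat,” “bat,” and “mat” begin with the same viseme, a purely visual system must learn subtler distinctions than the acoustic channel requires.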

Melody K. Smith

Sponsored by Access Innovations, the world leader in taxonomies, metadata, and semantic enrichment to make your content findable.