Researchers from McGill University, MIT, and Cornell University have taken a step toward teaching a machine to analyze speech sounds and word structures the way humans do. They have developed an artificial intelligence (AI) system that can learn the rules and patterns of human languages on its own. This news comes to us from McGill University in Montreal in their article, "AI that can learn patterns of human language."

When you hear about AI learning language, you probably think of Alexa or Siri, but exciting developments are happening in the AI voice recognition space right now. Advances in AI have made it possible to build complex programs and models that can analyze and score speech.

When given words and examples of how those words change to express different grammatical functions in one language, the machine learning model comes up with rules that explain why the forms of those words change. In doing so, it automatically learns higher-level language patterns that can apply across different languages, enabling it to achieve better results.
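To make the idea of inducing word-change rules from example pairs more concrete, here is a minimal Python sketch under strongly simplifying assumptions: it handles only a single regular suffix rewrite, whereas the researchers' model tackles far richer, interacting sound and word-structure patterns. The function names are hypothetical, invented for this illustration.

```python
# A toy sketch (hypothetical, not the researchers' actual model) of
# inducing a word-change rule from example pairs and applying it to
# new words. Assumes a single regular suffix rewrite for simplicity.

def common_prefix_len(a: str, b: str) -> int:
    """Length of the longest shared prefix of two strings."""
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def induce_rule(pairs):
    """Return the one (old_suffix, new_suffix) rewrite that explains
    every (base, inflected) pair, or None if no single rule fits."""
    rewrites = {(b[common_prefix_len(b, i):], i[common_prefix_len(b, i):])
                for b, i in pairs}
    return rewrites.pop() if len(rewrites) == 1 else None

def apply_rule(rule, word):
    """Apply a suffix rewrite to a new word."""
    old, new = rule
    if word.endswith(old):
        return word[:len(word) - len(old)] + new
    return word

# Regular English past tense, learned from three example pairs.
examples = [("walk", "walked"), ("jump", "jumped"), ("talk", "talked")]
rule = induce_rule(examples)       # ('', 'ed')
print(apply_rule(rule, "climb"))   # climbed
```

Even this toy version captures the core input-output relationship described above: example word pairs go in, a general rule comes out, and that rule then generalizes to words the system has never seen.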

Most organizations have little insight into how their AI systems make decisions or how the results are applied in various fields. The term "explainable AI" describes an AI model, its expected impact, and its potential biases, and such explainability allows users to comprehend and trust the results and output created by machine learning algorithms.

Melody K. Smith

Data Harmony is an award-winning semantic suite that leverages explainable AI.

Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.