Researchers from McGill University, MIT, and Cornell University have taken a step towards teaching a machine how to analyze speech sounds and word structures in the way humans do. They have developed an artificial intelligence (AI) system that can learn the rules and patterns of human languages on its own. This interesting news came to us from McGill University in Montreal in their article, “AI that can learn patterns of human language.”

When the subject of AI learning language comes up, Alexa or Siri come to mind, but there are other very exciting developments happening in the AI voice recognition space right now. Advances in AI mean that it is now possible to create complex programs and models that can analyze and score speech.

When the machine learning model is given a set of words along with examples of how those words express different grammatical functions in one language, it comes up with rules to explain those words' disparate usages. The model can also automatically learn higher-level language patterns that apply across different languages, enabling it to achieve better results.
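To make the idea concrete, here is a minimal sketch of how rules might be induced from labeled word pairs. This is not the researchers' actual system; the data, the suffix-based rule format, and the function names are illustrative assumptions only.

```python
from collections import defaultdict

def common_prefix_len(a: str, b: str) -> int:
    """Length of the shared prefix of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def induce_suffix_rules(examples):
    """Given (base form, inflected form, grammatical function) triples,
    collect rules of the form 'replace suffix X with suffix Y' and keep
    the most frequent rule for each grammatical function."""
    counts = defaultdict(lambda: defaultdict(int))
    for lemma, inflected, function in examples:
        k = common_prefix_len(lemma, inflected)
        rule = (lemma[k:], inflected[k:])  # (old suffix, new suffix)
        counts[function][rule] += 1
    return {f: max(rules, key=rules.get) for f, rules in counts.items()}

# Toy English data, purely for illustration.
examples = [
    ("walk", "walked", "past"),
    ("jump", "jumped", "past"),
    ("cry", "cried", "past"),
    ("walk", "walking", "progressive"),
    ("jump", "jumping", "progressive"),
]

print(induce_suffix_rules(examples))
# {'past': ('', 'ed'), 'progressive': ('', 'ing')}
```

Even this toy version shows the appeal of the approach: the output is a small, human-readable set of rules rather than an opaque set of learned weights.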

Most organizations are unfamiliar with how AI systems reach their decisions and how AI results are applied in various fields. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms. The term "explainable AI" describes an AI model, its expected impact, and its potential biases.

Melody K. Smith

Data Harmony is an award-winning semantic suite that leverages explainable AI.

Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.