Gaining the public’s trust in artificial intelligence (AI) has been an important subject of discussion for quite some time. Each advance in the technology brings new opportunities and new hurdles. This interesting subject came to us from EIN News in their article, “Public Trust in Artificial Intelligence Starts With Institutional Reform.”

Earlier this year, Congress passed its most significant AI law to date, tasking the National Institute of Standards and Technology (NIST) with creating a framework to manage risks associated with the use of AI.

NIST’s draft proposal places heavy emphasis on technical methods and less on the societal context in which AI tools are designed and used. The publication recommends engaging with the community when developing AI, but the definition of “community” is nebulous.

Reducing AI bias has been a frequent topic and concern, and it will be interesting to watch how this draft proposal evolves. Building trust is important, but that is difficult when public knowledge of what AI is and how it works remains so limited. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms.

Data Harmony is Access Innovations’ AI suite of tools that leverages explainable AI for efficient, innovative, and precise semantic discovery of new and emerging concepts, helping you find the information you need when you need it.

Melody K. Smith

Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.