Those who hold data wield real power. Yet data can be diffuse, hard to track, and nearly impossible to regulate, which poses myriad challenges. This interesting topic came to us from Deseret News in its article, “Perspective: The dark side of AI.”
Big data companies have poured billions into research that brings technology and data into direct daily contact with us through artificial intelligence (AI). The pandemic gave technology opportunities to shine in every arena, from healthcare to religious services. AI's impact on society is enormous and expanding rapidly. The problem, as some see it, is that AI lacks the capacity for moral or spiritual discernment.
As with all transformative technologies, AI capabilities originally intended for good can be diverted to serve destructive purposes. And as with art, what counts as destructive is often in the eye of the beholder.
The real problem is that few people or organizations understand how AI systems reach their decisions and, as a result, how those decisions are being applied. Not knowing breeds fear.
Explainable AI lets users understand and trust the results produced by machine learning algorithms. It is used to describe an AI model, its expected impact, and its potential biases. Why is this important? Because explainability becomes critical when a model's results can affect data security or safety.
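The article stays at a high level, but a minimal sketch can make the idea concrete. The example below uses the open-source shap library with a scikit-learn model to attribute a prediction to individual input features; the library, dataset, and model are illustrative assumptions on our part, not tools named in the source.

```python
# Illustrative sketch only: the source names no tools. This uses the
# open-source shap library (SHapley Additive exPlanations) to show how
# an explainability tool attributes a model's prediction to its inputs.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit an ordinary "black box" model on a well-known public dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions for each prediction,
# turning "the model predicted 182.4" into "these inputs pushed it there".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # shape: (5, n_features)

# For the first sample, show how much each feature moved the prediction
# away from the model's baseline expectation.
for name, contribution in zip(X.columns, shap_values[0]):
    print(f"{name:10s} {contribution:+7.2f}")
```

A readout like this is what lets a reviewer check whether a model is leaning on a feature it shouldn't, which is exactly the question about bias and impact that explainable AI is meant to answer.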
Melody K. Smith
Sponsored by Data Harmony, harmonizing knowledge for a better search experience.