A recent data strategy from the Department of Defense (DOD) indicates plans to prioritize learning and agility as it works to responsibly adopt artificial intelligence (AI) capabilities across its mission. This news comes from NextGov in its article, “DOD’s new AI and data strategy aims to scale adoption of new technologies.”
The new guidance creates a foundation for DOD’s use of AI and other emerging technologies moving forward. The pyramid framework places data quality at its base, followed by governance, analytics and metrics, assurance, and responsible AI as the ascending tiers of the process.
The biggest challenge is that most organizations, including governments, have little insight into how AI systems make decisions, and they are equally in the dark about how to interpret AI and machine learning results. Explainable AI allows users to understand and trust the output of machine learning algorithms by describing a model, its expected impact, and its potential biases. Explainability becomes critical when the results can affect data security or safety.
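As a rough illustration of what explainability can look like in practice, the sketch below uses permutation feature importance, one common model-agnostic technique, to surface which inputs a trained model relies on most. The dataset, model, and parameters here are illustrative choices for this example only, not anything specified in the DOD strategy.

```python
# A minimal sketch of one explainability technique: permutation feature
# importance, which measures how much a trained model's accuracy depends
# on each input feature. Dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Output like this gives a human reviewer a concrete, inspectable account of which factors drive a model's predictions, which is exactly the kind of transparency explainable AI aims to provide.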
Melody K. Smith
Sponsored by Access Innovations, changing search to found.