Artificial intelligence (AI) in healthcare can enhance preventive care and quality of life, produce more accurate diagnoses and treatment plans, and lead to better patient outcomes overall. It can also predict and track the spread of infectious diseases by analyzing data from government, healthcare, and other sources. Fierce BioTech brought this interesting news to us in their article, “AI predicts fall risk for lower limb amputees with 6-minute smartphone test: study.”
Now AI is enhancing lives with a new algorithm that predicts injury-inducing falls for lower limb amputees.
According to recent analyses, lower limb amputees have an even higher risk of falling than the healthy geriatric population—with recorded incidence rates as high as 80% in some cases—but have long been left out of research into fall risk and its potential solutions.
The algorithm specifically factors in the changes in gait that occur after lower limb amputation, allowing it to predict fall risk with accuracy similar to that of systems designed only with healthy older adults in mind.
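The study's actual model is not described in detail in the article, but a minimal sketch of the general approach, training an interpretable classifier on gait features extracted from a smartphone walk test, might look like the following. The feature names, synthetic data, and choice of logistic regression are all illustrative assumptions, not the researchers' method.

```python
# Illustrative sketch only: the study's actual model and features are not public.
# Assumed setup: gait features extracted from a 6-minute smartphone walk test,
# labeled by whether the participant later experienced a fall.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 participants, 4 hypothetical gait features
# (cadence, step-time variability, stride asymmetry, walking speed).
X = rng.normal(size=(200, 4))
# Synthetic labels loosely tied to variability and asymmetry for demonstration.
y = (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple, interpretable baseline classifier for fall risk.
model = LogisticRegression().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # estimated fall-risk probability
print(f"AUC: {roc_auc_score(y_test, risk):.2f}")
```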
Most organizations have little insight into how AI systems make their decisions or, as a result, into how those decisions are being applied across the many fields where AI and machine learning are used. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms. Why is this important? Because explainability becomes critical when the results can have an impact on data security or safety.
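As a rough illustration of what explainability can mean in practice, the sketch below uses scikit-learn's permutation importance to show which input features a model actually relies on. The model and synthetic gait data are again assumptions for demonstration, not taken from the study.

```python
# Illustrative sketch of one common explainability technique: permutation
# importance, which measures how much a model's score drops when a feature
# is shuffled. The model and synthetic data are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["cadence", "step_time_variability", "stride_asymmetry", "speed"]

# Synthetic data in which two features actually matter, mirroring the sketch above.
X = rng.normal(size=(300, 4))
y = (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and record the drop in accuracy: larger drops mean the
# model relied on that feature more, giving users a window into its decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```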
Melody K. Smith
Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.