Machine learning has proven itself beneficial in healthcare in numerous ways: predicting disease onset and future hospitalizations, reducing medical errors, and managing medications, to name a few. But when bias creeps into the development or use of machine learning models, technologies intended to improve health outcomes can instead create barriers for certain patients. This information came to us from Fierce Healthcare in its article, “Health insurers may be using biased machine learning models. Here’s how to fix them.”
Artificial intelligence (AI) has the potential to revolutionize healthcare delivery. With applications in decision support, patient care, and disease management, it is fast becoming an industry standard. AI helps clinicians work smarter while improving patient outcomes, from machine learning algorithms that read patient scans more accurately to natural language processing (NLP) tools that search unstructured data in electronic health records (EHRs).
While there are many real and potential benefits to using AI in healthcare, an important risk is flawed decision making caused by human bias embedded in AI output. Bias occurs when we discriminate against a particular group, whether consciously, unconsciously, or inadvertently through the use of training data skewed toward one segment of the population.
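To make the skewed-data mechanism concrete, here is a minimal sketch of two checks a team might run before training a model. The data, column names, and groups are all hypothetical; this is an illustration of the idea, not a method from the article.

import pandas as pd

# Hypothetical training data for a hospital-readmission model.
# "group" is a demographic attribute; "readmitted" is the outcome label.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "A", "A", "B", "B"],
    "readmitted": [1,   0,   1,   0,   0,   1,   0,   0],
})

# Check 1 - representation: is any group a small share of the data?
representation = df["group"].value_counts(normalize=True)
print(representation)  # group A: 0.75, group B: 0.25

# Check 2 - label balance: does the outcome rate differ sharply by group?
label_rates = df.groupby("group")["readmitted"].mean()
print(label_rates)  # A: 0.50, B: 0.00

# Large gaps on either check suggest a model could learn patterns that
# hold for the majority group but fail for underrepresented patients.

Checks like these catch only the simplest form of skew; bias can also enter through how labels were recorded or which patients appear in the data at all.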
Most organizations have little insight into how AI systems reach the decisions they do, and as a result, into how those decisions are being applied in various fields.
Melody K. Smith
Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.