The use of synthetic data is a cost‑effective way to teach artificial intelligence (AI) about human responses. But what about bias? The Guardian brought this topic to our attention in their article, “Is ‘fake data’ the real deal when training algorithms?”
Big data defines the field of AI: deep learning algorithms are trained on enormous numbers of data points. That creates a problem for a task such as recognizing a person falling asleep at the wheel, which would be difficult and time-consuming to film happening in thousands of cars. Companies have therefore taken a novel approach. Instead of filming thousands of real-life drivers falling asleep and feeding that footage into a deep learning model to learn the signs of drowsiness, they are building virtual datasets, creating millions of synthetic human avatars that re-enact the sleepy signals.
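To make the idea concrete, here is a minimal sketch of training on purely synthetic data. It is not drawn from the Guardian article or any real product: the features (eye openness, head tilt), the labeling rule, and the use of scikit-learn are all hypothetical stand-ins for what an avatar simulator would produce.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical "avatar" features: in a real pipeline these would come from
# rendered virtual drivers; here random numbers stand in for those renders.
n = 10_000
eye_openness = rng.uniform(0.0, 1.0, n)   # 0 = eyes closed, 1 = wide open
head_tilt = rng.uniform(0.0, 45.0, n)     # degrees away from upright

# Label each synthetic sample as "drowsy" with a made-up rule, the way a
# simulator labels its own renders instead of relying on filmed drivers.
drowsy = ((eye_openness < 0.3) & (head_tilt > 20.0)).astype(int)

X = np.column_stack([eye_openness, head_tilt])
X_train, X_test, y_train, y_test = train_test_split(X, drowsy, random_state=0)

# Train on synthetic data only, then check accuracy on held-out synthetic samples.
model = LogisticRegression().fit(X_train, y_train)
print(f"Accuracy on held-out synthetic samples: {model.score(X_test, y_test):.2f}")
```

The open question the article raises still applies: a model trained this way is only as unbiased as the simulator that generated its data.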
In reality, most organizations have little insight into how their AI systems reach the decisions they do or how the results are applied. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms; the term describes methods that characterize an AI model, its expected impact, and its potential biases. Why is this important? Because the results can have an impact on data security or safety.
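One common way to make a model more explainable is to ask which inputs actually drive its decisions. The sketch below, a hedged illustration rather than any specific vendor's method, applies scikit-learn's permutation importance to the hypothetical drowsiness model from the previous example (reusing `model`, `X_test`, and `y_test` from that sketch) to produce the kind of per-feature summary a reviewer could inspect for surprising or biased dependencies.

```python
from sklearn.inspection import permutation_importance

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, importance in zip(["eye_openness", "head_tilt"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

If a feature that should be irrelevant turns out to dominate, that is a signal the training data, synthetic or not, has baked in a bias worth investigating.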
Melody K. Smith
Sponsored by Access Innovations, changing search to found.