The future of artificial intelligence (AI) has often been depicted as a utopia that would make every aspect of life easier—from health care, to jobs, to how we connect with one another. But that vision is beginning to fade as we see how AI can also be used to discriminate, profile, and cause other harms. Are there legal frameworks that can protect us from the darker side of this technology? Science brought this topic to our attention in the article, “Emerging from AI utopia.”
Facial recognition is a good example. The technology originally promised real human benefit in areas like ease of access and security. When it is used for profiling and policing, however, the risk of harm is high: false positives are common, and much of that can be attributed to bias built into the technology.
Moreover, facial recognition errors are not evenly distributed across the community. In Western countries, where the data is easier to access, the technology is far more accurate at identifying white men than members of any other group. This is in part because it tends to be trained on photo datasets that are disproportionately made up of white men.
Melody K. Smith
Sponsored by Data Harmony, a unit of Access Innovations, the world leader in indexing and making content findable.