Facial recognition technology is being utilized in a variety of industries and applications. But its use by law enforcement agencies and courtrooms raises particular concerns about privacy, fairness, and bias. This interesting topic came to us from Science Friday in their article, “Artificial Intelligence Is A Growing Part Of The Criminal Justice System. Should We Be Worried?”

Studies have shown that some of the major facial recognition systems are inaccurate. Amazon’s Rekognition software misidentified 28 members of Congress and (in an ironic twist) matched them with criminal mugshots. Not surprisingly, these inaccuracies tend to be far worse for people of color and women.

We tend to see machines and algorithms as “race neutral,” but Ruha Benjamin, a professor of African-American Studies at Princeton University, reminds us that they are programmed by humans and can end up reinforcing bias in policing and criminal justice rather than removing it.

Many experts are working on tools designed to reduce these sources of bias. Sharad Goel, a professor of Management Science and Engineering at Stanford University, is developing a risk assessment tool that accounts for them. He believes AI can be used as a tool for more equitable outcomes in criminal justice.

Melody K. Smith

Sponsored by Data Harmony, a unit of Access Innovations, the world leader in indexing and making content findable.