
The New York Times, June 21, 2018: The Dangers of Facial Analysis

When I was a college student using A.I.-powered facial detection software for a coding project, the robot I programmed couldn’t detect my dark-skinned face. I had to borrow my white roommate’s face to finish the assignment. Later, working on another project as a graduate student at the M.I.T. Media Lab, I resorted to wearing a white mask to have my presence recognized.

My experience is a reminder that artificial intelligence, often heralded for its potential to change the world, can actually reinforce bias and exclusion, even when it’s used in the most well-intended ways.

A.I. systems are shaped by the priorities and prejudices — conscious and unconscious — of the people who design them, a phenomenon that I refer to as “the coded gaze.” Research has shown that automated systems that are used to inform decisions about sentencing produce results that are biased against black people and that those used for selecting the targets of online advertising can discriminate based on race and gender.

Specifically, when it comes to algorithmic bias in facial analysis technology — my area of research and one focus of my work with the Algorithmic Justice League — Google’s photo application labeling black people in images as “gorillas” and facial analysis software that works well for white men but less so for everyone else are infamous examples. As disturbing as they are, they do not fully capture the risks of this technology, which is increasingly being used in law enforcement, border control, school surveillance and hiring.
