Racial Bias in AI: How Algorithms Are Failing Black People
From facial recognition software leading to wrongful arrests to discriminatory hiring systems, racial bias in AI is hurting Black Americans.
By ReShonda Tate, Houston Defender
Artificial Intelligence was once heralded as the great equalizer—promising efficiency, objectivity and progress. But for many African Americans, the growing influence of AI has exposed a much darker reality: algorithms that perpetuate the very racism they were supposed to eliminate.
From facial recognition misfires to discriminatory hiring systems and over-policing through predictive technology, many in the Black community are bearing the brunt of AI’s biases. And experts say it’s not accidental—it’s built into the system.
Understanding the Root of AI Bias
“AI systems learn from data—and that data reflects our society’s biases,” says Dr. Joy Buolamwini, founder of the Algorithmic Justice League. “If you train an algorithm on a flawed history, it will replicate those injustices.”
AI models are developed using massive datasets, often pulled from historical records, social media and even government databases. But when those sources contain racial disparities—such as disproportionate policing or underrepresentation in high-wage jobs—the AI absorbs and amplifies those inequities.
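To see the mechanism, consider a deliberately simplified sketch in Python. The data, groups and hiring model below are all invented for illustration; this is not any real vendor's system. A classifier trained on historically biased hiring decisions reproduces the disparity even when both groups are identically qualified:

```python
# Illustrative sketch with synthetic data: a model trained on biased
# historical decisions learns to reproduce the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)    # 0 or 1; both groups identically qualified
skill = rng.normal(0.0, 1.0, n)  # same skill distribution for everyone

# Historical "hired" labels carry a built-in penalty against group 1.
p_hire = 1.0 / (1.0 + np.exp(-(skill - 1.5 * group)))
hired = rng.random(n) < p_hire

# Train on history, with group membership (or a proxy such as a ZIP code)
# available as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Equally skilled applicants get very different predicted hire rates.
for g in (0, 1):
    applicant = np.array([[0.0, g]])  # average skill, group g
    prob = model.predict_proba(applicant)[0, 1]
    print(f"group {g}: predicted hire probability = {prob:.2f}")
```

The model never "decides" to discriminate; it simply optimizes agreement with a biased record, which is exactly the dynamic Buolamwini describes.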
“These systems are tested in sanitized labs, not real-world environments where racial complexity exists. And when they fail, Black people pay the price,” Buolamwini said.
Facial Recognition: A Modern-Day Mugshot Lineup
Facial recognition technology is under increasing scrutiny for its alarming inaccuracy in identifying Black individuals—errors that have already led to wrongful arrests and widespread concern.
Detroit resident Robert Williams knows firsthand the devastating impact of faulty facial recognition.
“A computer said I stole something I had nothing to do with. It turned my life upside down,” said Williams, whose case has been taken up by the ACLU. “I never thought I’d have to explain to my daughters why daddy got arrested. How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?”
A study by the MIT Media Lab, the 2018 “Gender Shades” audit, found that commercial facial analysis systems misclassified darker-skinned individuals, particularly Black women, at rates far higher than white men. One system misclassified darker-skinned women 34% of the time, compared with just 0.8% for lighter-skinned men.
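Gaps like that only surface when performance is broken out by subgroup rather than reported as a single overall accuracy, which is the core of how such audits work. A minimal sketch of a disaggregated audit follows; the four records are made-up placeholders, not data from the MIT study:

```python
# Illustrative sketch: report a classifier's error rate per demographic
# subgroup instead of one overall number. Records are invented placeholders.
from collections import defaultdict

results = [
    # (subgroup, true_label, predicted_label)
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
    ("lighter-skinned men",  "male",   "male"),
    ("lighter-skinned men",  "male",   "male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for subgroup, truth, predicted in results:
    totals[subgroup] += 1
    errors[subgroup] += predicted != truth

for subgroup, total in totals.items():
    print(f"{subgroup}: {errors[subgroup] / total:.0%} error "
          f"({errors[subgroup]}/{total})")
```

Averaged together, these four records look like 75% accuracy; disaggregated, they reveal that every error falls on one group.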
Last year, civil rights advocates in Houston raised concerns after the City Council approved a $178,000 contract with Airship AI Holdings, Inc. The deal added a 64-camera network with facial recognition capabilities to the Houston Police Department’s surveillance tools.
Texas Southern University professor Carroll Robinson, a former Houston City Council member, warned of the risks.
“Some innocent person, misidentified, not by a human, but by a camera, ends up in the criminal justice system, incarcerated at the county jail,” Robinson said.
Robinson has called for state legislation to ensure artificial intelligence systems do not perpetuate racial discrimination.
The technology’s failings extend beyond policing. Amazon’s facial recognition service, Rekognition, notoriously misclassified Oprah Winfrey as male in researchers’ testing, and in a separate ACLU test it falsely matched 28 members of Congress with criminal mugshots.
A more recent study by the National Institute of Standards and Technology, an agency of the U.S. Commerce Department, echoed these concerns. It found that facial recognition systems were far more likely to falsely match two different Black faces than two different white faces; error rates for African men and women ran as much as 100 times higher than for Eastern Europeans, who had the lowest error rates.
These disparities stem from how AI systems are trained.
“Algorithms are only as good as the data we feed them,” says Buolamwini. “When those datasets are dominated by white male faces, the systems struggle to identify anyone who doesn’t fit that mold.”
Buolamwini learned this firsthand as a student. While working on a project that used computer vision, she discovered that the face-tracking software couldn’t detect her face until she put on a white mask.
The Push for AI Accountability
Activists and civil rights groups are pushing back. Buolamwini’s Algorithmic Justice League is calling for legislation that enforces transparency in AI systems, mandates third-party audits and prohibits the use of certain technologies—like facial recognition—in policing altogether.
There are signs of progress: some local governments have banned facial recognition technology outright, and some companies are beginning to reevaluate their tools.
While much of the conversation centers on the harm AI causes, Black technologists are also reimagining what equitable AI could look like.
Organizations like Black in AI, Data for Black Lives and the Algorithmic Justice League are creating spaces where Black developers, ethicists and data scientists are taking the lead.
“Our taxpayer dollars should not go toward surveillance technologies that can be abused to harm us, track us wherever we go, and turn us into suspects simply because we got a state ID,” the ACLU said in a statement.
What You Can Do
- Know Your Rights: If you’ve been wrongfully targeted by an AI-driven system, contact civil rights organizations like the ACLU or NAACP Legal Defense Fund.
- Get Informed: Resources like the Algorithmic Justice League and Black in AI offer education on AI fairness and advocacy.
- Advocate: Support policies that call for transparency, fairness and accountability in AI development. Contact your representatives about AI regulations.
- Diversify Tech: Encourage schools and companies to invest in programs that train and recruit Black professionals into AI and data science.