Exploring Racial Bias in Machine Learning: 5 Fails

January 19, 2023 · 5 min read

For years, people have experienced racial prejudice and been treated unequally because of their race and ethnicity. Racial bias is a form of discrimination that shows up in many ways, and, like humans, machine learning systems are also susceptible to it.

Machine learning is a subset of artificial intelligence that relies on mathematical algorithms to learn from data. Racial bias in machine learning occurs when that data is biased, producing discriminatory models. This, in turn, leads to erroneous predictions and decisions that disproportionately affect certain groups of people.

Racial Bias in Machine Learning

Bias: Bias is the preference for, or favoring of, one group of people or things over another.

Racial Bias: When bias is based on race or skin color, it is known as racial bias. It is often an implicit bias, in which racial stereotypes unconsciously influence perceptions, actions, and decisions.

Racial bias in machine learning happens when the data set used to train a model contains human judgments or decisions that were themselves influenced by race. It can also result from how data is collected and processed: if information is gathered in a way that is biased against certain groups of people, that bias is carried over into the trained machine learning algorithm.
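
To make this mechanism concrete, here is a minimal, purely synthetic Python sketch (using scikit-learn; every group, number, and threshold below is made up for illustration) of how a group that is under-represented in the training data can end up with a noticeably higher error rate:

```python
# Minimal synthetic sketch: a group that is under-represented in the training
# data ends up with a higher error rate. All data here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, threshold):
    """One feature; the true label flips at a group-specific threshold."""
    x = rng.normal(loc=1.0, scale=2.0, size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Training data: group A is 95% of the examples, group B only 5%.
xa, ya = make_group(1900, threshold=0.0)   # group A
xb, yb = make_group(100, threshold=2.0)    # group B behaves differently
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Balanced test set: the model is noticeably less accurate on group B.
xa_t, ya_t = make_group(1000, threshold=0.0)
xb_t, yb_t = make_group(1000, threshold=2.0)
print("accuracy on group A:", accuracy_score(ya_t, model.predict(xa_t)))
print("accuracy on group B:", accuracy_score(yb_t, model.predict(xb_t)))
```

Because group A dominates the training set, the model essentially learns group A's decision rule and misclassifies many group B examples; the same thing happens, far less visibly, in real-world systems trained on unrepresentative data.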

5 Examples of Racial Bias in Machine Learning

There is no question that machine learning exhibits bias in some situations. Below are some recent instances in which racial bias appeared in ML systems.

Google’s AI Labels Two African-American Friends as Gorillas

In 2015, Google Photos incorrectly labeled Jacky Alcine, an African-American computer programmer, and his friend as “gorillas.” Mr. Alcine criticized Google’s AI on social media, and people called this “the first instance of A.I. racism.”

Microsoft’s Chatbot Went Rogue on Twitter

Microsoft launched an AI-based chatbot named Tay. It was a machine learning project designed to engage with and entertain people, and it was meant to become smarter the more it interacted with users.

However, in 2016, it began using racist language after Twitter users deliberately fed it such content. The bot started tweeting in support of Nazi sympathizers, genocide, and racist views. Microsoft later deleted the offensive tweets and shut the bot down, as the consequences of its release were severe.

AI Hiring Tools Discriminate Unfairly

AI-based hiring tools can violate civil rights by discriminating against people with disabilities.

Resume scanners, video-interviewing software that measures a candidate’s facial expressions or speech patterns, online job-skill assessments, and employee-monitoring software that ranks workers by keystrokes are all prone to discriminatory decision-making.

These tools can screen out people with severe arthritis that slows their typing, speech impediments, and other mental or physical impairments.

A Computer Algorithm Showed Bias in the Courtroom

Borden, an 18-year-old girl who took a bike from children playing outside, and Prater, a 41-year-old seasoned criminal, were both charged with burglary and theft.

Yet when they were booked into jail, a computer program spat out scores predicting how likely each was to commit future crimes. Prater, who was white, was rated low risk, while Borden, who was black, was rated high risk and was sentenced to 14 years of imprisonment.

Racial Bias in Healthcare Algorithms

According to a study published in Science, an algorithm widely used in US hospitals to allocate health care to patients had been systematically discriminating against black people. Researchers found that black patients made up only 17.7% of those the algorithm flagged for extra care; had the algorithm been unbiased, that share would have been 46.5%.

The Consequences of Racial Bias in Machine Learning

Below are some consequences of racial bias in machine learning.

False Accusations: If a facial recognition system is trained mostly on data from one racial group, it performs poorly at recognizing people from other groups. This can lead to false arrests or missed identifications for law enforcement.

Healthcare: Studies have shown that some predictive-analytics algorithms are biased against certain racial groups when making treatment recommendations. As a result, people may be less likely to receive lifesaving care, or may be prescribed harmful treatments, simply because of their skin color.

Armed Conflict: Automating weaponry raises problems of responsibility and accountability. Decisions in conflict zones often already lack an understanding of the cultures involved, and there is less time for careful data management than in civilian settings, so AI-enabled military systems risk delegating biased life-and-death decisions to machines.

Professional Setbacks: The use of AI tools in hiring can lead to discriminatory hiring decisions based on gender and race. Women, and especially black women, are more likely to be affected by this discrimination.

How to Avoid Bias in Machine Learning?

Well, it’s not easy. But there are a few things we can do to try to mitigate bias:

  1. Be aware of your own biases.

  2. Collect data from diverse sources so it is less likely to be biased.

  3. Use well-vetted, publicly available data sets for training and testing the model.

  4. Be transparent about how the data is used and where it comes from.

  5. Regularly audit your machine learning systems (a minimal audit sketch follows this list).
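
As a very simple example of item 5, one common first check is to compare the model’s positive-decision rate across groups. The sketch below uses made-up data and a hypothetical group/selected layout, with the informal “four-fifths” rule as a rough red flag; a real audit would go much further:

```python
# Minimal auditing sketch: compare a model's positive-decision rate per group.
# The data frame and column names are hypothetical, made up for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1, 1, 1, 0, 1, 0, 0, 0],
})

rates = decisions.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()   # disparate-impact ratio

print(rates)
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:   # the informal "four-fifths" rule of thumb
    print("Warning: selection rates differ enough to warrant a closer look.")
```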

Bottom Line

Racial bias in machine learning can show up in many ways. These biases may be introduced by the data used to train the machine learning algorithm or by the algorithm itself; both scenarios are possible.

Machine learning offers many tools for addressing the bias problem, but it is essential to remember that no system is flawless. We can never let our guard down when it comes to monitoring for bias and taking action to combat it.
