AI, why you ain’t fair? : Understanding AI Bias

Jaydeep Borkar


Description:

Basic outline of the Poster

1] What do we mean by biases in AI?

2] How do biases enter AI systems?

3] Examples in the real-world

4] Consequences of AI Bias on society

5] How to mitigate biases in AI?

6] Key takeaways

7] References

What do we mean by biases in AI?

We all know that AI systems have been making their way everywhere. They make decisions on new, unseen data by learning the patterns in their training data set. Lately, however, these systems have repeatedly been reported for unfair and biased decisions against specific groups or communities of people, based on factors like gender, race, color, caste, geographical location, etc.

Bias is an unfair slant towards one or more groups of people. Algorithms are biased when built on biased data sets. For example, early speech recognition models were built mainly on samples from white male speakers, which meant speech recognition did not work as well for women. (Cathy Pearl, Sense.ly)

How do biases enter AI systems?

Biases usually enter AI systems through biased data sets. These data sets are created by humans, and hence are prone to human biases.

But how do data sets become biased?

  • Because data sets are created by humans, minority groups/sections are sometimes, knowingly or unknowingly, left out while the data set is being built. This means the data set isn’t diverse.

  • Since the minority groups’ data isn’t present in the training data, AI systems never get exposed to it.

  • As a result, when making decisions about such groups in the real world, AI systems end up producing inaccurate decisions that aren’t in favour of these groups. The sketch below illustrates this with a toy classifier.
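
A minimal sketch of this effect, using scikit-learn and purely synthetic data (all names and numbers here are illustrative, not from the poster): a classifier trained almost entirely on one group performs noticeably worse on the under-represented group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic data; the true decision rule differs per group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# 950 training samples from the majority group, only 50 from the minority group.
X_maj, y_maj = make_group(950, shift=0.0)
X_min, y_min = make_group(50, shift=3.0)
X_train = np.vstack([X_maj, X_min])
y_train = np.concatenate([y_maj, y_min])

clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group: accuracy is typically high for the
# majority group and close to chance for the under-represented group.
X_maj_t, y_maj_t = make_group(500, shift=0.0)
X_min_t, y_min_t = make_group(500, shift=3.0)
print("majority-group accuracy:", accuracy_score(y_maj_t, clf.predict(X_maj_t)))
print("minority-group accuracy:", accuracy_score(y_min_t, clf.predict(X_min_t)))
```

The fix the poster proposes later, a more diverse and representative data set, addresses exactly this failure mode.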

Examples in the real-world

1] Amazon’s facial recognition software, Rekognition, made no mistakes when identifying the gender of lighter-skinned men, but it mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time. (News)

A possible reason is that faces of dark-skinned women were under-represented in the data set, while faces of dark-skinned men were present; as a result, the software predicted the gender of dark-skinned women as male instead of female.

2] Facial recognition software has mistakenly identified dark-skinned people as criminals (News)

  • This is because the data set contained past criminal records of dark-skinned people, but that doesn’t mean all dark-skinned people are criminals or that white people won’t commit any crimes.

3] AI bias in hiring -- AI hiring systems have shown bias against women. For example, one AI-powered recruiting engine was trained to screen incoming resumes by observing candidates from job postings over the past 10 years. Since most of those applicants were men, the recruiting tool learned to prefer male candidates and would penalize resumes that included gender keywords related to women, such as “women’s golf club member”, as well as candidates who went to women’s colleges. (News)

  • Similarly, AI systems have also been showing bias against candidates on the basis of color, region, race, caste, etc. Please have a look at this game to understand it better.

4] Bias in Natural Language Understanding -- Virtual assistants often fail to recognize the accents of people from under-represented groups. This is primarily because the Natural Language Understanding systems weren’t exposed to voices from those groups during training.

Consequences of AI Bias on society

  • In Healthcare, bias can lead to misdiagnosis, missing key signs of illness. For example, women's symptoms of a stroke are not always the same as men's, and because of this, women's strokes sometimes go undiagnosed. If we use only men's reported symptoms to train a stroke detection system, some women will not have their symptoms detected. (Cathy Pearl, Sense.ly)

  • In Finance, loans have been denied without justification, based on the race or address of the applicant. In traditional applications, the loan officer would be able to make an informed decision based on each circumstance and explain to the would-be borrower the exact issues with the application. (The Conversation, 2017)

  • In Recruitment, typically ‘male’ roles aren’t shown to female applicants. For example, if a computer searching résumés for computer programmers associates ‘programmer’ with men, men’s résumés will pop to the top. (ScienceMag, 2017)

How to mitigate biases in AI?

  • The most effective way to reduce bias in AI systems is to make the data set as diverse as possible, because AI systems are only as good as the data they are trained on.

  • If the data set includes data from under-represented communities or specific groups, the chance of the AI system being biased against those communities and groups drops sharply. For example, including images of dark-skinned women’s faces with appropriate labels in the data set will reduce the possibility of gender misclassification.

  • For the data set to be diverse, we need the involvement of people from various groups/communities, and not just similar categories of people.

  • Researchers are developing different algorithms to reduce bias. For example, IBM’s optimized pre-processing algorithm (part of the open-source AI Fairness 360 toolkit) reduces the dependence of the AI model on biasing attributes (like gender, color, race, etc.) in the data set. A small pre-processing sketch follows this list.
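
The poster cites IBM’s optimized pre-processing algorithm; the hedged sketch below uses Reweighing instead, a simpler pre-processing algorithm from the same IBM AI Fairness 360 (aif360) toolkit. The toy data frame and column names are illustrative only, not from the poster.

```python
# Requires: pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy hiring data: 'sex' is the protected attribute (1 = male, 0 = female),
# 'hired' is the label. Values are made up for illustration.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
    "score": [7, 8, 6, 9, 5, 7, 8, 6, 9, 7],
    "hired": [1, 1, 0, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Gap in favourable-outcome rates between the two groups before mitigation.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged)
print("mean difference before:", metric.mean_difference())

# Reweighing assigns instance weights so that, in expectation, both groups
# receive the favourable label at the same rate; a downstream model is then
# trained with these sample weights.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, privileged_groups=privileged, unprivileged_groups=unprivileged)
print("mean difference after:", metric_transf.mean_difference())
```

Pre-processing methods like this adjust the data before any model is trained, complementing the diversity-focused mitigations listed above.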

Key takeaways

Achieving accuracy in AI systems is important, but it is equally important for AI to be fair, reliable, robust, responsible, and ethical. This poster aims to motivate AI practitioners and people aspiring to a career in AI to build fairer and more ethical AI systems.

References

Note -- This poster is inspired by the work of Joy Buolamwini and Timnit Gebru, the tireless efforts of Anima Anandkumar to make AI more ethical, and countless other AI ethics and bias researchers.

Prerequisites:

Basic Understanding of AI.

Content URLs:

Draft Poster

Speaker Info:

Jaydeep Borkar is a final-year undergraduate student in Computer Engineering at Savitribai Phule Pune University. He aims to build systems through which humans and computers can talk to each other in ways that are natural, not awkward. His current research and learning interests span building state-of-the-art Reading Comprehension Systems that have a true understanding of language, understanding Adversarial Examples in Machine Learning and making classifiers robust to different kinds of adversarial attacks, and Ethics and Biases in AI. He believes in robust, fair, responsible, and more ethical AI. He also runs his own small non-profit startup, Empowerange.

Speaker Links:

Personal Website

Tech Blog

WordPress (40% life & 60% tech, wish to swap the ratio!)

Section: Data Science, Machine Learning and AI
Type: Poster
Target Audience: Intermediate
Last Updated: