Building fair machine learning systems

G POORNA PRUDHVI (~poornagurram)



Description:

Your machine learning models might be intelligent enough to make predictions, but they may lack the wisdom to prevent bias. They can be as impressionable as a child influenced by inappropriate sources that encourage racism, sexism, or other unintended prejudice. Models learn exactly what they are taught: the more biased your data is, the more biased your model will be.

For instance, a text model by Google completes the analogy "man is to engineer" the same way as "woman is to housewife". This shows how bias hidden in training data leads to unintended bias in the model. Machines are being given the power to judge, so we need to ensure they do not make biased or unfair judgments. In this talk, we are going to discuss how to arrive at "engineer is the same for both man and woman" [debiasing gender] by following the outline below (a short code sketch of the core idea appears after the outline):

Intro to machine learning bias and word vectors [10 min]

Analysing bias in word vectors and its problems [10 min]

Debiasing algorithm [10-15 min]

Questions [5-10 min]
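
As a preview of the debiasing step, here is a minimal sketch of the "neutralize" operation from Bolukbasi et al. (2016), a standard hard-debiasing technique for word vectors: estimate a gender direction, then project gender-neutral words off it. The GloVe model name and the word choices below are illustrative assumptions, not necessarily the exact examples used in the talk.

```python
# Minimal sketch: neutralizing the gender component of a word vector.
# Assumption: the small GloVe model below is illustrative; any
# pre-trained embeddings would work the same way.
import numpy as np
import gensim.downloader as api

# Small pre-trained GloVe embeddings (downloaded on first use).
model = api.load("glove-wiki-gigaword-50")

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Approximate the gender direction with a single definitional pair.
# (The full algorithm uses PCA over several such pairs.)
g = model["woman"] - model["man"]

# "engineer" should be gender-neutral, yet in raw embeddings it
# typically leans toward one end of the gender direction.
e = model["engineer"]
print("bias before:", cosine(e, g))

# Neutralize: subtract the projection of e onto g, so the debiased
# vector has no component along the gender direction.
e_debiased = e - (np.dot(e, g) / np.dot(g, g)) * g
print("bias after: ", cosine(e_debiased, g))  # ~0 by construction
```

The full hard-debiasing algorithm also includes an "equalize" step that makes definitional pairs (e.g. man/woman) equidistant from neutralized words; the "Debiasing algorithm" part of the talk is where both steps would fit.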

One famous example of bias:

[image]

Prerequisites:

Knowledge of Python

Knowledge of building machine learning models / Interest in building one

Content URLs:

Will be updated soon!

Speaker Info:

I am a software developer, speaker, open-source contributor, and a wannabe developer evangelist. I love everything Python, and I enjoy NLP (Natural Language Processing) research. I have been volunteering with various local startup and tech communities to promote entrepreneurship and technology. I work at mroads, where I help develop better AI.

Speaker Links:

LinkedIn: https://www.linkedin.com/in/poornagurram/

GitHub: https://github.com/poornagurram

Stack Overflow: https://stackoverflow.com/users/5443381/poorna-prudhvi

Section: Others
Type: Talks
Target Audience: Intermediate