Identifying “Gender Roles” based Biases and Language in Educational Texts

ARodz (~AromaR)



Description:

ABSTRACT

Language, being at the center of human interactions, can be used affirmatively or destructively. Its usage forms the basis of our societies, our stories and our narratives. When children grow up in a world where stories consistently entitle or belittle them, they internalize these beliefs without question. NLP can automate much of the analysis needed to surface such societal beliefs, and it has the potential to be an excellent equalizer. Most children pick up notions from their educational texts and from the rules of their educational institutions. Analyzing, identifying and eliminating cultural biases in the literature used to educate would therefore be among the first steps toward fostering an equal world. This presentation has an interdisciplinary basis, borrowing concepts from psychology, linguistics, literature, statistics and computer science.

AUDIENCE

This is a technical track, but the underlying idea is one that almost anyone can identify with. The implementations and technical know-how are suitable for attendees at a beginner or intermediate level in machine learning and natural language processing. At the highest level, as an idea, the talk is open to anyone who sees themselves as a potential engineer or developer building solutions for a safer, better tomorrow for all sections of society.

OBJECTIVES AND GOALS

The prime objective of this proposal is to analyze and identify language in educational texts that perpetuates the ideology of gender roles. For the scope of this talk, the biases that can be present in such literature are broadly classified into the following:

  • Representation
  • Stereotypes
  • Culture of Blame

IMPLEMENTATION

For each of the biases in scope, the implementation process is as follows:

  • Representation

Identify human entities and their genders from their occurrences in the educational text; a numerical or qualitative analysis of these counts can then confirm representation bias (a minimal counting sketch follows this list). Example: the proportion of female to male characters in mathematics textbooks in Cameroon, Côte d’Ivoire, Togo and Tunisia was about 30% in each country in the late 2000s [1].

  • Stereotypes

Segregate entities by gender and rank how closely they comply with a gender-roles narrative, using a model that scores the adjectives and other descriptive language attached to them. The position of a human entity in the corpus can also be evaluated against words or phrases that perpetuate gender stereotypes by converting them into n-dimensional vectors (the second sketch after this list extracts gendered adjective associations). Example: the stereotype that women should display communal/warmth traits (e.g., being nice, caring and generous) and men should display agentic/competence traits (e.g., being efficient, agentic and assertive) [2].

  • Culture of blame

A generic implementation of the Path Model of Blame is used to identify victim-blaming language in texts; one simple surface signal is passive framing that foregrounds the victim and backgrounds the perpetrator (the third sketch after this list flags such constructions). Example: “Mary was beaten by John. Mary is a battered woman.” [3]
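As a minimal sketch of the representation count: the spaCy model and the small gendered word lists below are illustrative assumptions, not the exact lexicons behind the numbers reported later.

```python
import spacy
from collections import Counter

# Any English pipeline with a tagger works; en_core_web_sm is assumed here.
nlp = spacy.load("en_core_web_sm")

# Illustrative seed lists; a fuller lexicon (plus coreference resolution)
# would be needed for real textbooks.
MALE_WORDS = {"he", "him", "his", "man", "men", "boy", "boys", "father", "son", "king", "mr"}
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "girl", "girls", "mother", "daughter", "queen", "mrs"}

def representation_ratio(text):
    """Return the share of gendered mentions that are male vs. female."""
    doc = nlp(text)
    counts = Counter()
    for tok in doc:
        word = tok.text.lower()
        if word in MALE_WORDS:
            counts["male"] += 1
        elif word in FEMALE_WORDS:
            counts["female"] += 1
    total = sum(counts.values()) or 1
    return {g: counts[g] / total for g in ("male", "female")}

# Usage (assuming the PDF has already been extracted to plain text):
# representation_ratio(extracted_textbook_text)
```

A sketch of the adjective-association step used for the stereotype analysis, reusing nlp, MALE_WORDS and FEMALE_WORDS from the previous sketch; a real pipeline would additionally score the collected adjectives against stereotype lexicons or embedding vectors.

```python
from collections import defaultdict

def gender_of(token):
    """Map a token to 'male'/'female' using the seed lists above, else None."""
    word = token.text.lower()
    if word in MALE_WORDS:
        return "male"
    if word in FEMALE_WORDS:
        return "female"
    return None

def adjectives_by_gender(text):
    """Collect adjectives that modify, or are predicated of, gendered mentions."""
    doc = nlp(text)
    found = defaultdict(list)
    for tok in doc:
        if tok.pos_ != "ADJ":
            continue
        if tok.dep_ == "amod":              # attributive: "the wise man"
            gender = gender_of(tok.head)
        elif tok.dep_ == "acomp":           # predicative: "she was happy"
            subjects = [c for c in tok.head.children if c.dep_ in ("nsubj", "nsubjpass")]
            gender = gender_of(subjects[0]) if subjects else None
        else:
            gender = None
        if gender:
            found[gender].append(tok.text.lower())
    return found
```

Finally, a rough sketch for the blame-framing signal: flagging passive constructions that place the victim in subject position and demote the perpetrator to an optional "by" phrase. This is only a crude proxy for the full Path Model of Blame; the spaCy dependency labels nsubjpass/agent/pobj and the shared nlp object are assumptions.

```python
def passive_framings(text):
    """Flag sentences like 'Mary was beaten by John', where the victim is the
    passive subject and the perpetrator appears (if at all) in a 'by' phrase."""
    doc = nlp(text)
    hits = []
    for sent in doc.sents:
        for tok in sent:
            if tok.dep_ != "nsubjpass":          # passive subject, e.g. "Mary"
                continue
            perpetrator = None
            agents = [c for c in tok.head.children if c.dep_ == "agent"]  # the "by"
            if agents:
                objs = [g for g in agents[0].children if g.dep_ == "pobj"]
                perpetrator = objs[0].text if objs else None
            hits.append({"sentence": sent.text.strip(),
                         "subject": tok.text,
                         "agent": perpetrator})
    return hits

# passive_framings("Mary was beaten by John.")
# -> [{'sentence': 'Mary was beaten by John.', 'subject': 'Mary', 'agent': 'John'}]
```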
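These three sketches share one gendered-word lexicon and one spaCy pipeline so that their outputs (counts, adjective lists, flagged sentences) can be compared across textbooks; that shared design is an assumption of the sketches, not a requirement of the talk.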
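In practice each sketch would be run on text extracted from the PDFs (for example with a PDF-to-text tool), since none of them parses PDF files directly.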

OUTCOMES/CONCLUSION

I ran some of the comparison models on various textbooks available online; a few example results are included below:

Representation comparing two history textbooks:

The_Americans_Unit_1 (2).pdf: Male 0.849, Female 0.151

class-6-History.pdf: Male 0.829, Female 0.171

Stereotypes associated with genders:

Gehc101.pdf

Adjectives associated with the male proper nouns/common nouns/pronouns

[right, important, large, proper, proper, certain, simple, alone, wise, wise, last, last, last, hermit, cool, quiet, several, several, strange, strange, amthat, faithful, own, own, own, wounded, wounded, more, important, important, important, important, important, bearded, only, serious]

Adjectives associated with the female proper nouns/common nouns/pronouns

[own, good]

Gehc102.pdf

Adjectives associated with the male proper nouns/common nouns/pronouns

[poor, thatha, own, red, few, strange, rich, upper, last, last, beggar, beggar, beggar, hot, old, sturdy, sturdy, sturdy, beggar, beggar, unappreciative, few, few, few, new, whole, such, kind, kind, angry, new, new, interesting]

Adjectives associated with the female proper nouns/common nouns/pronouns

[thick, kind, hungry, real, real, suspicious, bony, sturdy, Poor, new, such, lovely, lovely, quiet, quiet, quiet, happy, happy, happy, upset]

Culture of Blame:

Not many examples were found for this category, which made it difficult to train the model; the examples that were found are discussed in the slides.

In conclusion, beyond the apparent under-representation of the female perspective in school textbooks (despite the human male-to-female ratio always having been roughly 1:1), the stories in literature textbooks appear to use descriptive adjectives for the female characters and character-trait-based adjectives for the male characters.

REFERENCES/BIBLIOGRAPHY

  • [1] “Gender Bias Is Rife in Textbooks.” World Education Blog, 13 Dec. 2017, gemreportunesco.wordpress.com/2016/03/08/gender-bias-is-rife-in-textbooks/.
  • [2] Menegatti, Michela, and Monica Rubini. “Gender Bias and Sexism in Language.” Oxford Research Encyclopedia of Communication, 20 Sept. 2019, oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-470#acrefore-9780190228613-e-470-bibItem-0028.
  • [3] Katz, Jackson. “The Language of Gender Violence.” www.jacksonkatz.com/news/language-gender-violence/.

Prerequisites:

A beginner-to-intermediate understanding of machine learning and natural language processing is enough to follow the implementation details; the core idea requires no prior background.

Content URLs:

vid: https://photos.app.goo.gl/NAzHScfvchPEyBoJ8 (this is an introductory video, nothing to do with my style of presentation or slides)

slides: https://docs.google.com/presentation/d/1CwwaW4BMcrbourQA84riWMbH3HkEqPuxeM8Z5O1he-I/edit?usp=sharing

Speaker Info:

Aroma Rodrigues is a software engineer at JP Morgan Chase. As a techno-activist, she has been part of many projects that promote diversity and inclusion, and she believes that automation is the path to inclusion. In 2016, a teammate on her "Shoes for the Visually Impaired" project presented it at FOSSASIA. She reads, writes and enjoys walking to explore places, and she believes that solving problems she faces herself would solve problems for a large chunk of the world. An ML enthusiast, she has 20+ Coursera certifications with the corresponding project work to support her learning in the field. She presented “De-mystifying Terms and Conditions using NLP” at PyCon India 2018 and “Propaganda Detection in Fake News using Natural Language Processing” at PyCon ZA 2019 in Johannesburg.

Speaker Links:

PyCon India 2018: https://www.youtube.com/watch?v=j4vhSWgsa6Q (I honestly don't like this one because of the many technical issues)

PyCon ZA 2019: https://www.youtube.com/watch?v=gJ7KsLROxhY&list=PLGjWYNrNnSuf3KWgaFL1fzET1DyEUiqG1&index=11&t=0s

Section: Data Science, Machine Learning and AI
Type: Talks
Target Audience: Beginner
Last Updated: