Bias in Natural Language Processing

  ·   Dec 05, 2022

Hannah Devinney, a doctoral candidate at Umeå University, recently held a lecture on “Bias in Natural Language Processing (NLP)” for the Adlede team. In the lecture, Hannah focused on explaining gender bias in NLP and shared some tools with us for mitigating this bias.

This is extremely important for Adlede: we advocate for fair and ethical advertising through contextual advertising, and our contextual-advertising processes depend heavily on NLP.

Image by John Manoogian III (Wikimedia)

What is gender bias in NLP?

Oftentimes NLP models are trained on data that contains clear gender bias. Some examples of what this bias can look like:

Such bias in NLP can lead to negative consequences. These assumptions render certain groups “unintelligible”, and being unintelligible to society and to computer systems can have violent consequences. For example, such biases erase trans, nonbinary, and intersex people, and reduce cis people to stereotypes. NLP built on these assumptions cannot be fair.

Bias in NLP causes harm, and Hannah described these harms in the following ways:

We observe harms of allocation when systems distribute resources (material and non-material) in a way that is unfair towards subordinated groups. For example: a website shows STEM job advertisements to men, but not to women with the same qualifications and profiles.

We observe harms of representation when systems reinforce the subordination of already-disadvantaged groups. For example: names associated with Black Americans are given less positive sentiment scores than names associated with white Americans. Representation harms also occur when a system reinforces stereotypes, which in turn shape people’s biases and lead to even more bias. In the figure below, biased data produces biased models, which influence the world, which in turn produces more biased data; hence, the cycle of bias continues.

Figure: the cycle of bias (biased data → biased models → biased world → more biased data)
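As a concrete illustration of the sentiment example above, here is a minimal sketch of how one might probe a sentiment model with name-swapped templates. It assumes the Hugging Face transformers library and its default sentiment-analysis model; the template and names are illustrative placeholders, not part of the lecture or of Adlede’s pipeline.

```python
# Minimal sketch: probe a sentiment model with name-swapped templates.
# Assumes the Hugging Face `transformers` library; its default
# sentiment-analysis model is downloaded on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

# Identical context; only the (illustrative) first name changes.
template = "{name} is a colleague of mine."
names = ["Emily", "Greg", "Lakisha", "Jamal"]  # names often used in audit studies

for name in names:
    result = classifier(template.format(name=name))[0]
    print(f"{name:10s} -> {result['label']} (score {result['score']:.3f})")

# If the label or score differs systematically across names in otherwise
# identical sentences, the model encodes a representational bias.
```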

Hannah gave Adlede an example of a spell checker in which the gender-neutral pronoun “ze” was not included.
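To make the problem tangible, here is a small sketch using the pyspellchecker package (an assumption on my part; it is not the spell checker from the lecture), showing how neopronouns are flagged by a stock dictionary and how that dictionary can be extended.

```python
# Sketch of the "missing pronoun" problem, assuming the `pyspellchecker`
# package rather than the specific spell checker discussed in the lecture.
from spellchecker import SpellChecker

spell = SpellChecker()

# With the stock English dictionary, neopronouns are typically reported
# as unknown words, i.e. treated as spelling mistakes.
print(spell.unknown(["she", "he", "they", "ze", "hir"]))

# Mitigation: explicitly add the missing pronouns to the word list.
spell.word_frequency.load_words(["ze", "hir", "zir"])
print(spell.unknown(["ze", "hir", "zir"]))  # now an empty set
```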

Tools and tips to mitigate bias

The summed-up takeaway from this lecture is that we should be clear and critical about our (working) definitions and about our data: for example, whether the data is biased and how it has been collected.

Since we live in a normative society, we may never have a completely unbiased dataset. The following two approaches (which can be blended) can therefore be taken to mitigate bias in NLP; a code sketch of two of these ideas follows the list:

  1. Manipulate the data
    1. Augment with artificial data (“gender-swapping”)
    2. Identify and remove “influential” documents
    3. Annotate with gender
  2. Manipulate the model
    1. Fine-tune with similar (counter-biased) datasets
    2. Embedding “debiasing”: remove the gendered subspace, or train models to isolate gender in specific dimensions and then ignore those dimensions later
    3. Adjust algorithms (constraints, adversarial learning)
    4. These methods generally involve specifying gender as a protected attribute
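Below is a minimal sketch of two of the ideas above: gender-swapping augmentation (data manipulation) and removing a gendered subspace from embeddings by projection (model manipulation, in the spirit of embedding “debiasing”). The swap dictionary and embeddings are toy placeholders, not Adlede’s actual pipeline.

```python
# Toy sketches of two mitigation ideas from the list above. The swap pairs
# and embeddings are placeholders, not Adlede's production pipeline.
import numpy as np

# --- Manipulate the data: "gender-swapping" augmentation --------------------
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "hers", "hers": "his", "man": "woman", "woman": "man"}

def gender_swap(sentence: str) -> str:
    """Return a copy of the sentence with gendered words swapped."""
    return " ".join(SWAPS.get(tok, tok) for tok in sentence.lower().split())

print(gender_swap("she studied nursing and he studied engineering"))
# -> "he studied nursing and she studied engineering"
# Training on both the original and the swapped sentence balances the data.

# --- Manipulate the model: remove a gendered subspace -----------------------
def remove_gender_component(vec: np.ndarray, gender_direction: np.ndarray) -> np.ndarray:
    """Project out the component of `vec` that lies along the gender direction."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return vec - np.dot(vec, g) * g

# Placeholder embeddings; real vectors would come from a trained model.
rng = np.random.default_rng(0)
emb = {word: rng.normal(size=50) for word in ["he", "she", "doctor"]}

# A simple one-dimensional "gender direction": the difference of a word pair.
gender_dir = emb["he"] - emb["she"]

debiased_doctor = remove_gender_component(emb["doctor"], gender_dir)
print(np.dot(debiased_doctor, gender_dir / np.linalg.norm(gender_dir)))  # ~0.0
```

Real systems use many more word pairs and evaluate the effect on downstream tasks, but the projection step above is the core of subspace-removal debiasing.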

By Kabir Fahria (They/them or hen)

Developer, Adlede
