
Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings

Abstract

Word embeddings are the standard model for semantic and syntactic representations of words. Unfortunately, these models have been shown to exhibit undesirable word associations resulting from gender, racial, and religious biases. Existing post-processing methods for debiasing word embeddings are unable to mitigate gender bias hidden in the spatial arrangement of word vectors. In this paper, we propose RAN-Debias, a novel gender debiasing methodology which not only eliminates the bias present in a word vector but also alters the spatial distribution of its neighbouring vectors, achieving a bias-free setting while keeping the semantic offset minimal. We also propose a new bias evaluation metric, the Gender-based Illicit Proximity Estimate (GIPE), which measures the extent of undue proximity in word vectors resulting from the presence of gender-based predilections. Experiments based on a suite of evaluation metrics show that RAN-Debias significantly outperforms state-of-the-art methods, reducing proximity bias (GIPE) by at least 42.02%. It also reduces direct bias while adding minimal semantic disturbance, and achieves the best performance on a downstream application task (coreference resolution).
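The abstract does not spell out GIPE's exact formulation, so the following is only a minimal, illustrative sketch of the idea it describes: scoring how many of a word's nearest neighbours are themselves strongly gender-associated. The toy embedding table, the single he–she gender direction, and the names illicit_proximity_score, k, and bias_threshold are all assumptions made for illustration, not the paper's actual definitions (the paper aggregates over many gender word pairs and uses its own weighting).

```python
import numpy as np

# Toy 3-d embedding table (illustrative only; real experiments would use
# pre-trained vectors such as GloVe or word2vec).
emb = {
    "he":       np.array([ 0.9, 0.1, 0.0]),
    "she":      np.array([-0.9, 0.1, 0.0]),
    "nurse":    np.array([-0.7, 0.5, 0.1]),
    "surgeon":  np.array([ 0.6, 0.6, 0.1]),
    "woman":    np.array([-0.8, 0.2, 0.1]),
    "hospital": np.array([ 0.0, 0.9, 0.2]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_bias(word, direction):
    """Direct bias of a word: cosine with a gender direction vector."""
    return cosine(emb[word], direction)

def illicit_proximity_score(word, direction, k=3, bias_threshold=0.3):
    """Fraction of the word's k nearest neighbours (by cosine similarity)
    whose gender association exceeds a threshold -- a rough proxy for the
    proximity bias that GIPE quantifies."""
    neighbours = sorted(
        (w for w in emb if w != word),
        key=lambda w: cosine(emb[word], emb[w]),
        reverse=True,
    )[:k]
    biased = [w for w in neighbours
              if abs(gender_bias(w, direction)) > bias_threshold]
    return len(biased) / k

# Gender direction from a single definitional pair (he, she).
g = emb["he"] - emb["she"]
print(illicit_proximity_score("nurse", g))  # 0.666...: two of three
                                            # neighbours are gender-laden
```

On this toy table the score is 2/3, since "woman" and "she" (but not "hospital") carry strong gender associations among nurse's three nearest neighbours; the paper's actual GIPE aggregates such per-word scores over the vocabulary.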

Article at MIT Press
Presented at EMNLP 2020

Author Biography

Tanmoy Chakraborty

Dr. Tanmoy Chakraborty is currently an Assistant Professor and a Ramanujan Fellow in the Department of Computer Science and Engineering, IIIT-Delhi, New Delhi, India, where he leads the research group LCS2, which focuses primarily on social computing and natural language processing. He has received several awards, including the Google Indian Faculty Award, the Early Career Research Award, and the DAAD Faculty Award. More details can be found at http://faculty.iiitd.ac.in/~tanmoy/.