From Code to Consequence: Interrogating Gender Biases in LLMs within the Indian Context

Partners
Gates Foundation / Quicksand Design Studio

Overview

The development and use of large language models (LLMs) in India has the potential to drive access to crucial information and services, further equity in knowledge access and production, and fuel homegrown innovation. Yet, as with any technology, the use of these models comes with many risks — including bias, discrimination, exclusion, and informational harms. With respect to gender in particular, LLMs are known to reproduce many existing gender biases we find in the world around us.

However, the majority of the research on gender biases in LLMs focuses on the English language and often limits itself to narrow definitions of what constitutes such bias. Moreover, while governments and civil society organisations are increasingly leveraging LLMs for critical social sectors such as healthcare or agriculture, very little is known about the potential implications of such efforts, especially from a gender equity perspective.

To bridge this knowledge gap, we undertook an extensive year-long research study, examining gender-related issues across the development lifecycle of LLM applications, particularly those deployed in critical social sectors. We focused specifically on chatbots, given their predominance among LLM use cases in India.

Synthesising our findings, we released a detailed guidebook that brings together key insights, recommendations, and tools in a structured and accessible format. We also provide a diverse set of recommendations and tools for various stakeholders, including AI developers, government, and philanthropic organisations. These aim to foster more equitable and inclusive LLMs within critical social sectors while recognising the nuances involved in building gender-responsive technologies and the rapid pace of advancements within the LLM space.

Disclaimer: Given the rapidly evolving nature of LLMs, the research and knowledge shared in these outputs are not meant to be definitive. We anticipate that many of the issues, recommendations, and tools detailed in our research will need to be continuously updated as this space evolves. However, we hope our research can serve as a starting point for further investigations into gender equity issues associated with LLMs, both within and outside India.

Please note that in this study, we limit our analysis to women specifically, while acknowledging that many of these concerns may not only apply to non-binary individuals and communities but may also be more pronounced for them.

Outputs

These reports were produced by Digital Futures Lab and supported by the Gates Foundation. The views expressed in this publication are those of the authors and do not necessarily represent the perspectives of any organisation involved in supporting or enabling this research.

The research for this project was conducted between August 2023 and July 2024.

Team

Research Team: Urvashi Aneja, Aarushi Gupta, Anushka Jain, and Sasha John

Production: Quicksand Design Studio

Report Design: Kyra Pereira and Quicksand Design Studio

Landing Page Design: Quicksand Design Studio

Illustrations: Pāus

Communications: Shivranjana Rathore