
DFL Newsletter Issue #22: Bye 2024, Hello 2025!
2024 was a big year for Digital Futures Lab! We conducted pioneering research at the intersection of AI, Gender and Labour, launched India’s first capacity-strengthening program on Responsible AI for Social Impact organisations, and started a newsletter and podcast series on AI and Climate in Asia. We engaged in high-level policy dialogues on AI Governance and Digital Public Infrastructure and initiated our first participatory foresight exercise on the unintended consequences of Generative AI in India.
We have much more in store for 2025, including projects on Open Source AI, AI and Public Health, and Regulating Voice Technologies in India. Here’s a brief preview of the year that was and the year coming up!
the year that was ⏮
Mitigating Gender Bias in Indian Language LLMs

We studied the sources of gender bias across the AI development and deployment cycle, tested multiple Indian language LLMs and developed a user-testing guide for LLM developers. Learn about the project and read the outputs here.
We were honoured to have IndiaAI CEO, Mr Abhishek Singh, deliver a keynote address at the report’s launch. This was followed by a panel discussion with some of India’s leading AI thinkers and practitioners - Kalika Bali (Microsoft Research India), Nidhi Bhasin (Digital Green Trust), Safiya Husain (Karya), Saurabh Karn (Sarvam AI), Danish Pruthi (IISc, Bangalore), and Anaita Singh (Gates Foundation India).
AI and Climate Futures → Code Green
A key highlight of 2024 was our work on AI and Climate Action in Asia, where we examined the opportunities, challenges and risks of using AI for Climate Action across 9 Asian countries. If you haven’t visited the website yet, we strongly recommend you do! With country briefs from leading regional experts, policy recommendations on Responsible AI, and access to a first-of-its-kind database of emerging signals of change, it’s an invaluable resource for those working on AI and/or climate policy.
As a follow-up, we launched Code Green - a newsletter and podcast series aimed at showcasing the latest scientific research at the intersection of AI and Climate and forging stronger linkages across academia and policy. The latest issue of the newsletter examines the environmental impacts of AI in Asia - a hot topic at the recent AI Action Summit. The latest podcast episode looks at AI’s potential in managing energy transitions in the region.

Generative AI and the Future of Work
Using foresight methods like the futures wheel, we examined the likely first, second and third-order consequences of Generative AI on jobs, labour rights and social protection in India. We also developed ‘tiny tales’ - fictional stories that helped contextualise the impacts of Generative AI on labour markets and people’s everyday lives.

Responsible AI Fellowship
We launched the Responsible AI Fellowship, India’s first capacity-strengthening programme on Responsible AI. Through 1:1 mentoring, peer exchange, and expert lectures, the Fellowship supported 14 of India’s leading social impact organisations to develop knowledge frameworks, tools and capacities for responsible AI.
The 2024 Fellowship culminated with a public event in Bangalore where fellows shared their experiences integrating responsible AI principles and practices into their programmes. We’ll soon release the ‘Practice Playbook on Responsible AI’, which provides guidance on responsible AI for social impact organisations. Stay tuned to this space for updates!
If you’d like to join the next cohort of the RAIL Fellowship or would like to support it, write to us at hello@digitalfutureslab.in expressing your interest.

AI Sovereignty Workshop
One of the highlights from last year was the opportunity to host 18 of India’s leading thinkers and practitioners for a 2.5-day ‘slow workshop’ on AI sovereignty. Participants discussed the contours of what this phrase means in the Indian context, followed by thematic deep dives into questions of AI infrastructure, geopolitical considerations, public interest AI, and more. The workshop provided an opportunity for first-principles thinking and forward-looking debate on what AI sovereignty means, its feasibility, and its implications for India.
Stay tuned for our report, ‘Provocations on AI Sovereignty’. In the meantime, read Urvashi’s op-ed in The Hindustan Times on ideas for designing India’s AI safety institute, where she outlines key goals for the Institute in its early years: regulate AI beyond just safety, monitor post-deployment impact, mitigate data harms and strengthen AI literacy among key stakeholders.

Global Tech Thinkers
As a member of the Global Tech Thinkers, Urvashi attended a closed-door dinner hosted by French President Emmanuel Macron. She had the honour of participating in a candid discussion with President Macron about Big Tech’s dominance over the AI ecosystem and its implications for democracy, as well as the priorities for the AI Action Summit held in France this February.

Safer DPI
Urvashi was co-chair of the UN Working Group on Safe Digital Public Infrastructure, which developed actionable safeguards for designing and implementing digital public infrastructure. The report outlines a set of key guiding principles and strategies for governments, DPI implementers and other stakeholder groups. Read the report here.
coming up in 2025 🌟
Speculative Friction: Exploring the Unintended Consequences of Generative AI
Generative AI is expected to improve access and inclusion, with many new applications being piloted in critical social sectors. But it can also contribute to new forms of harm and risk, many of which are unforeseen at the time of development and deployment. Speculative Friction aims to shine a light on the potential unintended consequences of GenAI, so we can prevent harmful technological and policy lock-ins in the future. Using methods of participatory foresight and creative storytelling, we are developing a series of provocative fictional stories to communicate this counter-narrative on GenAI to the masses.
Open-Source AI: Policy Options for India
Market concentration and a lack of transparency and accountability characterise much of AI development. Making AI systems open-source could address these challenges, democratising the AI ecosystem and enabling greater auditability of AI models. However, open-source is also not a simple solution. History shows that it has benefitted big tech companies and could contribute to new forms of data extractivism. This year, we explore some of these tensions around open-source AI and develop recommendations for policymakers and AI developers in India.
Related to this, take a look at Urvashi’s recent op-ed in The Hindu on Democratising AI.
Voice Technologies for Indian Languages
Voice technologies in Indian languages have the potential to significantly enhance economic, social and political engagement across the country’s diverse landscape. Open-source voice datasets and models can also promote more inclusive technology development. However, there are many challenges related to the misuse of voice technologies, such as intellectual property violations in the context of deepfakes. Voice technologies must also accurately capture cultural nuances across a wide range of Indian languages, and new forms of licensing are required to ensure fair and equitable use. With our partners at Art Park and Trilegal, we will identify best practices for hosting, developing, moderating, and licensing voice technologies in India.
AI and Public Health: Developing Risk Mitigation Frameworks for India
AI tools can improve access to and quality of public health. However, the use of AI also poses many risks and harms. This year, we are working with the Centre for Responsible AI (CeRAI) at IIT Madras to develop a risk classification framework and associated risk mitigation strategies for AI in public health. We aim to build something that is practical and actionable, providing ready guidance to medical professionals, administrators and regulators.