
DFL Newsletter Issue #21: AI Sovereignty, Responsible AI in Action and DFL in Paris!
The end of the year has been busy for DFL - we’ve been working round the clock to give you all the gift of actionable insights on Responsible AI and more this holiday season! 🎁 🎄
From a successful symposium and three op-eds to a whole host of events in partnership with established organisations and professionals in the field, we think we’ve made Santa’s nice list. 🤶🏽
Read on for updates on the last two months of the year at DFL! You’ll have to wait a bit more for the ‘year in review’ edition of the newsletter, coming in 2025! 😉
policy ⚖️
As a member of the Global Tech Thinkers, Founder and Director Urvashi Aneja attended a closed-door dinner hosted by French President Emmanuel Macron. She had the honour of participating in a candid discussion with President Macron about Big Tech's dominance over the AI ecosystem and its implications for democracy, as well as the priorities for the upcoming AI Safety Summit in France.


events 🎤
The Responsible AI Lab Fellowship Symposium took place on Thursday, November 28th, at the Museum of Art and Photography Auditorium in Bangalore!
We heard from six Fellows about their journeys with AI integration and how the Fellowship helped shift their approach to responsible AI, followed by a lively Q&A. We closed the day with a robust panel discussion featuring some of the biggest names in the development sector and in AI research and development.
Lectures and the Practice Playbook on Responsible AI, based on learnings and insights from this iteration of the Fellowship, will be out soon! Sign up here to be notified when they’re released!

Between 2023 and 2024, with support from the Gates Foundation, Digital Futures Lab conducted extensive research to investigate the sources of gender bias in large language models (LLMs) built and customised for social sector use cases in India. We recently hosted a webinar to formally launch the design principles and indicative strategies for de-biasing these models! Joined by an esteemed panel of speakers, including MeitY’s Mr. Abhishek Singh, Research Manager Aarushi Gupta shared learnings and perspectives that emerged from the year-long project.
Digital Futures Lab, with support from the Samagata Foundation, recently hosted a workshop with 18 of the country’s leading thinkers and practitioners in Goa, exploring the concept of AI sovereignty and its implications for India.

Research Associate Dona Mathew co-facilitated a workshop on Sustainable AI and Climate Futures in Bangalore with AlgorithmWatch, in association with Goethe-Institut and Quicksand Design Studio. 25 participants from civil society, creative and design backgrounds, academia, and climate-tech startups came together to explore the opportunities and complex challenges at the intersection of AI and climate action.
Research Manager Shreeja Sen and Research Associate Anushka Jain hosted a session at the Digital Citizen Summit 2024 titled "Co-creating Responsible AI Principles for India: Building on Learnings from DFL's RAIL Fellowship". They presented sections of the upcoming Practice Playbook on Responsible AI and led a brainstorming activity to gather additional recommendations.
With support from the Rohini Nilekani Philanthropies, we hosted a workshop at our studio space in Goa on the unintended impacts of Generative AI in India, as part of the Speculative Friction project. Participants engaged in foresight exercises to explore short and long-term consequences of GenAI use cases in agriculture, healthcare, local governance, and the judiciary.
Research Associate and Public Engagement Manager, Sasha John, spoke on a panel on ‘AI for Social Good’ at the Goa Institute of Management’s Centre For Social Sensitivity and Action (CSSA)’s 2nd annual conclave, “Tech4Society: Leveraging Technology for Sustainable Development”. She stressed the importance of moving toward ‘AI for Social Good’ as the norm, not a distinct category of AI interventions.
Sasha also spoke at EuroPCom 2024, Europe's largest annual gathering of public communication experts, jointly organised by EU institutions. On the panel on ‘AI for effective public communication’, she translated learnings from interventions in the social sector in India to inform responsible AI intervention design in the field of public communications.
research 📑
A third issue of the Code Green newsletter is out! In this latest issue, ‘GeoAI meets Climate Action’, we look at data sources, risks, and frameworks for responsible GeoAI development.
The fourth Code Green podcast episode is also out! We talk to Cindy Lin and Sherif Elsayed-Ali about the sustainability of AI interventions for climate action, focusing on realistic narratives, collective restraint, and context-specific innovation.
media ✍🏽
Urvashi co-wrote an op-ed with Deepali Khanna, Head of the Rockefeller Foundation’s Asia Regional Office, on ‘building trust and inclusivity in AI-driven climate action’. They argue that AI solutions for climate action must be grounded in the needs of communities affected by climate change.
Urvashi also wrote an op-ed on how ‘democratising AI needs a radically different approach’ published in print and online in The Hindu.
Research Manager, Harleen Kaur, co-wrote an op-ed on the collaborative future between science and society that was published in The Analysis!
what we’re reading 📖
- Shreeja: 📕 Location Data Firm Offers to Help Cops Track Targets via Doctor Visits by 404 Media
- Harleen: 📕 What is Digital Public Infrastructure? Towards More Specificity by Tech Policy Press
- Anushka: 📕 India’s Advance on AI Regulation by Carnegie India
coming up ⏩️
We’ve got some exciting publications coming out in the first couple of months of 2025: we’ll be launching the first two illustrated narratives from the Speculative Friction project and sharing our report on Generative AI and the future of work in India (supported by Friedrich-Ebert-Stiftung)!