
Developing a Democratic and Responsible AI Ecosystem in India: Report of the Working Group on AI Governance
This report summarises key discussions held among the working group members on the AI landscape in India, what responsible AI looks like for India, and how regulatory and policy levers can best be designed to achieve the goal of using AI for social progress.
Despite rapid advancements in AI adoption and use in India, evidence increasingly demonstrates that while AI can be a force for good, it can also produce unintended harms that must be mitigated and managed. While debate on how to implement AI ethically and responsibly has emerged globally, discussions about how best to integrate and regulate it across critical sectors in India remain nascent. Proposed policies from both central and state governments have sought to establish a framework to govern AI use; however, it is necessary to evaluate their applicability given the constantly evolving nature of AI.
In response, DFL instituted the ‘working group on AI governance’ with the goals of identifying AI-related policy priorities and pathways in India, building contextually relevant and applicable AI governance frameworks, and developing a network of stakeholders committed to principles of responsible AI. The working group was formed in late 2021 and convened four times over 18 months. The group focused primarily on the use of AI within the public sector.
Discussions occurred across two broad themes:
- democratising the AI ecosystem
- responsible AI use and adoption
The working group did not have the opportunity to comment on some of the more recent advancements in the AI field, such as the emergence of language models and advancements in generative AI, which have added both vast optimism about the opportunities that AI can facilitate and concerns over the possibility of greater risks. However, the recommendations provided by the group remain pertinent and applicable given the changing nature of the field, with these rapid advances serving as all the more reason for policymakers and stakeholders to take quick action.
The working group consisted of 14 members:
- Aakrit Vaish: Haptik
- Balaraman Ravindran: IIT Madras
- Rama Devi Lanka: Government of Telangana
- Shailesh Kumar: Jio
- Smriti Parsheera: Fellow, CyberBRICS
- Subhashish Banerjee: IIT Delhi
- Vidushi Marda: REAL ML
- Vrinda Bhandari: Lawyer
- Abhinav Verma: Independent
- Abhishek Singh: Digital India Corporation
- Ameen Jauhar: Vidhi Policy
- Rentala Chandrashekhar: Former NASSCOM
- Divy Thakkar: Google India
- Nehaa Chaudhari: Ikigai Law
The group's discussions first focused on the development of a democratic AI ecosystem in India. The group identified five strategies to achieve this:
- Creation of public infrastructure and public goods by ensuring sufficient public investment across the AI value chain
- Ensuring safe and secure access to high quality government data
- Promoting competition in the field to ensure a level playing field and preventing the concentration of power among a few actors
- Implementing forward-looking governance frameworks that identify potential risks and constrain unwanted outcomes
- Supporting community participation through the creation of a multi-stakeholder body that can engage with the wider community
The group then focused its efforts on identifying principles necessary to ensure the responsible development and implementation of AI in the public sector. These included:
- Suitability: An assessment of the appropriateness of an AI solution in addressing the identified challenge or policy gap
- Scientific rigour: Comprehensive evaluation of the technical components of the system, e.g., its error rate
- Transparency and accountability: Ensuring explainability and interpretability of algorithmic decisions so that AI use is transparent and accountable
- Humans in the loop: Key decisions should always involve human input.
- Non-discrimination: AI use should follow the principle of non-discrimination. This can include mapping out potential harms towards different socio-economic and political groups as a result of AI use and ensuring they are mitigated.
Finally, the experts outlined four measures through which these principles could be realised in AI use:
- Appropriate problem identification and AI suitability assessment
- Pre-deployment and ongoing impact assessment to determine harms, risks and successes
- Well designed procurement practices
- Development of stringent monitoring and review mechanisms to determine efficacy