AI on the Ground: A Snapshot of AI Use in India
Report
/
Sep 2022

Urvashi Aneja / Harsh Ghildiyal / Angelina Chamuah / Joanne D’Cunha / Vikram Mathur / Abishek Reddy

This report presents an extensive study of the current uses of AI in India, reviewing 70+ use cases across 9 key sectors, including policing, agriculture, health, and education. It examines the opportunities, challenges, and risks posed by the use of AI across key social sectors and identifies policy priorities to align AI with the public interest.

The debate around AI in India has become polarised: advocates present it as a panacea for the country’s persistent developmental challenges, while critics highlight the threats that AI technologies pose to liberty, rights, and equal opportunity. Although new opportunities and harms related to AI are announced in the media almost daily, there is little extensive, grounded research on how AI is actually being used in India. Research rooted in the country’s unique context is needed to identify the key issues in AI development and deployment and to inform policy.

This report provides a snapshot of India’s AI ecosystem by examining existing AI-based products and services across 9 key sectors. It identifies 70+ types of AI use and provides an overview of the possible benefits, challenges to adoption, and potential harms in each sector. Based on this analysis, the report presents key policy pathways that can steer the societal trajectory of AI in India towards an equitable, safe, and just technological future.

The sectors examined include:

  1. Agriculture
  2. Banking, financial services and insurance
  3. Education
  4. Energy and Water
  5. Enterprise solutions
  6. Healthcare
  7. Policing
  8. Public Tech
  9. Workplace

Through our study we uncovered a range of findings, including:

  • Modest, incremental benefits are accruing from the deployment of AI- and ML-enabled systems.
  • Narratives of the transformative impacts of AI are yet to be matched by current use cases.
  • It is not simply the technology, but its use that also requires closer public scrutiny.
  • Many development and deployment challenges are similar across sectors.
  • Greater oversight is needed when Machine Learning (ML) applications are central to decision-making about people, their rights, livelihoods, and relationships.
  • ML systems that enable the profiling of individuals and groups require adequate checks and balances.
  • The use of AI in public service systems and safety-critical sectors should be held to higher standards of transparency and accountability.
  • Monopolisation of data and differential access to resources to build ML systems increases market inequities.
  • Uneven distribution of technology gains can entrench existing societal inequities and create new ones.

Based on these findings, we outline the following recommendations:

  • Policy interventions should be based on an evaluation of the social impact of ML applications.
  • Investments are needed in state and regulatory capacity, as well as in the analog components of a digital society.
  • Red lines should be drawn around certain types of use.
  • Data protection frameworks need to be accompanied by community rights and accountability frameworks.
  • Risk management approaches must be accompanied by upstream management of technological innovation processes.

This project was carried out by the team at Tandem Research, the former home of the Responsible Technology Initiative.