Smart Automation and Artificial Intelligence in India's Judicial System: A Case of Organised Irresponsibility?
Credits: Neeti Banerji
Report / Apr 2023

Urvashi Aneja / Dona Mathew

Current approaches to smart automation of the judicial system have been scattered, proceeding without due deliberation or legislative process. Such an approach can increase the risk of harm and pose tangible challenges to individuals, communities and legal institutions. This is an original DFL report by Urvashi Aneja and Dona Mathew that examines how Artificial Intelligence has been integrated into India’s judicial infrastructure, identifies the actors behind this integration and articulates the challenges and risks that have emerged as a result.

They find that the techno-legal ecosystem can be characterised by a state of ‘organised irresponsibility’, in which the actions of different actors have cumulatively contributed to existing harms without any single actor being held culpable.

The conversation around adopting Artificial Intelligence (AI) to automate court processes and legal research has picked up significantly since the introduction of the Supreme Court’s AI portal, the ‘Supreme Court Vidhik Anuvaad Software’ (SUVAS), in 2019. Recent developments, such as the Punjab and Haryana High Court soliciting views on bail from ChatGPT and the allocation of ₹7,000 crore to the eCourts project in the 2023 annual budget, indicate India’s tech-first attitude towards the future of the judiciary.

While AI in courts is at a nascent stage in India, it is an opportune time to consider how automation will change the justice system and foresee challenges it may pose to individuals, communities and legal institutions.

Judicial digitalisation has been a long-term state goal, with initial attempts traceable to 2005. More recently, the Supreme Court’s emphasis on digitalisation, coupled with the COVID-19 pandemic, has accelerated the growth of technology-enabled judicial processes. AI has been identified as a tool to improve the judiciary’s efficiency by assisting with case analysis and case outcome prediction.

To realise this vision, a range of actors including private organisations and academic institutions have begun to develop new AI tools for the judiciary. Actors in the space, ranging from academics and civic-tech entrepreneurs to lawyers and judges, envision a future where repetitive and mechanical tasks are automated, significantly reducing pendency and delays in the justice system. Such an approach is underpinned by a developing notion of justice delivery as a service, which in turn prioritises values such as speed, efficiency and ease of use.

The report makes the case that the techno-legal ecosystem can be currently characterised as one of "organised irresponsibility", where the actions of "many agents together cumulatively and collectively generate risks for others, but in which all of the distinct agents are able to either minimise or fully avoid culpability because of the difficulty in tracing the overall damage to the specific harmful actions of any one of those agents."

Some key points of analysis from the report are as follows:

  1. Justice system problems are often oversimplified, lending themselves to technology solutions that address efficiency, but efficiency does not always amount to justice.
  2. The legal community and judges are not uniformly familiar with advanced technologies, nor do they all have the means to access them, increasing the risk of exclusion.
  3. The growing push for open judicial data, without adequate guard rails for protecting sensitive and personal information, increases the risks of privacy violations.
  4. Skewed datasets used to build AI tools can lead to biased outcomes, especially for marginalised groups who may be over- or under-represented in those datasets.
  5. The report recommends approaching AI development with an ethic of care that values the needs and capacities of all parties, as a necessary correlative to a robust legal framework.

You can read the summary of our 7th Tech and Society Dialogue (T&SD) session on this topic, 'Artificial Intelligence in Indian Courts', here.

You can read a live-tweet thread of the T&SD session here.