Enforcement Mechanisms for Responsible #AIforAll
Report / Apr 2022

Urvashi Aneja / Angelina Chamuah / Amrita Vasudevan

The NITI Aayog working document on “Enforcement Mechanisms for Responsible #AIforAll” advocates a flexible, context-specific, risk-based approach to regulating AI, to be effectuated by an oversight body. Our response argues that a one-size-fits-all approach is not sustainable in light of the rapid growth of AI in the past few years, across diverse industries and sectors in India.

Our response is organised in three parts: Part 1 examines risk-based and principle-based approaches to the regulation of AI, and considers these with regard to alternative and complementary frameworks and approaches; Part 2 examines the role of the oversight body; and Part 3 focuses on the need for upstream management of technological innovation and the role of responsible innovation with regard to AI.

PART 1

We highlight that risk-based approaches are not neutral policy instruments. Instead, they should be seen as a complex set of choices regarding which risks will be prioritised and the degree of risks that will be tolerated. These choices are grounded in values and cannot be resolved through objective assessments alone.

Risk-based regulatory approaches to AI also face methodological and epistemic challenges. For instance, not all AI risks may be amenable to categorisation into low, medium and high thresholds. And though some risks of AI may have a low impact individually, their cumulative effect could be overwhelming.

Risk-based approaches based on the principle of welfare maximisation are not equipped to safeguard against the disproportionate impact of AI harm on minorities and marginalised populations.

For a risk-based approach to be effective, regulators must be explicit about the criteria for selecting the risks to be regulated, as well as the risk appetites adopted. Furthermore, the selection of risks must make room for open and transparent public deliberation. Any form of risk-based calculation should prioritise and uphold constitutionally guaranteed rights and liberties, and place greater weight on the disproportionate impact of AI on vulnerable populations.

PART 2

We argue that India requires a strong regulatory body to steer the regulation of AI (as well as other emerging tech), as the present regulatory landscape has critical lacunae. We recommend that the NITI Aayog reconsider the proposal that the oversight body perform only an advisory function. We also point out that the working document fails to provide key information on the location of the oversight body — information that one would need to understand the extent of the body’s influence as well as its independence. Finally, we urge the NITI Aayog to include mechanisms to secure transparency, accountability and consultation, which are currently lacking.

PART 3

Our response makes a case for upstream governance of AI. The Responsible Research and Innovation (RRI) framework is a useful way of thinking through non-consequentialist framings of responsibility. RRI has been interpreted by scholars to include product and process dimensions. The product dimension explores how the innovation process can be engineered not only to mitigate risks to locally evolved values (such as the principles for responsible AI) but to actively promote them. The process dimension seeks to incorporate public engagement early enough in the innovation process to have meaningful impact. Both these dimensions, we posit, should find resonance within the regulatory landscape of emerging tech and the functions of the oversight body. We agree with the sentiment of RRI that rolling back a technology once deployed is next to impossible; anticipatory governance focused upstream is therefore necessary.