
AI Governance in 2020: Observations of 52 Global Experts
Many countries around the world, including India, are developing risk-based frameworks for the governance of AI. Risk-based frameworks are perhaps appropriate for fueling innovation. But they are ill-suited to developing countries like India, where AI is viewed as a tool for addressing complex development and governance challenges. The stakes and trade-offs are different for developing countries because emerging technologies like AI are shaping development and state-building trajectories. Low levels of regulatory and institutional capacity pose further challenges to the suitability of risk-based approaches. Risk-based approaches can create regulatory blind spots with regard to disparate impacts on vulnerable populations and to systemic risks. Assessing risk is not an objective exercise; it is deeply embedded in socio-cultural values and priorities. Risk-based approaches also face methodological and epistemic challenges: even when individual AI applications pose low risk, their cumulative effect can be large. While these concerns may be less pressing from the perspective of enabling innovation, they are certainly crucial from a development perspective.
Part of the problem for regulators around the world, including in India, has been establishing a threshold for regulatory intervention. Risk-based approaches are tempting in this regard, but for them to work, there needs to be open, inclusive, and transparent dialogue around risk identification and assessment. For this process to be meaningful, it is essential that civil society have the knowledge and capacity to evaluate the impact of AI; transparency and expertise are two sides of the same coin. These capacities are currently limited in India, and greater investments are needed in interdisciplinary research and public communication. Trust in judicial systems and institutions is also paramount; the absence of adequate grievance redressal mechanisms for many digitally enabled governance interventions in India does not bode well for building such trust.
Rather than thinking of AI governance in terms of specific high- or low-risk products and services, it is more fruitful to think of AI as a field of research: how do we enable more responsible AI research and innovation? It is also helpful to adopt an infrastructural lens when thinking about AI governance. This focuses our attention on a wider range of issues that need to be governed, from the political economy of AI innovation trajectories, to the invisible labor enabling AI's growth, to the societal impacts of AI. It also helps establish a set of values for anchoring and steering AI governance.
Finally, ethical frameworks may be inadequate for industry self-regulation, but at a societal level we need far deeper conversations about the ethics of automated and algorithmic decision making: we need to make important societal choices about where and how we want to introduce AI systems. At a time of growing surveillance and authoritarianism around the world, drawing a clear red line on the use of automated facial recognition and emotion recognition systems, by public and private actors alike, should be a priority.