
Rethinking AI Governance: From Problem Solving to Problem Diagnosis
Growing evidence of the harms and risks of algorithmic decision-making systems has made AI governance an urgent issue. Governments, industry, and civil society bodies across the globe are developing frameworks for safer, fairer, and more responsible AI. This article reviews dominant approaches to AI governance and argues that while these approaches are important and necessary, they do not adequately address the core challenges around ethics, power, and community. Alongside ‘problem solving’ approaches that focus on reducing the harm produced by AI systems, we also need ‘problem diagnosis’ approaches that examine how current AI imaginaries and economies have arisen. This can help to develop more systemic and emancipatory ways to align AI innovation trajectories with societal wellbeing.
Problem solving through self-regulation, technical standards and rights
An approach popular among technology companies is to champion a set of ethical principles to guide the development and deployment of AI. AlgorithmWatch has counted over 100 AI ethics statements released in 2018 and 2019 alone. While they differ in the specifics, almost all coalesce around principles of transparency, fairness, equality, accountability, and safety. A key problem with ethical frameworks is that they are not binding or enforceable; it costs little either to pledge to these principles or to violate them. The statements themselves are often framed as vague promises, with little elaboration of how they will be realised or implemented. For the most part, ethical frameworks seem like a way for industry to argue that self-regulation will be enough to check AI harm, and to avoid other, command-and-control forms of regulation.
A question we need to ask is whose ethics are being privileged in such frameworks. Why, for example, have particular concepts of fairness and transparency gained traction in conversations on ethical AI and what are the alternative ethical frameworks that are foreclosed as a result? As Noopur Raval and Amba Kak point out, ‘it cannot be assumed that terms like fairness, transparency and accountability carry the same meanings or even meaningful import in AI, Ethics and Governance discussions in the Global South.’
Another approach, emerging from the computer science community, is to build fairer, more accountable, and more transparent algorithmic systems. There are clear and important strides to be made here. For example, new research on the COMPAS algorithm used in American courts suggests that the same results could be achieved with significantly fewer data points than the twenty or more currently used. Simplifying the algorithm in this way could improve its explainability and, perhaps, enable more informed decision-making and accountability.
Ultimately however, fairness is a property of social systems, not technical systems. It can only be understood in context. Even within the computer science community, there are multiple definitions of fairness. In certain cases, positive discrimination might also be required to adjust or correct for already existing societal inequities. The quest to build better algorithms also risks foreclosing conversations about whether these algorithmic systems should be deployed at all. Consider, for example, the case of facial recognition applications - even if the algorithms were fair, transparent, and accountable, that does not diminish, and in fact even strengthens, concerns about their impact on civil liberties and freedoms.
A third and more promising direction is the turn towards the application of international human rights frameworks. They are universal and binding, and codified in international law; responsibilities of governments and companies are well articulated and a range of domestic, regional, and international institutions are available to provide remedy. This has become a significant area of focus for governments and the research community alike, with numerous recent papers, frameworks, and policies calling for the adoption of a rights-based framework for the governance of artificial intelligence.
But three critical issues remain.
One, rights-based frameworks take the individual as the primary unit of concern, focusing on individual harm. But in an AI world, harm is often societal and structural, not individual. Take, for example, AI-based alternative credit scoring models that draw on users’ social media data and consumption habits. The problem is not only one of privacy, or that certain individuals are discriminated against because of the friends they keep or their spending behaviour, but that, over time, new definitions of what constitutes a credit-worthy individual are taking shape. Algorithms, as Tarleton Gillespie argues, are ‘not just codes with consequences’ but are ‘intimately bound up in the production of meaning.’ What does it mean for regulation when what is at stake is how we know and understand the world? These kinds of questions are not covered within a human rights framework.
Equally, a single instance of harm may be unrecognisable or inconsequential on its own, yet many instances taken together can have harmful consequences in the aggregate, or over a period of time, altering societal structures. The restructuring of entire industries through intelligent automation will, for example, have adverse effects on employment. The new efficiencies enabled by AI will necessarily reduce the need for labour, and this is likely to exacerbate global inequality. This is a particular concern for developing countries that are already confronting high rates of underemployment and unemployment as well as the re-shoring of manufacturing industries. New online labour marketplaces are emerging in the Global South to provide the back-end data annotation and labelling services for global AI industries, but these are characterised by low and volatile earnings and unfair working conditions. These types of structural harm are not easily captured by rights-based approaches.
Second, rights require corresponding duty bearers, typically the state or other delegated organisations. However, recent evidence shows that states, including liberal democracies, are acquiring AI-based surveillance technologies at a faster rate than ever before to monitor, track, and surveil citizens. To what extent can these institutions also provide meaningful protection against infringements of human rights? This approach wrongly assumes that systems of redressal are functioning and accessible to all. However, in many places, particularly in developing countries, human rights institutions are already weak and have been unable to protect the most vulnerable. As Chinmayi Arun argues, ‘When companies deploy these technologies in Southern countries, there are fewer resources and institutions to help protect marginalised people’s rights. Young democracies lack institutional stability since it takes time to build institutions and institutionalise democratic practices.’
Third, as Anna Lauren Hoffmann argues, rights-based frameworks focus on discrete bad actors rather than the broader structures in which they are embedded. The 2019 Netflix documentary on Cambridge Analytica, for example, villainises Facebook and Mark Zuckerberg, ignoring the fact that the underlying logic of micro-targeting users to shape their preferences, whether for commercial or political purposes, is already widespread, its benefits accepted and even applauded by many. Further, as Hoffmann argues, human rights conversations do not take into account how structures of privilege are created and how these might produce unfair outcomes. Nor do they help us deal with changing structures of power and agency. Consider the issue of workplace monitoring: even if worker consent is sought for collecting data to evaluate performance, and strict data protection standards are met, this does not address the fundamental shift in power dynamics between employers and employees.
Unpacking AI Imaginations and Economies
These frameworks, particularly human rights frameworks, certainly have a crucial role to play in setting standards, establishing norms and responsibilities, and steering AI innovation trajectories toward societal benefit. But we also need a more structural approach, one that addresses the narratives, values, interests, and histories underlying current AI trajectories, and that re-centres issues of power and community.
National strategies from governments across the globe are marked by a ‘winner takes all’ narrative - that securing a competitive place in the AI race is necessary to secure future economic growth and prosperity. Policy documents often reflect an anxiety of ‘being left behind.’ For developing countries, AI is often associated with ‘modernity’ and ‘progress,’ as a technology that can help them ‘catch up’ and ‘leapfrog’ persistent development challenges. AI is supposed to catapult the developing world towards an imagined but undefined future that leaves behind many of the existing political, economic, and social challenges.
This imagination of ‘artificial intelligence’ has been fuelled by technology companies. As Yarden Katz writes, AI is a marketing tool ‘used to rebrand what was known a decade ago as large scale data analytics and data centre business.’ It has also created a fertile space for private sector companies to market and sell AI-based products in the name of improving efficiency or solving a critical developmental problem. Much of global AI development is led by a few ‘superstar’ global technology companies with access to the majority of global digital data; the growth of the current AI industry is in fact both predicated on, and reproduces, a concentration of power. Narratives of ‘AI for Social Good’ are often advanced by technology companies to distract from the commercial interests driving AI innovation, or to obfuscate its harmful uses and impacts.
We need better precision in the terms we use when talking about AI. This would help temper the hype and identify more meaningful opportunities for AI development that supports societal wellbeing. Machine learning - the dominant technique for AI today - is essentially a form of computational statistics involving two core processes: classification, or pattern recognition, and prediction, or pattern generation. These patterns are generated through statistical correlation and do not indicate a relationship of causation. Matteo Pasquinelli argues that machine learning, as a form of statistics, represents a form of ‘information compression’ which goes hand-in-hand with ‘information silences.’ Information compression is what enables profit for companies, but the information loss ‘often means a loss of the world’s cultural diversity.’ Data, moreover, are never neutral; they are always partial and selective. Since machine learning systems generate inferences from historical data, the patterns or predictions they produce will tend to reproduce and amplify existing socio-economic inequities. The data-intensive nature of machine learning is also fuelling today’s extractive information or data economy, at the expense of individual rights and privacy.
If we think of AI in these terms - as computational statistics that, as Cathy O’Neil argues, ‘reproduce the future based on the past’ - it must force us to temper our expectations about AI for Development or AI for Social Good. Statistical models are certainly useful - they can help identify trends, provide informational support, aid decision-making, plan resources, and automate certain processes. But we need to recognise the limitations and silences of these systems, as well as the commercial interests driving current innovation and use trajectories. This is particularly important for developing countries, where AI is being positioned as a solution for complex development problems. In many developing countries, state capacity is simultaneously being hollowed out, as the growing complexity of new technologies creates space for technocrats and large technology companies to shape state technology policy.
Renowned AI scientist De Kai has argued that the current era is best understood as an ‘AI nap’: large-scale statistical models such as machine learning and deep learning do not represent ‘intelligence’. The common analogy that explains machine learning as similar to teaching a child through repeated exposure to the same set of inputs - say, a cat - is inaccurate. It misunderstands how human intelligence works: children can identify a cat after just a few sightings, because they have the capacity for abstraction. There may be an opening here for a different AI future - to develop other computational techniques that could create intelligent machines that are less reductionist and do not depend on the huge computing resources and data available only to a select few technology companies.
As we do so, we should also consider the type of ‘intelligence’ that is privileged in current AI research. Current paradigms are mostly based on a particular understanding of personhood: the individual as a free and rational agent working towards desire maximisation. Diana Myers notes how such atomised conceptions of the individual are detached from social and political forces and better identified with the instrumental rationality of the marketplace. The values associated with intelligence are typically the ability to reason and make inferences. But the rationality assumption is disproven almost daily; how governments and publics have responded to the pandemic is a case in point. The focus on individual ‘desire maximisation’ is also potentially harmful; the vaccine politics of today and the resistance to masking are pertinent examples. Other sorts of intelligence are also crowded out by this definition of artificial intelligence - particularly emotional intelligence, and qualities of empathy and care. Again, the pandemic shows just how important these other kinds of intelligence are: it was mutual aid groups that helped India manage a brutal second wave, and these are based on a duty of care, not a rational calculation of interest maximisation.
As we develop new computational techniques for machine intelligence, we should consider other types of personhood, intelligence, and ethics to chart alternative AI futures. Indigenous knowledge systems, for example, do not position humans as outside or above natural systems. Relational paradigms based on social and environmental sustainability have long informed technology development in indigenous cultures. Such relational ethics are a core tenet of many cultural traditions across the world, emphasising a communal duty of care toward other people and the environment. Critiquing the role of Western rationality in shaping the philosophical terms on which AI and AI ethics are dominantly conceived, Sabelo Mhlambi details how the Sub-Saharan African notion of Ubuntu, which centres the relationality of personhood, can undergird a framework for addressing the structural logics that produce harm in AI systems. Other scholars are exploring how ethical traditions in Asian countries that focus on a ‘communal self’, and values such as empathy and compassion, can shape AI governance.
Such explorations are critical. However, the glaring problem here is the weakness of knowledge institutions in the Global South. The pandemic has been a perfect illustration of this. Analysis from developing countries, even in a middle-income country like India, has been thin or absent - media coverage has been fairly superficial, and the academic and policy communities have not developed robust analyses of the situation. Most of the reliable and timely evidence has come from experts in industrialised countries studying the developing world. We are already seeing this knowledge asymmetry playing out in the AI ethics conversation. Conversations around decolonising AI, for example, are mostly hosted in universities and research centres in industrialised economies. It is therefore imperative that a more emancipatory agenda for AI is built through investing in knowledge capacity in developing countries.
Investing in such capacity can also provide a critical counterbalance to private sector impulses and governments’ attraction to the magic of AI. The pushback we are seeing from civil society against AI systems in the US and Europe is a case in point: academics, researchers, and advocacy organisations have played a critical role in identifying, documenting, and studying harm, and their studies are now the basis of government action. This is, however, a long-term solution; even if these investments were made today, their impacts would be visible only much later.
In the shorter, or more immediate term, two things are critical. First, we need to develop new social organisations for data stewardship. There are a number of experiments, such as data cooperatives and data trusts, already underway around the globe. Giving people control over their data is fundamental - who gets to structure, analyse and benefit from data is just another articulation of who has power, status and recognition in society.
But we also need to put limits on today’s extractive data economy. In other words, it is not just about the responsible or ethical use of data; we must also question the need for extensive and granular personal data collection in the first place. There is little evidence to support claims that ‘personalisation’ improves company profitability. In fact, some leading global companies have switched from behavioural advertising to contextual advertising. There is also evidence that businesses built on less intrusive forms of data collection can be profitable.
Second, we need to address the market dominance of Big Tech firms. Competition policy would need to be updated to consider not just price as a proxy for consumer welfare, but also control over data. Mandating platform neutrality and checking mergers and acquisitions could create a more competitive marketplace. Open standards could help dismantle Big Tech’s walled gardens and create a more decentralised digital ecosystem. None of these are easy solutions, and some may even create new undesirable consequences, but they are within the reach of traditional legal instruments. Such policy interventions are especially important for developing countries. In India, for example, global technology giants are filling gaps in state and market capacity and exercising an enormous amount of civic power, but accountability mechanisms are generally absent.
Our Contested Futures
AI governance conversations are often limited to the high corridors of power - policymakers, leading industrialists, public intellectuals, and scientists. But, in many ways, this is a conversation about our shared and contested futures: what are our societal priorities and visions of a ‘good life’, and what role do we want automated systems to play in that life? We could look to social justice movements around the world to understand peoples’ needs, desires, and priorities. Such movements often bring together, or speak for, the concerns of the most marginalised members of our society. Our collective visions of the future must centre these concerns, or we will continue to reproduce existing patterns of societal inequity and injustice.
Four ongoing movements in India come to mind. Farmers in the country have been protesting against a new farm bill that would privatise agriculture; beyond the merits or demerits of the bill itself, the core point of contention for the farmers is that they were not consulted in the decisions being made about them and their future. In late 2019, people of all religions and classes protested against a proposed amendment to India’s citizenship bill. Again, without getting into the politics of the bill itself, the protests drove home the point that people have multiple and fluid identities, and that this fluidity is important for both physical safety and social mobility. Gig workers on delivery and ride-hailing platforms have also been protesting across the country against unfair and exploitative working conditions; there is a clear recognition of the grossly unequal distribution of value between platforms and gig workers, at the expense of workers’ wellbeing. And finally, youth in India have self-mobilised to draw attention to climate change and the health of future generations.
Across these four movements, citizens are trying to negotiate the future - what constitutes ‘a good life’. Their demands centre on agency, identity, work, and the environment. We must link these conversations - see what movements like these are telling us about visions of a future good life, and find ways in which AI can serve, not threaten, those futures.
This was first written in May 2021. It was commissioned and published by the Barcelona Center for International Affairs, as part of its annual handbook.
Read the Spanish version here.