Beyond the North-South Fork on the Road to AI Governance:

An Action Plan for Democratic & Distributive Integrity*

* Acknowledging that the categories of South and North are not watertight, this paper argues for situating geopolitical and geo-economic power within the history of post-colonial development.

I

AI Governance at a Crossroads

Fragmentation vs. Coordination

Today’s emerging artificial intelligence (AI) governance landscape is highly fragmented1. Over 160 sets of artificial intelligence ethics and governance principles currently exist, but no common platform brings these different initiatives together (Report of the Secretary-General, 2020; Radu, 2021). The private sector and governments have roughly equal input into these AI governance initiatives, while civil society organisations have less robust representation (Ulnicane et al., 2021). Further, there is an overwhelming geographic disparity in norm-setting around AI.2

Notably, most of these guidelines originate from wealthy Organisation for Economic Co-operation and Development (OECD) nations, while voices from the Global South remain poorly represented (Haas et al., 2020). Reviews of existing frameworks suggest that equality and non-discrimination, transparency, accountability, safety, social well-being, privacy, human dignity, and autonomy constitute the common core of normative concerns in the global conversation on AI governance (Fukuda-Parr et al., 2021).

While initial conversations on AI governance mostly unfolded in silos, with technologists focusing on solutionism in “the machine learning model, the inputs, and the outputs” (Aizenberg et al., 2020), key recent events3 have paved the way for an ethical turn in which not only technologists, but also public policy actors, civil society activists and Big Tech corporations actively participated. Unfortunately, in the absence of enforceable standards and accountability measures, the moral values embodied in the human rights discourse too often end up being deployed as mere rhetorical devices within these guidelines (Fukuda-Parr et al., 2021) – resulting in an open-ended, anything-goes, ethical practice.

Fortunately, recent conceptual explorations in AI governance reflect a necessary techno-social interdisciplinarity, albeit from a select few industrialised contexts, connecting, for instance, intelligent automation and the future of work; the algorithmic public sphere and democratic life; and citizens’ rights and the digital welfare state (Gurumurthy et al., 2019). Yet, without a corresponding institutional arrangement for clear and enforceable obligations and commitments in the AI governance ecosystem, the policy impacts of this ethical turn may well be limited. A rights-based AI governance paradigm4 with workable remedies for consumers and citizens – especially, vulnerable individuals and groups implicated in AI systems across the world – is thus an urgent imperative.

Rising socio-economic inequality and the intensification of the labour-capital divide in the structural transformation wrought by the current hyper-capitalist AI paradigm pose twin concerns for the socio-economic rights of the majority the world over (Acemoglu et al., 2020; Bughin et al., 2019). Emerging evidence also shows that the histories and geographies of colonialism have structured the international politico-economic order of the AI age (Mohamed et al., 2020), indelibly influencing the right to development for nations and peoples across the Global South. In today’s AI economy, most developing countries are mere sources of the new raw material of data, while also proving dependent on the Global North for AI infrastructure and services (Feijóo et al., 2020). Critically, these countries are also sources of physical raw materials that are used to create and power AI systems.

Critiques of algorithmic systems in the context of the North-South problematic have been varied, including: the overwhelming ‘whiteness’ of algorithmic decision systems (Cave et al., 2020); the intensification of global labour hierarchies in the transnational data value chains that power AI business models; and the export of dubious, rights-violating AI product-testing to countries with less robust legislative frameworks. All of these are manifestations of an ‘algorithmic coloniality’ (Mohamed et al., 2020), representing the exploitation and dispossession of the Global South in the emerging AI-driven international order. A rights-centred AI governance system must therefore be particularly attentive to socio-economic rights as they arise in the international political economy of development, straddling all generations of human rights.

The compact between the state and the market under global data capitalism is an important political arena wherein contestations for a just world order are already emerging. This paper argues for reclaiming the AI paradigm and shifting it towards democratic and distributive integrity, tracing common concerns as well as identifying fault lines to which progressive civil society in the Global North and South must attend.

  1. See the https://oecd.ai repository.
  2. See https://www.technologyreview.com/2020/09/14/1008323/ai-ethics-representation-artificial-intelligence-opinion.
  3. Such as the Cambridge Analytica scandal (2018), which lifted the lid off the risks of the algorithmified public sphere for democracy; the exposés of Project Maven and Project Dragonfly (2018-19), which alerted the wider public to the new military-industrial complex; and growing disquiet about algorithmic discrimination in welfare systems, heightened by the UN Special Rapporteur Philip Alston’s investigation of the digital welfare state (2019).
  4. It is important to recognise the limitations of rights-based regimes in countries with weak institutional and regulatory capacities. A rights-based perspective may also be unable to adequately address structural and collective harms.