Beyond the North-South Fork on the Road to AI Governance:

An Action Plan for Democratic & Distributive Integrity*

* Acknowledging that the categories of South and North are not watertight, this paper argues for situating geopolitical and geo-economic power within the history of post-colonial development.

II

Erosion of the Civic-Public Space

Why AI Governance Needs a Paradigm Shift

AI is transforming the structures of collective choice through which social policy outcomes are generated in contemporary democracy, refashioning the state’s exercise of political power (Risse, 2021). This transformation holds the potential of concentrating ever greater power in fewer hands. The automated public sphere is a fount of disinformation, hate speech, computational propaganda, and information warfare. There is copious evidence that the user engagement-maximising algorithms at the heart of the social media business model are amplifying highly polarising content and hate speech (Dasgupta, 2021). Hate, xenophobia, and incitement to violence on social media platforms are on the rise. As the United Nations (UN) Special Rapporteur on Minority Issues observed in early 2021, three-quarters or more of the victims of online hate speech are members of minority communities (Office of the United Nations High Commissioner for Human Rights, 2021). Online sexist hate has also snowballed to unprecedented levels during the global COVID-19 pandemic (Dehingia et al., 2021).

Platform self-governance dependent on a combination of human and AI moderation has fared poorly with respect to ensuring the expeditious removal of harmful content (Lyons, 2021). Jurisdictions throughout the Global South are at additional risk in this respect. The Facebook Files released by Frances Haugen through the Wall Street Journal in September 2021 suggest that the company has failed to establish effective terms and conditions of service, revise existing business models, and invest in the development of AI systems to filter local language hate speech and misinformation in developing countries, even when internal teams have flagged these as high-risk content (Elliot et al., 2021). Facebook, however, is by no means unique among Global North corporations facing scrutiny for algorithms and practices seemingly harmful to Global South citizens and civil society.

Social media manipulation and digital surveillance tactics of governments and political parties are also to blame for undermining public discourse in digitally mediated forums (Neudert et al., 2019). A 2019 research study by the Oxford Internet Institute shows that politicians and political parties had deployed cyber propaganda, spreading manipulated media to amass fake followers and garner voter support in 45 democracies (Bradshaw et al., 2019). Also consider the case of the Israeli cyber-arms company NSO Group’s Pegasus spyware, deployed globally since at least 2011 to surveil politicians, journalists, and activists, for a variety of motivations and with a broad range of harmful results (Marczak et al., 2018). Such cases reveal the broad vulnerability of digital systems and should inform how algorithms, generally, and AI platforms, specifically, might be abused by unchecked governments and nefarious actors alike.

Further, the abuse of AI surveillance technology is hardly confined to illiberal states. Carnegie’s AI Global Surveillance Index (2019), which mapped 176 countries around the world, found that 75 of the countries surveyed, including 51 percent of advanced democracies, were engaging in AI surveillance practices. The study showed that 56 countries had deployed smart city/safe city platforms, while 64 had rolled out facial recognition systems, and 52 had adopted smart policing practices (Feldstein, 2019). The deployment of facial recognition technology without safeguards by law enforcement agencies has emerged as a major bone of contention not just in the Global South – India (IFF, 2020), Uruguay (Datysoc, 2020), Brazil (Network Rights Coalition, 2019 & 2020), and South Africa (Lekabe, 2021) – but equally in the North – the United States (US) (New America, 2021), the United Kingdom (UK) (Privacy International, 2021), and the European Union (EU)1. Despite championing a ‘trustworthy, human-rights-based approach’ to AI governance, the EU allows law enforcement agencies wide latitude for AI-based surveillance (Vincent, 2021).

The US and the EU are guilty of what China is frequently criticised for in international policy discourse – exporting AI surveillance technology that could threaten civic and political freedoms in other countries (Greco, 2021). A 2020 Privacy International study found that the EU has been directing aid funds to build mass-scale, high-risk biometric identity systems across the African continent to manage migration flows, without any data protection or human rights impact assessments (Privacy International, 2020). Foreign influence operations on social media are another threat, with social media companies having detected the presence of cyber troops engaged in such practices in at least seven countries: China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela (Bradshaw et al., 2019). The deployment of troll farms and bots makes such propaganda warfare harder to trace and address (Barsotti, 2018).

Another emerging concern in both the Global North and South, as noted in the 2019 report of the UN Special Rapporteur on Poverty and Human Rights, is the algorithmification of the welfare state (Secretary-General, 2019). The algorithmic ranking and sorting of citizens to determine eligibility to access benefits is being rolled out without consideration for citizen rights: an upgrade of the Victorian poorhouse for the digital age, automatically sorting impoverished citizens into those ‘deserving’ and ‘undeserving’ of state largesse (Eubanks, 2018). Additionally, the need to create and maintain one or multiple online identities to access digital-by-default services adds a layer of long-term vulnerability (Kira et al., forthcoming). Citizens in the Global South are additionally disadvantaged as their governments’ AI systems are frequently imported from the Global North and deployed without regard for contextual factors (Secretary-General, 2019)2.

The lack of a global agreement on social media governance has largely enabled the corporations that own the platforms to operate with impunity, particularly across the Global South. The Christchurch Call (Christchurch Call, 2019) on how online content should be moderated is perhaps the closest statement to any global consensus on the issue. However, the Christchurch Call is still not a multilateral agreement, lacking legally binding obligations for digital companies (Pandey, 2020). A manipulated and weaponised cyberspace can erode substantive democracy, obscuring the collusion of the state and the market in the brazen disregard of human rights and the rule of law. A stalemate on an international covenant on cybersecurity (Clarke, 2021) also means that political sovereignty and national security interests are threatened in an international order where clandestine AI-enabled information warfare by foreign states is becoming the norm (Ördén et al., 2021). The adoption of AI in national welfare systems without appropriate tests for necessity, proportionality, and legality may herald a crisis for citizenship rights with no recourse or remedy in international human rights benchmarks.

The status quo signals the inadequacy of current institutional frameworks to protect and nurture the democratic content of society through appropriate political mediation of the meaning, use and limits of AI. The immediate task for AI governance thus centres on restoring the democratic integrity of the social order in the current conjuncture.

  1. See https://panoptic.in/central/FRT-000025; https://reclaimyourface.eu.
  2. There are a few exceptions, such as India’s domestic use and exportation of Aadhaar to other countries.