Algorithmic Governance Discussion (Paper)
The AI revolution is still in its infancy. As algorithms become more widespread in their application and more powerful in shaping governmental policies and business practices, they will affect our lives and delineate the future of our communities. The effects will vary widely around the world, interacting with existing divisions and inequities and likely creating new ones.
Algorithmic decision-making can be a force for good, efficiently and inexpensively carrying out tasks that would otherwise be too complex or time-consuming. Algorithms can help make society a fairer place, enabling widespread access to new services and conveniences. They have been linked to everything from more capable systems for producing and distributing food and energy to the automation of multifaceted functions in medicine, law, and finance. Yet as the world becomes more and more automated, who will really be in charge? Who will hold political and economic power, and how will they wield it? What will happen to our societies? What if the decisions made by these algorithms do not always match the values we share collectively? How will impacts and access differ across the globe, North and South?
Though efficient, AI tools have been criticized for their tendency to reinforce existing social and economic structures and social biases, excluding those who do not fit within them, sometimes entrenching racial and socioeconomic classifications and deepening divisions within society.
Cases have been documented in which algorithms have, whether unwittingly or by design, discriminated against particular groups of people. For instance, the application of algorithms in the workplace has led to discrimination against women. The algorithms used by credit card companies can discriminate against customers based on their race, gender, and even residential address. How can we prevent these problems from intensifying as the use of AI decision-making platforms becomes more common?
Data flows curated by algorithms have revolutionized the information market, democratized access to information, and allowed many marginalized voices to be heard. They have, however, also been shown to create echo chambers, to speed the spread of misinformation, and to reinforce polarization between groups of people who only see information that confirms their own biases. As such, the spread of algorithmic decision-making has significant effects on society, institutions, and democracy. These systems must be better controlled, debated, and regulated. So, who is accountable? Many algorithmic decision-making platforms are not subject to human oversight, and accountability for their actions is often unclear. Are the negative externalities of such applications the responsibility of the developer, the IP owner, or the user?
What does automated decision-making mean for personal liberty, individual autonomy, and collective rights? These systems continuously ingest and synthesize new data to form a rich and dynamic understanding of how people respond to certain stimuli and, in turn, predict and even shape how they will react to new situations. As such, some commentators contend that these algorithms will eventually be better than humans at making important life choices.
Human perception, after all, functions through cognitive biases and makes judgments based on assumptions, and it too may be subject to manipulation by sophisticated algorithms. Algorithms are designed to make complex processes easier and more routine. In short, they are designed to make life easier. How do we ensure that this convenience does not come at the expense of individual freedoms, collective rights, and cultural specificities?
Consider locked-in policy choices. By design, algorithmic decision-making tends to favor solutions that have been tried before and is thus unlikely to consider alternative approaches. It also tends to ignore other relevant factors such as fairness, justice, and equality.
Further, these important questions cannot be answered in the same way from one country to another; they depend on social, economic, and historical context. The discussion around algorithmic tools should not impose a convergence of societal models, nor should Northern framings of the problem be imposed on Southern countries. The pursuit of this technology across much of the Global South is often driven by functionality, practicality, and economic necessity.
In certain countries, the algorithmic determination of who is even allowed access to banking may prove just as consequential as the differentiated loan terms that poorly designed systems offer to different groups of people. These variations in priorities can create gaps not only in conversations about digital agendas but also in capacity-building initiatives.
In advocating for a Digital Bill of Rights, we envision a soft law framework derived from a multi-stakeholder dialogue and exchange that consolidates a widely held set of norms. Moving ahead will require an examination of previous efforts and precedents in diplomacy and international law, an assessment of evolving developments, inclusive expert and advocate consultation, and — perhaps most of all — patience and flexibility. An alternative approach is to contemplate what the future might look like without a digital rights framework.
From our perspective, however, the first step toward identifying and ordering a set of norms is to create a series of working groups representative of a wide range of perspectives from around the world. Comprising civil society organizations, advocates, scientific experts, technologists, academics, business leaders, and policymakers, these working groups will map the terrain, determining what an international digital rights framework might look like and then advancing a process of broad adoption and, in time, potential formalization.