A troubling gap constrains the fight for technology reform today.

Experts, advocates, and many policymakers broadly agree on the need for reforms in internet policy and for advance planning in artificial intelligence (AI) regulation. The public is increasingly aware of the potential risks and rewards of autonomous systems. Still, we lack a coordinating force that brings together and empowers a globally representative assembly of scholarly minds, civil society organizations, and policymakers to work for concrete policy change. Calls for reform must be channeled into a meaningful framework that champions the public good at the appropriate levels of power—envisioning a new digital social contract.

Moreover, algorithmic accountability and regulation cannot be assessed at the level of risks to individual consumers alone. There is a tension between individual rights and interests and the collective implications for our societies, and for the communities within them, that must be addressed more directly and more inclusively. For example, AI decision-making applied to the information market does not produce individual harms per se (it results in better-targeted information) but leads to significant collective damage: information bubbles, polarization, systemic discrimination, and more.

Similarly, recent work has shown that algorithms for predictive policing, loan granting, or college admission decisions tend to entrench biases and discrimination against whole communities, identities, and demographic groups, even when they are corrected for fairness—since, over iterations, algorithms tend to reinforce differences between statistical groups irrespective of the characteristics of the individuals composing those groups. These risks prove particularly acute for societies challenged by institutional instability, deep socioeconomic inequality, or power-aggrandizing regimes. We cannot allow the increasing use of digital technology and decision-making algorithms in employment, social welfare, public health, or policing to aggravate inequity and discrimination further. Such unmonitored and unregulated use would have the capacity to entrench social hierarchies, collide with cultural systems and values, and tear at the very fabric of society.

To underscore that point, many of these trends have been highlighted by the Covid-19 crisis and are likely to accelerate in its wake. More intellectual work, political awareness, and industry accountability must be focused on the impact of the rapid spread of algorithmic decision-making on our societies. Barring intervention, a series of negative externalities arising from the way the digital sector operates will bear increasingly harmful effects for civil society, particularly across the Global South.

This is a tolling bell: efforts to reform the digital sector should be redoubled in response to the outbreak and expanded in a globally inclusive way—especially as the dominant digital firms stand to consolidate their market position and power as a result of the pandemic and its economic fallout. Recognizing the work already begun by other initiatives, our focus will be on providing an inclusive platform for leading and emerging thinkers from diverse professional and geographical backgrounds. We will accomplish this by cultivating an inclusive working group of policy experts, scholars, and civil society activists drawn from around the world, prioritizing perspectives from the Global South.

Get Involved

Thank you for taking the time to connect with Initiate: Digital Rights in Society. Your perspective and ideas are important to our shared work. We are eager to collaborate with experts and organizations from around the world interested in participating in our working groups and other activities. Please let us know how you would like to be involved and our team will respond as soon as possible.