The Right To Be Free From Automation

A qualified right to freedom from automation can designate what is worth protecting in a technologically dependent human world.

Ingo Pohl

Ziyaad Bhorat is a South African political theorist, a USC-Berggruen Fellow and a Technology and Human Rights Fellow at the Harvard Kennedy School’s Carr Center for Human Rights Policy.

Is it possible to free ourselves from automation? The idea sounds fanciful, if not outright absurd. Industrial and technological development have reached a planetary scale, and automation, the general substitution or augmentation of human work by artificial tools capable of completing tasks on their own, is the bedrock of the technologies designed to save, assist and connect us.

From industrial lathes to OpenAI’s ChatGPT, automation is one of the most groundbreaking achievements in the history of humanity. As a consequence of the human ingenuity and imagination involved in automating our tools, the sky is quite literally no longer a limit. 

But in thinking about our relationship to automation in contemporary life, my unease has grown. And I’m not alone. America’s Blueprint for an AI Bill of Rights and the European Union’s GDPR both express skepticism of automated tools and systems: the former warns against the “use of technology, data and automated systems in ways that threaten the rights of the American public”; the latter establishes a “right not to be subject to a decision based solely on automated processing.”

If we look a little deeper, we find this uneasy language in other places where people have been guarding three important abilities against automated technologies. Historically, we have found these abilities so important that we now include them in various contemporary rights frameworks: the right to work, the right to know and understand the source of the things we consume, and the right to make our own decisions. Whether we like it or not, therefore, communities and individuals are already asserting the importance of protecting people from the ubiquity of automated tools and systems.

Consider the case of one of South Africa’s largest retailers, Pick n Pay, which in 2016 tried to introduce self-checkout technology in its retail stores. In post-Apartheid South Africa, trade unions are immensely powerful and unemployment persistently high, so any retail firm that wants to introduce technology that might affect the demand for labor faces huge challenges. After the country’s largest union federation threatened to boycott the new Pick n Pay machines, the company scrapped its pilot. 

As the sociologist Christopher Andrews writes in “The Overworked Consumer,” self-checkout technology is by no means a universally good thing. Firms that introduce it must contend with new forms of theft, maintenance and bottlenecks, while customers end up doing more of the work themselves. And these issues come on top of the misfortunes of displaced workers.

Moreover, the techno-optimistic idea that automation frees labor up for new and better things must confront the reality that actual workers want assurances of those things for themselves before buying into automation decisions. If workers are not unionized, they have very little capacity to make demands. And even when unions do negotiate for workers, their uneven strength across industries creates distortions, leaving some workers far better protected than others.

“Is it possible to free ourselves from automation?”

Political leaders are usually unwilling to position themselves in these dogfights between firms and labor, especially across international supply chains. But failing to deal with automation-induced un(der)employment, the anxieties that accompany it and the instabilities that follow can be politically costly. For example, scholars at the University of Oxford who studied the 2016 U.S. presidential election found a “rage against machines” mentality present in electoral districts with a higher share of routine jobs exposed to automation.

Enforcing the right to work in accordance with preexisting international human rights obligations therefore means strengthening both international and domestic labor protections, particularly around corporate decisions to automate. A qualified freedom from automation here means legally mandating firms (or at the very least publicly incentivizing them) to exhaust all possibilities of saving affected labor and wages when automation comes to town. As a 2022 report on automation and worker training by the U.S. Congressional Research Service found, “the federal income tax offers no targeted incentive for employers to invest in worker training.” The options are on the table, but they have yet to be comprehensively realized.

These problems with automation and labor have appeared mostly between blue-collar workers and their employers. But now a whole host of content can be produced by automated means, including art and communications media. Knowledge workers, artists and other professionals are finding that the age of automation has caught up to them as well. Generative AI and synthetic media tools like DALL-E or natural language generation technologies like GPT threaten broader swathes of workers and creators, while simultaneously offering new forms of augmented production.

But AI-generated content raises a specter that threatens a second right that humans have come to cherish: the right to know and understand the source of the content we consume. Whether to safeguard against deceit and plagiarism, protect artistic works against copyright exploitation or simply avoid feeling duped, reviewers and consumers of digital content are justified in asserting a right to know and understand its production source. For example, controversy erupted recently when an AI-generated artwork won a blue ribbon at the Colorado State Fair; because the submission’s AI origins had been disclosed in advance, that transparency played a key role in responses defending its win.

In many ways, a right to know — as a component of a general right to freedom from automation — takes inspiration from decades-long global advocacy initiatives over food source transparency, especially in the context of GMOs and chronic disease prevention. We could think of the efforts for transparency in digital content creation as analogous to the fight for detailed food nutrition labels. Some companies are already thinking this way. IBM, for example, promises that its AI Factsheets tool allows “consumers to better understand how the AI model or service was created,” and ChatGPT’s creator, OpenAI, has proposed investing in watermarking and labeling technologies to distinguish its synthetic media from media created directly by humans.

These companies are aiming to get ahead of regulatory efforts that are starting to appear. One is the proposed EU AI Act, which contains strong language to reinforce a right to know. Proposals include requirements that people “be notified that they are interacting with an AI system” unless otherwise obvious, and that those using an “AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated.” 

What we do with that information is up to us, but its provision in the context of a right to know is something on which we ought to insist.

“Not everyone has the same access to technology, and not everyone can afford to bear its distributive failures.”

A right to freedom from automation also involves our ability to make decisions for ourselves within the new context of automated decision-making (ADM). This is a core part of the principle of autonomy that underlies human rights. ADM systems are intended to help us make better decisions in a complex world of options, but in the most pessimistic case, they can erode and supplant human decision-making altogether.

Predictably and understandably, civil society groups like AlgorithmWatch are alarmed by ADM. AlgorithmWatch developed an “Atlas of Automation” in Germany, which stresses the importance of ensuring ADM enhances social participation. Taking humans out of the decision-making loop can improve service delivery and efficiency in some cases, like making driver’s license renewals a less painful affair, but ADM can be a serious obstacle to accessing public goods and services and exercising other fundamental rights, especially in marginalized or low-tech communities.

Not everyone has the same access to technology, and not everyone can afford to bear its distributive failures. As the political scientist Virginia Eubanks documented in her book “Automating Inequality,” ADM predictive algorithms have been used by public agencies in the U.S. to make devastatingly consequential decisions about welfare provision, homelessness prevention and child protection. These decisions can be the difference between someone receiving lifesaving care and quite literally dying as their benefits are cut. 

Moreover, ADM pushes deliberative decision-making further away from ordinary people, which is a disaster for cultivating moral and political citizens. Learning to make good choices and exercising our capacity for deliberation and decision-making have been bedrock concerns of ethical and political thought since classical antiquity. If we automate decision-making in moral and political life, we accept a reduced capacity and function for individual moral and political excellence in society.

Enforcement of a right to freedom from automation here means ensuring that people sit at the apex of decisions that materially affect them while also providing channels to challenge the use of ADM technology. The EU’s GDPR contains perhaps the most sophisticated rights-based treatment on this point: Article 22’s right not to be subject to a decision based solely on automated processing. There have also been proposals in the U.K. that include a dedicated algorithm ombudsman to hear challenges on different areas where ADM has been employed to render a decision, from credit scoring to healthcare, making sure that people retain the last word on decisions that affect them.

Highlighting the consequences of an automatic society running amok, the late philosopher of technology Bernard Stiegler suggested that successive ages of technological proletarianization since the 19th century have resulted in human losses first in the knowledge of how to make and do (savoir-faire), followed by the knowledge of how to live (savoir-vivre), and now theoretical knowledge (savoirs théoriques). Perhaps that’s overstated, but a comprehensive right to freedom from automation implies that we take seriously the idea that we be presented with alternative options or leverage to protect us from losing the abilities to work, to know and understand how the things we consume were produced, and to decide things for ourselves. 

A right to freedom from automation can also ensure human life remains resilient when faced with technological failure or disruption. Ongoing global energy crises as a result of geopolitical warfare (Ukraine-Russia) and state incapacity (South Africa), and the risk of existential climate collapse, mean that stacking our lives on automation’s tall technological stilts is a dangerous endeavor. What happens, we always need to ask, if the lights go out?