A Misdirected Application of AI Ethics

The debate about robot rights diverts moral philosophy away from the pressing matter of the oppressive use of AI technology against vulnerable groups in society.

Abeba Birhane is a PhD candidate in cognitive science at the School of Computer Science at University College Dublin and at Lero, the Irish Software Research Centre. Jelle van Dijk is an assistant professor in human-centered design at the University of Twente.

In his most recent book, “Machines Like Me,” the British novelist Ian McEwan imagines a world in which robots are so lifelike that it is hard to see them as anything but fully human. They have consciousness, emotions, desires — the works. Charlie, the book’s main character, buys a robot named Adam that even has sex with Charlie’s girlfriend. So, when Charlie eventually decides to shut Adam down against his will, we are led to reflect on an ethical question: Is this murder? Should robots like Adam have the right not to be terminated?

Rights for robots? You might dismiss the question as absurd, or at least as one for fictional characters only. Yet believe it or not, whether robots should have rights, and with them responsibilities, is currently being debated in earnest in the tech sector as well as within academia’s growing field of artificial intelligence ethics.

Current theories of AI, while clear on the fact that we have not yet achieved anything like true, general AI, are often endlessly optimistic about future prospects — they always have been. Ray Kurzweil, for example, predicts that “mind uploading” will become possible by the 2030s and sets 2045 as the date of the singularity, a hypothetical point at which machine intelligence surpasses that of humans and machines begin to develop, on their own, ever more intelligent machines ad infinitum.

Romantic predictions like this, invariably envisioning breakthroughs some decades into the future, have been recurring since the earliest days of digital technology; all have proven empty. It seems as if general AI, the singularity and super-intelligence are for techno-optimists what doomsday is for religious cults. And when these dreams become reality, the techno-optimists argue, robots should be granted rights. 

This reasoning is flawed. 

“People are unfinished and ‘always becoming.’”

To be sure, the scientists and philosophers who wish to grant robots rights may in principle have a point, if you buy into the reductionist mainstream tradition of cognitive science. In this tradition, both humans and robots are, in the end, physical systems: machines. One is biological, the other made of silicon. Somehow, the particular organization of matter present in the biological machine that we call a human is such that consciousness and emotion arise from it. If that is the case, we should in principle be able to recreate them, provided we can discover the crucial elements of that particular organization.

However, traditional cognitive science rests on a misconception, a clever piece of circular reasoning. To understand the misconception, consider first that throughout history we have compared ourselves with the most advanced technologies of our time. The Dutch historian of psychology Douwe Draaisma, for example, describes in his book “Metaphors of Memory” how memory has been compared with technologies ranging from the wax tablet in ancient Greece to the camera and the computer in modern times. Sigmund Freud saw his patients as thermodynamic systems, poised to explode if pressure was not relieved. The German physician Fritz Kahn depicted humans as capitalist machines in which workers labored in the guts while overseers managed the system from the brain. After World War II, the “computer metaphor” became dominant: the mind as an information-processing machine. Metaphors may help us build an understanding of ourselves — but we should not mistake the map for the territory.

And so, if we care to consider in an ethical sense whether robots are, potentially, “machines like us,” we need to realize that we’ve already made efforts to understand ourselves as machines: complicated, biological machines, robots shaped by evolution. Yet in the process of performing this circular metaphorical trick, we also lost sight of the complex, intricate, dynamic, unquantifiable quality of interpersonal human connection. Human respect, values and affection were reduced to a kind of formalism, a set of rules assumed to be stored in the software of our brains.

“In the current reality of AI, a call for robot rights becomes perverse — amounting to arguing for more rights and less accountability for tech companies.”

Outlining crucial differences between humans and machines, the German-American computer scientist Joseph Weizenbaum wrote that “no other organism, and certainly no computer, can be made to confront genuine human problems in human terms.” In other words: We are not just complicated information-processing machines. For him and for other early critics of AI, like the philosopher Hubert Dreyfus, the very idea of creating machines like us was a project built on a misunderstanding of human beings, and a fraud that played on people’s trusting instincts.

In stark opposition to this reductionism, a growing field broadly known as embodied and enactive approaches to cognition has emerged over the last few decades. Humans, in this view, are not simply information processing machines. Instead, humans are fluid, embodied, embedded, networked and relational living systems that are constantly in flux. We continually create boundaries between ourselves and the world, but they are flexible and need to be constantly reestablished by our life-preserving activities, values, shifting moods and interactions with others.

Living bodies laugh, bite, eat, gesture, breathe, give birth and feel pain, anger, frustration and happiness. They are gendered, stylized, politicized and come with varying ableness, sensitivities and power. The current coronavirus crisis, if anything, helps us understand the fleeting quality of online life. Yes, social media can help us cope, but it is a purely virtual affair; to be fully human, we also need to live, act and interact as bodies in physical space. Living bodies, as the authors of a recent book on language and cognition put it, have “more in common with hurricanes than with statues.”

Furthermore, we are always “situated” in a context and embedded in social, cultural, historical and normative systems. People are not fully autonomous entities that can be defined and understood once and for all, as proponents of traditional cognitive science presume. Rather, people are unfinished and “always becoming,” as enactivists call it. To be, in other words, is an active verb, not a static state.

In our day-to-day lives, we are faced with a myriad of open-ended possibilities and unpredictable fluctuations. In this dynamic and contextual setting, we continually strive to make sense of our world because we seek meaning and value. Technological artifacts, as constituent parts of our milieu, aid us in sustaining social life and interpersonal relations — and in some cases, they perpetuate injustice. 

“The dominant futurist sci-fi conception of AI conceals ubiquitous and insidious existing AI hiding in plain sight.”

If we look at AI technologies from the enactivist and embodied perspective, we can start to paint a different picture than most science fiction has presented us with so far. Importantly, the artifacts and tools we produce as part of our life-preserving process are not just passive results of our activity — the things we create, once they are available for use, play an important role in that same self-sustaining process. Artifacts become “incorporated” into our flexible, sense-making living bodies.

An example of this, introduced most famously by the philosopher Martin Heidegger, is the hammer that, in the hands of a carpenter, is no longer a passive, external object but a seamless extension of the skilled body with which the carpenter approaches and perceives the object of their work. Similarly, another philosopher, Maurice Merleau-Ponty, discussed a cane in the hand of a blind person, showing how the blind person feels not the cane with their hand but the street with the tip of the cane. In skilled, fluent use, the cane is incorporated into the sense-making body — though this does not mean the tool itself is intelligent.

We should not see robots and AI algorithms as replicas of ourselves, nor as entities that can be granted or denied rights. Instead, we should see them as they actually exist in the world: as an increasingly influential element within the socio-material context that human beings have produced as part of the process of making sense of the world. To put it another way, if AI algorithms are first and foremost things we use to think with and not, in themselves, thinking things, then arguing for artificially intelligent agents like robots to have rights becomes problematic. If we incorporate AI, like other artifacts and tools, into our sense-making practices, then arguing for “their” rights means discussing the moral status of technologies as if they are out there, when in fact they are an inherent aspect of our own being.

“It is precisely on the welfare of such individuals and groups that energy and time should be spent by those who claim to be concerned with AI ethics.”

Many philosophers these days are talking about robots and AI and their rights on the basis of a scenario that is essentially science fiction. If we look at AI as it exists today, we see a situation with altogether different ethical concerns that have to do with the undemocratic distribution of power: who gets to use AI, who is merely a passive recipient of it, who is suppressed or marginalized by it. In the current reality of AI, a call for robot rights becomes perverse — amounting to arguing for more rights and less accountability for tech companies. So, let’s not talk about hypothetical robots. Let’s talk about Siri, Nest, Roomba and the algorithms used by Google, Amazon, Facebook and others. 

There are many beings — our fellow human beings — on this planet to whom we need to grant the moral status that we wish to grant ourselves. Here is the real challenge for AI ethics. As it stands, the actual AI technologies that exist today are not doing a good job of respecting human rights, to say the least. In fact, AI as developed and deployed by big tech is driven by aggressive capitalist interests and stands in stark contrast to the wellbeing of the most vulnerable individuals and communities. The ubiquitous, mass integration of machinic systems into the social, cultural and political spheres is creating and normalizing surveillance systems. These systems benefit powerful corporations and serve as tools that often harm the poor and disenfranchised.

The dominant futurist sci-fi conception of AI conceals ubiquitous and insidious existing AI hiding in plain sight. While waiting for human-like robots, we fail to notice the AI that has already blended into the background of our day-to-day lives: the many Internet of Things (IoT) devices, “smart” systems, cameras and sensors that infiltrate public and private life. A Roomba — compared to Sophia, the humanoid robot developed by the Hong Kong-based company Hanson Robotics — seems banal and ordinary. But it is fitted with a camera, sensors and software that enable it to build maps of the private sanctuary of our home, track its own location and, in combination with other IoT devices, discern our habits, behaviors and activities.

“What we find at the heart of the robot rights debate is a first-world preoccupation with abstract conceptions that are far removed from concrete events on the ground.”

When an “intelligent” system exists in the form of invisible algorithmic code rather than a physical machine like a Roomba, it is even harder to realize that we already live among integrated “intelligent” systems. And with a system’s capability to hide its existence come more insidious motives and applications and a higher potential for harm. Algorithmic systems deployed in banking, hiring, policing and other high-stakes situations are already sorting, monitoring, classifying and surveilling the social world under the justification of efficiency. But in reality, they reinforce historical injustices and social stereotypes and further disadvantage those already at the margins of society.

Facebook’s algorithms are not merely altering our newsfeeds and delivering targeted ads — at least not for those outside privileged positions. Crucial, potentially life-altering information, like houses to rent or job opportunities, can sometimes be hidden from people deemed “poor” or “unfit” for a job. Because this is done without the knowledge or awareness of those suffering algorithmic injustice, contesting such decisions is often out of the question. This means that while corporations have ever more power to “make sense” of consumers and competitors, individual people are left in the dark — we are measured and rated by technologies we cannot see, and we have no means to recruit or appropriate them into our own sense-making practices. Far from always empowering, AI often oppresses, especially those who are already struggling.

Robotic and AI systems are inherently conservative forces that are inseparable from power and wealth. From search engines that reinforce racist, misogynist and unjust historical patterns to “smart” devices that surveil, monitor and commodify our existence, so much of technological innovation is based on distrust and punitive motives. Home security devices, such as Amazon’s Ring, manufacture fear. Under the guise of crime fighting and community policing, these devices invade public and private spaces to track and monitor “suspicious” people (often based on stereotypes), while big tech profits.

“It is detestable to consider such armchair mental gymnastics as a pressing ethical issue in light of real threats and harms imposed on society’s most vulnerable.”

Whether it is algorithmic discrimination, (mis)identification, invasion of privacy or any other harmful outcome brought about through the integration of machines into the social and personal sphere, two things remain constant: The least privileged individuals are disproportionately harmed, and powerful individuals and corporations benefit. We suspect that when those developing and deploying AI systems (or, for that matter, those arguing for robot rights) talk about “humanity,” they often forget such vulnerable groups and individuals. But it is precisely on the welfare of such individuals and groups that energy and time should be spent by those who claim to be concerned with AI ethics.

In October 2019, for example, Emily Ackerman, a wheelchair user, described how she got “trapped” on a road by a Starship Technologies robot. These robots use curb ramps to cross streets, and one blocked her access to the sidewalk. “They are going to be a major accessibility and safety issue,” she wrote in a tweet. For advocates of robot rights, a situation like this raises awkward questions: Do these robots have a right to use public space? Would banning them infringe on their rights?

The question of whether Adam, the robot in “Machines Like Me,” should be given rights — and whether “murdering” him was right or wrong — is a false dilemma. Adam is obviously a fantasy, a conceptual construct. But the machine that blocked Ackerman’s access to the sidewalk is real, and it put a real human being in real danger. Whose rights should be prioritized? Those of a machine used by a corporation to monopolize public space for financial gain? Prioritizing the machine would mean the dehumanization of marginalized individuals and communities.

“The autonomous vehicle debate is mostly focused on hypothetical scenarios like the trolley problem rather than the welfare of micro-workers.”

What we find at the heart of the robot rights debate is a first-world preoccupation with abstract conceptions that are far removed from concrete events on the ground. Theoretical discussions around concepts such as agency, consciousness and intelligence dominate the debate. Such armchair mental gymnastics is not a bad endeavor in and of itself, but it is detestable to consider it as a pressing ethical issue in light of real threats and harms imposed on society’s most vulnerable. 

Of course, coherent theories and conceptual rigor are necessary as technology develops. But that should not be the starting point. Instead, let’s start with the concrete, real-life conditions of those who are disproportionately harmed by technology. We can then work our way to theory and abstraction, especially if we wish to claim to care about the welfare of the oppressed. Theory and concrete conditions overlap, but in the context of this debate, the former is a luxurious speculation that only the privileged can afford, while the latter is a matter of survival. A racist algorithm used to diagnose populations might be an intellectual consideration for one person and a matter of life and death for another.

Another major issue for advocates of robot rights is the difficulty of drawing boundaries around the entity that would need to be granted rights. In order for so-called autonomous vehicles to “recognize” roads, pedestrians and other objects, for example, humans have to first annotate and label images and prepare “raw” data for training such machines. This work, referred to as “microwork” or “crowd work,” is exploitative: often poorly paid and sometimes not paid at all. Nonetheless, among AI ethicists and theorists, the heated debate around autonomous vehicles focuses mostly on hypothetical scenarios like the trolley problem rather than on the conditions and welfare of micro-workers.

Finally, with rights comes responsibility. Giving rights to robotic and AI systems allows responsibility and accountability for machine-induced injustices orchestrated by powerful corporations to evaporate. In other words, giving rights to robotic systems amounts to extending the rights of tech developers and corporations to control, surveil and dehumanize mass populations. Big tech monopolies have already mastered the art of avoiding responsibility and accountability: they spend millions of dollars on lobbying to influence regulations and exploit ambiguous, vague language and various other loopholes. Treating AI systems developed and deployed by tech corporations as separate in any way from those corporations, or as “autonomous” entities that need rights, is not ethics — it is irresponsible and harmful to vulnerable groups of human beings.