Uncanny Valets

Ideas about machine intelligence in both East and West still reflect some key cultural divides.

Matthew Craven

Amanda Rees is a historian of science at the University of York.

Mistrust of machine intelligence abounds in Anglo-American popular culture, from the world’s most famous genocidal cyborg, the Terminator, to the fearful anticipation of the “singularity,” the point at which machines become smarter than humans. Not so much in the East, however — China, Japan and South Korea are far more welcoming, with 65% of Chinese respondents to a 2018 survey saying they expected robots and AI to enhance future job opportunities.

Why would this be the case? Different cultures, politics, industry, history — religion, even? Or does the answer lie less in attitudes towards machine intelligence and more in the assumptions that people from different cultures make about each other? 

Europeans first saw robots emerging from the East. From the early Middle Ages onward, European travelers and traders returned with reports of amazing machines on display. According to Liutprand of Cremona, the Byzantine emperor’s throne of Solomon featured singing birds and roaring lions; William of Rubruck described a trumpet-blowing angel that played in Karakorum, the capital of the Mongol Empire. And sometime around 807, Harun al-Rashid, the caliph of Baghdad, sent Charlemagne, the emperor of the Romans, a gift: a water clock in which mobile knights marked the passing of time. To Charlemagne’s court, this was an astounding and unique gift — a “marvelous contrivance.”

The “Book of Ingenious Devices,” which described the automata invented by the Banū Mūsā brothers — proficient mathematicians, engineers and astronomers — was by then circulating widely throughout the Muslim world. Such machines, capable of independent movement and made in the shape of natural objects, were vivid and unmistakable demonstrations of the technological and aesthetic superiority of the East. 

Nowadays, ideas about machine intelligence in both East and West still reflect some key — and sometimes problematic — cross-cultural divides, which in turn rely on a number of assumptions about the nature of labor and the value of the individual. Perhaps most importantly, they are also profoundly influenced by the different ways in which “humanity” and the idea of “the human” are defined and understood. 

Most broadly, cultures deriving from monotheistic traditions suffer far more from dualistic thinking than do other societies, which has critical implications for the assessment of machine intelligence. Both the mind/body binary and the strict policing of the human/animal division, for example, help create a situation where humans are seen as uniquely dominant over all creation, an ascendancy based on characteristics (a soul, intelligence, free will, special creation, language) that only humans possess. Machines, themselves specially created, seem to show intelligent autonomy and can very easily be seen as a threat to that unique human status. 

“Ideas about machine intelligence are profoundly influenced by the different ways in which ‘humanity’ and the idea of ‘the human’ are defined and understood.”

Cultures deriving from other traditions — Shintoism, Daoism, Buddhism — don’t treat the world in that binary way. If an animal or a place can be endowed with spirit or soul, then why can’t a machine? Instead of threatening to out-compete the human occupation of a particular eco-cultural niche, machine intelligence would just add another aspect to the complex, constantly interacting world revealed by these more holistic approaches. 

More specifically, there are at least three different ways in which East and West are thought (at least by Westerners) to differ with regard to the social status of robots, all of which turn on the different ways in which labor — routine, creative or emotion work — is understood and carried out. 

Firstly, just as animals once did, machines can substitute for and improve on the physical labor of humans. Not only do they provide brute strength, but they can more reliably carry out tasks requiring speed, dexterity and accuracy. Initially used in factories, they are now ubiquitous in workplaces worldwide. But whether in the office or the factory, in the form of weaving machines or computer chips, automation is regarded as a real threat to the livelihoods and, ironically, to the autonomy of Western workers.

As many sociologists and economists have demonstrated over the years, automation enables the de-skilling of both trades and professions, making it possible for management to treat human beings themselves as interchangeable cogs in a corporate machine. So automation is a threat in the West on two levels: not only as a potential replacement for human workers, but also as an ideal type — a model for the behavior and supervision of any remaining human workers. In this context, it’s clear how robots come to symbolize the loss of free will and the elimination of individuality.


What’s interesting when it comes to understanding Western vs. Eastern visions of the robot future is that this fear also reflects some pervasive Western stereotypes of the East: that bureaucratic control, whether technocratic or totalitarian, radically restricts individual freedom through enforced central planning. It’s a profoundly culturally specific anxiety: the threat to the individual doesn’t actually arise out of automation but is rooted in the Western (Anglo-American?) cult of individual competition rather than collective social responsibility. In other societies — Japan and South Korea, for example — where workers have been replaced by robots, steps are taken to retrain and redeploy them, helping them to take advantage of the new opportunities created by automation.

At the same time, the threat in the West is not just that workers will lose their jobs to robots, but that they’ll be managed by them. Increased surveillance of the workforce via cameras, keyboards and motion sensors means that, whether working from home or in the factory, an employee’s every micropause can be monitored, measured and penalized, with staggeringly bad impacts on mental and physical health. Algorithms and automation mean that Frederick Taylor’s early 20th-century system of optimizing worker performance through detailed management of time and motion can, in the 21st century, be taken to a phenomenal level of control. And it’s important here to realize that the people most directly affected by these Taylorist innovations often belong to the least powerful communities within our societies. These workers are not being replaced by machines, but they are being treated like them in ways that are experienced as inhumane.

Robots also challenge the way in which intellectual labor — and in particular, the role of science and the scientist — is understood. Two of the key characteristics used to differentiate between humans and robots in Western science fiction, for example, are creativity and emotion. The robot Andrew Martin, in Isaac Asimov’s “Bicentennial Man,” first manages to assert his humanity — or at least, his claim to be more than just a robot — by demonstrating his capacity for both artistic creativity and compassion. This reflects (again) another key stereotype in Anglo-American culture: despite strenuous efforts to demonstrate the abiding relationships between them, science, creativity and art are seen as separate domains. Emotional response, in particular, is generally dismissed as irrelevant to scientific engagement. 

Robots — products of the natural sciences — are in this sense caricatures of scientists in the West. Unable to apprehend emotion and incapable of engaging with nuanced social interactions — despite their coldly rational cognitive brilliance — robots are a threat because they have brains but no heart or conscience. Machine (scientific?) intelligence is dangerous because its intellectual brilliance is not matched by empathy or by ethical consideration: Frankenstein created a monster by failing to care humanely for his offspring.


In other cultures, the (presumed) separation between science and society on which this ethical detachment is based is not so profound. In South Korea, for example, the government stresses not just STEM (science, technology, engineering and math) teaching but STEAM — all of the above, plus the arts. As in Japan, and in contrast to the West, scientists and engineers are much more closely integrated into society.

Partly, this arises from the fact that industrial technoscience, despite deep historical roots in the Islamic world and China, arrived as a 19th-century imperial import: Precisely because they were initially alien, there was a profound need, and desire, for these skills and practices to be indigenously assimilated (even, sometimes, via science-fiction stories). In the post-WWII period, economic growth and cultural confidence were closely tied to automation and robotics; their advent was the source of regional and community pride rather than anxiety. 

At the same time, however, it was a Japanese roboticist, Masahiro Mori, who first identified the “bukimi no tani” — the “uncanny valley,” the sharp dip in emotional response whereby an object that closely but imperfectly resembles a human being becomes deeply unsettling. Objects that lie on the boundaries between cultural categories might be less unsettling in the East than in the dualistic West, but quasi-humans (robots, zombies, vampires … angels?) can make humans queasy, regardless of cultural background. Japan might have 400 years of experience of animate machines in Karakuri puppets, but the Astro Boy manga series demonstrates that compassion can still be reduced to calculation (by the engineer as well as the robot).

What this shows is that cross-cultural attitudes are actually much more nuanced and ambivalent than easy distinctions between West and East might suggest. Consider the ways people from different cultures respond to the use of robots for emotional labor. There is a growing mismatch in developed economies between the number of people needing care and the number of people willing or able to provide it. This, coupled with increasing hostility toward immigration in some countries (immigrants often do care work), means that technological fixes are being actively sought.

Some governments and organizations see the solution in robots that can provide physical and social support — medicine, childcare, domestic labor — to communities in need. Crucially, robots won’t just be cleaning the floors and toilets — they’ll also be doing emotion-work.

“Robots reflect the politics of racial division.”

This emotional labor is one of the key justifications for continuing to construct and experiment with humanoid robots, the uncanny valley notwithstanding. In both West and East, scientists and artists have explored the capacity of robots to become part of a relationship. The play “Sayonara,” for example, written and directed by Oriza Hirata with input from robotics experts from Osaka University, features a robot reading aloud to a dying woman. “The Electric Grandmother” (a made-for-TV movie based on a “Twilight Zone” episode scripted by Ray Bradbury) focused on a family dealing with parental loss by buying a grandmother-shaped android to do their mothering for them. 

Both examinations of robot-generated emotional support demonstrate how hard humans have to work to create it. Bradbury’s grandmother mirrors (in appearance and behavior) the motherless children; Hirata’s dying woman must initiate contact with her robotic nurse. 

The MIT academic Sherry Turkle and her colleagues studied children interacting with Kismet and Cog, humanoid robots. They charted the lengths to which people will go in order to make sense of even a badly functioning robot’s participation in an interaction or relationship. People from both Western and Eastern cultures are clearly willing to accept some humanoid robots as appropriate relationship partners — but at the same time, in order to be able to do that, they have to do a lot of emotional and interactive work on the robot’s behalf.  

But why should machine intelligence come in human form? We already share our lives with non-human intelligence in the form of the domestic animals that inhabit our homes and working spaces. For millennia, animals have been used to facilitate human emotional and social needs, to augment human physical capacities and to express human identities. 

What if animal intelligence were used as the model for a form of AI that enabled us to expedite our navigation of the uncanny valley? During the pandemic, for example, crucial emotional support for the elderly has been provided not by humanoid robots but by robotic animals. Paro, a therapeutic robot in the form of a baby seal, is used to reduce stress in care homes and to facilitate relationships between residents and caregivers. Companion animals play an immensely important role in many people’s lives, and their impact can be measured in the grief expressed at their loss. Even Britain’s royal family has a pet cemetery at Sandringham. Would it matter if they were robotic, rather than biological, in origin? The Buddhist funerals held for robot pets, “killed” when their software was no longer supported by the relevant company, would suggest not.

Or perhaps what we need instead is to acknowledge that a significant part of the Western perception of AI and robots as unsettling and uncanny is linked to a deep-seated association between automata and the East, not to mention the mutilating legacy of slavery in a culture ostensibly committed to individual freedom and dignity. 

From the ethical problem of Asimov’s robots kept in eternal serfdom through the imposition of the “three laws” to the question of how technological innovation untrammeled by moral judgment might change the human future, robots reflect the politics of racial division. In the context of the current debates between China and the U.S. about the wider weaponization of AI, and the potential for a Sino-American arms race, that’s a difficult issue that (white) Westerners urgently need to consider.