Rethinking Intelligence In A More-Than-Human World

If we humans are going to gain a better understanding of the vibrant world around us — and the damage we are doing to it — we’re going to need a new conception of nonhuman intelligence.


Amanda Rees is a historian of science at the University of York.

We spend a lot of time debating intelligence — what does it mean? Who has it? And especially lately — can technology help us create or enhance it?

But for a species that relies on its self-declared “wisdom” to differentiate itself from all other animals, a species that consistently defines itself as intelligent and rational, Homo sapiens tends to do some strikingly foolish things — creating the climate crisis, for example, threatening the survival of our world with nuclear disaster, or building ever-more-powerful and pervasive algorithms.

If we are in fact to be “wise,” we need to learn to manage a range of different and potentially existential risks relating to (and often created by) our technological interventions in the bio-social ecologies we inhabit. We need, in short, to rethink what it means to be intelligent. 

Points Of Origin

Part of the problem is that we think of both “intelligence” and “agency” as objective, identifiable, measurable human characteristics. But they’re not. At least in part, both concepts are instead the product of specific historical circumstances. “Agency,” for example, emerges in Renaissance and Enlightenment Europe, perhaps best encapsulated in Giovanni Pico della Mirandola’s “Oration on the Dignity of Man.” Writing in the late 15th century, Pico revels in the fact that to humanity alone “it is granted to have whatever he chooses, to be whatever he wills. … On man … the Father conferred the seeds of all kinds and the germs of every way of life. Whatever seeds each man cultivates will grow to maturity and bear in him their own fruit.”

In other words, what makes humans unique is their possession of the God-given capacity to exercise free will — to take rational, self-conscious action in order to achieve specific ends. Today, this remains the model of agency that underpins significant and influential areas of public discourse. It resonates strongly with neoliberal reforms of economic policy, for example, as well as with debates on public health responsibility and welfare spending.

A few hundred years later, the modern version of “intelligence” appears, again in Europe, where it came to be understood as a capacity for ordered, rational, problem-solving, pattern-recognizing cognition. Through the work of the eugenicist Francis Galton, among others, intelligence soon came to be regarded as an innate quality possessed by individuals to a greater or lesser degree, one which could be used to sort populations into hierarchies of social access and economic reward.

“Both intelligence and agency are grounded firmly in the lived experience of one particular human group.”

In the 21st century, the concept continues to do much the same work, with most nations making at least a token commitment to a meritocratic ideal; formal examinations test reasoning, numerical and linguistic skills alongside discipline and memory, and the results are then used to manage access to high-status schools, universities and eventually careers.

Both intelligence and agency are habitually identified as universal human attributes, part of the very definition of what it means to be human. But both are actually grounded firmly in the lived experience of one particular human group. It’s understandable that elite European scholars and gentlemen, trying to interrogate and apprehend what constituted humanity, should use their own experience as a basis for their studies. But their perspectives were necessarily limited. If asked, would their servants have pointed to free will as the defining element of their humanity? What about their slaves? Or their wives?

Sustained, protracted and sometimes vicious debates during the 20th and 21st centuries highlighted the cultural, gendered and racial biases inherent in both intelligence tests themselves and the conditions under which they can be administered. Critics of the concept have frequently complained that, far from providing an objective meritocratic measure, intelligence testing continues to reflect the experience of particularly privileged groups, justifying social divisions and the unequal distribution of scarce resources. 

The problem, however, is much bigger than the question of how to operationalize the concept of intelligence in a fair and just way. 

“Scala Naturae”?

Consider, for example, the ways the concept has been used in the study of animal and artificial intelligence.

In both cases, the paradigm of rational problem-solving cognition has placed limitations and restrictions on understanding the way that other minds might interact with the world. “Cognitive ethology” emerged as a key research area in the late 1970s as scientists began to seek out and document evidence for animal “theory of mind,” associative reasoning and tool use.

But just as elite groups of white males had earlier stood as the standard for human intelligence, human intelligence continued to be the measure of animal minds. This happens explicitly, when cognitive performance is assessed by comparing it to the capacity of a human infant or child, and also implicitly, when the types of tasks used to assess intelligence speak to human strengths. 

In particular, reflecting the significant role played by hand-eye coordination in human evolution, intelligence tests still focus on visual as opposed to aural, tactile or olfactory stimuli. Cetaceans, corvids and cephalopods have nevertheless all turned in amazing performances in both lab and field tests — but the creatures commonly assumed to be most “intelligent,” like primates, are still those that look and behave most like humans. And even within the primates there is a gradation based on resemblance and relatedness, with the Africa-based chimpanzees ranked higher than the inventive, stone-tool-using capuchin monkeys of the Americas.

The progress of artificial intelligence is also measured against the human yardstick. During the 1950s and ’60s, researchers like Marvin Minsky and Frank Rosenblatt used different methodologies to develop what they believed would, within a generation, become machine-based artificial general intelligence, a cognitive capacity equivalent to that possessed by humans. It was an overconfident and incorrect assumption. This has not, however, stopped expectations of AI from accelerating over the past decade, alongside advances in computing, data analysis, cloud technology and deep learning. Joining computer scientists in their anticipation of AI, eagerly or otherwise, are theologians, artists and the general public.

“What’s left for Homo sapiens if its eponymous unique selling point is usurped?”

But again, the idea of what constitutes “intelligence” closely resembles the earlier 19th-century model of rational, logical analysis. Key research goals, for example, focus on reasoning, problem-solving, pattern recognition and the capacity to map the relationship between concepts, objects and strategies. “Intelligence” here is cognitive, rational and goal-directed. It is not, for example, kinesthetic (based on embodiment and physical memory) or playful. Nor — despite the best efforts of Rosalind Picard and some others — does it usually include emotion or affect. 

Perhaps as a result, people tend to fear the consequences of AI. With animals, if humans are the model for intelligence, at least we will always score higher. AI discourse, on the other hand, has had a strong tendency to harp on the question of whether humanity’s creations (mechanical, biological or cybernetic) will rise up and consume their creator. This has been a major theme within AI narratives ever since Mary Shelley’s “Frankenstein,” and it continues to dominate fictional representations of machine intelligence.

For Western democracies, the history of computing is inextricably intertwined with the fear that humans will become obsolete, from the Luddite riots against mechanized weaving (the Jacquard loom was itself based on binary code) to the projected approach of the “technological singularity.” This phenomenon, an anticipated point in time at which accelerating machine intelligence will outperform and overwhelm human capacities, is regarded by some theorists as an existential threat to humanity itself. What’s left for Homo sapiens if its eponymous unique selling point is usurped, and another kind of creature climbs past it on the ladder of evolution?

All these debates about intelligence and the human future are based on the assumption that intelligence is fundamentally rational and goal-directed — that is, that the 19th-century understanding of the concept is still the most appropriate interpretation of what it is. What if it isn’t? And what about agency? What if agency isn’t self-conscious, or even based in an individual? 

Ecologies Of The Imagination

By the end of the 20th century, studies of learning and decision-making began to note the importance of play and the significance of emotion to the development of both intelligence and agency. It became clear that emotion is central to the process of learning: It influences attention, retention and reasoning. 

Damage to the amygdalae, the areas of the brain intimately involved in an individual’s experience of emotion, creates difficulties in making decisions and reaching conclusions. In the 1960s, the renowned developmental psychologist Jean Piaget argued that play was central to learning, enabling children to familiarize themselves with skills and scenarios in safe environments. More recent ethological work on non-human animals confirms that individuals who play more are more successful (in terms of individual life span and reproductive success) than those who don’t — although the precise mechanisms linking play with these outcomes remain unclear.

Even more importantly, when it comes to considering both AI and risk, researchers have recently begun to pay much more attention to the significance of stories in understanding public behaviors and decision-making. For a long time, “telling stories” was categorized as just another form of play — something that you did for, or to, children or the child-minded. But the startling growth in both the range and scale of digital and analog entertainment platforms has demonstrated the economic weight of the imagination.

Play, it turns out, is serious work. Fairy tales may well prove more useful than factor analysis in understanding human agency in the Anthropocene. This is because stories are vitally important in both explaining and expanding an individual’s understanding of a situation. Particularly in the past decade, the West has seen how stories (myths, post-truths, history) help form collective community identities, which can sometimes exacerbate inter-community tension. 

But stories that involve multiple points of view also encourage readers or viewers to see the world from a different perspective. Different stories and different sensory orientations can open up different opportunities for dialogue and insight. And — since every story has a beginning, a middle and an end — they can build different models of the world around us, creating more resources with which to manage risks.

“One of the most distinctive and universal characteristics of humanity is our ability to cooperate.”

Again, ethologists have insights to offer here. Wittgenstein might have claimed that, if lions could talk, humans could not understand them, but decades of work on animal behavior have demonstrated both that animal minds are different and that it is possible to devise experiments to explore them. 

Stories, whether we are consuming them or acting them out, can themselves be considered experimental interventions, as we play out what the (economic, emotional, logical) consequences of particular actions might be. They enable us to interact with others more effectively.

This point is important because interaction is probably the most unwisely ignored aspect of wise action in the Anthropocene. Discussion of agency in traditional economics and philosophy usually focuses on competition and the role of the (rational) individual. But when it comes down to it, one of the most distinctive and universal characteristics of humanity is our ability to cooperate. This capacity is literally built into our biology. 

Without cooperation, without the ability to form alliances, women would not be able to give birth or raise children: We would become extinct as a species. Homo sapiens is the most social of all the primates, with the capacity to work together at very different levels of complexity (the family, the village, the committee, the team, the congregation, the nation), depending on the tasks at hand.

Even more significantly — and without even touching on the question of how much of a human actually consists of bacterial DNA — we cooperate across species lines. Our capacity to form close bonds with dogs, for example, may have been a key factor in enabling Homo sapiens to out-compete other hominid species, as canines with specialized olfactory and endurance running skills collaborated with sharp-eyed humans wielding distance weapons in order to provide a food surplus for all. 

In many ways, the Anthropocene can be understood as the expression of collective multispecies agency: Without companion animals and livestock, industrial society could never have developed. It’s not an accident that we still speak of an engine’s “horsepower,” or turn to puppies for companionship in the middle of a devastating pandemic. Even in the heart of cities, we live in immensely complex more-than-human biosocial ecologies, which we often navigate without conscious awareness. 

“In many ways, the Anthropocene can be understood as the expression of collective multispecies agency.”

When it comes to anticipating what artificial intelligence might look like or do, we urgently need to get beyond our limited idea of what constitutes intelligence. We can do this first by building on our long history of multispecies entanglement, using our past and present collaborations with animals as models for the development of supporting intelligences in different fields. 

It’s hard not to be anthropomorphic here, but we now have more than half a century’s worth of efforts by scientists to study animal cognition. Their work provides us with models of how the world might be experienced by a wasp or a robin or even a chimpanzee. This can help us perceive different kinds of problems, connections and possibilities within that world.

We could potentially go further — there’s no need to be zoo-centric. Charles Darwin studied the behavior of cucumber plants in the 1860s, watching as their mobile tendrils searched for support as they climbed, coiling like a spring in two directions to secure the growing plant against environmental disturbance. And Mimosa plants, the leaves of which curl up into themselves when touched or disturbed, appear to demonstrate memory even though they lack neurons. Taking networked vegetative sapience seriously could provide important new perspectives on the broader concept of intelligence.

Within our own human experience of intelligence, we have to get beyond the rational. We have to explore the possibilities of involving emotion and embodiment in our models of artificial learning. Studies of emotion in AI currently focus on the computer’s capacity to produce emotion in the human user — not on the existence of emotion in the machine. But is that the limit of what can be imagined or achieved? 

The prospect of an emotional, embodied artificial intelligence still lies wholly in the realm of science fiction — but science fiction itself is essentially a collection of experimental narratives that enable people to test the limits of possibility and the consequences of action. It’s also important to remember the role of emotion in “Frankenstein,” the original AI horror story. This narrative, often used as shorthand for scientific hubris, is actually driven by parental failure: Frankenstein’s sin was that he didn’t provide his creation with the nurturing care it needed.

Nurture, of course, is the foundation for our most successful interspecies collaborations (dogs, horses, cats, dolphins), where learning is incentivized through emotional or physical rewards, and expectations, although often anthropomorphized, are attuned to particular relationships. Sentient machines do not exist — but we could build on our experience of relationships with sentient non-human individuals to improve both our ethical and our pragmatic expectations of AI. Rather than anticipating human extinction, adopting a collaborative, cooperative — even caring — approach to our creations is a good first step towards surviving the climate catastrophe.