AI Makes Us Less Intelligent And More Artificial

Understanding how algorithms are training our minds.


Nathan Gardels is the editor-in-chief of Noema Magazine.

While intelligent machines can outperform humans in manifold tasks, as well as learn new ones, they literally do not understand what they are doing. Understanding comes from context. The uniquely human labor of filling in the cracks between bits of data with unprogrammable awareness is what creates meaning and constitutes a whole reality. Yet, the more our minds are trained by daily interactions with digital technologies to think like algorithms that lack understanding, the less intelligent and more artificial we ourselves become.

This, in a nutshell, is the lucid thesis argued by Shannon Vallor, a professor of the ethics of data and artificial intelligence at the University of Edinburgh, in Noema this week. “Understanding does more than allow an intelligent agent to skillfully surf, from moment to moment, the associative connections that hold a world of physical, social and moral meaning together,” she writes. “Understanding tells the agent how to weld new connections that will hold under the weight of the intentions, values and social goals behind our behavior.”

To make her case, she evaluates the most advanced AI-powered language-generation program to date, GPT-3, which, when prompted, produced an essay on consciousness. “The instantaneous improvisation of its essay wasn’t anchored to a world at all,” she observes. “Instead, it was anchored to a data-driven abstraction of an isolated behavior-type, one that could be synthesized from a corpus of training data that includes millions of human essays, many of which happen to mention consciousness. GPT-3 generated an instant variation on those patterns and, by doing so, imitated the behavior-type ‘writing an essay about AI consciousness.’”

Despite the essay’s subject, she continues, “understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated computation or behavior, no matter how clever … [W]hen we talk about intelligent machines powered by models such as GPT-3, we are using a reduced notion of intelligence, one that cuts out a core element of what we share with other beings. This is not a romantic or anthropocentric bias, or ‘moving the goalposts’ of intelligence. Understanding, as joint world-building and world-maintaining through the architecture of thought, is a basic, functional component of human intelligence. This labor does something, without which our intelligence fails, in precisely the ways that GPT-3 fails.”

In a telling complement to her thesis, when Facebook’s oversight board last week overruled four of the five post removals it reviewed, it did so precisely because it found a lack of contextual understanding behind those decisions. In one case, a post was removed for quoting the Nazi propagandist Joseph Goebbels, but on examination the quote was used not to promote hate speech but to compare Goebbels to Donald Trump. In another, an algorithm flagged an image of nipples as pornography when in fact the post, on a Portuguese-language site, concerned breast cancer awareness.

One of Vallor’s more disturbing insights about how AI diminishes the human labor of understanding suggests that conspiracy-minded zealots like those who recently stormed the U.S. Capitol have already adopted an algorithmic way of thinking. “Extremist communities, especially in the social media era, bear a disturbing resemblance to what you might expect from a conversation held among similarly trained GPT-3s,” she says. “A growing tide of cognitive distortion, rote repetition, incoherence and inability to parse facts and fantasies within the thoughts expressed in the extremist online landscape signals a dangerous contraction of understanding, one that leaves its users increasingly unable to explore, share and build an understanding of the real world with anyone outside of their online haven.”

We should all be worried about where our information civilization is headed if we don’t become digitally woke: As our minds become ever more adapted to absorbing information that shapes our reality through algorithmic filters that don’t possess the uniquely human capacity of understanding, we will become more like AI than AI becomes like us.