Nathan Gardels is the editor-in-chief of Noema Magazine.
The movie director Christopher Nolan says he has spoken to AI scientists who are having an “Oppenheimer moment,” fearing the destructive potential of their creation. “I’m telling the Oppenheimer story,” he reflected on his biopic of the man, “because I think it’s an important story, but also because it’s absolutely a cautionary tale.” Indeed, some are already comparing OpenAI’s Sam Altman to the father of the atomic bomb.
Oppenheimer was called the “American Prometheus” by his biographers because he hacked the secret of nuclear fire from the gods, splitting matter to release horrendous energy he then worried could incinerate civilization.
Altman, too, wonders if he did “something really bad” by advancing generative AI with ChatGPT. He told a Senate hearing, “If this technology goes wrong, it can go quite wrong.” Geoffrey Hinton, the so-called godfather of AI, resigned from Google in May, saying part of him regretted his life’s work of building machines that are smarter than humans. He warned that “it is hard to see how you can prevent the bad actors from using it for bad things.” Others among his peers have spoken of the “risk of extinction from AI” as ranking with other existential threats such as nuclear war, climate change and pandemics.
For Yuval Noah Harari, generative AI may be no less a shatterer of societies, or “destroyer of worlds” in the phrase Oppenheimer cited from the Bhagavad Gita, than the bomb. This time sapiens have become the gods, siring inorganic offspring that may one day displace their progenitors. In a conversation some years ago, Harari put it this way: “Human history began when men created gods. It will end when men become gods.”
As Harari and co-authors Tristan Harris and Aza Raskin explained in a recent essay, “In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.’s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers.”
They went on:
For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence. …
Soon we will finally come face to face with Descartes’s demon, with Plato’s cave, with the Buddhist Maya. A curtain of illusions could descend over the whole of humanity, and we might never again be able to tear that curtain away — or even realize it is there.
This prospect of a nonhuman entity writing our narrative so alarms the Israeli historian and philosopher that he urgently advises that sapiens stop and think twice before we relinquish the mastery of our domain to technology we empower.
“The time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it,” he, Harris and Raskin warn. “If we wait for the chaos to ensue, it will be too late to remedy it.”
The “Terminator” Scenario Is A Low Probability
Writing in Noema, Google vice president Blaise Agüera y Arcas and colleagues from the Quebec AI Institute don’t see the Hollywood scenario of a “Terminator” event, in which miscreant AI goes on a calamitous rampage, anywhere on the near horizon. They worry instead that focusing on an “existential threat” in the distant future distracts from mitigating the clear and present dangers of AI’s disruption of society today.
What worries them most is already at hand, well before AI becomes superintelligent: mass surveillance, disinformation and manipulation, military misuse of AI and the widespread displacement of whole occupations.
For this group of scientists and technologists, “Extinction from a rogue AI is an extremely unlikely scenario that depends on dubious assumptions about the long-term evolution of life, intelligence, technology and society. It is also an unlikely scenario because of the many physical limits and constraints a superintelligent AI system would need to overcome before it could ‘go rogue’ in such a way. There are multiple natural checkpoints where researchers can help mitigate existential AI risk by addressing tangible and pressing challenges without explicitly making existential risk a global priority.”
As they see it, “Extinction is induced in one of three ways: competition for resources, hunting and over-consumption or altering the climate or their ecological niche such that resulting environmental conditions lead to their demise. None of these three cases apply to AI as it stands.”
Above all, “For now, AI depends on us, and a superintelligence would presumably recognize that fact and seek to preserve humanity since we are as fundamental to AI’s existence as oxygen-producing plants are to ours. This makes the evolution of mutualism between AI and humans a far more likely outcome than competition.”
To assign an “infinite cost” to the “unlikely outcome” of extinction would be akin to turning all our technological prowess toward deflecting a one-in-a-million chance of a meteor strike as the planetary preoccupation. Simply put, “existential risk from superintelligent AI does not warrant being a global priority, in line with climate change, nuclear war, and pandemic prevention.”
The Other Oppenheimer Moment
Any dangers, distant or near, that may emerge from competition between humans and budding superintelligence will only be exacerbated by rivalry among nation-states.
This leads to one last thought on the analogy between Sam Altman and Oppenheimer, who in his later years was persecuted, isolated and denied official security clearance because the McCarthyist fever of the early Cold War cast him as a Communist fellow traveler. His crime: opposing the deployment of a hydrogen bomb and calling for working with other nations, including adversaries, to control the use of nuclear weapons.
In a speech to AI scientists in Beijing in June, Altman similarly called for collaboration on how to govern the use of AI. “China has some of the best AI talents in the world,” he said. Controlling advanced AI systems “requires the best minds from around the world. With the emergence of increasingly powerful AI systems, the stakes for global cooperation have never been higher.”
One wonders, and worries, how long it will be before Altman’s sense of universal scientific responsibility is sucked, as Oppenheimer’s was, into the maw of the present McCarthy-like anti-China hysteria in Washington. No doubt the fervid atmosphere in Beijing poses the mirror risk for any AI scientist with whom he might collaborate on behalf of the whole of humanity instead of for the dominance of one nation.
At the top of the list of clear and present dangers posed by AI is how it might be weaponized in the U.S.-China conflict. As Harari warns, the time to reckon with such a threat is now, not after it has been realized and is too late to roll back. Responsible players on both sides need to exercise the wisdom that can’t be imparted to machines and cooperate to mitigate risks. For Altman to suffer the other Oppenheimer moment would bring existential risk ever closer.
One welcome sign is that U.S. Secretary of State Antony Blinken and Commerce Secretary Gina Raimondo acknowledged this week that “no country or company can shape the future of AI alone. … [O]nly with the combined focus, ingenuity and cooperation of the international community will we be able to fully and safely harness the potential of AI.”
So far, however, the initiatives they propose, essential as they are, remain constrained by strategic rivalry and limited to the democratic world. The toughest challenge for both the U.S. and China is to engage each other directly to blunt an AI arms race before it spirals out of control.