Nathan Gardels is the editor-in-chief of Noema Magazine.
Not having had a nuclear exchange during the Cold War remains one of history’s more under-heralded triumphs in the management of great-power rivalry. That was because, in those decades fraught with tension, the United States and the Soviet Union were able to arrive at a secure degree of certitude about each other’s strategic intentions and military capacities.
As that conflict matured, constant, direct contact between leaders at the highest levels and sufficient transparency — through arms control negotiations that reliably tallied the number of nuclear warheads and their launching platforms verified by surveillance — enabled the adversaries to establish a rough balance of power that deterred war on purpose or through misapprehension.
Today, as hostility between China and the U.S. has reached a Cold War temperature, no such comfortable certitude about intentions and capacities exists. Far from urgently meeting to fathom each other’s strategic perspective, leaders of the two nations so far are barely on speaking terms when not hurling insults across the Pacific. With the unprecedented weaponization of AI and cyber capabilities thrown into the military mix, a new opacity shrouds any firm accounting of capacities. With each side only guessing at motives and what a balance of power might actually look like, the logic of national security dictates a rapid buildup of wired arms so as not to be vulnerable, or to prevail, in any worst-case scenario in the event of open conflict.
Historians argue that such strategic uncertainty, bound up with a bloc mentality, is what ignited World War I when the perceived balance of power was tested by a minor event that amplified anxieties.
A repeat of this scenario in an age when global network platforms spread disinformation across borders, and when the same GPS-linked machine-learning intelligence that guides your car to a holiday destination also enables automated military strikes, is what worries both technologists and strategic thinkers with a wary eye on the future. It is the profoundly important theme of a new book, “The Age of AI And Our Human Future,” by the veteran Cold War strategist Henry Kissinger, former Google CEO Eric Schmidt and computer scientist Daniel Huttenlocher.
Connectivity Divides The World
The paradox that technologies of connectivity are dividing the world anew is not lost on the authors. Instead of uniting the planet in a common perspective, the evolution of AI and other tools that frame the use of data, the flow of information and the openness of expression reflect the civilizational and cultural values that undergird them and stand at the heart of divergence between East and West. “In time,” the authors predict, “an industry founded on the premise of community and communication” may end up “uniting blocs of users in separate realities … evolving along parallel but entirely distinct lines and with communication and exchange between them growing increasingly foreign and difficult.”
For the authors, this divergence is compounded by technological escape from the control of human reason historically grounded in the locality of place. As they put it, “Now day-to-day reality is accessible on a global scale, across network platforms that unite vast numbers of users. Yet the individual human mind is no longer reality’s sole — or perhaps even its principal — navigator. AI-enabled continental and global network platforms have joined the human mind in this task, aiding it and in some areas, perhaps moving toward eventually displacing it.”
The Scrambling Of Strategy
Distributed computation capacity so rapid and complex that it eludes human grasp scrambles all previous concepts of strategy that plot probable outcomes. Unlike the troops, bases, ships, planes, missile silos and warheads that could be located and counted in the past, dual-use networks can be deployed to launch contagious cyberattacks from deniable or untraceable addresses anywhere. And the more digital technologies are integrated into all aspects of life from energy grids to air traffic control to naval fleets and nuclear command and control, the more vulnerable to hostile disruption a society becomes.
In such circumstances, what margin of superiority will then be required to assure defense or victory in a conflict, and how can it possibly be measured to confidently define a power balance? “At what point does superiority cease to be meaningful in terms of performance?” the authors ask. “What degree of inferiority would remain meaningful in a crisis in which each side used its capabilities to the fullest?”
Without knowing what advances in AI and other frontier technologies rival systems are making, or which of them rivals are willing to deploy, each side will pursue any advantage for a leg up, further exacerbating uncertainty.
Facing these multiple conundrums, what to do?
To start with, the authors argue, we must accept that the idea of “total” victory or defeat is obsolete since the battlefield is the very network platforms that have become the foundation upon which our wired societies thrive. “The human mind has never functioned in the manner in which the internet era demands,” the authors warn, with the manifold crosscutting effects of network connectivity on all realms from defense to transportation to public health. This poses dilemmas “too complex for any one actor or discipline to address alone.” Thus, “victory” in each commercial or technological contest in any platform wars ahead cannot be assumed. “Instead,” strategists should humbly “recognize that prevailing requires a definition of success that can be sustained over time” in a way that does not hang us with our own disabled fiber-optic cables.
With that awareness in mind, the urgent order of business is for “leaders of rival and adversarial nations to be prepared to speak with one another regularly” while “Washington and its allies should organize themselves around interests and values that they identify as common, inherent and inviolable.”
The leading cyber and AI powers should then “identify the points of correspondence between their doctrines and those of rival powers” so each understands the motives, intentions and redlines of the others. Nuclear states should review their “fail-safe” systems to include ways to preclude “cyberattacks on nuclear command and control or early warning assets.” The major AI powers should “undertake a systematic nonproliferation effort backed by diplomacy and the threat of force” to block “aspiring acquirers” from pursuing technology for “unacceptably destructive purposes.”
Keep Humans In The Loop
Above all, the authors urge, “We will need to overcome, or at least moderate, the drive toward automaticity before catastrophe ensues. We must prevent AIs operating faster than human decision-makers from undertaking irretrievable actions with strategic consequences. Defenses will have to be automated without ceding the essential element of human control.”
To that end, the authors propose that the major technological powers “should create robust and accepted methods of maximizing decision time during periods of heightened tension and in extreme situations. This should be the common conceptual goal, especially among adversaries, that connects both immediate and long-term steps to managing instability and building mutual security. In a crisis, human beings must bear final responsibility for whether advanced weapons are deployed. Especially adversaries should endeavor to agree on a mechanism to ensure that decisions that may be irrevocable are made at a pace conducive to human thought and deliberation — and survival.”
“In the era of artificial intelligence,” they conclude, “the enduring quest for national advantage must be informed by an ethic of human preservation.”
There is no time to lose in today’s rapidly shifting geopolitical and technological environment. “If a crisis comes, it will be too late to begin discussing these issues,” warn the unusual, but apt, mix of coauthors in this seminal primer on the challenges that lie ahead.