The U.S. and China Can Stop AI Warfare Before It Starts

A dialogue charts pathways toward a code of conduct.


Nathan Gardels is the editor-in-chief of Noema Magazine.

As I have written before in this space, conflict with China is as inexorable as cooperation is imperative.

Conflict is inexorable because the central project of modern China, and in particular President Xi Jinping's "rejuvenation," is to never again fall behind the West in technological achievement, as it did in the 19th and early 20th centuries, which invited imperial domination. Yet today's information technology is not just another factor of production like machine tools. It is about the use of data and control of information, often to repressive ends. Though the West is itself engaged in an internal debate over privacy and surveillance capitalism, there is nonetheless a sharp political and cultural divergence over core values between East and West.

At the same time, mitigating accelerating climate change demands common action by the world’s two largest carbon emitters, as does cooperation to avoid future pandemics. Beyond this, China and the West are rapidly developing artificial intelligence systems based on learning from the data yielded by ubiquitous wired connectivity. The mounting concern here is to what extent AI should be integrated into military operations, not least nuclear command and control. It is in the profoundly prudent interest of powers on both sides of the divide to establish a code of conduct and safeguards to avoid being drawn into catastrophe by intelligent machines acting autonomously outside human control.

Under such circumstances, where conflict cannot be diminished in the near term even as cooperation is urgent on other fronts, the practical modus operandi is “a partnership of rivals.”

To this end, the Berggruen Institute, along with the Brookings Institution, the Minderoo Foundation and the Center for International Security and Strategy at Tsinghua University, has been sponsoring an informal dialogue between influential national security experts from the U.S. and China on reducing the risk of AI with war-fighting applications. The U.S. side has been led by John Allen, a retired four-star Marine Corps general and former head of NATO forces in Afghanistan, who is now president of the Brookings Institution. The Chinese side has been led by Fu Ying, who has served as vice-minister of foreign affairs and chairperson of the foreign affairs committee of the National People’s Congress.

As Gen. Allen reports in Noema this week, in the absence of any other sustained exchanges between the two governments, “both sides agreed that the primary value of the dialogue would be in developing a better understanding of how each side reaches decisions on AI-enabled weapons systems — and, if possible, to identify areas where both sides might be able to agree upon common norms.”

Both sides agreed on a common set of concerns that must define any future negotiations to set international norms: What data and targets are "off-limits"? How can unintended escalation be avoided, and proportionality of response to any attack through "AI-enabled platforms" be applied, by ensuring that human commanders are always in the loop?

As Mme. Fu put it in her contribution to Noema, “countries need to exercise restraint in the military field to prevent humankind from suffering catastrophic damages from the weaponization of AI technology. Countries should prohibit assisted decision-making systems that are not cognizant of responsibility or risk. When using AI-enabled weapons, the scope of damage by such strikes must be limited in order to prevent collateral damage and avoid the escalation of conflict.” A further concern is how to prevent hacking and limit proliferation, especially by non-state actors.

Mme. Fu notes the practicality of the approach taken by the dialogue partners: “Some experts and scholars from around the world simply advocate for a blanket ban on developing intelligent weapons that can autonomously identify and kill human targets, arguing that such weapons should not be allowed to hold power over human life,” she writes from Beijing. “As things stand, however, a global consensus on a total ban on AI weapons will be hard to reach, and discussions and negotiations, if any, will be protracted. Judging from the current rate of AI development, militarization is all but inevitable. It may be more viable to require the development of AI-enabled weapons to align with existing norms in international laws. Countries ought to work together on risk prevention for AI-enabled weapons, with the aim of finding consensus and building governance mechanisms together.”

Gen. Allen argues for seizing this historical moment: “As the leading producers of AI, the U.S. and China currently have a rare opportunity. Unlike with debates over cybersecurity, which started in earnest only after a global cyberinfrastructure had been built out and exploited, AI is still under development. Indeed, most AI-enabled weapons systems are still relatively immature and have not yet been widely deployed; AI has yet to approach its full potential in national security applications and in conflict. We have an opportunity to develop new norms, confidence-building measures and boundaries around acceptable uses of novel technologies.”

Of all the many conflicts that roil the U.S.-China relationship, this issue of AI and its military uses may be the most momentous. To the extent conflict remains inexorable, nothing is more important than finding ways to jointly govern these new means of warfare and limit the immense damage, intended or otherwise, they now make possible.