The Bumpy Road Toward Global AI Governance

Overheated rhetoric about a U.S.-China AI arms race need not distract us from common ground on how advanced technology can be regulated across cultural and national boundaries.

Christina S. Zhu for Noema Magazine

Miranda Gabbott is a writer based in Barcelona. She studied art history at Cambridge University.

Just about two and a half years ago, artificial intelligence researchers from Peking University in Beijing, the Beijing Academy of Artificial Intelligence and the University of Cambridge released a fairly remarkable paper about cross-cultural cooperation on AI ethics that received surprisingly little attention beyond the insular world of academics who follow such things. Coming to a global agreement on how to regulate AI, the paper argues, is not just urgently necessary, but notably achievable. 

Commentaries on the barriers to global collaboration on AI governance often foreground tensions and follow the assumption that “Eastern” and “Western” philosophical traditions are fundamentally in conflict. The paper, also published in Chinese, takes the unconventional stance that many of these barriers may be shallower than they appear. “There is reason to be optimistic,” according to the authors, since “misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed.” 

The narrative of a U.S.-China “AI arms race” sounded jingoistic and paranoid just a few years ago. Today, it is becoming institutionalized and borne out in policy in both countries, even as there has been growing recognition among researchers, entrepreneurs, policymakers and the wider public that this unpredictable, fast-growing and multiuse set of technologies needs to be regulated — and that any effective attempt to do so would necessarily be global in scope. 

So far, a range of public bodies, civil society organizations and industry groups have come forward with regulatory frameworks that they hope the whole world might agree on. Some have gained traction, but none has produced anything like an enforceable global settlement. It seems possible that rivalry and suspicion between two great powers and their allies could derail any attempt at consensus. 

Possible — but not inevitable. 

Getting policymakers from China and the U.S. around a table together is just the largest of many hurdles to a global agreement. Europe is likely to play a decisive role in shaping discussions. Though the EU is an ideological ally of the U.S., the two differ significantly on strategic aims for AI regulation, with the U.S. prioritizing innovation and the EU risk minimization. 

More complex still, any global settlement on AI regulation that genuinely aspires to mitigate the negative consequences of this new technology must account for perspectives from regions often underrepresented in global discussions, including Africa, the Caribbean and Latin America. After all, it is overwhelmingly likely that the Global South will shoulder the brunt of the downsides that come with the age of AI, from the exploitative labeling jobs needed to train LLMs to extractive data mining practices. 

“Despite a thaw in the rivalry between Washington and Beijing remaining a distant prospect, there are still opportunities for dialogue, both at multilateral organizations and within epistemic communities.”

A global settlement on AI ethics principles has clear advantages for all, since the effects of a transformational general-use technology will bleed across national and geographical boundaries. It is too far-reaching a tool to be governed on a nation-by-nation basis. Without coordination, we face a splinternet effect, wherein states develop and protect their technological systems to be incompatible with or hostile to others. 

There are immediate dangers of technologists seeking an advantage by releasing new applications without pausing over ethical implications or safety concerns, including in high-risk fields such as nuclear, neuro and biotechnologies. We also face an arms race in the literal sense, with the development of military applications justified by great-power competition: the principle of “If they’re doing it, we’ve got to do it first.” 

With stakes this high, there is — superficially at least — widespread goodwill to find common ground. Most national strategies claim an ambition to work together on a global consensus for AI governance, including policy documents from the U.S. and China. A paper released by the Chinese government last November called for an “international agreement” on AI ethics and governance frameworks, “while fully respecting the principles and practices of different countries’ AI governance,” and one of the strategic pillars of a Biden administration AI research, development and strategy plan is “international collaboration.” 

There are some prime opportunities to collaborate coming up this year and next, like the numerous AI projects under the U.N.’s leadership and next year’s G7, which Giorgia Meloni, the Italian prime minister and host, suggested would focus on international regulations of artificial intelligence. This July, the U.N. Security Council held its first meeting dedicated to the diplomatic implications of AI, where Secretary-General António Guterres reiterated the need for a global watchdog — something akin to what the International Atomic Energy Agency does for nuclear technology.

Yet the disruptive influence of fraught relations over everything from the war in Ukraine to trade in advanced technologies and materials shows no sign of abating. U.S. politicians frequently and explicitly cite Chinese technological advancements as a national threat. In a meeting with Secretary of State Antony Blinken this June, top Chinese diplomat Wang Yi blamed Washington’s “wrong perception” of China as the root of their current tensions and demanded the U.S. stop “suppressing” China’s technological development. 

Which is why the first of four arguments from Seán ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng and Zhe Liu — that these problems are surmountable and a near-term settlement on international AI law is achievable — is so important. In times of geopolitical tension, academics can often go where politicians can’t. There are precedents for epistemic communities from feuding nations agreeing on shared solutions to mitigate global risks. “You can look back at the Pugwash Conference series during the Cold War,” ÓhÉigeartaigh told me. “There were U.S. and U.S.S.R. scientists sharing perspectives all the way through, even when trust and cooperation at a government level seemed very far away.” 

“Differences in ideas about governing ethics across cultural and national boundaries are far from insurmountable.”

There is evidence that Chinese and U.S. academics working on AI today are keen to cooperate. According to Stanford University’s 2022 AI index report, AI researchers from both countries teamed up on far more published articles than collaborators between any other two nations, though such collaborations have decreased as geopolitical tension between the two countries has increased. Such efforts, meanwhile, took place even amid threats to the lives and livelihoods of Chinese researchers living or visiting the U.S. — in 2018, the Trump administration seriously debated a full ban on student visas for Chinese nationals, and in 2021, according to a survey of nearly 2,000 scientists, more than 42% of those of Chinese descent who were based in the U.S. reported feeling racially profiled by the U.S. government. 

Although technology occupies a different place in Chinese society, where censorship has dominated since the internet’s early days, than in the U.S., where tech culture still carries traces of Californian libertarianism and techno-utopianism, ÓhÉigeartaigh and his colleagues’ second argument is that these differences are not so great that no values are held in common at all. 

Western perceptions of the internet in China are frequently inaccurate, which can obscure certain points of common ground. Take, for instance, the issue of data privacy. Many in the West assume that the Chinese state, hungry to monitor its citizens, allows corporations free rein to harvest users’ information as they please. But according to China’s Artificial Intelligence Industry Alliance (AIIA), a “pseudo-official” organization that includes top tech firms and research organizations, AI should “adhere to the principles of legality, legitimacy and necessity when collecting and using personal information,” as well as “strengthen technical methods, ensure data security and be on guard against risks such as data leaks.” In 2019, the Chinese government reportedly banned over 100 apps for user data privacy infringements. 

In the U.S., meanwhile, policies on data privacy are a mess of disparate rules and regulations. There is no federal law on privacy that governs data of all types, and much of the data companies collect on civilians isn’t regulated in any way. Only a small handful of states have comprehensive data protection laws.

This brings us to the third reason why a global settlement on AI regulation remains possible. Given the complexities of governing a multi-use technology, AI governance frameworks lean toward philosophical concepts, with similar themes emerging time and again — “human dignity,” “privacy,” “explainability.” These are themes that both countries share. 

As China’s AIIA puts it: “The development of artificial intelligence should ensure fairness and justice … and avoid placing disadvantaged people in an even more unfavorable position.” And the White House’s draft AI Bill of Rights reads, in part, that those creating and deploying AI systems should “take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.” 

This is not to say that incompatibilities genuinely rooted in divergent philosophical traditions can be wished away, nor that shallow accords are any foundation for lasting agreements. Rather, the point is that there is often scope to agree on specific statements, even while arriving at them from different places — and perhaps even while disagreeing on abstract principles. 

Here again, academia has a valuable role to play. Scholars are working to understand how different ethical traditions shape AI governance and uncover areas where consensus can exist without curtailing culturally divergent views. Sarah Bosscha, a researcher who studies how European and Chinese AI legislation differs, told me that with respect to the EU, the greatest point of divergence is the absence of a parallel to the Confucian value of “harmony,” often interpreted as the moral obligation of an individual to the flourishing of their community. In China, following norms derived from Confucius, a person is not primarily an individual, but a family member, part of a social unit. This order of priorities can clearly come into conflict with the supremacy in Europe (and even more so in America) of the individual. 

But as Joseph Chan at the University of Hong Kong has argued, these are not mutually exclusive values. Chinese Confucianism, by his reading, can support many context-independent human rights. And the Universal Declaration of Human Rights contains collectivist elements that echo the Confucian value of harmony: Human beings “should act towards one another in a spirit of brotherhood” (Article 1) and have “duties to the community” (Article 29).

This overlap is borne out in policy documents: a 2019 EU document outlines principles that emphasize community relations and contains a section on “nondiscrimination” against minorities. According to Bosscha, “the European Union would do well to name ‘harmony’ in its regulations and acknowledge its own investment in this value.” 

The Beijing AI Principles (2019), meanwhile, echo the language of human rights law, stating that “human privacy, dignity, freedom and rights should be sufficiently respected.” Though, of course, China’s deployment of AI and surveillance technologies against minorities reveals that this commitment remains far from fully implemented.


A fourth line of reasoning in the paper by ÓhÉigeartaigh and his colleagues is that a noteworthy amount of the mistrust between the West and East stems from a “rich history of misunderstandings,” owed at least in part to an asymmetrical language barrier. Scholars and journalists in China often have a strong command of English, the lingua franca of Western academia, and can access the work of their counterparts. Meanwhile, those working in the West rarely master Chinese languages. As such, knowledge-sharing often only flows one way, with English-speaking scholars and politicians alike almost entirely reliant on translations to access policy documents from China.

Political language is usually nuanced — its subtleties rarely translatable in full. This is especially true in China. Translations of relatively ambiguous statements from Beijing on AI law have caused some high-stakes misunderstandings. For example, a 2017 Chinese AI development plan was largely interpreted by Western commentators as a statement of intent toward technological domination. This was partly thanks to a translation that was worded as a declaration of China becoming “the world’s primary AI innovation center” by 2030. However, according to Fu Ying, a former Chinese diplomat, that was a misreading of the intent of the plan. “What China wants to achieve,” she wrote, “is to become a global innovative center, not ‘the’ only or exclusive center” — clearly a gentler goal. 

But apprehension based on the translation of the Chinese plan reverberated in the American tech community nonetheless. As Eric Schmidt, a former executive chairman of Google parent Alphabet, put it at a summit in 2017: “By 2030, they will dominate the industries of AI. Just stop for a sec. The [Chinese] government said that.” 

“There is already an overlap in AI ethics frameworks between the two nations. And debunkable myths can inflate U.S. fears of China’s technology strategies.”

For ÓhÉigeartaigh, the reason global efforts to create shared regulation on AI are so vulnerable to derailment lies in asking who stands to benefit from crystallizing the narrative of a “U.S.-China tech race” from rhetoric to policy. “If there is a race,” he told me, “it’s between U.S. tech companies. I am concerned that the perspective of ‘needing to stay ahead of China’ is used to justify pushing ahead faster than would be ideal.”

In his view, many technologists are deliberately amplifying U.S.-China “race” rhetoric to justify releasing software as fast as possible, cutting corners on safety checks and ethical considerations. 

Schmidt is the head of the National Security Commission on Artificial Intelligence and a highly influential proponent of the “race” against China viewpoint. For years, Schmidt has pushed the Pentagon to procure smarter software and invest in AI research while maintaining a strong preference for technology deregulation. Meanwhile, his venture capital firm has invested in companies that won multimillion-dollar contracts from federal agencies. 

According to AI Now’s 2023 report, the crux of the problem is that AI products and the businesses behind them are increasingly perceived as national assets. The continued global dominance of America’s Big Tech companies (Google, Apple, Facebook, Amazon and Microsoft) is tied to U.S. economic supremacy. Any attempt to set limits on what those companies can develop or the data they can use risks ceding vital ground to Chinese companies, which are often presumed — falsely — to operate in a regulatory vacuum. 

This argument has proved remarkably influential, particularly with regard to privacy regulations. In 2018, shortly after the Cambridge Analytica scandal, Mark Zuckerberg applied this line of reasoning to warn against strengthening data rights. In particular, he stated at a Senate hearing that implementing certain privacy requirements for facial recognition technology could increase the risk of American companies “fall[ing] behind Chinese competitors.” Just last year, the executive vice president of the U.S. Chamber of Commerce argued that data privacy guidelines outlined within the AI Bill of Rights, which were intended to bring the U.S. closer to the EU’s GDPR, were a bad idea when the U.S. is in a “global race in the development and innovation of artificial intelligence.” Needless to say, conflating deregulation with a competitive edge against China doesn’t bode well for attempts to cooperate with its policymakers to agree on global regulations. 

Fortunately, the U.S. government is not entirely batting on behalf of Big Tech. The Biden administration has taken clear steps to enforce competition through antitrust law, against the wishes of tech monopolists. A 2021 executive order declared that “The answer to the rising power of foreign monopolies and cartels is not the tolerance of domestic monopolization, but rather the promotion of competition and innovation by firms small and large, at home and worldwide.” 

So, despite a thaw in the rivalry between Washington and Beijing remaining a distant prospect, there are still opportunities for dialogue, both at multilateral organizations and within epistemic communities. As academics have shown, differences in ideas about governing ethics across cultural and national boundaries are far from insurmountable. There is already an overlap in AI ethics frameworks between the two nations. But unfortunately, durable myths continue to inflate U.S. fears of China’s technology strategies. 

Though the road to agreeing on a set of global ethical guidelines between rivals may be bumpy, there’s nothing inevitable about the future direction this technological rivalry will take.