The World Needs A Global AI Observatory 

Pooling AI knowledge, data and models is essential to guide policy and broader decision-making in order to harness the benefits of artificial intelligence and avoid its many dangers.

Geoff Mulgan is a professor of collective intelligence, public policy and social innovation at University College London. He is the author of numerous proposals for AI regulation and an adviser to various national and international government AI policy research groups.

Divya Siddarth is the co-founder of the Collective Intelligence Project, a political economist and social technologist at Microsoft’s Office of the CTO, a research director at Metagov and the RadicalXChange Foundation, a research associate at the Institute for Ethics in AI at the University of Oxford and a visiting fellow at the Ostrom Workshop.

After years of neglecting the development of AI, governments are desperately trying to catch up and work out how to regulate it. At least four different groups are attempting to steer the arguments over how to establish governance priorities. 

The major corporations, not surprisingly, wish to take control of the agenda, promising that agreements between them can prevent the worst abuses. While publicly calling for regulation, they naturally want to minimize any restrictions that might impede their plans and are working on proposals that would apply only to the main incumbents.

The second group, the leading technologists, have little to say that is practical. Though Americans apparently support a pause in the development of LLMs by a margin of roughly five to one, the technologists have few (if any) ideas about how such a thing might actually be implemented; so far, they have failed to seriously engage with the practical dilemmas of governance.

The third group consists of governments, which have at least moved beyond rhetoric. The European Union has worked on detailed laws that will categorize AI according to risk levels and also require LLMs to disclose their nature, distinguish deep fakes from real elements, block illegal content and require copyrighted material used for training to be identified. Most current models are set to fail these tests. China, meanwhile, has introduced strict rules, for example on deep fakes, and has created a potentially powerful regulator in the Cyberspace Administration of China. The U.K., however, is continuing to hope that existing regulators can cope without any new laws or institutions.

Finally, there are transnational gatherings and bodies, where there has been much vague hand-wringing and a striking lack of real proposals. 

“A Global AI Observatory would provide reliable data, models and interpretation to guide policy and broader decision-making about AI.”

Future historians will wonder why so many powerful institutions and intelligent commentators have so dismally failed to generate plausible options. Inevitably, most commentary tries to squeeze the problem into familiar frameworks, whether seeing it as a problem of human or civil rights, copyright or competition law, privacy and data sovereignty, policing and security, or innovation-driven economic growth, with professional bodies wanting to emphasize training and accreditation. None has yet risen to the scale of the challenge of managing a truly general-purpose technology that is already affecting many areas of daily life.

And although AI has slowly become more politically visible — whether in the form of student marches on the streets of London in 2020, the crisis faced by the Dutch government over a scandal involving algorithmic welfare payments, or the numerous and growing examples of bias and distortion in algorithms used to make often vital decisions — the world of politics is still struggling to frame its response.

So, what can be done? The landscape of global AI governance is bound to be quite complex, with many types of risk and opportunity, many domains and many possible governance responses. Recognizing the inherent complexity of a general-purpose technology is the starting point for action. One-dimensional ideas or solutions are bound to be inadequate.

This table captures a few of the dimensions. Imagine them as the three axes of a cube containing many hundreds of cells, each of which might require a different governance response:

| Harm & threat            | Domain    | Response type                                     |
|--------------------------|-----------|---------------------------------------------------|
| Misinformation           | Media     | Legal liability                                   |
| Bias                     | Politics  | Transparency, explainability, provenance of data  |
| Disruption               | Health    | Standards, guard rails, safety                    |
| Disaster                 | War       | Bans                                              |
| Economic impoverishment  | Finance   | Sandboxes, anticipatory regulation methods        |
| Monopoly                 | Trade     | Licensing (e.g. foundational models)              |
| Abuse                    | Education | Soft law, norms, voluntary codes                  |
| Distrust                 | Policing  | Data/knowledge, public education                  |
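To make the combinatorics concrete: even the handful of illustrative values above yields hundreds of cells. A minimal sketch in Python, assuming nothing beyond the table's own labels:

```python
from itertools import product

# Illustrative axis values, abbreviated from the table above; a real
# taxonomy would be larger and more carefully defined.
harms = ["misinformation", "bias", "disruption", "disaster",
         "economic impoverishment", "monopoly", "abuse", "distrust"]
domains = ["media", "politics", "health", "war",
           "finance", "trade", "education", "policing"]
responses = ["legal liability", "transparency", "standards", "bans",
             "sandboxes", "licensing", "soft law", "public education"]

# Each (harm, domain, response) triple is one cell of the cube,
# and each cell might require a different governance answer.
cells = list(product(harms, domains, responses))
print(len(cells))  # 8 * 8 * 8 = 512
```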

Within a few years, we are likely to have an equally complex lattice of responses, ranging from standards to monitoring capacities, regulations at different levels, legal norms, anti-trust measures and more. My expectation is that the world will create many different types of AI regulators, often with broad powers (since attempting to prescribe in detail won’t work against the pace of change), and often with a remit to discuss and explain the dilemmas to the public.

The great paradox of a field founded on data is that so little is known about what’s happening in AI — and what might lie ahead. No institutions exist to advise the world, assessing and analyzing both the risks and the opportunities.

To address this gap and illuminate a plausible step that the world could take now as a necessary condition for more serious regulation in the future, I have been working with colleagues at the Massachusetts Institute of Technology, the University of Oxford, the Collective Intelligence Project, Metagov and the Cooperative AI Foundation to design what we call a Global AI Observatory (GAIO) to provide the necessary facts and analysis to support decision-making. 

The world already has a model for this: the Intergovernmental Panel on Climate Change (IPCC). Set up in 1988 by the United Nations with member countries from around the world, the IPCC provides governments with scientific information and pooled judgment of potential scenarios to guide the development of climate policies. Over the last few decades, many new institutions have emerged at a global level that focus on data and knowledge to support better decision-making — from biodiversity to conservation — but none exist around digital technologies.

The idea of setting up a body similar to the IPCC for AI, one that would provide a reliable basis of data, models and interpretation to guide policy and broader decision-making, has been in play for several years. But now the world may be ready, thanks to greater awareness of both the risks and the opportunities around AI.

A GAIO would have to be quite different from the IPCC in some respects, having to work far faster and in more iterative ways. But ideally, like the IPCC, it would work closely with governments to guide action.

Quite a few organizations collect valuable AI-related metrics. Some national governments track developments within their borders; businesses pull together industry data; and organizations like the OECD’s Artificial Intelligence Policy Observatory map national AI policies and trends. Yet much about AI remains opaque, often deliberately. It is impossible to sensibly regulate what governments don’t understand. The GAIO could fill this gap through six main areas of activity.

“The great paradox of a field founded on data is that so little is known about what’s happening in AI — and what might lie ahead.”

First, it could set up a global, standardized incident reporting database concentrating on critical interactions between AI systems and the real world. One pressing example is bio-risk, where there is an obvious danger of AI being used to create dangerous pathogens. We need better ways to monitor such incidents. Similarly, examples of misuse of algorithms — such as Rotterdam’s recent issues over welfare payments — would be mapped and documented. A shared database of incidents and risks would pull together the relevant facts about applications, their impacts and metadata. Standardized incident reports are a basic starting point for better global governance and could reduce risks of miscommunication and arms races over AI.
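To illustrate what a standardized report might contain, here is a minimal sketch; every field name below is a hypothetical illustration, not part of any agreed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of a standardized incident record; the fields
# are illustrative assumptions, not a proposed or agreed standard.
@dataclass
class IncidentReport:
    incident_id: str        # globally unique identifier
    reported_at: datetime   # when the report was filed
    system_name: str        # the AI system or application involved
    harm_category: str      # e.g. "misinformation", "bias", "bio-risk"
    domain: str             # e.g. "welfare", "media", "health"
    description: str        # plain-language account of what happened
    people_affected: int | None = None        # best estimate, if known
    sources: list[str] = field(default_factory=list)  # supporting evidence
```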

Second, the GAIO could organize a registry of crucial AI systems — again, a basic precondition for more effective governance. It would prioritize the AI applications with the largest social and economic impacts: the biggest numbers of people affected, the most person-hours of interaction and the highest stakes. It would ideally also set rules for providing access to models to allow for scrutiny. Singapore already has a registry of AI systems and the U.K. government is considering something similar, but at some point, such approaches will need to become global.
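Purely for illustration, a registry entry might record the impact criteria named above, along with some crude way of ranking systems for priority; the fields and the function below are assumptions, not a description of Singapore's or any other actual registry:

```python
from dataclasses import dataclass

# Hypothetical sketch of a registry entry; all fields are illustrative.
@dataclass
class RegistryEntry:
    system_name: str          # e.g. a foundation model or decision system
    operator: str             # the organization deploying it
    users_affected: int       # estimated number of people affected
    interaction_hours: float  # estimated person-hours of interaction per year
    stakes: str               # e.g. "credit decisions", "medical triage"
    access_terms: str         # how external researchers may scrutinize it

def priority(entry: RegistryEntry) -> float:
    # One crude ranking by the criteria in the text: scale of use
    # weighted by intensity of interaction.
    return entry.users_affected * entry.interaction_hours
```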

Third, the GAIO would bring together a shared body of data and analysis of the key facts about AI: spending, geography, key fields, uses, applications. There are many sources for these, but no one has brought them together in easily accessible forms, and much about investment remains opaque.

Fourth, the GAIO would bring together global knowledge about the impacts of AI on particular fields through working groups covering topics such as labor markets, education, media and healthcare. These groups would gather data and organize interpretation and forecasting, for example on the potential effects of LLMs on jobs and skills, which is becoming a crucial question across many countries. The GAIO would aim to gather data on both the positive and negative impacts of AI, from the economic value created by AI products to the potentially harmful effects of AI-enabled social media on mental health and political polarization.

Fifth, the GAIO could offer options for regulation and policy for national governments and perhaps also legislative assistance, providing model laws and rules that could be adapted to different contexts.

Lastly, the GAIO would orchestrate global debate through an annual report on the state of AI that analyzes key issues, patterns that arise and choices governments and international organizations need to consider. As with the IPCC, this could include a rolling program of predictions and scenarios, with a particular emphasis on technologies that could go live, or come to market, in the next few years, building on existing efforts such as the AI Index produced by Stanford University.

“Shared knowledge and analysis are surely the preconditions for nations to decide their own priorities.”

To do its work, the GAIO would need to innovate, learning from examples like the IPCC and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services — but going further, including through the use of new collective intelligence methods to bring together inputs from thousands of scientists and citizens, which is essential for tracking emergent capabilities in a fast-moving and complex field. In addition, it could introduce whistleblowing methods similar to the U.S. government’s generous incentives for people to report harmful or illegal actions.

To succeed, the GAIO would have to be legitimate, just as the IPCC has had to win legitimacy over the last few decades. Some of that legitimacy can come from the support of governments and some from the endorsement of scientists. But much of it will come from the quality of what it does and its ability to maintain a sharp focus on facts and analysis more than prescription, which would be left in the hands of governments. It would ideally also have formal links to other bodies that have a clear role in this space, like the International Telecommunication Union, the Institute of Electrical and Electronics Engineers, UNESCO and the International Science Council.

The AI community and businesses using AI tend to be suspicious of government involvement, often viewing it solely as a source of restrictions. But the age of self-governance is now over. What’s proposed here is an organization that would exist partly for governments but with the primary work done by scientists, drawing on successful attempts to govern many other technologies, from human fertilization and cloning to biological and nuclear weapons.

In recent years, the U.N. system has struggled to cope with the rising influence of digital technologies. It has created many committees and panels, often with grand titles, but generally with little effect. The greatest risk now is that there will be multiple unconnected efforts, none of which achieves sufficient traction. The media and politicians have been easily distracted by wild claims of existential risk, and few feel confident enough to challenge the major corporations, especially when companies threaten to cut a country’s citizens off from the benefits of OpenAI or Google.

So, legitimating a new body will not be easy. The GAIO will need to convince key players from the U.S., China, the U.K., the EU and India, among others, that it will fill a vital gap, and will need to persuade the major businesses that their attempts at controlling the agenda, without any pooling of global knowledge and assessment, are unlikely to survive for long. The fundamental case for its creation is that no country will benefit from out-of-control AI, just as no country benefits from out-of-control pathogens.

How nations respond is bound to vary. China, for example, recently proposed a ban on LLMs with “any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity.” The U.S. is likely to want maximum freedom. 

But shared knowledge and analysis are surely the preconditions for nations to decide their own priorities. Unmanaged artificial intelligence threatens our ability to think, act and thrive, potentially making it impossible to distinguish truth from lies. Pooling knowledge in intelligent ways is the precondition for better harnessing the benefits of artificial intelligence and avoiding its many dangers.

With thanks to the group who collaborated with me on the first version of the GAIO proposal: Divya Siddarth and Saffron Huang from the Collective Intelligence Project; Thomas Malone at M.I.T.; Joshua Tan at the Metagovernance Project; and Lewis Hammond from Cooperative AI.