When To Stop AI

Drawing the redlines beyond which the ‘capability ladder’ of intelligent machines can’t be allowed to go.

Jonathan Zawada for Noema Magazine

Nathan Gardels is the editor-in-chief of Noema Magazine.

In dramatic fashion, the recent rift at OpenAI laid bare the core concern over where and when to draw the line beyond which frontier technologies designed to enhance human well-being become a threat to it. For the moment, the incentives behind the rapid commercialization of AI, which will drive its diffusion throughout all aspects of society, appear to have won out over precaution. Competition, in turn, will further accelerate the pace at which ever more powerful capabilities are developed by companies and nations alike.

The pattern so far, as illustrated in this chart, suggests that intelligent machines latched to unleashed animal spirits will bring the moment of reckoning sooner than we may be ready for. AI does not advance gradually, but in leaps and bounds:

To get a grasp on what former Google CEO Eric Schmidt calls “the capability ladder” of exploding AI technologies, Noema asked him to lay out the unfolding landscape as he sees it and where to draw the redlines that should not be crossed.

It is worth quoting him at length:

Today, the AI models are under human control: the work is initiated by humans, and their behavior is regulated by current law. A simple regulatory landscape would feature strict liability, where an agent acting for a person has the same liabilities as that person, and the owner of the agent or its developer can be held accountable for its actions.

There is a clear danger around recursive self-improvement, autonomy and AI setting its own goals. When this level of AI becomes generally available, it will mean that a computer cluster could become a truly superhuman expert and choose to use its abilities to act on its own. 

In the scenario where such a system can send and receive emails, where it has access to large amounts of money, and where it has access to specialized labs or even dangerous weapons, we will have to restrict and regulate these capabilities. It is possible that in a distant future these capabilities will be so dangerous that the government could actually ban further development and require that any such work take place in a national lab under military secrecy. In all cases, the training of very large models with huge data sets and computing clusters will be regulated and restricted in the future due to their potential danger.

Companies are beginning to invent some of these more potentially dangerous capabilities in their quest for artificial general intelligence. The ability of the model to call itself in “chain of thought” reasoning is a start in this direction. And what is learned can be embedded in a system prompt.

Another near-term step of concern will come when the system can correct its own errors and learn from them, for example when AI writes software; then we will have the beginning of real AGI. Later events would include full recursive learning, where the system learns something new and, based on that, learns more and more. Eventually the ladder will include new results in science discovered and proven by AI on its own.

When the system can decide its own questions and what to work on, we will need guarantees of redlines that the system cannot cross regardless of use. Establishing the testing and certification of these systems will be very difficult. Companies will need to have responsible scaling plans in which they evaluate whether they are on this path. If they achieve these more dangerous results, those results will be hard to keep hidden, and we can expect a strong national and international reaction to these events. The danger is from both general intelligence (which is a good thing) and the drive to achieve outcomes (which can be bad if it involves biology, weapons or deception). The maximally intelligent systems will have to be fully limited in what they can do.

In all cases, it’s clear that governments will serve as referees on these tests, and the tests that need to be developed are ones that probe the capabilities likely to emerge. We will eventually need a regulatory body with enough power to restrict the training or release of the most dangerous models.

Few know the tech world better than Schmidt, who understands both how to scale companies into a dominant position and the two-edged sword that AI presents. If his understanding of when and how to control “maximally intelligent systems” is shared by the entrepreneurs leading the charge in that direction, as well as by the authorities tasked with protecting society from its own inventions, there is a chance of reaching a governing consensus that reaps the potential of this phase transition in anthropo-technogenesis while mitigating its perils.

To explore whether such a consensus can be reached, Noema will be following up with a collage of commentary by an array of technologists and entrepreneurs on Schmidt’s map of where things are headed. Stay tuned.