‘Immortal’ AI Challenges The Mortal Computation Of Humans

Lesser smarts always lose out to superior intelligence.

Nathan Gardels is the editor-in-chief of Noema Magazine.

Beyond the avid venture capitalists and digital giants promoting the rapid commercialization of generative AI in all its promise, more sober and critical voices, not least among them the pioneers of the technology itself, worry that it could become an “existential threat to humanity.” But few of those in the know ever explain, in lay terms you and I can understand if we try, what that actually means and how it may come about.

Considered the “godfather of AI,” Geoffrey Hinton is more in the know than most, and thus more concerned than most over the dangers of fostering a superintelligence smarter than we can ever be. When OpenAI’s GPT-4 was released last year, he experienced an “epiphany” that led him to resign from his research post at Google, expressing regret over much of his life’s work.

In the Romanes Lecture delivered at Oxford University last week, Hinton explained the logic of his fears with the same step-by-step rigor by which he helped devise the early artificial neural networks that are the foundation of the superintelligence that so concerns him.

To understand the reasoning behind his repudiation, it is worth watching the video of his 36-minute lecture. It is accessible to us non-experts — if you are attentive and patient.

Hinton’s fundamental claim is that digital computation trained through deep learning is better suited to acquiring expansive knowledge than the analog computation of the biological mind, and thus will one day prevail.

Long story very short, analog computation runs on models tied to the idiosyncratic properties of one piece of hardware. “When that hardware dies, so does the learned knowledge,” which can then only be inefficiently conveyed — slowly and painfully taught to the next generation of hardware. In other words, it is mortal.
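
That slow, painful teaching has a close analogue in machine learning: knowledge distillation, in which a student network cannot inherit a teacher’s weights, which are meaningful only inside the teacher’s particular network, and instead learns to imitate the teacher’s outputs, one example at a time. Here is a minimal sketch of the idea in PyTorch; the function name and temperature value are illustrative assumptions, not details from Hinton’s lecture:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Knowledge distillation: the student matches the teacher's softened
    output distribution, since the teacher's weights cannot simply be
    transplanted onto different 'hardware'."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence, rescaled by t**2 so gradient magnitudes stay
    # comparable across temperatures (a standard convention).
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
```

Each training example carries only a trickle of information from teacher to student, which is why this kind of transfer is so slow.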

By contrast, digital computation “makes it possible to run many copies of exactly the same model on physically different pieces of hardware, which makes the model immortal. In this way, thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently. That is why chatbots like GPT-4 or Gemini can learn thousands of times more than any one person.” 
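
The efficiency Hinton describes is, at bottom, data-parallel learning: because the copies are bit-for-bit identical, each can train on different data, and they can pool what they learn by averaging their gradients and applying the same update everywhere. A toy sketch of that mechanism follows; the model size, replica count and function name are illustrative assumptions:

```python
import copy
import torch
import torch.nn.functional as F

# Illustrative stand-ins: a tiny linear "model" and four identical replicas.
model = torch.nn.Linear(8, 2)
replicas = [copy.deepcopy(model) for _ in range(4)]

def shared_update(replicas, batches, lr=0.01):
    """Each replica trains on its own batch; the averaged gradient is
    applied identically everywhere, so every copy stays the same model."""
    grads = []
    for replica, (x, y) in zip(replicas, batches):
        loss = F.cross_entropy(replica(x), y)
        grads.append(torch.autograd.grad(loss, list(replica.parameters())))
    with torch.no_grad():
        for i, p in enumerate(model.parameters()):
            p -= lr * sum(g[i] for g in grads) / len(grads)
    # Broadcast the shared weights back so all copies remain identical.
    for replica in replicas:
        replica.load_state_dict(model.state_dict())

# One shared update: four replicas, four different batches of data.
batches = [(torch.randn(16, 8), torch.randint(0, 2, (16,))) for _ in replicas]
shared_update(replicas, batches)
```

A spoken sentence carries at most a few hundred bits of information; identical replicas can exchange billions of gradient values in a single step. That bandwidth gap is what underwrites Hinton’s claim.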

Such bots have the intensifying capacity to absorb all available information, process it at quadrillions of calculations per second through the deep layers of artificial neural networks and then efficiently impart a distillation of what they have learned. This ever-compounding acquisition of virtual omniscience far surpasses any human potential.

As Hinton sees it, when these superintelligences compete with each other, a survival-of-the-fittest evolutionary culling will take hold. The one that can grab the most resources will become the smartest. The imperative of self-preservation under conditions of competition will incentivize the most intelligent systems to be the most aggressive, reaching for more control by generating “subgoals” in service of their programmed objectives: combining what they have learned to work out on their own how to get from A to B in any given circumstance.

Once it has attained that chain-of-thought reasoning, a superintelligence would be able to prompt itself autonomously. It would have a mind of its own, setting its own goals and orientation. The most powerful would win out over the rest.

The worst nightmare would be if bad actors, “like Trump or Putin” to use Hinton’s example, were to hack into and reorient the learning networks of the most powerful models.

“We were once smarter than animals; now machines are smarter than us,” Hinton concludes. There is little evidence in history, he further observes, that lesser intelligences were ever able to escape domination by some superior intelligence. “If digital super-intelligence ever wanted to take control, it is unlikely that we could stop it. So the most urgent research question in AI is how to ensure that they never want to take control.”