Nathan Gardels is the editor-in-chief of Noema Magazine. He is also the co-founder of and a senior adviser to the Berggruen Institute.
Across the sciences, we are coming to understand the self-organizing principle of “computation” as the building block of all forms of budding intelligence — from primitive cells to generative AI. This process involves learning from the environment, aggregating information and organizing it by “copying and pasting” functional instructions, the code that enables an organism to develop, reproduce and sustain itself.
As Blaise Agüera y Arcas and James Manyika write in Noema, “computing existed in nature long before we built the first ‘artificial computers.’ Understanding computing as a natural phenomenon will enable fundamental advances not only in computer science and AI, but also in physics and biology.” Agüera y Arcas is vice president and chief technology officer of Technology & Society at Google. Manyika is senior vice president of Research, Labs, Technology & Society at Google.
More than half a century ago, they note, pioneering computer scientists such as John von Neumann had the intuition that organic and inorganic intelligence follow the same set of rules for development.
“Von Neumann,” write the authors, “realized that for a complex organism to reproduce, it would need to contain instructions for building itself, along with a machine for reading and executing that instruction ‘tape.’ The tape must also be copyable and include the instructions for building the machine that reads it.” This insight into the technical requirements for that “universal constructor” in nature — the “tape-like” instructions of DNA — corresponds precisely to the technical requirements for the earliest computers.
As Manyika and Agüera y Arcas see it, “Von Neumann had shown that life is inherently computational. This may sound surprising, since we think of computers as decidedly not alive, and of living things as most definitely not computers. But it’s true: DNA is code — although the code is hard to reverse-engineer and doesn’t execute sequentially. Living things necessarily compute, not only to reproduce, but to develop, grow and heal.”
On this score, they also cite Alan Turing’s contribution to theoretical biology, which grew out of his foundational work on early computing machines. Turing described “how tissue growth and differentiation could be implemented by cells capable of sensing and emitting chemical signals … a powerful form of analog computing.”
The authors report that experiments by Google’s “Paradigms of Intelligence” team have shown how, in a simulated universe, a random “soup” of tapes with minimal programming language self-organizes after millions of interactions into “functional tapes” that begin to self-replicate, forming the basis for “minimal artificial life.”
To differentiate and develop further, computation requires a “purposeful structure at every scale” in which distinct functional parts must work together, each depending on other specified functions in a symbiotic fashion.
“How could the intricacy of life ever arise, let alone persist, in a random environment?” the authors ask. “The answer: anything life-like that self-heals or reproduces is more ‘dynamically stable’ than something inert or non-living because a living entity (or its progeny) will still be around in the future, while anything inanimate degrades over time, succumbing to randomness. Life is computational because its stability depends on growth, healing or reproduction; and computation itself must evolve to support these essential functions.”
The authors go on to explain the significance of this new understanding of the universality of computation. Grasping the correspondence with natural computing and learning from it, they believe, will render AI “life-like” as it further evolves along the path from mimicking neural computation to predictive intelligence, general intelligence and, ultimately, collective intelligence.
Building on the first stage of “natural computing,” these are the phases of AI advancement they see:
- Neural Computing — Redesigning the computers that power AI so they work more like a brain, “an exquisite instance of natural computing,” will greatly increase AI’s energy efficiency through data compression on ever-more-powerful chips and decentralized parallel processing among millions of nodes.
- Predictive Intelligence — “The success of large language models (LLMs) shows us something fundamental about the nature of intelligence: it involves statistical modeling of the future (including one’s own future actions) given evolving knowledge, observations and feedback from the past. This insight suggests that current distinctions between designing, training and running AI models are transitory; more sophisticated AI will evolve, grow and learn continuously and interactively, as we do.”
- General Intelligence — “Intelligence does not necessarily require biologically based computation. Although AI models will continue to improve, they are already broadly capable, tackling an increasing range of cognitive tasks with a skill level approaching and, in some cases, exceeding individual human capability. In this sense, ‘Artificial General Intelligence’ may already be here.”
- Collective Intelligence — “Brains, AI agents and societies can all become more capable through increased scale. However, size alone is not enough. Intelligence is fundamentally social, powered by cooperation and the division of labor among many agents. In addition to causing us to rethink the nature of human (or ‘more than human’) intelligence, this insight suggests social and multi-agent approaches to AI development that could reduce computational costs, increase AI heterogeneity and reframe AI safety debates.” On this latter point, the authors argue that ever-more-autonomous models will tend not to go rogue so much as to reflect, in a “friendly” manner, their interdependence with the other models involved in their own formation.
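The “predictive intelligence” point above, that intelligence involves statistical modeling of the future given observations from the past, can be illustrated at toy scale. The bigram counter below (an invented example, unrelated to any real LLM) predicts the most likely next word purely from observed frequencies; an LLM performs the same kind of conditional prediction over vastly longer contexts, with learned rather than counted statistics.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev):
    """Return the statistically most likely next word, or None if unseen."""
    if not counts[prev]:
        return None
    return counts[prev].most_common(1)[0][0]

# Tiny invented corpus: "the" is followed by "cat" twice and "mat" once,
# so the model predicts "cat" after "the".
tokens = "the cat sat on the mat while the cat ran".split()
model = train_bigram(tokens)
print(predict_next(model, "the"))  # prints "cat"
```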
“After decades of meager AI progress,” Agüera y Arcas and Manyika conclude, “we are now rapidly advancing toward systems capable not just of echoing individual human intelligence, but of extending our collective more-than-human intelligence. … That will benefit humanity, advance science, and ultimately help us understand ourselves — as individuals, as ecologies of smaller intelligences and as constituents of larger wholes.”