Michael Levin is a distinguished professor and the Vannevar Bush chair in the biology department at Tufts University, as well as the director of the Allen Discovery Center at Tufts and associate faculty at the Wyss Institute for Biologically Inspired Engineering at Harvard University.
“There is nothing natural about classes, families and orders; the so-called systems are artificial conventions.” — Jean-Baptiste Lamarck
Never hire:
- an orthopedic surgeon who doesn’t think your body functions as a mechanical machine
- a psychotherapist who thinks it does
- an HVAC tech who doesn’t think thermostats have nano-goals
- a coder who thinks only physics, not incorporeal algorithms, makes electrons dance
- a bicycle-maker or virologist who delights in the novel, whimsical and unpredictable agential quality found in their creations
- an AI engineer or synthetic morphologist who thinks that “we know what it can do because we built it and understand the pieces”
Why? Because different contexts require us to adopt diverse perspectives as to how much mind, or mechanism, is before us. The continuing battle over whether living beings are or are not machines is based on two mistaken but pervasive beliefs. First, the belief that we can objectively and uniquely nail down what something is. And second, that our formal models of life, computers or materials tell the entire story of their capabilities and limitations.
Despite the continued expansion and mainstream prominence of molecular biology, and its reductionist machine metaphors, or likely because of it, there has been an increasing upsurge of papers and science social media posts arguing that “living things are not machines” (LTNM). There are thoughtful, informative, nuanced pieces exploring this direction, such as this exploration of “new post-genomic biology” and others, masterfully reviewed and analyzed by cognitive scientist and historian Ann-Sophie Barwich and historian Matthew James Rodriguez at Indiana University Bloomington. (A non-exhaustive list includes engineer Perry Marshall’s look at how biology transcends the limits of computation, computer scientist Alexander Ororbia’s discussion of “mortal computation,” biologist Stuart Kauffman and computer scientist Andrea Roli’s look at the evolution of the biosphere, and the works of philosophers like Daniel Nicholson, George Kampis and Günther Witzany.)
Many others, however, use the siren song of biological exceptionalism and outdated or poorly defined notions of “machines” to push a view that misleads lay readers and stalls progress in fields such as evolution, cell biology, biomedicine, cognitive science (and basal cognition), computer science, bioengineering, philosophy and more. All of these fields are held back by hidden assumptions within the LTNM-lens that are better shed in favor of a more fundamental framework.
In arguing against LTNM, I should put my cards on the table. I use cognitive science-based approaches to understand and manipulate biological substrates. I have claimed that cognition goes all the way down to the molecular level; after all, we find memory and learning in small networks of mutually interacting chemicals, and studies show that molecular circuits can act as agential materials. I take the existence of goals, preferences, problem-solving skills, attention, memories, etc., in biological substrates such as cells and tissues so seriously that I’ve staked my entire laboratory career on this approach.
Some molecular biology colleagues consider my views — that bottom-up molecular approaches simply won’t suffice, and must be augmented with the tools and concepts of cognitive science — to be an extreme form of animism. Thus, my quarrel with LTNM is not coming from a place of sympathy with molecular reductionism; I consider myself squarely within the organicist tradition of theoretical biologists like Denis Noble, Brian Goodwin, Robert Rosen, Francisco Varela and Humberto Maturana, whose works all focus on the irreducible, creative, agential quality of life; however, I want to push this view further than many of its adherents might. LTNM must go, but we should not replace this concept with its opposite, the dreaded presumption that living things are machines; that is equally wrong and also holds back progress.
Still, it is easy to see why the LTNM-lens persists. The LTNM framing gives the feeling that one has said something powerful — cut nature at its joints with respect to the most important thing there is, life and mind, by establishing a fundamental category that separates life from the rest of the cold, inanimate universe. It feels as if it forestalls the constant, pernicious efforts to reduce the majesty of life to predictable mechanisms incapable of the first-person experiences that make life worth living.
“Many use the siren song of biological exceptionalism and outdated or poorly defined notions of ‘machines’ to push a view that misleads lay readers and stalls progress.”
But this is all smoke and mirrors, from an idea that took hold as a bulwark against reductionism and mechanism; it refuses to go away even though we have outgrown it. LTNM’s attractive coating comes in an unfortunate package:
- Many who support LTNM never specify whether they mean the boring 20th-century machines, today’s quite different artifacts, or the fruits of all possible engineering efforts in the deep future. By failing to answer the hard question of defining what a “machine” is, they neglect the point at the core of their claim.
- It locks its adherents into unsolvable pseudo-problems as to the status of cyborgs, hybrots, augmented humans and every possible kind of chimeric being that’s partly natural and partly engineered. An increasing number of mental contortions will be needed as these beings come online, to accommodate the many special cases that don’t fit into LTNM’s binary classification.
- It signals support for the power of evolution but fails to define its secret sauce and to explain why a process consisting of eons of trial and error by mutation and selection should have a monopoly on making minds. Why can’t engineers use those same techniques, augmented by rational design, to embody nature’s amazing properties in new ways and in other media?
- It sounds grandiose and universal, but rarely do its proponents say what it means for detecting life broadly, in the universe. Would they assess functional capabilities, composition or origin story as definitive evidence when evaluating the moral standing of an eloquent and personable alien visitor who is shiny and metallic-looking, and possibly came into being with the help of other minds?
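The question about engineers borrowing evolution’s techniques is not rhetorical: mutation and selection are routine engineering tools today. As a minimal illustration (a toy sketch in Python, not a model of any system discussed in this essay), a population of bit-strings can be evolved toward an arbitrary target using nothing but random mutation and survival of the fittest:

```python
import random

def evolve(target, pop_size=20, generations=200, seed=0):
    """Toy evolutionary search: random mutation plus selection of the fittest."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    # Random initial population of bit-strings.
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            break
        survivors = pop[: pop_size // 2]   # selection: top half survives
        children = []
        for parent in survivors:           # reproduction with mutation
            child = parent[:]
            child[rng.randrange(n)] ^= 1   # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Nothing in the loop “knows” the answer; the target is recovered purely by variation and selection. That is the point: the technique is substrate-neutral and as available to engineers as it is to biology.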
It’s also disingenuous to say that the mechanistic approach to life has not contributed in major ways to knowledge and capabilities — of course it has, from orthopedic surgery to vaccines, synthetic biology and much more. On the other hand, many knowledge gaps and functional outcomes remain unaddressed; it’s likely that the mechanistic approach has already picked much of the low-hanging fruit in many aspects of science and now must be augmented by top-down approaches, such as collaborating with the learning capacity and goal-directed intelligence of living material — communicating with it, instead of micromanaging genes and proteins. So, what are we to make of claims that life can be understood using the machine metaphor? There is currently little beneficial crosstalk between the organicist and mechanist camps, who differ so strongly in their claims of what life is.
Resolving The Debate By Committing To Metaphor
“Whatever you might say the object ‘is,’ well it is not.” — Alfred Korzybski
My proposed solution is to lean into the realization that nothing is anything and drop the literalism that mistakes our maps for the totality of territory. Let’s stop presuming our formal models (and their limitations) are the entirety of the thing we are trying to understand and pretending that one universal objective metaphor is a genuine representation of “living things” while all others are false. In other words, let’s reject the one thing organicists and mechanists agree on — the assumption that there is a single accurate and realistic picture of systems if we could only discover which one is right.
I propose instead that it’s all about perspective and context. In some scenarios, certain formalisms and tools appropriate for some kinds of machines will pay off; in other scenarios, they are woefully inadequate. If we give up the idea that there needs to be one correct answer and we get comfortable with having to specify context and prospective payoff, we can make real progress. On the one hand, this pluralistic idea is simple, unsurprising and ancient. On the other hand, failure to absorb this lesson is at the root of many of today’s disagreements and obstacles to progress.
But my proposal isn’t that anything goes and that everyone can simply choose the frame they want. Instead, open-mindedness can enable us to undertake empirically grounded research to investigate each perspective’s advantages and limitations and whether better metaphors might facilitate and enrich our interactions with complex systems of all kinds.
That is because none of these terms — cognitive, computationalist and mechanistic — is making a claim about what the system is; rather, each is a statement of a proposed protocol for effectively interacting with the system. Depending on whether you are dealing with a mechanical clock, a thermostat, a dog or a rational metacognitive being, appropriate techniques will include physical rewiring, direct re-writing of the system’s goals, training, teaching, psychoanalysis, love and many more possibilities.
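The thermostat case can be made concrete. On the simplest cybernetic reading, its “goal” is just a setpoint that a negative-feedback loop persistently acts to restore. Here is a toy simulation (hypothetical numbers and dynamics, for illustration only):

```python
def thermostat_step(temp, setpoint, heater_on, hysteresis=0.5):
    """Bang-bang negative feedback: act to reduce error from the setpoint."""
    if temp < setpoint - hysteresis:
        return True          # too cold: turn the heater on
    if temp > setpoint + hysteresis:
        return False         # too warm: turn it off
    return heater_on         # inside the band: leave it alone

def simulate(setpoint=20.0, temp=15.0, outside=10.0, steps=100):
    """Room leaks heat to a cold exterior; the loop keeps restoring the goal state."""
    heater_on = False
    for _ in range(steps):
        heater_on = thermostat_step(temp, setpoint, heater_on)
        # Heating input (if on) minus a leak proportional to the indoor/outdoor gap.
        temp += (1.0 if heater_on else 0.0) - 0.1 * (temp - outside)
    return temp
```

Started hot or cold, the loop drives the room back toward the 20-degree setpoint. Whether that counts as a “nano-goal” or mere mechanism is precisely the kind of perspective choice at issue here.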
“Let’s reject the one thing organicists and mechanists agree on — the assumption that there is a single accurate and realistic picture of systems if we could only discover which one is right.”
We should not make pronouncements about what systems (cells, biobots, animals and perhaps someday exobiological phenomena) “really are,” but instead, test out the unique toolkits from different disciplines for how to predict, control, create and perhaps be changed by a given system. Each has its own assumptions, ways of thinking and practical tools that provide powerful leverage but also blind spots. It’s a wide spectrum, and multiple approaches will pay off in diverse ways (or not, but that’s the empirical game we’ve taken on as scientists). Many things can be true at once, and all are about us and our intentions as much as they are about the system itself.
The “machines or not” (or “intelligent or not” or “purposeful or not,” etc.) framing is a sure path to unresolvable pseudo-problems if we take these terms as binary, objective categories that exist in nature; are cyborgs (let’s say 50% human cells, 50% engineered circuitry) machines or life? At what stage of the smooth, slow, continuous process of embryogenesis (or evolution) do the supposedly mechanical dynamics of biochemistry become the workings of a purposeful mind? These questions are unsolvable if we look for a clean, bright line separating these categories.
I propose an engineering (writ large) approach: What we are really saying when we make those claims is, for example, “Here is the bag of tools — e.g., rewiring, cybernetics, behavior-shaping or psychoanalysis — that I propose to use to relate to this system. Let’s all see how well that turns out.” Then, we can see that all these terms indicate rich continua rather than binary categories and that multiple observers’ viewpoints can be effective (insightful, powerful) in their contexts because no one is exclusively right. What places all these systems — living or not — on the same spectrum is the fact that all have aspects that are amenable to the mechanistic and agentic lenses, all exhibit surprises and competencies that our formal models do not capture, and none wear their capabilities and limitations on their sleeve (they must be determined by experiment).
An orthopedic surgeon should see your body as a simple, mechanical machine — they’ve got hammers and chisels, and their approach works very well for their remit. In contrast, a psychoanalyst should emphasize and help augment your growth as a free agent in search of meaning.
So, what should a worker in regenerative medicine see in your cells? Or an evolutionary developmental biologist? That is an empirical question to be settled by trying the various tools and seeing how far one can get. The data so far suggest significant advantages to paying attention to the minds of cells and their collectives.
But it’s not just the products of eons of trial and error by evolution that are appropriate subjects for the tools of behavioral science. “Machine” now covers an incredible variety of approaches (including ones that make use of evolutionary dynamics, cybernetic goal-directedness, self-construction and self-reference, open-ended reasoning, the lack of separation of data from hardware, etc.).
We have left the age when “machines” were easy to delineate, an age in which our tools for understanding and making them were so limited (it turns out that some of those tools are the same ones behavioral scientists and biologists have long used, such as manipulating memories, beliefs, attention and the autonomous alignment of parts toward system-level goals). We must give up the comforting notion that we understand matter well enough to say that the limits of our models are equivalent to the limits of the “non-living” world.
The Problem Of Misplaced Confidence
“All models are wrong, but some are useful.” — George E. P. Box
We believe we understand machines and inanimate materials. Most people, upon finding computer chips and wires under their skin, would be upset; indeed, many who support LTNM would feel they had been robbed of their essential, ineffable quality. But why not conclude that computer chips and wires can apparently give rise to the rich inner life, freedom and responsibility we know we enjoy? Instead, the discovery is taken as a massively disruptive fact about one’s own existence, because of a story told to us since childhood that wet chemicals are the only things that enable true minds. So ingrained is this “reality” that people will denigrate “machines” in every context, even if it means denying their own reality.
“We must give up the comforting notion that we understand matter well enough to say that the limits of our models are equivalent to the limits of the ‘non-living’ world.”
This misplaced confidence in our models of the nonliving world is an incredibly effective piece of propaganda; the reductionist physicalist worldview is nearly universal. Even many religious people seem 100% committed to biochemistry as the only substrate that can do the trick, even though no one has a convincing story for why robots cannot become ensouled, and escape the limitations of their material, as biochemical embryos do.
I think the magic that makes old machine metaphors too limited for living systems applies likewise to even minimal systems we intuitively think should be well-described by our formal models. I propose that the better path forward is based on pluralism and pragmatism; not confusing our formal models (and their limitations) for things themselves, living or not; and being as open to the surprising emergence of proto-cognition (not just complexity) in unconventional places as we are to its emergence in natural biology because we still don’t know enough to assume we know where it can and cannot be found.
The days of being loose with colloquial terminology and of pretending we have binary, easy-to-recognize categories of machines and living beings are over. They’re not coming back, given the advances in bioengineering and active matter research and the obvious realization that evolution is not magical creation — that inside our cells is not fairy dust but the same kind of matter that engineers can manipulate. That’s good because such conceptions barely sufficed in prior ages due to the limitations of both technology and imagination. Using the term “machine” to conjure outdated visions of boring, deterministic, “we know what it does” objects simply masks our ignorance and holds back progress on some of the most fascinating puzzles of the century.
Let’s also abandon the view that there are metaphorical turns of phrase and then there are real scientific explanations. All we have are metaphors, some better than others, to help us get to the next, more empirically interesting and generative metaphor. Good and bad metaphors are not detectable from our philosophical armchairs as errors that run afoul of some classic category; metaphors facilitate (or hold back) discovery to various degrees, as categories (life, machine, intelligence, goal, etc.) flexibly change with new discoveries in science.
And the science is clear — we now have non-magical ways to understand goals, downward causation, self-reference, plasticity and much more, including hardware driven by algorithms, as explained by physicists George Ellis and Barbara Drossel; cybernetics as discussed by Francis Heylighen and his colleagues; and other approaches to the study of physically embodied minds (such as those by mathematician Evo Busseniers, cybernetics pioneer Arturo Rosenblueth and cognitive scientists Douglas Hofstadter and Randall Beer).
The reductionist/mechanist camp will have to adjust to the fact that cognitive tools, applied to things that aren’t brainy animals, are not “just” metaphors; like “molecular pathways,” they are legitimate hypotheses that will live or die by their consequences at the lab bench. The organicist camp will have to live with the fact that computational perspectives (with respect to the re-programmability of life, abstraction layers, etc.) are also simply metaphors, useful in some circumstances of biology — not essentialist denigrations of life’s majesty.
If, in our complacency, we are pretty sure that rocks are inanimate — not within a crisp, binary set defining “cognitive” — we will not, for example, be motivated to look for memory and learning in abiotic materials. Binary categories are excellent gatekeepers, preventing the tools of one discipline from being used to benefit another; for example, the tools of behavioral and cognitive science from enriching the fields of active matter and bioengineering. Binary thinking and unwavering commitment to ancient categories slow down progress.
Let’s get on with the good science of being very specific about our metaphors and what they facilitate versus constrain. Let’s specify, every time, precisely where on the spectrum of persuadability one plans to approach a system and be clear that a particular claim applies to a particular research effort, not a universal and objective category, and acknowledge that we are all in the business of generating and testing metaphors.
None of this should be shocking. Shedding such a lens would have massive — if not widely beloved — implications that rest on rather reasonable, well-trodden philosophical positions. What hampers progress now is a lack of humility. We tend to believe that because we’ve made something and know its parts, we understand its capabilities and limitations and what materials and algorithms can and cannot do. But we do not. We’re just scratching the surface. It is ironic that in denying precious magic (agency, cognition, etc.) to “machines,” organicists have bought into the reductionists’ most central claim: that knowing the properties of a machine’s parts enables you to know its complete nature.
“What hampers progress now is a lack of humility. We tend to believe that because we’ve made something and know its parts, we understand its capabilities and limitations.”
In an influential piece, the Australian philosopher David Chalmers framed the “hard problem” of consciousness as: “Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.” This same assumption pervades numerous fields: that we have enough knowledge and the right cognitive architecture to have a well-calibrated intuition about what is reasonable and what kinds of systems have (proto)cognitive properties. I think we do not have any of these, and thus caution and an open mind are our best guides.
Biological exceptionalism and the materialist framing that attaches significance and moral import to composition and origin stories create incredibly compelling and comfortingly simplistic mythologies. And it’s not just in the Western world. One might think that Eastern and Indigenous traditions are less constrained by physicalist assumptions. Yet my debates with Indic scholars, Buddhists, rabbis, etc., show that these traditions may be just as committed to poorly supported assumptions. They are fine with immaterial spirits being the substrate of mind but are often surprisingly certain that spirits aren’t allowed to incarnate in robotic constructs produced by the efforts of thoughtful engineers, only in squishy wet ones produced by the trial and error of mutation and selection.
There are many reasons to reject naïve computer and machine frameworks in the study of life and mind. Of course, computationalism (the idea that life and mind occur due to a specific algorithm) or simple machine metaphors cannot fully encompass living things. But neither are abiotic or engineered systems encompassed by machine metaphors. That’s because they are NOT machines any more than living things are, because the term “machines” usually refers to a package of expectations and simplifications that inevitably miss key aspects of reality.
Not even simple algorithms are fully encompassed by our picture of what the algorithm is doing — they do side-quests and have unexpected competencies. Minimal, deterministic systems of interacting chemicals are found to learn and make inferences when we commit to checking for intelligence in unfamiliar places. We need to come to grips with the fact that all our frames will miss important aspects of things, that it’s OK to say something about a system without claiming you’ve said everything, and that even the simplest of systems can exert surprising effects that reach higher on the continuum of agency than mere emergence of complexity or unpredictability.
Synthetic systems, which we might think are following an algorithm, may or may not have a degree of true mind, but that determination should not be based on their following an algorithm (any more than the reality of human minds is undermined by their following the laws of chemistry). The emergence of cognition, in a strong way, that is facilitated but not circumscribed by the embodiment on which it supervenes, is the research frontier for the next century, and it applies equally well to designed, evolved and hybrid systems. If, as Belgian surrealist René Magritte pointed out in “The Treachery of Images,” not even a pipe is encompassed by the limitations of our representation of it, how much less so are dynamic creations, living and otherwise?
I call upon the organicist community to take their own view more seriously: The reason that living things are not entirely described by mechanist metaphors is the exact same reason that “machines” are not entirely described by them either. Organicism gives us a great tool — respect for the surprising emergence of higher-order aspects of cognition (not just complexity or unpredictability); take this tool seriously and apply it fearlessly. Minds and the respect they are due are not a zero-sum game. It’s OK to see “machines” as somewhere on the same spectrum as us — we won’t run out of compassion (a common driver of the scarcity mindset with respect to attributing cognition) if we extend the possibility of emergent minds beyond its most obvious proteinaceous examples.
This view isn’t popular with either side; stark categories and crisp distinctions between viewpoints are more comfortable than continua — they make everything simpler. But when pushed to sharpen their claims and explain the secret sauce that separates life from mere machines, some people will back off from LTNM to claim something like “fine, it’s the machines we’ve made thus far that are nothing like life.” And with this, I largely agree, though those kinds of claims have a very short shelf-life.
“Minds and the respect they are due are not a zero-sum game. It’s OK to see ‘machines’ as somewhere on the same spectrum as us.”
Unfortunately, “Life Is Not Like Today’s Machines” is not as catchy and magnetic a title, so no one leads with this more defensible view. People outside the field read the more grandiose claim and assume there’s good theory behind it, while those within the field know its limitations but often don’t make it explicit in their writing. The reality is that most of today’s machines indeed do not use the architecture of life’s self-interpreting agential material at all scales, but there is no reason we cannot use rational engineering to extend these ideas to novel substrates.
The machine metaphor has worked fine for biologists who never expected a single metaphor to capture everything. On the other hand, it has failed some engineers, synthetic biologists and computer scientists because it turns out that there are no machines anywhere — living or not — that are totally encompassed by their materials and algorithms, and that enable total bottom-up control while ignoring autonomy. Given the progress in the fields of Diverse Intelligence and Artificial Life, the term “machine” now conveys largely misleading assumptions. We are better off being explicit about which metaphor we plan to use in any given scenario, the tools that metaphor enables us to apply, and what we will necessarily miss by choosing that one frame over others.
To summarize, the approach I am advocating for is anchored by the principles of pluralism and pragmatism: no system definitively is our formal model of it, but if we move beyond expecting everything to be a nail for one particular favorite hammer, we are freed up to do the important work of actually characterizing the sets of tools that may open new frontiers.
As scientists and philosophers, we owe everyone realistic stories of scaling and gradual metamorphosis along a continuum — not of magical and sharp transitions — and a description of the tools we propose to use to interact with a wide range of systems, along with a commitment to empirical evaluation of those tools. We must battle our innate mind-blindness with new theories in the field of Diverse Intelligence and the facilitating technology it enables, much as a theory and apparatus for electromagnetism enabled access to an enormous, unifying spectrum of phenomena of which we had previously had only narrow, disparate-seeming glimpses. We must resist the urge to see the limits of reality in the limits of our formal models. Everything, even the things that look simple to us, is a lot more than we think it is, because we, too, are finite observers — wondrous embodied minds with limited perspectives but massive potential and the moral responsibility to get this (at least somewhat) right.