Barton Friedland is the founder of Luminous Group, where he builds architecture that sustains human judgment as AI scales. For his doctoral research at Warwick Business School, he studied how computational technologies participate in leadership and organizational decision-making.
Editor’s note: Noema is transparent about any AI use in its pieces. We publish original, human-generated ideas but allow authorized, disclosed use of AI in certain cases. Please see details and our policy at the end of this piece.
Consciousness is a property of life, not computation. Brains are not Turing machines made of meat. Simulation does not instantiate. The electromagnetic fields, the self-sustaining organization, the constant biological work of maintaining order against decay — none of this can be faithfully reproduced in silicon.
This is the argument that neuroscientist Anil Seth makes in his Berggruen Prize-winning essay, “The Mythology of Conscious AI.” The reader finishes feeling reassured: AI is not conscious. The liberal humanist order is intact, with humans on top, machines as instruments, consciousness safely contained within biological membranes. Nothing has been disturbed except the claims of a few overexcited technologists.
But what does this tell us about what to do on Monday morning?
The essay does not tell us what the human-AI arrangement produces. It does not tell us what that arrangement is worth. It does not tell us what is lost when we get it wrong. And, critically, it does not tell us whether we are destroying the very conditions under which human intelligence compounds, while debating whether machines can feel.
Intelligence is about doing, Seth notes, while consciousness is about being — a distinction that is too often glossed over. The bundling of intelligence with consciousness — the assumption that anything sufficiently clever must also be aware — is precisely the conceptual error that produces fantasies of sentient chatbots and anxieties about robot suffering.
But having dissolved one binary, Seth installs another. His essay operates within a framework where properties belong either to biological organisms or to computational systems. Consciousness is biological. Computation is algorithmic. The question is whether one can give rise to the other. The answer, Seth argues persuasively, is probably not.
Three publications saw past this binary more than 50 years ago: Douglas Engelbart’s “Augmenting Human Intellect,” J.C.R. Licklider’s “Man-Computer Symbiosis” and Ted Nelson’s “Computer Lib/Dream Machines.” Each envisioned computational structures designed to deepen rather than bypass human thought, and asked precisely the question that Seth’s framework cannot reach: What emerges when human capability and computational power are arranged to compound rather than to substitute? The field that claims to augment human capability has, it seems, not read the work that defines what augmentation means.
When a human works with an AI system — when a radiologist reads scans with a diagnostic tool, when an analyst constructs a financial model with a computational partner, when an architect tests structural variations with a generative system — something emerges that exists in neither participant. It is not consciousness. The machine does not feel anything. But it is not mere computation, either. It is enacted intelligence: situated, distributed, directional and irreducible to either party.
The cognitive science that Seth invokes — the 4E tradition of embodied, embedded, enacted and extended cognition — points directly at this territory. Researchers in this tradition have demonstrated that cognition is distributed across people, tools and environments; that intelligent action is not the execution of prior plans, but a continuous response to unfolding situations; and that the mind extends beyond the skull into the tools and technologies it couples with. These are not marginal positions. They represent the dominant movement in contemporary cognitive science.
Seth uses this literature to argue that consciousness cannot be substrate-independent — that the body matters, that you cannot abstract mind from the living system that produces it. He is right to draw that conclusion. But he never walks through the door it opens. If cognition is distributed, enacted and extended, then the relevant unit of analysis is not the individual brain (biological or artificial), but rather the configuration in which intelligence operates. The question is not whether the machine is conscious. The question is what the configuration produces — and whether we are preserving or destroying the conditions under which it produces well.
If AI is not conscious — if it offers no inherent meaning, possesses no intrinsic orientation, maintains no autonomous understanding — then every act of human-AI collaboration places a specific demand on the human participant. The human must continuously project, test and stabilize meaning within the collaborative field. Context drifts. Coherence flattens. Outputs that appeared aligned with the human’s intention subtly diverge. The human must notice, re-anchor and redirect. This is not a bug of current AI systems that better engineering will resolve. It is a permanent structural feature of working with entities that process without understanding.
“The bundling of intelligence with consciousness is precisely the conceptual error that produces fantasies of sentient chatbots and anxieties about robot suffering.”
Anyone who works seriously with AI systems knows this well. The experience is cognitively demanding. It requires sustained attention, interpretive judgment and a particular kind of presence — the willingness to hold meaning steady across interactions that do not hold it for you. It is, in the precise sense of the word, work: the continuous cognitive labor of maintaining coherence in a field that offers no guarantee of it.
And here is what the consciousness debate conceals: This work is economically valuable.
In a randomized clinical trial conducted remotely and in-person at Stanford, Beth Israel Deaconess Medical Center and the University of Virginia, physicians who were given access to GPT-4 alongside conventional diagnostic resources were no more accurate in their diagnoses than physicians without it — even as GPT-4 alone outperformed both groups by more than 15%. The same AI. The same clinical task. No statistically significant benefit — because the arrangement was naive: The technology was bolted onto existing workflows with no designed interaction, no structured dialogue, no preservation of the clinician’s independent reasoning.
When the collaboration was redesigned — requiring clinician and AI to generate independent assessments, then structuring a dialogue that surfaced disagreements and held the clinician’s reasoning in the loop — diagnostic accuracy rose from 75% without AI to between 82% and 85% with the collaborative design. The difference was not in the data. It was in the quality of the human-AI arrangement: whether the human’s judgment was preserved, amplified and compounded by the collaboration, or bypassed, flattened and ultimately eroded.
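The structural difference is concrete enough to sketch in code. The trial did not publish an implementation, so everything below (the data shapes, the function names, the reconciliation rule) is a hypothetical illustration of the two arrangements, not the study’s actual protocol.

```python
from dataclasses import dataclass, field


@dataclass
class Assessment:
    diagnoses: list[str]  # ranked differential diagnosis
    reasoning: dict[str, str] = field(default_factory=dict)  # diagnosis -> rationale


def naive_arrangement(ai: Assessment) -> Assessment:
    # The AI's list is bolted onto the workflow before the clinician commits
    # to an independent view; the path of least resistance is to adopt it,
    # and any divergence between human and machine is never surfaced.
    return ai


def structured_arrangement(human: Assessment, ai: Assessment) -> Assessment:
    # Step 1 happens before this function is called: clinician and AI commit
    # to independent assessments without seeing each other's.
    # Step 2: make disagreements explicit rather than leaving them implicit.
    conflicts = [d for d in ai.diagnoses if d not in human.diagnoses]

    # Step 3: anchor the dialogue to those conflicts, with the clinician's
    # recorded reasoning held in the loop as the reference point.
    accepted = [d for d in conflicts if reconciles(human, d)]

    # Step 4: the clinician, not the model, issues the final judgment.
    return Assessment(diagnoses=human.diagnoses + accepted, reasoning=human.reasoning)


def reconciles(human: Assessment, diagnosis: str) -> bool:
    # Stand-in for the judgment call the arrangement exists to protect:
    # accept an AI suggestion only if it can be squared with the clinician's
    # own rationale. (An invented rule, purely for illustration.)
    return any(diagnosis.lower() in rationale.lower() for rationale in human.reasoning.values())
```

The code is trivial; the point is the ordering. In the naive version, the human never commits to anything before seeing the machine’s answer. In the structured version, disagreement is a first-class object and the final word belongs to the person who can be held responsible for it.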
Researchers at the Stockholm School of Economics, the University of Geneva and elsewhere deployed a single AI system to pharmaceutical sales professionals in two different arrangements. The results were stark. When the system was tailored to the expert’s cognitive style — structuring authority, workflows and incentives to preserve expert judgment — client meetings rose more than 40% on average and sales rose 16%. When the same system was imposed without regard for how the human thinks, sales fell about 20% below the no-AI baseline. Worse than no AI at all.
In strategic consulting, the evidence acquires a sharper edge. Researchers at Harvard, MIT, Wharton and Warwick studied 758 Boston Consulting Group (BCG) consultants working with and without AI. For tasks the AI could handle well, those working with AI, on average, had a 12% higher completion rate, finished 25% faster and produced 40% higher quality results. But for tasks requiring the kind of judgment that AI processes without possessing, AI-assisted consultants performed significantly worse than those working alone. The technology did not fail. The arrangement did. When humans deferred to computational fluency on tasks where human judgment was needed most, the collaboration became a liability.
The military has a name for this. Defense strategists distinguish “centaur” systems, where the human directs and the machine executes, from “minotaur” systems, where the AI directs and the human carries out its recommendations. The vocabulary is vivid, but it conceals a false choice. AI researcher Ethan Mollick, an author on the BCG study, observes on his Substack that “centaurs,” who maintained a clear division of labor, performed well on AI-assisted work where they had more expertise. “Cyborgs,” who integrated so deeply with AI that the boundary between human and machine contribution dissolved, also performed well on tasks requiring skills at the edge of what AI could do.
The worst outcomes came from those who, in Mollick’s words, “fell asleep at the wheel,” ceding judgment to the system precisely when judgment was most needed. Lucy Suchman, who studies human-computer interactions, has argued that autonomy is not a property of either humans or machines, but of the configurations in which they operate. The centaur-minotaur dichotomy collapses the moment you take this seriously. The question is not who is in charge. The question is whether the configuration preserves the conditions under which human judgment remains active, directional and capable of intervening when the system drifts.
The absence of consciousness in AI is not merely a philosophical finding. It is a design condition with measurable economic consequences. It means that every human-AI system must be architected to preserve the human’s capacity for meaning-making, judgment and coherence-holding, because no one else in the arrangement will do it.
Seth names one mythology: the overattribution of consciousness to machines. The belief that large language models (LLMs) are sentient, that chatbots have inner lives, that we stand on the cusp of artificial awareness — this is, as Seth argues, a confusion born of anthropomorphism, pareidolia and the seductive power of language to simulate interiority it does not possess.
“If AI is not conscious, then every act of human-AI collaboration places a specific demand on the human participant.”
But there is a second mythology that is equally dangerous and far more prevalent in the rooms where decisions about AI deployment are made. This is the mythology of automation: the belief that removing the human from the loop is always an efficiency gain. That judgment is a cost to be eliminated. That capability is a fixed input rather than a compounding asset. That the purpose of AI is to perform tasks currently performed by people, only faster and cheaper.
This mythology does not announce itself as mythology. It arrives dressed in the respectable garments of return-on-investment calculations, headcount reduction targets and productivity dashboards. Increasingly, it arrives dressed as augmentation itself. All of the major AI companies now use the word “augmentation” to describe their LLM-based services while building infrastructure that moves in the opposite direction.
Every significant platform release of the past two years — agent frameworks, coding agents, computer use, research agents — follows the same trajectory: Make the AI do more, make the human do less. None of them instruments the arrangement between human and AI. None of them measures whether human capability is growing or atrophying. The vocabulary of augmentation has been captured. The practice it names has not. The mythology asks: What can we automate? It never asks: What are we destroying?
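Instrumenting the arrangement is not technically exotic; it has simply never been a product priority. As a hypothetical sketch, with event fields and metrics invented for illustration rather than drawn from any shipping platform, it could start with two numbers:

```python
from dataclasses import dataclass


@dataclass
class Interaction:
    human_committed_first: bool      # human recorded a view before seeing AI output
    accepted_ai_verbatim: bool       # AI output adopted without revision
    human_revised_or_overrode: bool  # human changed or rejected the AI output


def deference_rate(log: list[Interaction]) -> float:
    """Share of AI outputs adopted without any human revision: a crude
    proxy for ceding judgment to the system."""
    used_ai = [i for i in log if i.accepted_ai_verbatim or i.human_revised_or_overrode]
    if not used_ai:
        return 0.0
    return sum(i.accepted_ai_verbatim for i in used_ai) / len(used_ai)


def independent_judgment_rate(log: list[Interaction]) -> float:
    """Share of interactions in which the human committed an independent
    view first: the condition the diagnostic trial above found decisive."""
    if not log:
        return 0.0
    return sum(i.human_committed_first for i in log) / len(log)
```

Tracked over months rather than per query, numbers like these would answer the question no productivity dashboard currently asks: whether the humans in the arrangement are still exercising judgment, or merely present.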
In February 2025, the inaugural Anthropic Economic Index analyzed more than 1 million conversations between people and the AI assistant Claude, categorizing each by whether the person was using AI to do the work for them — automation — or to think alongside them — augmentation. It found that 57% of the time, people were thinking with AI, not delegating to it.
Subsequent reports have tracked shifts in this balance as AI capabilities evolve. Automation briefly overtook augmentation in mid-2025, then fell back. But the pattern is telling: Even as platforms ship increasingly autonomous tools, the ratio has never moved decisively toward automation among individual users. And yet organizations consistently default to the other direction — not because automation delivers superior returns, but because they possess the institutional muscles for cost reduction and lack the muscles for capability cultivation, and the platforms they purchase are engineered to reward exactly that default.
The economist Carl Benedikt Frey’s historical analysis in “The Technology Trap” reveals this to be a recurring pattern. The First Industrial Revolution produced what economists call “Engels’ Pause,” a period in which output per worker grew by 46% while wages rose by a mere 12%. The Second Industrial Revolution, dominated by enabling technologies, produced broadly shared prosperity. The third, dominated by replacing technologies, has coincided with stagnant wages and rising inequality. The pattern is not technological but institutional: When societies build frameworks for augmentation, wealth distributes; when they default to automation, it concentrates.
Seth’s essay, by focusing exclusively on what AI lacks, inadvertently reinforces this second mythology. It tells us that machines do not feel. It does not tell us that the human’s presence in the arrangement is where the value is created. Both mythologies — overattributing consciousness to machines and underattributing value to human presence in the collaborative field — serve the same interest. They keep attention fixed on what AI is rather than on what it preserves and compounds in human capability.
This is not innocent. The object-oriented question — What is AI? — is comfortable, containable and philosophically rewarding. The relational question — What does the human-AI arrangement produce, and under what conditions does it produce well? — is uncomfortable, uncontainable and demands that we reorganize our metrics, our institutions and our conception of value itself.
We need to build institutions capable of recognizing where the value lies — not inside the machine, not inside the human skull, but in the arrangement between them. In the conditions under which judgment compounds. In the quality of attention that sustains coherence across discontinuous interactions. In the human capacity to project meaning, test it against reality and revise it under constraint. In the irreducibly relational intelligence that emerges when humans and machines coordinate with shared orientation.
Anyone who has worked well with an AI system recognizes this as the moment when thought moves differently than it could alone. This is not a philosophical question. It is an economic question. It is a design question, and an institutional imperative with consequences that will compound across the coming decades in ways that the consciousness debate, for all its intellectual fascination, cannot touch.
“We need to build institutions capable of recognizing where the value lies — not inside the machine, not inside the human skull, but in the arrangement between them.”
The European Union has begun to recognize this. Article 14 of the EU AI Act requires that high-risk AI systems be designed so they “can be effectively overseen by natural persons.” The regulation mandates that humans must remain capable of understanding, interpreting and intervening. It does not say how. It does not specify what infrastructure would make such oversight possible, what metrics would demonstrate its presence or how an organization would know the difference between meaningful oversight and its performance. And the enforcement deadline has already been pushed back — from August 2026 to December 2027 — before a single organization has been asked to demonstrate compliance.
The phrase “human-centric AI” appears throughout European policy as though naming the aspiration were equivalent to building it. It is not. A foothold has been established in regulation. The architecture to stand on it has barely been imagined. It will likely not come from the companies whose revenue grows with every token consumed.
The more urgent mythology is not the fantasy of conscious machines. It is the quiet, pervasive, economically devastating assumption that human presence in the loop is a cost rather than the source of compounding value. That capability is consumed rather than cultivated. That the point of intelligence is to eliminate the need for judgment rather than to deepen its exercise.
That mythology does not need a Berggruen Prize to be dangerous. It operates every Monday morning, in every organization that automates what it should augment, that measures what it eliminates rather than what it enables, that mistakes efficiency for intelligence and confuses the absence of friction with the presence of thought.
The consciousness question is settled, or settling. The question that remains — the one that will determine whether AI becomes an engine of institutional intelligence or an accelerant of institutional decay — is whether we will learn to see, measure and preserve the conditions under which human capability compounds. Not what AI is. What it makes possible. And what we lose — irrecoverably, invisibly, while debating the wrong question — when we fail to protect it.
Editor’s note: Noema is transparent about any AI use in its pieces. We publish original, human-generated ideas but allow authorized, disclosed use of AI in certain cases. The initial submitted draft of this piece utilized Anthropic’s Claude as an editorial assistant.
Specifically, it was used for research analysis (comparing the source essay against the author’s existing body of work to identify convergences and gaps), structural development (refining the essay’s architecture and mapping which arguments carried primary weight), source verification (tracing and confirming empirical citations against published studies) and iterative drafting and revision under the author’s direction. It was not used to originate the argument, intellectual frameworks or interpretive claims. The argument was developed by the author — informed by his published research and professional practice — and directed throughout by his editorial judgment.
This draft has since received multiple human edits. Noema verified the piece’s conceptual originality using various scanners and review processes and conducted a detailed human fact-check. See our AI policy here.
