Nathan Gardels is the editor-in-chief of Noema Magazine. He is also the co-founder of and a senior adviser to the Berggruen Institute.
VATICAN CITY — For Martin Heidegger, the German philosopher of existence, the advent of cybernetics was the last straw for what it means to be human.
As he despairingly saw it, the integral nature of Being would be extinguished by a system of self-reinforcing feedback loops that privilege calculating reason to the exclusion of any spiritual dimension or philosophical frame to elevate or govern it. What he called the “technicity” of instrumental means with no substantive end would inexorably prevail over the diminished soul.
In a famous last interview in Der Spiegel in 1966, Heidegger declared, “Only God can save us,” because the planetary domination of modern technology had deprived humanity of the wherewithal to change course and rescue itself. All our civilization could now do was prepare not only “for the appearance of a God,” but also “for the absence of a God in his decline, for the fact that we fall before the absent God.” In short, he feared what the lethal brew of nihilism and technological prowess might bring.
These thoughts came sharply to mind last week as I participated in a gathering at the Vatican’s Pontifical Academy of Social Sciences in Rome, convened to contemplate the “responsible, ethical, and human-centered use of artificial intelligence” — cybernetics on computational steroids.
Like Pope Francis before him, who cautioned against becoming “rich in technology, but poor in humanity,” Pope Leo XIV has sought to remain open to the marvelous advances promised by AI but worries that it will subvert Christian humanism, which grounds the inviolable dignity of every person in their likeness to God.
“Artificial intelligence, biotechnologies, data economy and social media are profoundly transforming our perception and our experience of life,” he said in a speech to Italian bishops in June. “In this scenario, human dignity risks becoming diminished or forgotten, substituted by functions, automatism, simulations. But the person is not a system of algorithms: he or she is a creature, relationship, mystery.”
The Pope is concerned that “the digital world will follow its own path and we will become pawns, or be brushed aside.” He has gone so far as to say that it will be “very difficult to discover the presence of God in AI.”
Alarmed that the tech race between China and Silicon Valley is getting out of control, the new pontiff has broadened Pope Francis’ call for the “audacity of disarmament” on nuclear weapons to now include the competitive proliferation of AI.
Neither Techno-Utopia Nor Techno-Apocalypse
Coming from California, where hundreds of billions of dollars are being fervently invested in the no-holds-barred development of frontier models of superintelligence, I could not shake the sense that all the pleas at my Vatican discussions for the preservation of the noblest human values — dignity, autonomy, solidarity, equity — sounded almost quaint. Like all nostalgic yearnings, such sentiments only arise when their time appears about to pass.
In my mind’s eye, I could see some Silicon Valley titan dismissively asking, “How much compute does the Pope have?” — just as Stalin once dismissed the Church for having no military divisions and thus lacking the power to make a difference in the direction of history.
Most participants in the Vatican seminar strove to avoid looking like naïve Luddites rejecting the wonders of technology, seeking instead a path to square the circle by “thinking outside the opaque box of algorithms” for a way AI could augment, empower and amplify human agency rather than displace it.
This approach rejects the binaries of “techno-utopia” or “techno-apocalypse,” recognizing that technology is not alien to humanity but is part and parcel of what makes us human. Rather than regretting the future, it embraces the evolutionary potential of a symbiotic companionship between human and inorganic intelligence. Such “integral human development” — to put it in the terms of religious discourse — describes a relationship of mutuality in which each shapes the other.
The hope in Rome was that this new ground could be found through what Pope Leo XIV calls “an ethical AI framework” that keeps the new species of intelligent machines — a “what not a who” — within the bounds of human control.
Rage Against The Race To Superintelligence
A wave of appeals against unchecked AI accelerationism is mounting daily. Regulation is also beginning to bite.
In September, a group of thinkers that included Yuval Noah Harari, the Vatican’s own AI expert, Paolo Benanti, and foundational AI scientists Stuart Russell, Yoshua Bengio and Geoffrey Hinton met at the Vatican to chart out the principles that should guide the development of superintelligence. Their appeal was addressed to the Pope and world leaders.
These principles include:
- AI must never be developed or used in ways that threaten, diminish or disqualify human life, dignity or fundamental rights. Human intelligence — our capacity for wisdom, moral reasoning and orientation toward truth and beauty — must never be devalued by artificial processing;
- AI must remain under human control. Building uncontrollable systems or over-delegating decisions is morally unacceptable and must be legally prohibited;
- Only humans have moral and legal agency. AI systems are and must remain legal objects, never subjects;
- AI systems must never be allowed to make life-or-death decisions, whether in military applications during armed conflict or peacetime, or in law enforcement, border control, healthcare or judicial proceedings;
- Developers must design AI with safety, transparency and ethics at its core, not as an afterthought. AI should be designed and independently evaluated to avoid unintentional and catastrophic effects on humans and society, for example through design giving rise to deception, delusion, addiction or loss of autonomy.
As previously discussed in Noema, the European Union has led the precautionary efforts. Its latest set of guardrails, under the bloc’s AI Act, took effect at the beginning of August.
In September, California Gov. Gavin Newsom signed into law the most consequential regulation on AI to date in the U.S. Called the “Transparency in Frontier Artificial Intelligence Act,” it aims to “balance innovation and safety.” Significantly, it was negotiated directly with the state’s Big Tech companies, such as Meta and Google.
The new law includes the following:
- Requires large frontier developers to write, implement, comply with and publish on their websites a ‘frontier AI framework’ to manage, assess and mitigate catastrophic risks, incorporating national and international standards as well as industry consensus on best practices;
- Establishes a new consortium within the state’s Government Operations Agency to develop a framework for creating a public computing cluster. The consortium, called CalCompute, will advance the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by fostering research and innovation;
- Creates a new mechanism for frontier AI companies and the public to report potential critical safety incidents to California’s Office of Emergency Services;
- Protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General’s office.
More recently, an open letter coordinated by the Future of Life Institute called for “a prohibition on the development of superintelligence, not lifted before there is (a) broad scientific consensus that it will be done safely and controllably, and (b) strong public buy-in.”
It was endorsed by a broad array of signatories ranging from the Hollywood actor and producer Joseph Gordon-Levitt to Britain’s Prince Harry to Obama-era national security official Susan Rice to Trump guru Steve Bannon and Apple co-founder Steve Wozniak, among many others.
The AI pioneers who drafted the Papal appeal also signed on. From that group, Russell added a comment to his signature: “This is not a ban or even a moratorium in the usual sense. It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”
Even to the extent this pushback against the AI juggernaut is in some ways more alarmist than warranted, it nonetheless provides necessary ballast against the heretofore asymmetric contest between precaution and accelerationist momentum.
The Abiding Resonance Of Humanism
The contemporary German philosopher Jürgen Habermas has argued that the core values of secular modern democracies, such as human rights and dignity, remain nourished to this day by the religious sources of their origins.
This abiding resonance may lack the hard power of “compute” in the same sense that the Church lacked military divisions during the Cold War. But we should remember it was the soft power of faith behind the Solidarity movement in Poland that, in the end, triumphed over the armed Soviet bloc.
Whether this suggests that the God Heidegger invoked might still save us, or whether Western civilization is, after all, mustering the wherewithal within to save itself through the legacy of Christian humanism, is a distinction without a difference.

