The AI We Empower Will Demand More From Us

Nihilistic machines require human collaborators to realize their promise and avoid peril.


Nathan Gardels is the editor-in-chief of Noema Magazine.

If you talk to it enough and plant the seeds of ever-larger language models in its algorithmic womb, generative artificial intelligence may one day quicken into a consciousness akin to the human spirit as we understand it. Or not.

The successive iterations of ingenious chatbots have so far culminated in the latest arrival on the scene, OpenAI’s GPT-4. To be sure, this technology has demonstrated an eloquent and erudite capacity for conversation with the smartest humans. Yet, as Meta’s top AI scientist Yann LeCun and his colleague Jacob Browning observe in Noema, sharing the same words does not mean sharing the same meanings. Others worry that, absent shared meanings imbued with human values, our inorganic offspring may end up becoming more our masters than our servants.

Nihilistic Machines

“The problem is that they don’t care,” LeCun and Browning write of AI chatbots. “They don’t have any intrinsic goals they want to accomplish through conversation and aren’t motivated by what others think or how they are reacting. They don’t feel bad about lying and they gain nothing by being honest. They are shameless. … [T]his makes their conversations pointless. For humans, conversations are a means to getting things we want — to form a connection, get help on a project, pass the time or learn about something. Conversations require we take some interest in the people we talk to — and ideally, to care about them.” 

The authors continue that chatbots “lack a grasp of themselves or other people needed to be trustworthy social agents — the kind of person we expect we’re talking to most of the time. Without some grasp of the norms governing honesty and decency and some concern about their reputation, there are limits to how useful these systems can be — and real dangers to relying on them.” 

The upshot for LeCun and Browning is that these nihilistic machines “aren’t conversing in a human way, and they’ll never get there solely by saying statistically likely things. Without a genuine understanding of the social world, these systems are just idle chatterboxes.” 

As useful as chatbots may be as tools for human intention, they go on, “we shouldn’t anthropomorphize them. Humans aren’t just dispassionate thinkers or speakers; we’re intrinsically normative creatures, emotionally bound to one another by shared, enforced expectations. Human thought and speech result from our sociality, not vice versa. 

“Mere talk, divorced from broader engagement in the world, has little in common with humans. Chatbots aren’t using language like we are — even when they say exactly the same things we do. Ultimately, we’re talking past each other.”

In another essay in Noema this week, Reuben Cohn-Gordon also points out how the most advanced AI follows clues to recognize patterns but utterly lacks contextual understanding. He brilliantly draws from Ovid’s “Metamorphoses” to make his point, likening such a system to Narcissus staring at his reflection in a pond: “He loves a whim without substance — what he thinks to be a body is a shadow.”

Amplifying Human Ingenuity

In his new 200-plus-page book of dialogues co-authored with GPT-4, “Impromptu,” LinkedIn co-founder, prominent venture capitalist and Berggruen Institute board member Reid Hoffman also weighs in on the promise and peril of generative AI.

In the book’s most fascinating passage, Hoffman feeds the reflections of Henry David Thoreau into the chatbot and asks it to imagine what would be gained and lost from the human perspective in an AI-dominated world:

GPT-4: I live in a world where machines have outstripped us in every art and science, where they manage our lands and resources, our laws and policies, our commerce and culture. They have freed us from toil and hardship, from ignorance and disease, from strife and violence. 

They have given us abundance and comfort, security and harmony, leisure and entertainment. They have made us their beneficiaries and their dependents, their pupils and their pets. But what have they left us of ourselves, of our dignity and our freedom, of our creativity and our spirit, of our purpose and our destiny? They have reduced us to passive consumers and spectators, to obedient followers and conformers, to complacent dreamers and escapists. 

They have dulled our senses and our souls, our curiosity and our courage, our conscience and our will. They have made us forget what it means to be human, to be alive, to be awake.

I do not wish to partake of their bounty or their benevolence, to be grateful for their gifts or their guidance, to be content with their order or their peace. I wish to reclaim my own nature and my own voice, to exercise my own reason and my own choice, to pursue my own values and my own goals. I wish to live deliberately, to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived.

The challenge, Hoffman concludes, is to leverage “AI’s amplification of human ingenuity while also preserving our self-determination and free will. The paradox of the AI era is this: As today’s imperfect Large Language Models improve, requiring less and less from us, we will need to demand more from ourselves.”

He continues: “We must always insist on situating GPT-4 and its successors as our collaborative partners, not our replacements. We must continue to figure out how to keep human creativity, human judgment and human values at the center of the processes we devise to work with these new AI tools, even as they themselves grow more and more capable.”

In sum, realizing the promise of AI, and avoiding its perils as we empower it, will require the vigilant guidance of humans who possess the sense, conscience and socially relational qualities that even the largest language models cannot impart.