AI May Be Smarter Than We Think

As with evolution, mindless learning drives intelligent machines.

Nathan Gardels is the editor-in-chief of Noema Magazine.

The advances of smart machines have more sharply defined how largely unexceptional humans are among the multiple intelligences, both natural and artificial, that surround us. Centuries of refining instrumental reason, which emerged from the human-centric Enlightenment and enabled us to build machines capable of deep learning, have, paradoxically, taught us our own limitations.

In an essay in Noema, Jacob Browning traces how “mindless learning” through distributed experiences of trial and error — instead of the “minded learning” of conscious forethought — is driving the far-reaching advances of AI in much the same way as natural and social evolution itself takes place. “Mindless learning is more natural and commonplace than the minded variety we value so highly,” he writes. “The history of human tools and technologies … reveals that conscious deliberation plays a much less prominent role than trial and error.” 

Some humility seems in order. “Our capacity to reason so impressed Enlightenment philosophers that they took this as the distinctive character of thought — and one exclusive to humans,” he says. “The Enlightenment approach often simply identified the human by its impressive reasoning capacities — a person understood as synonymous with their mind.”

He goes on: “This led to the Enlightenment view that took the mind as the motor of history: Where other species toil blindly, humans decide their own destiny … this picture of ourselves held that our minds made us substantively different and better than mere nature — that our thinking explains all learning, and thus our brilliant minds explain ‘progress.’”

“Less anthropocentric thinkers,” notes Browning, didn’t buy it. They argued that “human learning is better understood as similar to the stimulus-responsive learning seen in animals, which hinges on creating associations between arbitrary actions that become lifelong patterns of behavior.” In other words, learning in nature does not revolve around conscious thought; it is about appropriate adaptation to a changing environment.

Browning contends this is true for social evolution as well. “‘Progress’ — if it is appropriate to use the term — in politics, language, law and science results not from any grand plan but instead from countless, undirected interactions over time that adaptively shape groups towards some stable equilibrium amongst themselves and their environment.”

This, too, is how artificial intelligence learns to solve problems without thinking.
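To make the contrast concrete, here is a minimal sketch (mine, not Browning's) of learning by blind variation and selection: a loop that matches a target string with no model, no plan and no understanding of its goal. Errors are simply discarded and accidental successes retained.

```python
# A toy "mindless learner": random mutation plus selection, no forethought.
# (Illustrative sketch only; the essay describes the idea, not this code.)
import random

TARGET = "adaptation"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def score(candidate: str) -> int:
    """Count positions that already match the target (the 'environment')."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    """Blind trial: change one random character to a random letter."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from noise and keep whatever happens to work. Nothing here
# deliberates; selection alone accumulates the solution.
current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while score(current) < len(TARGET):
    trial = mutate(current)
    if score(trial) >= score(current):  # retain success, discard failure
        current = trial

print(current)  # converges on "adaptation"
```

Deep learning operates at vastly greater scale and with far subtler update rules, but the family resemblance holds: improvement accumulates through feedback, not deliberation.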

What does this insight mean for the distinctive capacities of us humans? “Rather than singling out the human,” Browning suggests, “we need to identify those traits essential for learning to solve problems without exhaustive trial and error. The task is figuring out how minded learning plays an essential role in minimizing failures. Simulations sidestep fatal trials; discovering necessary connections rules out pointless efforts; communicating discards erroneous solutions; and teaching passes on success.”

Yet, for Browning, it would be a mistake to confine AI to “simply imitating human intelligence. Evolution is fundamentally limited because it can only build on solutions it has already found … Constraining machines to retrace our steps — or the steps of any other organism — would squander AI’s true potential: leaping to strange new regions and exploiting dimensions of intelligence unavailable to other beings.”

In other words, to the extent we have finally come to understand the narrow limits of human reason, we should not stop at programming machines in our image, but should enable them to transcend what we know through their own capacities for mindless learning. AI may end up being smarter than we think.

I discussed this once with Yuval Noah Harari. Here is a small excerpt from that exchange:

Gardels: Your book “Homo Deus,” it seems to me, is really a brilliant update of Goethe’s “Faust.” In that masterpiece of literature, the Earth Spirit puts down Faust’s hubris as a great achiever of earthly accomplishment by saying, “You are equal to the spirit you understand,” meaning humans’ limited understanding is not at the level of the gods. Do you agree?

Harari: Not really. Faust, like Frankenstein or “The Matrix,” still has a humanist perspective. These are myths that try to assure humans that there is never going to be anything better than you. If you try to create something better than you, it will backfire and not succeed.

The basic structure of all these morality tales is: Act I, humans try to create utopia by some technological wizardry; Act II, something goes wrong; Act III, dystopia. This is very comforting to humans because it tells them it is impossible to go beyond you. The reason I like Aldous Huxley’s “Brave New World” so much is that it plays with the scenario: Act I, we try to create a utopia; Act II, it succeeds. That is far more frightening ― something will come that is better than before.

Gardels: But success is a failure that destroys human autonomy and dignity?

Harari: That is an open question. The basic humanist tendency is to think that way. But maybe not.

Gardels: But all of history up to this point teaches that lesson. You are saying it is different now?

Harari: Going back to the Earth Spirit and Faust, humans are now about to do something that natural selection never managed to do, which is to create inorganic life ― AI. If you look at this in the cosmic terms of 4 billion years of life on Earth, not even in the short term of 50,000 years or so of human history, we are on the verge of breaking out of the organic realm. Then we can go to the Earth Spirit and say, “What do you think about that? We are equal to the spirit we understand, not you.”

Human history began when men created gods. It will end when men become gods.