Cognizant Machines: A What Is Not A Who

AI takes its place among, and may conjoin with, other multiple intelligences.

Ana Yael for Noema Magazine

Nathan Gardels is the editor-in-chief of Noema Magazine.

In the phenomenology of Edmund Husserl, noema is the meaning we assign to the object of a thought through a directed “sense” of perception that represents reality in our minds. This mental act of intent — to take what we know and apply it to unfolding experience — is, for him, the core of consciousness. 

To further explore that notion in our contemporary context, Noema has curated a debate over whether and how this human way of apprehending the world around us can become a quality of machines through general artificial intelligence.

Three recent essays by leading technologists and philosophers register where we are in pursuit of general AI — at least to the extent we understand what we’ve discovered about what differentiates our own mind from intelligent artifice of our own creation. 

“At the heart of this debate,” write Yann LeCun, the chief AI scientist at Meta, and NYU postdoc Jacob Browning, “are two different visions of the role of symbols in intelligence, both biological and mechanical: One holds that symbolic reasoning must be hardcoded from the outset, and the other holds it can be learned through experience by machines and humans alike. As such, the stakes are not just about the most practical way forward, but also how we should understand human intelligence — and, thus, how we should pursue human-level artificial intelligence.”

From Neural Networks To Symbolic Reasoning

As the authors point out, “the dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms which excel at discerning and utilizing patterns in data.”

Critics of this approach argue that its “insurmountable wall” is “symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.).” 
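To make the quoted rules concrete, here is a minimal Python sketch (an illustration, not an example from LeCun and Browning's essay) of the kind of rule-bound symbol manipulation they describe: adding two numbers column by column, starting from the furthest right and carrying the extra value to the column on the left.

```python
# Rule-bound, step-by-step symbol manipulation: column addition with carrying,
# as taught in grade school. Digits are treated purely as symbols rewritten by rules.

def add_by_columns(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings, column by column."""
    a, b = a.zfill(len(b)), b.zfill(len(a))       # pad the shorter number with zeros
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):  # start at the furthest right column
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))            # write down the digit for this column
        carry = total // 10                       # carry the extra value to the left
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_columns("478", "356"))  # prints 834
```

Whether such rules must be built in from the outset or can be learned from experience is precisely the disagreement the essay describes.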

Such reasoning would enable logical inferences that can apply what has been learned to unprogrammed contingencies, thus “completing patterns” by connecting the dots. LeCun and Browning argue that, as with the evolution of the human mind itself, in time and with manifold experiences, this ability may emerge as well from the neural networks of intelligent machines. 

“Contemporary large language models — such as GPT-3 and LaMDA — show the potential of this approach,” they contend. “They are capable of impressive abilities to manipulate symbols, displaying some level of common-sense reasoning, compositionality, multilingual competency, some logical and mathematical abilities, and even creepy capacities to mimic the dead. If you’re inclined to take symbolic reasoning as coming in degrees, this is incredibly exciting.”

Hybrid Intelligence

“The Algebraic Mind” author Gary Marcus has been the chief doubter of the deep learning approach. He argues this week in Noema that LeCun and Browning have, in their essay, come over to his point of view: General AI can only be realized through a hybrid of neural networks and symbolic reasoning.

Where they continue to disagree is how symbolic reasoning arises:

Either symbol manipulation itself is directly innate, or something else — something we haven’t discovered yet — is innate, and that something else indirectly enables the acquisition of symbol manipulation. All of our efforts should be focused on discovering that possibly indirect basis. …

In the 2010s, symbol manipulation was a dirty word among deep learning proponents; in the 2020s, understanding where it comes from should be our top priority. With even the most ardent partisans of neural nets now recognizing the importance of symbol manipulation for achieving AI, we can finally focus on the real issues at hand, which are precisely the ones the neurosymbolic community has always been focused on: how can you get data-driven learning and abstract, symbolic representations to work together in harmony in a single, more powerful intelligence?

When Speech Yielded To The Written Word

As a purely speculative aside, there is an intriguing parallel suggested by this debate with those thinkers who have contemplated the origins of what Karl Jaspers labeled “the Axial Age,” when all the great religions and philosophies were born over two millennia ago — Confucianism in China, the Upanishads and Buddhism in India, Homer’s Greece and the Hebrew prophets. 

The philosopher Charles Taylor associates the breakthroughs of consciousness in that era with the arrival of written language. In his view, access to the stored memories of this first cloud technology enabled the interiority of sustained reflection from which symbolic competencies evolved. 

This “transcendence” beyond oral narrative myth narrowly grounded in one’s own immediate circumstance and experience gave rise to what the sociologist Robert Bellah called “theoretic culture” — a mental organization of the world at large into the abstraction of symbols. The universalization of abstraction, in turn and over a long period of time, enabled the emergence of systems of thought ranging from monotheistic religions to the scientific reasoning of the Enlightenment.

Might AI, not unlike written language in the transition from oral culture, be the midwife to the next step of evolution? As has been written in this column before, we have only become aware of climate change through planetary computation that abstractly models the Earthly organism beyond what any of us could conceive out of our own un-encompassing knowledge or direct experience.

Anthropomorphic Projection

In another essay, philosopher Benjamin Bratton and Blaise Agüera y Arcas, a vice president at Google Research, warn us not to get ahead of ourselves through what they call “motivated anthropomorphic projection” that, like a Hollywood sci-fi screenwriter imagining aliens, assigns human qualities to an entirely different kind of intelligence than the one we possess.

Their reflections come in the wake of a recent proclamation by another Google engineer, Blake Lemoine, that the LaMDA chatbot, which is built on a large language model, is “conscious, sentient and a person.” They agree that AI may be conscious in some way, but that does not make “a what” into “a who” like us. A light sensor is not the same as human vision, to cite their example.

For Bratton and Agüera y Arcas, it comes down in the end to language as the “cognitive infrastructure” that can comprehend patterns, referential context and the relationality among them when facing novel events.

“There are already many kinds of languages. There are internal languages that may be unrelated to external communication. There are bird songs, musical scores and mathematical notation, none of which have the same kinds of correspondences to real-world referents,” they observe. 

As an “executable” translation of human language, code does not produce the same kind of intelligence that emerges from human consciousness, but it is intelligence nonetheless. What is most likely to emerge, in their view, is not “artificial” intelligence, in which machines become more human, but “synthetic” intelligence, which fuses both.

As AI further develops, whether through human prompting or through a capacity to guide its own evolution by acquiring a sense of itself in the world, what is clear is that it is well on the way to taking its place alongside, and perhaps conjoining and becoming synthesized with, other intelligences, from Homo sapiens to insects to forests to the planetary organism itself.