Human History ‘Will End When Men Become Gods’


Yuval Noah Harari, an Israeli historian and the internationally best-selling author of Sapiens, has a new book out about the future of humanity, called Homo Deus. He recently sat down with The WorldPost at a Berggruen Institute salon in Los Angeles. In the following interview, he discusses the new authority of “dataism” and the godlike powers of science to redesign humanity and create a new, inorganic species ― artificial intelligence.

WorldPost: In your previous book, Sapiens, you observed that humans are the only species that can organize themselves around abstract ideas or codes ― myth, religion, ideology. In your new book, Homo Deus, you argue that a new ideology has arisen ― “dataism” ― that is the new organizing principle of humanity.

When big data is married to biology ― happening as we speak ― you worry that it will reduce the biological organism to a set of information that can be organized by programmed algorithms to seek a desired outcome. Those who subscribe to this view that the organism is an algorithm believe that the genome of humans and other species can be designed to order and that, if computers can process and place into patterns more information than the human brain can, then we can also create a new non-biological species ― artificial intelligence.

To be sure, deciphering a deadly virus to stem a spreading plague is something humanity would welcome. But what does it mean to be human in the age of the algorithm if all that it means to be human ― love, empathy, creativity, agony ― falls between lines of code? Are such godlike powers then a great benefit to humanity, or do they portend a dark future?

Yuval Noah Harari: Like every major invention, this one has both good and bad potential. But the scale is completely different. I titled the book Homo Deus because we really are becoming gods in the most literal sense possible. We are acquiring abilities that have always been thought of as divine ― in particular, the ability to create life. And we can do with that whatever we want.

You talked earlier about how humans create networks of cooperation around abstractions. I don’t like the word “abstractions” very much because most people don’t think in abstractions. That is too difficult for them. They think in stories. And the best stories are not abstract; they are concrete. If you think about the great religions that have united large parts of humankind, the gods people believe in are very concrete ― there is an angry old man in the sky, and if I do something wrong, he will punish me.

In the book, I use the term “fiction,” not abstraction, because what really unites humans are fictional stories. That is also the case with the new revolution that is now unfolding. It is not going to be an abstract revolution but a very concrete one.

‘If you have a problem in life, you don’t ask God, you ask Google or Facebook.’

The basic idea of dataism is a shift in authority. Previously, authority resided above the clouds and descended down to the pope, the king or the czar. Then for the last two or three centuries, authority came down from the clouds and took up residence in people’s hearts. Your feelings became the highest source of authority. The emotions of voters in a democracy, not their rationality, became the number one authority in politics. In the economics of the consumer society, it is the feelings of the customer that drive every market. The feelings of the individual are the prime authority in ethics. “If it feels good, do it” is the basic ethical ideal of humanism.

So authority came down from the clouds, moved to the human heart and now authority is shifting back to the Google cloud and the Microsoft cloud. Data, and the ability to analyze it, are the new source of authority. If you have a problem in life, whether it is what to study, whom to marry or whom to vote for, you don’t ask God above or your feelings inside, you ask Google or Facebook. If they have enough data on you, and enough computing power, they already know what you feel and why you feel that way. Based on that, they can allegedly make much better decisions on your behalf than you can on your own.

WorldPost: Is that the ultimate objectification of reality ― one that reduces your identity to only the data that is known or collected about you? Or is it the opposite: subjectivization, the pure reflection of personal choices and preferences fed back to you? Or, compounded by the subjective bias of the algorithm’s inputs, is it both: subjective objectification?

Harari: Do you mean is it true?

WorldPost: What I’m getting at is that there seems to be a double movement going on simultaneously. Data-absorbing, peer-driven social media enables the collection of massive amounts of information about a person, which mathematical algorithms organize into the ultimate objectification of reality. At the same time, we are seeing an explosion of the “subjectivization of facts” ― alternative facts, fake news ― that is unmoored from any objective reality other than the likes or dislikes of your very similar peers.

Harari: I don’t think the “subjectivization of facts” is anything new in what is happening now. This has been going on for thousands of years. All the big religions have been organized around fake news. Just think of the Bible. Fake news lasts forever in some cases.

WorldPost: Eternal fake news…

Harari: In big historical struggles, history does not go to the truth. It goes to the most effective story. And very often, the most effective story is not true. The expectation that people will sooner or later discover that something is untrue is usually not borne out, as the big religions all demonstrate.

With regard to the algorithms, there is a good chance that this, too, will be just a myth ― the idea that they are the highest source of authority, with all the answers. But people will believe it. They will voluntarily, consensually, give the algorithm that kind of authority. And that will be the reality in which we live.

We see it happening all around us. If you apply for a loan at the bank or for a job at a big corporation, very likely your application is being processed by an algorithm and not by a human being. Let’s say the algorithm refuses you, and you are not hired. You go to the company and ask why, and they say, “Because the algorithm said no.” And then you ask, “Why did it say no?” And they will say, “We don’t know. If we thought we could get a good reading by ourselves, we wouldn’t need an algorithm.”

The thing about the new generation of computer algorithms is that machines are now able to learn by themselves. They sift through immense piles of data and they, at least allegedly, find patterns that humans are unable to find, including whether you are a good fit for that job. And we trust that more and more.
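To make that opacity concrete, here is a minimal sketch, purely illustrative and not drawn from the interview: the features, data and labels below are all hypothetical, but the behavior ― a model trained on past decisions that scores a new applicant without producing a human-readable reason ― is the pattern being described.

```python
# Minimal, hypothetical sketch of a self-taught screening model; the
# features, data and labels are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical past applicants: [years_experience, test_score, job_changes].
X = np.array([[5, 88, 2], [1, 72, 5], [8, 91, 1], [2, 65, 4],
              [6, 80, 3], [0, 70, 6], [7, 95, 2], [3, 60, 5]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = hired, 0 = rejected

# No human writes the decision rules; the model infers them from the data.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

applicant = np.array([[4, 78, 4]])
print(model.predict(applicant))  # e.g. [0]: "the algorithm said no"
# The verdict emerges from patterns across a hundred trees; there is no
# single human-readable reason to report back to the rejected applicant.
```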

Harari says dataism represents a shift in authority, from kings and popes and one’s own feelings to data via Facebook and Google.

There are some very good things about this, but also some big dangers. In the 20th century, we had this big fight over statistical discrimination against entire groups of people ― African Americans, women, gays or Jews ― based on faulty information.

People now look back to those days and say, “We must refight those battles.” Yes, perhaps some of them need to be refought. But as a military historian, I know that people tend to prepare themselves for the previous war, and they miss the coming war. The much bigger danger in the coming decades won’t be this group discrimination, but something far more Kafkaesque ― discrimination against individuals. The algorithm doesn’t give you a loan. It doesn’t hire you. It doesn’t like you. It is not discriminating against you because you are Jewish, Muslim or gay, but because you are you.

There is something about your data that the algorithm doesn’t like. It is not about some category you fall into. It is only you. There is something different about you, versus everyone else, that raises a warning sign. And you don’t even know what it is. And even if you do know what it is, you can’t create a political movement around it, because there is no one else in the world who suffers from this particular discrimination.

The other side of the coin that is being talked about widely these days is the capacity to individualize. You can write a book for one person. You can compose music or a movie just for one person. So we are developing the capacity to create for one person but also the capacity to oppress just one person. The Israeli military is extremely excited about the potential of having the first total surveillance system, to be used in the occupied territories. They will actually be able to follow each and every person instead of relying on statistics. 

‘We are developing the capacity to create for one person but also the capacity to oppress just one person.’

WorldPost: Here, too, we have the same dialectic: by missing all those intangibles that make each of us a person, all those things that fall between lines of code that don’t fit into the pattern being searched, individuation by an algorithm is actually a form of depersonalization.

Doesn’t this kind of depersonalization ― particularly when big data and the algorithm merge with biology to reduce a human being to nothing more than an immune system ― prepare the way for a “Brave New Biocracy” that will manage human life from “sperm to worm, womb to tomb”? In short, individuation by an algorithm diminishes, not advances, human autonomy, no?

Harari: Yes. But again, there is both a danger and a promise. There are many good things about these medical algorithms. Today, hundreds of millions of people around the world have no health care. They don’t have a doctor to diagnose a disease and recommend treatment. Within a very short time, someone in a village in Colombia will be able to have a much better AI doctor on their smartphone than the president of the U.S. gets today from human doctors.

The big battle in this regard in the 21st century will be between privacy and health. And health will win. Most people will be willing to give up their privacy in exchange for much better health care, based on 24-hour monitoring of what’s happening inside their bodies.

Very soon people will walk around with biometric sensors on or even inside their bodies and will allow Facebook, the Chinese government or whomever to constantly monitor what’s happening in their bodies. The day the first cancer cell starts to multiply and spread, someone at Google or at the health authority will know and will be able to very easily nip the cancer in the bud. The day a flu epidemic starts, they will immediately know who is carrying it, and they can take very effective, quick and cheap action to prevent it. So the promises are enormous.

In the future, we’ll have “biometric sensors that will allow Facebook, the Chinese government or whomever to constantly monitor what is going on inside [us],” says Harari.

The dangers are also enormous. Just think of a place like North Korea. People will be walking around with biometric bracelets. If you see a picture of Kim Jong Un on a wall and your blood pressure elevates, which the algorithm correlates with some emotion like anger, then that is the end of you.

WorldPost: China is already developing a system of “social credit” that correlates all your observable behavior ― what you buy, whom you talk to, whether you throw trash on the ground ― and gives you a score that will follow you through your life as you apply for college or a home loan. It will also be used to assess political loyalty and monitor official corruption.

Harari: We will see more and more of that everywhere. With all the genuine objections and worries you have expressed, what will ram such a future through every wall of resistance is health. People will voluntarily give up their privacy.

WorldPost: Health care is the idol that confirms belief in the god of dataism.

Harari: Exactly.

‘The big battle of the coming century will be between privacy and health. And health will win.’

WorldPost: How does your idea of dataism relate to the notion of the “singularity”? Do you see the singularity as a kind of scientific Tower of Babel built of hubris, a kind of Anthropocene surge, an algorithmic imperialism over all life? Ecology, on the other hand, proposes an equilibrium between nature and human potential. Where does your idea fit within that matrix?

Harari: Dataism is very close to singularity. I see the singularity as the point beyond which our imagination completely fails, because our imagination itself is only the manipulation of what we already know. There are many things that could bring about the shift to singularity. It could be advances in bioengineering, in machine intelligence or a combination of the two. It could be some completely new technology not yet on the horizon. The key point is that you reach a certain level of technological development that renders irrelevant all of our assumptions about humans and the world, because all of that can be changed.

WorldPost: The ecological perspective is more about equilibrium: it would seek to balance the promise and perils of dataism so we get more of the benefit and less of the darker downside. You seem to be saying we ought to just go with the flow and commit to our mutation.

Harari: I’m not saying singularity or dataism are good. I am only looking at the long trajectory of human history. Humans have been getting more and more out of equilibrium as we advance in time. When you try to manipulate the system even more to bring back balance to an earlier state, you solve some of the problems, but the side effects only increase the disequilibrium. So you have more problems. The human reaction then is that we need even more control, even more manipulation.

Go back to the 19th century and read Marx’s Communist Manifesto ― he says, “All that is solid melts into air.” His reading of history is that the key characteristic of modern society is that it requires constant change and disruption. The implication is that you cannot live in equilibrium. For modern society, equilibrium is death. Everything collapses if you reach a point of equilibrium. The economy, for instance, depends on constant growth. If we reach a point of zero growth and continue with that for more than a few years, the entire system will probably collapse.

WorldPost: Your book Homo Deus, it seems to me, is really a brilliant update of Goethe’s Faust. In that masterpiece of literature, the Earth Spirit puts down Faust’s hubris as a great achiever of earthly accomplishment by saying, “You are equal to the spirit you understand,” meaning humanity’s limited understanding is not at the level of the gods. Do you agree?

Humans, says Harari, are on the verge of achieving what natural selection couldn’t ― creating inorganic life, or AI.

Harari: Not really. Faust, like Frankenstein or “The Matrix,” still has a humanist perspective. These are myths that try to assure humans that there is never going to be anything better than you. If you try to create something better than you, it will backfire and not succeed.

The basic structure of all these morality tales is: Act I, humans try to create utopia through some technological wizardry; Act II, something goes wrong; Act III, dystopia. This is very comforting to humans because it tells them it is impossible to go beyond humanity. The reason I like Aldous Huxley’s Brave New World so much is that it plays with the scenario: Act I, we try to create a utopia; Act II, it succeeds. That is far more frightening ― something will come that is better than us.

WorldPost: But success is a failure that destroys human autonomy and dignity?

Harari: That is an open question. The basic humanist tendency is to think that way. But maybe not.

WorldPost: But all of history up to this point teaches that lesson. You are saying it is different now?

Harari: Going back to the Earth Spirit and Faust, humans are now about to do something that natural selection never managed to do, which is to create inorganic life ― AI. If you look at this in the cosmic terms of 4 billion years of life on Earth, not even in the short term of 50,000 years or so of human history, we are on the verge of breaking out of the organic realm. Then we can go to the Earth Spirit and say, “What do you think about that? We are equal to the spirit we understand, not you.”

Human history began when men created gods. It will end when men become gods.

This interview has been edited for clarity.
