How AI Will Advance In The Next Two Decades

Kai-Fu Lee explains how intelligent machines will master context, enable precision medicine — and use vast amounts of energy for computation.


Kai-Fu Lee, author of “AI Superpowers: China, Silicon Valley and the New World Order” and a former Google executive in China, has teamed up with “Waste Tide” author Chen Qiufan to tell the story of AI in the near future. Lee was recently interviewed by Noema Editor-in-Chief Nathan Gardels about their new book, “AI 2041.”

Gardels: You and your coauthor, the celebrated Chinese science-fiction writer Chen Qiufan, have created a new genre with your book, “AI 2041,” by conjoining speculative fiction with analysis of realizable technologies. You call this “scientific fiction,” and Qiufan’s work has been labeled “science fiction realism.” Your take is quite positive, even practically utopian, in contrast to the dystopian view of AI promoted, for example, by Elon Musk, who thinks super-intelligent machines will one day end up ruling us.

The book lays out 10 scenarios for the future. Can you give me one of the most compelling?

Lee: One area that’s poised for a major breakthrough is AI for health care: new drug discovery and, ultimately, new diagnosis and treatment could revamp the entire health care system in a way that improves lives over the next few decades.

Drug discovery is the easy, low-hanging fruit because it doesn’t require any substantial disruption to current medical practices. Clinical trials will be the same, pharmaceuticals will be the same. The doctor will still prescribe drugs and the results will be measured the same way as before. That makes progress go faster. AI can sift through possible molecules, targets and potential diseases. It can sift through previous experiences of how drugs have worked, or not worked. It can discover the molecular structures of drugs that work on different types of people. In doing so, AI can infer and propose new candidates for clinical trials.

Some companies have even started using AI alongside scientists without jobs being displaced or anyone claiming machines are superior to humans; it’s true symbiosis. And AI can drive down the cost of researching treatments for rare diseases that were previously too expensive for pharmaceutical companies to justify. That means rarer diseases can be targeted. It also means that, for common diseases, multiple drugs can be proposed for different types of people who have different family histories or allergies and so on, thereby improving the rate at which people recover from their illnesses.

The overall opportunity for AI in medicine is for it to become a full assistant to the doctor: proposing diagnosis and treatment for specific cases. That’s precision medicine. There’s no doubt in my mind that eventually, with enough data, AI will outperform the great majority of physicians. 

The process will take a long time because personal data is sensitive, introducing AI may disrupt the treatment process and there are legal, ethical and moral implications of using software to treat people. What are the consequences if there’s malpractice, for example? So, the AI presence needs to be carefully crafted as a mere assistant to the doctor, who will make the final call. In this sense, recommendations and information from AI should be considered a form of input to a human decision: just another piece of data.

In the end, the human makes all the decisions. But over the next 20 to 25 years, doctors will realize that AI tools are becoming really sharp. And then they will start rubber-stamping decisions made by AI — they might even become afraid of reversing a decision made by AI or disagreeing with it. 

When that moment comes, humans in general will face a key decision. Are we willing to entrust our lives to AI? 

“There’s no doubt in my mind that eventually, with enough data, AI will outperform the great majority of physicians.”

Gardels: Do you think the applications in the next couple of decades will make it faster to develop vaccines for COVID-like pandemics?

Lee: That is possible but less likely because we don’t yet have a sufficient amount of historical data on COVID to train AI — all the successes and failures of clinical trials, reliable information on what kinds of people with what kinds of underlying illnesses and family histories were infected, whether treatment succeeded or not. The SARS outbreak in 2002 provided too few samples. The Spanish flu was too long ago.

But in time, AI might come to play an almost equal role side by side with scientists, who might propose a vaccine template and then use software to verify it, for example.

Gardels: So, AI assistance in this realm depends on the availability of data, but the technology is there already.

Lee: Yes, that’s right. AI can already be useful in some parts of the problem. Look at DeepMind’s AlphaFold, for example, a deep learning system that makes predictions about protein structures. That is a subset of the problem of coming up with vaccines. With a tool like that, a scientist could come up with vaccines, or at least insights, that speed discovery. Tools like that are secondary for now, but they are improving. 

Dematerialization

Gardels: In your analysis of Chen Qiufan’s story, “Dreaming of Plenitude,” you mention the idea of “dematerialization” — that ever more computing capacity can be compressed into smaller and smaller devices, such as cell phones, enabling them to do vast computations, while the internet of things will provide services virtually for free.

This is challenged by the materials scientist Vaclav Smil, who pointed out to me recently that, while cell phones might weigh less than they used to, there are billions more of them around the world now. So, he argued: “The total amount of materials going into cellphones has gone up, not down. People always make a fundamental mistake between relative and absolute dematerialization. What matters is the absolute energy intensity and use of materials.” 

He was referring not only to the energy intensity of mining and manufacturing all those devices, but also to the massive amounts of energy devoured by the servers that store data for computation. Many of those server farms are located on cheap land in places like Kazakhstan and powered by fossil fuels.

In the end, doesn’t the positive development of AI in a holistic sense depend on the transition to renewable energy, and some constraints on the consumption of services that drive the spread of such devices?

Lee: I agree with this concern. If we want to continue to develop advanced forms of AI, computation capacity must grow rapidly. Just to train an AI algorithm or deep learning model can cost millions of dollars and put a lot of stress on server farms.

I think we’re a little stuck right now because the state of the art requires so much computation. As ordinary companies seek to make breakthroughs like Microsoft and Google and OpenAI have done, the amount of computation power needed will go up. It is therefore important to invest in efficient software and tools that can make top-end technologies more practical, perhaps slightly reducing performance while still providing the computation they need. 

Aside from this, of course, we need to move towards clean energy sources, which will hopefully also be a lot cheaper and more plentiful. In the book, I talked about distributed energy that combines solar with advanced battery technologies beyond lithium. Optimistically, we might reduce energy costs to 10% of what they are today over the next few decades. Even that may not be enough, but we have to balance the two.

“Eventually, doctors may start rubber-stamping decisions made by AI — they might even become afraid of reversing a decision made by AI or disagreeing with it.”

Gardels: So there are two dimensions to this — improving the efficiency of computation while shifting to cleaner energy and storage capacity? 

Lee: Right. It helps tremendously that deep learning AI and its descendants get better with more data and more computational capacity. So if you tweak the algorithm a little bit each time, then add more data and more compute, performance can quickly become much better and more efficient. We have never seen a technology that worked like that. Inventions like electricity or the internet were zero and one: When you didn’t have it, you didn’t have it; when you had it, you had it. They don’t improve at the geometric pace that AI does. 
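Lee’s point about geometric improvement can be made concrete with the power-law scaling behavior researchers have measured empirically for deep learning models. As a stylized illustration only, with placeholder constants that come from neither the interview nor the book:

% A stylized power law relating test loss L to training compute C,
% in the spirit of empirically measured deep-learning scaling laws.
% C_0 and alpha are illustrative placeholders, not measured values.
\[
  L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}
  \qquad\Longrightarrow\qquad
  L(2C) = 2^{-\alpha}\, L(C).
\]
% Every doubling of compute multiplies the loss by the same factor,
% so the gains compound steadily rather than arriving once, which is
% the contrast Lee draws with electricity or the internet.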

There is the flip side as well. We have to consider how to make AI smarter without just throwing more data and computing power at it. Unless we figure out how to do that, we may never reach a true artificial general intelligence. 

Gardels: What are the limits to deep learning AI? Some critics argue that while intelligent machines can outperform humans in manifold tasks, as well as learn new ones, they literally do not “understand” what they are doing — “unthinking intelligence.”

Understanding comes from context. The uniquely human labor of filling in the cracks between bits of data with unprogrammable awareness is what creates meaning and constitutes a whole reality. 

Furthermore, some say, the more our minds are trained by daily interactions with digital technologies to think like algorithms that lack understanding, the less intelligent and more artificial we ourselves will also become.

“Maybe we should consider the reality that humans and AI are completely different.”

Lee: I think deep learning has demonstrated it can master some notion of context, though not in the same way as human understanding. If you look at self-supervised learning, as we see with GPT-3 and other technologies, models are basically trained without human supervision, drawing on the context of the data they have available. That is, you don’t tell the machine, “this is a dog, this is a cat, this is a person.” You don’t tell it the ground truth. You just say, “here’s a lot of text, learn what you can.”

Let’s say you’re reading the last chapter in a book. There are deep learning algorithms now that can predict the next sentence or answer a question about something that happened previously by using context. Today’s technology will, much of the time, produce an answer as good as or better than mine might be, while sometimes it would be just nonsense. That cannot be done without some notion of context. It’s not just about memorizing millions of words. You need to know which ones matter. An AI built by Microsoft and Alibaba even outperformed humans on the Stanford Question Answering Dataset in 2018. That’s pretty impressive. It shows that AI can detect some context.
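To make Lee’s description concrete, here is a deliberately tiny sketch of the self-supervised idea in Python. It is my illustration, not anything from GPT-3 or the book: a simple bigram counter in which the raw text supplies its own training signal, each word serving as the label for the word before it.

from collections import Counter, defaultdict

# Toy self-supervised learner: no human labels anything. The raw
# text itself supplies the training signal, since each word acts
# as the "ground truth" for the word that precedes it.
corpus = (
    "the doctor reads the chart and the doctor proposes a treatment "
    "and the patient reads the proposal"
)

# Count how often each word follows each context word.
following = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of the given context word."""
    if word not in following:
        return "<unknown>"
    return following[word].most_common(1)[0][0]

print(predict_next("the"))      # "doctor": seen most often after "the"
print(predict_next("patient"))  # "reads": the only continuation observed

A system like GPT-3 replaces the counter with a neural network over far longer contexts, but the supervision-free recipe, predicting what comes next from what came before, is the same idea at vastly larger scale.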

That said, I don’t contend that AI has a soul or self-awareness, knows what it’s doing, or has emotions and beliefs and communicates with intent. I don’t think it’s doing that. It’s just figuring out a way to find context.

But considering how much progress we’ve already made, think about where we’ll be in 20 or 30 years. There will be many more improvements on top of today’s self-supervised learning. 

Self-supervised learning overcomes deep learning’s previous requirement that everything carry an accurate expert label, which put a limit on how much data could be processed. The fact that AI can be trained with no human labeling suggests just how powerful deep learning technology is. If we throw more data and compute at it, it gets better and better.

But AI being able to recognize context doesn’t mean it will supersede humans or reach singularity or AGI. Can it be us? Can it do everything we can do? At each stage of development, we ask questions that have been framed in terms of what we humans can do: Can it play chess? If it can play chess, can it play Go? If it can achieve reading comprehension, will it be able to reach self-awareness such as we have? 

It’s somewhat narcissistic, if natural, for humans to compare everything to us. That’s why, in most science fiction, every object you see — whether it’s aliens or pets or a robot — somehow has a familiar form. We want to see everything in our image. Maybe we should consider the reality that humans and AI are completely different. 

Advancements in AI are not necessarily creating a superset of human qualities. We do the things we do because of the way we apprehend the world, because of the way our physiology functions. Maybe, encoded in our DNA, there is something called a soul — distinctly human self-awareness and emotions that formed instinct and helped us survive. But that doesn’t mean there could not be another form of organism that is just as real. We should be open to that. What we should care more about is what AI can do that we never thought people could do, and how to make use of that. That is a much more constructive angle.

“Maybe, encoded in our DNA, there is something called a soul — distinctly human self-awareness and emotions that formed instinct and helped us survive.”

Gardels: Your last book, “AI Superpowers,” expressed a hope for cooperation between the two leading nations in developing this technology, the U.S. and China. Now, competition between them has grown fierce. Will that inhibit or spur the kind of advances you see taking place by 2041?

Lee: The perceived competition at a geopolitical level is problematic because it’s potentially separating the world into two sets of technologies and standards that are not interoperable. That is clearly inefficient. On the other hand, that dynamic creates more funding for the technology in both countries, which is a good thing. I would argue that Sputnik helped advance both American and Soviet space efforts in this way. 

When I wrote “AI Superpowers,” I didn’t anticipate that competition would be like this. I had hoped that AI advances in the West would wake China up. And then as China developed its internet companies infused with AI in ways more advanced than the West, it would wake the West up. That mutual awareness, I hoped, would forge new ways of working together. In health care, for example. China has more data, the U.S. more advanced medical technologies. Joining these assets in a common direction would be a big lift for humanity as a whole.

Well, that has not happened. The best we can hope for in the coming few years is that there are problems identified by both countries that are important enough to collaborate on, like climate or health care. 

One bright point is that, despite the geopolitical challenges, academics and scientists are still working together. At large AI conferences, you can see American, European and Chinese researchers continuing to share their ideas so that people can stand on the shoulders of others. Hopefully, we’ll reach a state where China and the U.S. will be very competitive in certain areas and collaborate in other areas. We’ll have to see if we can get there.