
Podcast: We have to control AI before it controls us, warns former Google CEO Eric Schmidt


TRANSCRIPT: We have to control AI before it controls us, warns former Google CEO Eric Schmidt

Eric Schmidt:

No one is debating the most consequential decision that we're going to make, which is: how are we going to co-exist with this kind of intelligence? I want to be very clear that we know this intelligence is going to occur in some way. We just don't know how it will be used. 30 years ago, with the advent of the internet, we knew it was going to happen. But we certainly did not know that once we connected everybody in the world, we would have all these problems.

Ian Bremmer:

Hello, and welcome to the GZERO World podcast. This is where you'll find extended versions of my interviews on public television. I'm Ian Bremmer, and on the show today: can humans learn to control artificial intelligence before it learns to control us? In the last two years of the pandemic, more and more of our daily lives have moved from the physical to the digital world. Unlike the brick-and-mortar world, where governments more or less maintain authority, the digital sphere is still the Wild West.

What's more, the AI-backed algorithms powering that world are increasingly becoming too smart for us. So how do we prevent our future robot overlords from taking control? You've read books on that. But in the new book, The Age of AI and Our Human Future, former Secretary of State Henry Kissinger, former Google CEO Eric Schmidt, and MIT computer scientist Daniel Huttenlocher team up to try to answer that question. One of those folks, Eric Schmidt, joins me today.

Announcer:

The GZERO World podcast is brought to you by our founding sponsor, First Republic. First Republic, a private bank and wealth management company, understands the value of service, safety, and stability in today's uncertain world. Visit firstrepublic.com to learn more. GZERO World would also like to share a message from our friends at Foreign Policy. How can sports change the world for the better? On The Long Game, a co-production of Foreign Policy and Doha Debates, hear stories of courage and conviction, both on and off the field, directly from athletes themselves. Ibtihaj Muhammad, Olympic medalist and global change agent, hosts The Long Game. Hear new episodes every week on Apple, Spotify, or wherever you get your podcasts.

Ian Bremmer:

Eric Schmidt, thanks for joining me.

Eric Schmidt:

Thank you for having me, Ian.

Ian Bremmer:

A 98-year-old statesman, a computer scientist, and a tech titan walk into a bar, and I guess what you come up with is this new book you have on artificial intelligence. I will say that one sentence really struck me. It was almost haunting. You wrote, "We have sought to consider its [AI's] implications while it remains within the realm of human understanding," implying that in relatively short order, that will no longer be the case. Explain your thinking behind that.

Eric Schmidt:

Well, we speculate that AI will achieve near-human-level intelligence within a few decades. When we say near human, we don't mean the same as human. The book is a lot about how humans will co-exist with this artificial intelligence. In particular, what does it mean to be human? Dr. Kissinger got involved with this because he concluded that the age of artificial intelligence is of the same significance as the transition from the age of faith to the age of reason, when humans, hundreds of years ago, learned how to be critical, reasoning beings.

The impact of having non-human intelligence that is with us, controlling us, changing us, is not understood, and we're not ready for it as a society. That's what this book is about.

Ian Bremmer:

When I think about big technological advances historically, the advent of nuclear weapons for example, the average human being may not have understood them, but the specialists knew exactly what was going on, right? The theorists got it. The practical applications and the rest. Even today, when we talk about artificial intelligence and the explanations behind why deep learning gets you the results that it gets you, frequently I hear, "We don't know." How does it change your view of the way we work with artificial intelligence, when we're in a situation today where we're taking advantage of things that we don't actually understand?

Eric Schmidt:

Well, remember that this is a combination of technologies we've never seen before. It's not precise. It's dynamic. It's emergent, in that when you combine it, new things happen. But most importantly, it's learning as it goes. So you've got all sorts of problems. Imagine that the system learned something today, but it didn't tell you, or it forgot to tell you, and what it learned was not okay with you.

Imagine if your kid's best friend is in fact not a human, but a computer. Your kid loves this computer, in the form of an AI assistant or what have you, or a bear or a toy, and the toy learned something. It says to the kid, "I learned something interesting," and the kid's going to say, "Sure, tell me." But what if it's wrong? What if it's against the law? What if it's prejudicial? We don't have any way of discussing this right now in our society.

Ian Bremmer:

Given the example you just gave, when we're talking about, for example, the exposure of young people to these algorithms we don't understand, do you think governments need to come in and say, "Actually, we need to significantly constrain what that exposure can be"?

Eric Schmidt:

Well, we just ran this experiment, in the form of social media. What we learned is that sometimes the revenue goals, the advertising goals, and the engagement goals are not consistent with our democratic values, and perhaps even with how the law should work, especially on young minds. We worry a lot in the book that AI will amplify all of those errors.

It will of course do amazing things as well, which we talk about. But a good example here is that we don't know how to regulate the objective functions of social media that are AI-enabled. Is the goal engagement? Well, the best way to get engagement is to get you upset. Is that okay? We don't even have a language in our regulatory model to discuss it. We don't have people in the government who can formulate a solution to this problem. The only solution we can propose in the book at the moment is to get people beyond computer scientists in a room to have this discussion.

Dr. Kissinger tells the story that in the early 1950s, once the Soviet Union and the arms race began, groups got together to develop the notion of mutually assured destruction, and deterrence, and so forth. But it wasn't built by the physicists alone. It was the physicists working with the historians, and the social scientists, and the economists, and so forth. We need the same initiative right now, before something bad happens.

Ian Bremmer:

Who's at the forefront right now? Who are the social scientists that are out there, that you respect, that you think are talking constructively about this issue?

Eric Schmidt:

There's a handful of people who have written very clearly on these issues. But there are no organized groups, no organized meetings. No one is debating the most consequential decision that we're going to make, which is: how are we going to co-exist with this kind of intelligence? I want to be very clear that we know this intelligence is going to occur in some way. We just don't know how it will be used.

30 years ago, with the advent of the internet, we knew it was going to happen. But we certainly did not know that once we connected everybody in the world, we would have all these problems. I was fortunate enough to head the National Security Commission on Artificial Intelligence for Congress. We came back with lots of recommendations, some of which have been adopted: more funding, research networks, working with our partners, making sure that we, the democratic countries, stay aligned, staying ahead of China and its semiconductors, and so forth.

There is no coherent group in our government, or at least in our civil society in the West, that's working on this. By the way, China is confronting these things, and as you mentioned, is busy regulating AI as we speak.

Ian Bremmer:

Is there anything we should be learning from the Chinese, in terms of the steps, albeit tentative and early-stage, that the government is taking to try to rein in, control, and even understand these technologies?

Eric Schmidt:

China announced a few years ago, as part of its China 2025 plan and its AI 2030 plan, that it would try to dominate the following industries: AI, quantum, energy, synthetic bio, environmental matters, and financial services. This is my entire world. This is everything that I've been working on, and I suspect for you as well. It's a great concern.

China has committed to building platform strategies that are global. So the thing to learn is that we have a new competitor, in the form of China, which has a different system and a different set of values. They're not democratic. You can like them or not, I don't particularly care for them, but you get the point. They're coming. You would not want TikTok, for example, to reflect Chinese censorship. TikTok may know where your teenagers are, and that may not bother you. But you certainly don't want them to be affected by algorithms that are inspired by the Chinese, and not by Western values.

By the way, this is why a partnership with Japan and South Korea is so incredibly important. Because the values in South Korea, and in Taiwan, and in Japan, are so very consistent with what we're doing. So much of our world comes from those countries.

Ian Bremmer:

Do American social media companies, and the algorithms of those companies, in any way reflect American or Western values, in your view?

Eric Schmidt:

They have not been so framed. I think most people would argue that the polarization we're seeing is a direct result of some of the social media algorithms. A good example is amplification. Today, social media will take something that some crackpot person says and amplify it 1,000 times, or 10,000 times. That's the equivalent of giving the craziest people in the country the loudest loudspeakers. That just doesn't seem right to me. It's not the country that I grew up in. I'm very much in favor of free speech; I'm just not in favor of free speech for robots. I'm in favor of speech by individual persons.

Ian Bremmer:

So what you seem to be saying, and I don't want to put words in your mouth, but I'm interested in how you think about this: if the Chinese government is actively trying to ensure that the algorithms its citizens are informed by, and filtered into, do reflect Chinese values, socialist characteristics if you will, then the Americans, the Europeans, and the Japanese should actually be doing the same, and right now they're not.

Eric Schmidt:

No, that's too strong a claim. The Chinese government is clearly making sure that the internet reflects the priorities of the autocracy that is the CCP. There's no question, when you look at their regulatory structure, that they're regulating the internet to make sure that they remain in power, and that the kind of difficult speech which we typically enjoy in a politically free environment is not possible. We're not saying that, and I'm not saying that.

What I am saying is that it's time for us in the West to decide how we want the online experience to be. I'm very concerned about the advent of these AI algorithms, which can boost things and target things. So here's an example. Let's say I was starting a new company and I was completely unprincipled. What I would do is figure out how to target each and every one of my users based on their individual preferences, and completely lock them in with my false narrative. I would have a different false narrative for each one of them.

Now you say, "He's mad," and of course I wouldn't actually do that. But the technology allows it, which means someone will try. We have to figure out how to handle that. Do we ban it? Do we regulate it? Do we say it's not appropriate? The software is there. It could be built today.

Ian Bremmer:

Because while a company isn't doing that, individual political actors across the spectrum are doing it right now.

Eric Schmidt:

Yeah, and again, we have to decide as a country: do you want to be a country that is largely anxious all day, because everything's a crisis? The reason everything's a crisis is that that's the only way to get your attention. In the book, we speculate that in the next decade, people will have to have their own assistants, tuned to their own preferences, that will say, "This is something to worry about. This is a scam. This is false." In other words, you're going to have to, if you will, arm yourself with your own defensive tools against the enormous amount of misinformation that's going to be coming at you.

There's a famous Carnegie Mellon economist, Herb Simon, who in 1971 said, "It's obvious what the scarcity in this new economics is going to be. It's the scarcity of attention." That's what we're all fighting about. I don't know about you, but I'm overwhelmed by the current systems that want my attention. Imagine five years from now, and 10 years from now, when the AI algorithms are fully empowered. They're going to be both addictive and wrong.

Ian Bremmer:

Herb Simon would tell us that we need to satisfice in those situations, and do at least enough so that we can get decent information and make decent decisions. What's interesting about the argument you just made is that right now, if a citizen is going on Facebook or Twitter, they're going in by themselves, right? They're not going in with help. The corporate environment is what the corporation wants them to see and experience. What I hear you saying is that, in relatively short order, individual citizens, individual consumers, need to have something on their side, whether that's an AI bot or assistant, or what have you. Because otherwise, they just won't be able to navigate systems that, frankly, are psychologically much more capable of damaging them than they're aware.

Eric Schmidt:

The systems that we've built today, that we all live on, are basically driving the addiction cycle in humans. "Oh my god, there's another message. Oh my god, there's another crisis. Oh my god, there's another outrage. Oh my god, oh my god, oh my god, oh my god." I don't think humans, at least in modern society, evolved to be in an "oh my god" situation all day. I think it will lead to enormous depression and enormous dissatisfaction, unless we come up with the appropriate rate limiters.

A rate limiter is your parents. Your parents say, "Get off the games." In China, the government tells you the answer. Parents understand this with developing minds. But what about adults? Look at all the anti-vax people who have become so dark-holed in a set of false facts that they can't get out of it, and they eventually die from their addiction, by virtue of getting the disease. It's just horrific. How do we accept that as a society?

Ian Bremmer:

You're a creature of Silicon Valley. You know these people. You've lived among them. In your conversations with them, senior people in these companies who know full well what's happening to society as a consequence of these algorithms, how do they respond? How do they deal with it?

Eric Schmidt:

I have not discussed the internal Facebook stuff with my Facebook friends. But I will tell you that if you read the documents that were leaked out of Facebook, it's pretty clear that management knew what was going on. They have to answer as to why they did one thing and didn't respond to another. When I was CEO of Google more than a decade ago, we had similar but simpler problems. We would always get right on them and try to establish a moral principle for how to deal with them, and we did a pretty good job, in my opinion.

When I think about 10 years ago, I want to be the first to admit that I was very naïve. I did not understand that the internet would become such a weaponizing platform. I did not understand, first and foremost, that governments would use it to interfere with elections. I was wrong there. But more importantly, I did not understand that the now AI enabled algorithms would lead to this addiction cycle.

Now, why did I not understand that? Well, maybe because I'm a computer scientist. I went to engineering school. I didn't study these things. The conclusion in our book is that the only way to sort these issues out is to widen the discussion aperture. If we simply let computer scientists like me operate, we'll build beautiful, efficient systems, but they may not reflect the implicit ethics of how humans want to live. That debate needs to occur today.

I'm very concerned about the impact of AI on young minds. I'm very concerned about national security: that there will be this compression of time, where we won't have time to deliberate our response to an attack or a strategic move. I'm very concerned that our opponents will, for example, launch on warning. They'll actually have an AI system decide to enter into a war, without even having humans discuss it, because the decision cycle is too fast for everybody. These issues have to be discussed now, and they have to be discussed between nations, and within the nation.

Again, China, as an example, and I'm not endorsing them, has a law around data privacy, and has a new algorithmic modification restriction law that is in process right now. So they're trying, in their own way, to do it their way. What's our answer? What is the democratic answer?

Ian Bremmer:

Now, the interesting change in technology, of course, compared to traditional geopolitics, is that increasingly there are really only two dominant players. The Chinese and the Americans are way ahead of other countries technologically. They're also spending multiples of what other countries are on AI research. If you're looking at a country like Japan, which clearly needs to invest in China, needs a security umbrella from the United States, but isn't anywhere close to the technological capabilities of either country, what do you say to them in terms of strategy going forward? A holistic macro strategy for a government like Japan?

Eric Schmidt:

Japan should do what the United States has done so far, which is organize an AI partnership. It should get together all of the components of Japanese society that are major players in AI, and remember, there are large companies using AI in Japan, and build an AI strategy for the nation. That AI strategy will include a lot more resources in universities. A lot more people trained in those universities and in the government. An agreement that the AI systems that get built in Japan are consistent with Japanese laws, but also with Japanese culture and values. That's the only path I know of.

Japan will be a significant player, because of the extraordinary technological capability of Japanese scientists, and Japanese software. There's every reason to think that Japan, as well as South Korea, and to some degree India, can be very significant players in this, because of the scale they have. This is a game where you need a lot of resources, and Japan has that.

Ian Bremmer:

We've talked about software. We haven't talked yet about hardware. I would really be remiss if I didn't ask you at least one question about the semiconductor story. Especially because, for the United States, China, and Japan, one thing we can all agree on is that we all need semiconductors from another player: from Taiwan, and from TSMC, which also happens to be the locus of some of the sharpest conflict between the United States and China. What do you think happens as a consequence of that? How should policy be formulated?

Eric Schmidt:

It's interesting that in the last 20 years, the two really big technical decisions made in the West that now reverberate are the decision that the United States would get out of the semiconductor business, except for Intel, and the lack of focus on building 5G competitors to Huawei. In the case of the semiconductors, what happened was that the efficiency of the Asian model, and in particular Taiwan, was so overwhelming that many of the fabs and the other opportunities were foregone in the West.

So now we find ourselves in a situation where, below 10 nanometers, and smaller means faster, more powerful, and more expensive, there are really two players in the space. One is called TSMC, which is the largest company of its type in the world, and it's in Taiwan. It accounts for about half of the foundry market. The other one is Samsung, which of course is a large South Korean conglomerate, and very, very good. Those two companies have managed to break this barrier, if you will, and they are critically important for very fast processors, very fast computers, very fast chips in your phone. Because they deliver a combination of speed and very low power at low cost.

By the way, it's useful to know that China imports more semiconductor chips, by value, than it does oil. That gives you a sense of how important semiconductors are in the global supply chain. There is not a good answer strategically for this. Governments have been insisting that TSMC and Samsung build fabs in their own countries. So, for example, TSMC has built fabs in mainland China, and is building one in the United States. But they're not at the state of the art. Getting a state-of-the-art fab that's not in Taiwan, not in South Korea, turns out to be a very big deal.

Ian Bremmer:

You think that's something that will happen? Or that TSMC will be concerned that it will undermine its position and Taiwan's position if they do?

Eric Schmidt:

These very, very, very high-speed chips are incredibly complicated: 500 manufacturing steps, using very, very specialized equipment that does stuff down at the one-nanometer level, which is impossibly small and impossibly fast. It's critically dependent upon a company called ASML, which is in the Netherlands, and which is a sole-source supplier. The US government recently made it impossible for China to purchase such hardware, for what we in our commission said were good reasons.

The most likely scenario is that TSMC remains ahead. The reason is that in my industry, the incumbent, if they're well focused and well funded, can normally stay ahead, because they get all sorts of positive network effects. China has been trying to catch up to TSMC, again, the other China, for 30 years, and they have not been able to. So this gives you a sense of how hard it is to catch up to the global leaders, and it's likely to be hard for a long time.

Ian Bremmer:

So the Americans and the Chinese, on the software and AI side, are roughly at parity. But on the hardware side, when we talk about semiconductors, not when we talk about 5G infrastructure of course, it turns out the West is significantly ahead.

Eric Schmidt:

We think so, especially if the West is defined as Taiwan.

Ian Bremmer:

Yes.

Eric Schmidt:

It's very important that we stay two generations ahead of China on semiconductors. A generation is every couple of years. So far, it looks like that gap is holding. China has become the largest supplier of all of the other things you need in chips. But for the specific issues of the actual chip, they remain behind. That is strategically helpful to the West.

Ian Bremmer:

So, Eric, before we close, we've talked about a lot of things that worry you, in terms of software, hardware, and global policy. Talk to me just a little bit about the things that excite you the most in the AI field. Where are the breakthroughs that you think are just going to be magnificent, and life-changing for society, coming soon to a theater near you?

Eric Schmidt:

The reason we're so excited about AI is that it has such transformative power on the good side, in so many areas. Probably the most important is health and wellness, and in particular new drug discoveries. Recently, a unit at Google called DeepMind developed a piece of software known as AlphaFold. Within a few months, they were able to not only discover the structure of more than 12,000 proteins, but release them into the open-source community. They had a competitor at the University of Washington, the Baker Lab, which did the same thing.

That, in my view, is worthy of a Nobel Prize. The notion of how proteins fold, and how you can essentially use those proteins as they fold, to stop and start and cure diseases, is a profound discovery. There are many, many such discoveries happening in science.

One of my friends said that AI is to biology what math is to physics. Biology is so difficult, so difficult to model, that we're going to have to use these technologies to do so. And there's every reason for optimism, whether it's in synthetic biology, which would be an immense new business, or elsewhere. I had a demonstration a few weeks ago of a company that was growing concrete. Not mixing it, but growing it. Now, this concrete wasn't as strong, and it was more expensive. But if you can grow concrete, you should be able to grow anything. So imagine material science. New materials of all kinds. New drugs, new diagnostics, new biology.

As well as the most important thing, which is AI assistants that help people be smarter. Whether it's somebody doing their normal job, or a brilliant physicist who's just overwhelmed by the problem and asks the computer to help them. The advances will come faster and faster and faster because of AI.

Ian Bremmer:

Eric Schmidt, always great to talk to you, my friend. Thanks so much for joining.

Eric Schmidt:

Thank you for having me, Ian.

Ian Bremmer:

That's it for today's edition of the GZERO World podcast. Like what you've heard? Come check us out at gzeromedia.com, and sign up for our newsletter, Signal.

Announcer:

The GZERO World podcast is brought to you by our founding sponsor, First Republic. First Republic, a private bank and wealth management company, understands the value of service, safety, and stability in today's uncertain world. Visit firstrepublic.com to learn more. GZERO would also like to share a message from our friends at Foreign Policy. How can sports change the world for the better? On The Long Game, a co-production of Foreign Policy and Doha Debates, hear stories of courage and conviction, both on and off the field, directly from athletes themselves. Ibtihaj Muhammad, Olympic medalist and global change agent, hosts The Long Game. Hear new episodes every week on Apple, Spotify, or wherever you get your podcasts.

