The United States will no longer play global policeman, and no one else wants the job. This is not a G-7 or a G-20 world. Welcome to the GZERO, a world made volatile by an intensifying international battle for power and influence. Every week on this podcast, Ian Bremmer will interview the world leaders and the thought leaders shaping our GZERO World.

Podcast: The future of artificial intelligence with tech CEO Kai-Fu Lee

Listen: Artificial intelligence is changing the way we live — and very soon it'll go beyond medical breakthroughs and the algorithms that control your social newsfeeds. Will AI become the biggest technological disrupter since the Industrial Revolution, replacing workers with robots? On this week's GZERO World Podcast, Ian Bremmer discusses the future of AI with scientist Kai-Fu Lee, who's just come out with a book about what our AI-driven world may look like 20 years from now.

TRANSCRIPT: The future of artificial intelligence with tech CEO Kai-Fu Lee

Kai-Fu Lee:

The companies kind of can't help but keep doing what they're doing, because they've got this powerful AI engine where they can tweak a knob that says, "Get more people to spend more minutes with me." And then those minutes turn into dollars.

Ian Bremmer:

Hello and welcome to the GZERO World Podcast. This is where you can find extended versions of my interviews on public television. I'm Ian Bremmer, and today we look at how artificial intelligence, AI, is changing the way we live. From medical breakthroughs to the algorithms that control your newsfeed, AI is touching nearly every aspect of human life, and there's more on the way, isn't there always? In fact, many experts believe AI is the biggest technological disruptor since the Industrial Revolution. But is a robot coming for your job or your brain? How about your soul? Take the red pill and stay right here as I speak to AI scientist Kai-Fu Lee. He's CEO of Sinovation Ventures and a former head of Google China. Let's get to it.

Announcer:

The GZERO World Podcast is brought to you by our founding sponsor, First Republic. First Republic, a private bank and wealth management company, understands the value of service, safety and stability in today's uncertain world. Visit firstrepublic.com to learn more. And GZERO World also has a message for you from our friends at Foreign Policy. COVID-19 changed life as we know it, but as the world reopens, this moment also presents an opportunity. On Global Reboot, Foreign Policy looks at old problems in new ways. From US-China relations to gender inequality and racial discrimination, each week, Ravi Agrawal speaks to policy experts and world leaders and thinks through solutions to our world's toughest challenges. Check out Global Reboot wherever you get your podcasts.

Ian Bremmer:

Kai-Fu Lee is the author of the new book, AI 2041: Ten Visions for Our Future. I want to start here, because you've spent your life working on artificial intelligence, and it's a term that's used very broadly. Try to explain it, if you can, in just a couple of moments for an audience that's heard an awful lot about the promise of artificial intelligence but doesn't necessarily know where we are today.

Kai-Fu Lee:

Well, artificial intelligence started as an effort to emulate human intelligence, but where it has evolved now is that it can do so many things that humans cannot do, yet it still can't do a few things that humans can. That's because artificial intelligence, or actually more precisely machine learning, is learning on a huge amount of data and then making accurate decisions about what to do. And the more data it has, the better it gets. So in domains in which it has a lot of data, whether it's internet or financial applications or chatbots or generation or recognition or machine translation, it's already beating people by quite a bit. But there are still many things that humans can do that AI cannot.

Ian Bremmer:

Beating people, because with all of this extraordinary amount of data that we are giving off on a real-time basis, computers are now able to integrate and sift it, recognize causality, recognize patterns, and as a consequence, to a degree, even predict the future. What's the area where you think, right now, not in 2041 but today, AI is having the most dramatic impact on how society functions?

Kai-Fu Lee:

Clearly in the internet space. Many people don't realize it, but when you watch videos on YouTube or TikTok, or your newsfeed on Facebook or Snapchat, these are ordered by AI. AI understands you based on what you have watched and read and clicked and opted out of in the past, and it knows who your friends are, so it knows what videos you're likely to like. Using this algorithm, a company like Facebook could say, "How do I get Ian to spend the most time on my app every day?" And that will help them make a lot of money. So I would say internet companies have the most data and therefore can make the best use of AI, and they use it to make money for themselves, sometimes at our expense, so their revenues and profits grow. As a result, the internet companies today, Google and Facebook, for example, have the best AI teams and make the most money from AI.
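
[Illustration: what Lee describes is, at its core, a ranking step that scores each candidate item by predicted engagement and sorts the feed by that score. The sketch below is a hypothetical, simplified Python version; the signal names, weights and boost factor are invented and do not describe any company's actual system.]

```python
# Hypothetical sketch of an engagement-maximizing feed ranker.
# Signal names and weights are invented for illustration; real systems
# use learned models over far richer inputs.
from dataclasses import dataclass


@dataclass
class Candidate:
    video_id: str
    predicted_watch_seconds: float  # model's estimate of time this user will spend
    predicted_click_prob: float     # model's estimate that the user will tap it
    shared_by_friend: bool          # the social-graph signal Lee mentions


def engagement_score(c: Candidate) -> float:
    """Combine signals into one number; higher means shown earlier in the feed."""
    score = c.predicted_watch_seconds * c.predicted_click_prob
    if c.shared_by_friend:
        score *= 1.2  # boost content a friend engaged with
    return score


def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    # The "knob" Lee refers to: the objective is expected minutes spent,
    # not what the viewer would say they actually want.
    return sorted(candidates, key=engagement_score, reverse=True)
```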

Ian Bremmer:

Now, how problematic is it that these algorithms can tell us what we are most likely to click on, but they can't tell us why we want to do it?

Kai-Fu Lee:

Today, they can't tell us why, but I think there's research going on that will make it possible. Now to be very precise, an algorithm that shows me a video, if I say, "What's the exact reason you're showing me this video," it will show me a giant math equation, perhaps including 500 variables, and I can't understand that. But I could push back and say, "Well, sum it up in a way that we dumb humans can understand." Then it would come out and say, "Oh, the top five reasons are A, because you've watched a similar video, B, because it has your favorite actor and so on." So I do think with some work we can get to an approximate explainability at a level that humans can understand, accepting that it's not the perfect mathematical answer.
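
[Illustration: the "top five reasons" summary Lee describes is typically produced by attributing the model's score to its individual inputs and reporting the largest contributors. The sketch below assumes a simple linear scoring model, which real recommenders are not; for nonlinear models the same idea is approximated with attribution methods such as SHAP or LIME. All feature names and numbers here are invented.]

```python
# Hypothetical post-hoc explanation for a linear recommendation score.
# Feature names, weights and values are invented for illustration.

def top_reasons(weights: dict[str, float],
                features: dict[str, float],
                k: int = 5) -> list[tuple[str, float]]:
    """Return the k features contributing most (by magnitude) to the score."""
    contributions = {name: w * features.get(name, 0.0) for name, w in weights.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

weights = {"watched_similar_video": 2.1, "favorite_actor": 1.7,
           "shared_by_friend": 0.9, "topic_match": 0.4, "video_length": -0.2}
features = {"watched_similar_video": 1.0, "favorite_actor": 1.0,
            "shared_by_friend": 0.0, "topic_match": 0.6, "video_length": 0.8}

for name, contribution in top_reasons(weights, features):
    print(f"{name}: {contribution:+.2f}")
```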

Ian Bremmer:

Before artificial intelligence, when you and I were kids, there was a big debate about nature versus nurture: how much of who we are comes from our genetic substance, and how much comes from the way we're raised, the society around us, our education? I wonder, as a human being is subjected, if I can use that term, to algorithms that are pumping out all of this information, how much of it is about what we want as human beings ex ante, and how much of it is predicting what we're going to want to see, because we've already been subjected to all of these algorithms? In other words, how much of it is just a reflection back of this feed that is continually informing us?

Kai-Fu Lee:

Well, if we live our lives by just watching what YouTube or TikTok shows us, I think we'll become very narrow people. Because while they're very good at understanding the things we want, and they do show us many things that we want, they're not optimizing for what we want, they're optimizing for our minutes spent. And our minutes may be spent, we may click on a video or watch a video, because we want it, or maybe because we find it to be gross, or maybe because it appeals to some dark part of us. And the more they show dark videos, the darker we can get. So they can push us down a very bad slippery slope to become people that we may not want to be. This is what, in the book AI 2041, I call the "misalignment of interest": what we want isn't what these giant internet companies want. And until this problem is fixed, we'll continue to have this friction.

Ian Bremmer:

Now, the book you mentioned, AI 2041: you talk about things that you say in the book have an 80% likelihood of coming to pass just in the next 20 years. Talk a little bit about where you think artificial intelligence is at that point and how it affects human beings and society, both individually and broadly.

Kai-Fu Lee:

So in 20 years it will disrupt every industry. Just as it has disrupted and enabled the internet industry, it will do the same to the financial industry, essentially doing a much better job than most people in the investment, insurance or banking industries. It will change the future of healthcare, inventing, and helping scientists invent, new drugs at a tenth of the time and price, helping us to live healthier and longer. It will change the future of education, allowing each child to have an AI companion teacher that teaches exactly in the way that helps that child get better, with interests aligned. And this goes on for every possible industry.

Transportation: there will be autonomous vehicles. We'll stop buying cars. Cars will be like an Uber that comes to us exactly when we need it, in the size and shape we need, and it will be 90% safer than driving is today. So I think all of these disruptions will happen, but they will also have significant implications for the environment and society. AI will take away a large number of jobs and create many more jobs, but they are different jobs. It will cause tension between the large companies that have the data and the people who are sometimes helpless against them. It will widen the gap between the haves and have-nots, increasing wealth inequality between people, between companies and between countries.

Ian Bremmer:

So the one you started with, you said the financial industry. Clearly you don't think that in 20 years' time there's a future for stock pickers, because artificial intelligence is just going to do that job so much better.

Kai-Fu Lee:

I think there will be several types of stock picking. For the people who do deep research work, analyze the data, read all the texts, and then decide, ah, this is a stock to buy: no, AI will do that. No human can compete with AI on that. But there are people who really read people, who really talk to the management team, watch their micro-expressions, win their trust and get them to say a little bit more. Well, they could still definitely have an edge.

There will also be people who have strategic thoughts that cannot quite be captured in text and data; they, perhaps working symbiotically with AI and data, can still deliver great trades and results. So in conclusion, for the secondary market, I think there will be a much smaller number of super good, smart, strategic people who will work with the AI to become the best funds of the future, and anything that can be quantified will largely be dominated by AI. But the last thing I would say is, for my profession, which is early-stage investing, where the companies don't yet have revenue or numbers, we just have the people and the prototype, I would still think that people like us at my company have an edge over AI.

Ian Bremmer:

It's funny, because almost everyone I talk to who is AI-focused has reasons why their individual job will end up being fine going forward, though of course we can see that there will be certain tactical skills that will still be relevant, where clearly AI doesn't have an edge or a capacity. And what I hear you saying, in all sorts of fields that right now are confined mostly to the internet but soon will include education and health and accounting and law and finance and driving and all of this, is that for those who have skill sets that are comparatively easily replicated, those jobs are gone. Is that right?

Kai-Fu Lee:

That's right. I would call them routine jobs. Any job where the tasks you do require no more than five seconds of thought, that's a very dangerous sign. Any job where the work you do is repetitive or routine, that's a dangerous sign. The one caveat is routine jobs that require a human-to-human connection. So let's say elderly care: I would argue that will stay, because even if a robot could wash and shower an elderly person, the elderly person isn't going to want a robot to do that, and certainly isn't going to want the robot to be a companion to talk to. So I still think many service-oriented and human-connection-oriented jobs will remain, even though they're routine. But other than that, most other routine jobs will be gone.

Ian Bremmer:

Yeah. Now the dystopian version of that is that a lot of jobs that capitalists would describe as routine, the people who receive that service would actually describe as an interaction. If I go into McDonald's and talk to a waiter or a waitress, or go to my local restaurant, not an expensive one, I still have human interaction; the same if I get into a taxi picked up from JFK; and senior citizens would much rather have engagement with a human being taking care of them, but that sounds expensive. If I take the AI improvements that are coming that you are describing, and this is not 50 or 100 years out, most people watching this show are going to see this, they're going to experience this, God willing, and combine that with the sort of growth in inequality that we would expect to see, that implies that an awful lot of the human engagement most of us presently benefit from, we will have less of. Isn't that right?

Kai-Fu Lee:

I hope not. I hope not. And here's the reasoning. People will have a hard time getting routine jobs. So there will be many people displaced, looking for new things to do, yet they won't have the skills for complex projects and new specialties. These are going to be 50-year-olds who've been doing routine jobs all their lives. They can't just suddenly become an AI programmer or a brain surgeon. So the one big category of jobs that they will be able to be retrained to do is the service jobs. And those are jobs that we don't want robots to do. So that will be one force that pushes forward the human connection, and then, to the extent these people can do a good job, more people will pay more money for it. And in fact, I think the wealthy will have even more money, so they're more willing to spend, and many goods will become commoditized.

So I think more wealthy people will want to spend their money on services, not products: maybe a concierge-planned vacation to Europe, or a bartender who mixes fancy drinks, or someone who comes to your home and makes your closets beautiful. These are, I think, services that people will pay for, and new professions may get created to provide them. And so I hope that more people shift to services. In fact, I think if-

Ian Bremmer:

So a small number of relatively wealthy people will have a lot more individuals that will be providing services directly for them. Is that where we think society will be heading?

Kai-Fu Lee:

No. I think middle-class people also use physical training with a coach, or a psychiatrist, or someone as a companion to talk things through with. These can all be services, because I think, for the large number of people who want an interesting job that doesn't require huge amounts of training, service is really the only path. So I'm hoping that happens, and I'm also hoping that it creates more spark in people to have greater connections, because that actually is known to make people more self-satisfied than doing the same routine work every day.

Ian Bremmer:

So let me ask you. As you've seen, we just had this report in the Wall Street Journal that Facebook has done a lot of research and learned that teenage girls have had all sorts of self-esteem issues precisely because of the business model of Instagram. What do you think the appropriate response is, for the company and for the government, given the dangers of immersing a human being in the information space provided by these algorithms?

Kai-Fu Lee:

Well, the companies can't help but keep doing what they're doing, because they've got this powerful AI engine where they can tweak a knob that says, "Get more people to spend more minutes with me." And then those minutes turn into dollars. And as long as they have listed stock, shareholders, quarterly reports, expectations, they'll continue this behavior. So something needs to shake them out of the current state. And in my book, I talk about a couple of possibilities. One is government regulation, and I think some forms are more effective than others. There might be, for example, government audits. When a government has reason to believe there is bad behavior at a certain company, it can go audit the AI, just like there are IRS audits. You can't afford to audit every company, but maybe you can audit a small percentage, and that becomes a deterrent for companies not to behave badly. So that's one possibility. There are other types of-

Ian Bremmer:

Before you give the others, let me ask about that, because we know that when there's an IRS audit, it is meant to determine whether or not you are breaking fundamental rules about the taxes you're supposed to pay. For an AI audit, what would be the baseline rule that you could have broken? How would you even start to think about that? Because that, of course, doesn't exist right now. There is no equivalent of the tax you're supposed to pay in terms of what the AI is doing with the customers that are online. It's a walled garden.

Kai-Fu Lee:

There would need to be certain things established as unacceptable, things like knowingly creating a system that is biased, or knowingly allowing too large a percentage of fake news or deepfakes, just as examples. I think we have to come up with these. And another related idea is that there could be a third-party watchdog that publishes, on a monthly basis, how companies do on these metrics and runs standardized tests on them. And the ones that don't do well are shamed and their brand loses value.

Ian Bremmer:

What would the metric be? Take disinformation: what one person considers disinformation, another person considers to be just an opinion about something. How does one even begin to create metrics in a space that is so obviously subject to the perspectives of different human beings?

Kai-Fu Lee:

That's what social media always says. But we don't have to catch every borderline deepfake or piece of fake news; there are some obvious ones. So let's set the bar low and just say, "Here is a definition of absolute fake news that 99.9% of you would agree with, and that's what it is." But really, the long-term answer, which is what I want to talk about, is: can we possibly align the interests of the internet companies and us? My long-term interest, I think, is to become more knowledgeable, more likable and happier, let's say. I think most people would want these things. Can we possibly measure these and build applications that are aligned with our goals?

So imagine an application with a subscription that I pay on a monthly basis, and it shows me content while measuring whether it's making me smarter, or more experienced or knowledgeable, or happier. If that were available, people would pay money for it. Then we would get out of the vicious cycle of advertising-supported eyeball metrics that cause companies to use AI to basically take over our eyeballs. It's like the Netflix model versus the Facebook model, if you will.
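
[Illustration: the subscription model Lee sketches amounts to swapping the objective function, ranking by a utility the subscriber defines rather than by predicted minutes. The sketch below shows only that change of objective; how metrics like "estimated learning" would actually be measured is the hard, open part, and every metric name and weight here is invented.]

```python
# Hypothetical "aligned" ranker: the subscriber, not the advertiser,
# defines what the feed optimizes. Metric names and weights are invented.

def user_utility(item: dict, preferences: dict[str, float]) -> float:
    """Weight per-item estimates of learning, mood, etc. by the user's own priorities."""
    return sum(weight * item.get(metric, 0.0) for metric, weight in preferences.items())

my_preferences = {
    "estimated_learning": 0.5,   # "makes me more knowledgeable"
    "estimated_mood_lift": 0.3,  # "makes me happier"
    "estimated_minutes": -0.2,   # penalize pure time sinks
}

def rank_aligned_feed(items: list[dict]) -> list[dict]:
    # Same sorting machinery as an engagement feed, different objective.
    return sorted(items, key=lambda item: user_utility(item, my_preferences), reverse=True)
```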

Ian Bremmer:

Now is a company more likely to do that or in your view, is a big government more likely to do that?

Kai-Fu Lee:

Oh, most likely a company, because there is a profit motive, and because I'm not talking about fulfilling some desire that I have that I'm not willing to pay for. I'm more than happy to pay for it. Netflix is highly profitable, and we have a known model in the capital system of investing in a company that builds a product, wins users, and the users pay for it. So we just have to get that first company going. Then, I think, we'll see how it works.

Ian Bremmer:

Do you think, given what you know about artificial intelligence and where you believe the future is heading... You have more insight into where the future of this industry and these technologies are going than almost anyone on the planet. Do you believe, as a consequence of that, that we as a society are going to need much more paternalistic governance?

Kai-Fu Lee:

I think regulations are clearly needed, given the large number of temptations for companies to make money and for bad people to use AI for bad purposes. But I don't think regulation is the only thing that will harness the technology. Historically, the single best way to harness technology is with more technology. So electricity could electrocute people, and then circuit breakers were invented to prevent that. The internet connected to a PC could bring a virus, but antivirus software prevented that. So whether we're concerned about bad content, AI manipulating our minds, fairness and bias, or personal data getting lost, there are technological efforts working on each of these.

Ian Bremmer:

The US and the EU have adopted the beginnings, at least, of ethical standards for the use of AI in the military. And I wonder on the national security front, do you think we need a global standard or Geneva Convention for that?

Kai-Fu Lee:

I actually think autonomous weapons are an area that more countries should pay more attention to, because they are not only deadly but also inexpensive and easy to build, and they can be used by terrorists. They should be treated like chemical weapons or biological weapons. And I hope more countries will see the danger and find a way to ban or regulate them.

Ian Bremmer:

Kai-Fu Lee, the book is AI 2041. Thanks so much for joining me, my friend.

Kai-Fu Lee:

Thank you.

Ian Bremmer:

That's it for today's edition of the GZERO World Podcast. Like what you've heard? Come check us out at gzeromedia.com and sign up for our newsletter, Signal.

Announcer:

The GZERO World Podcast is brought to you by our founding sponsor, First Republic. First Republic, a private bank and wealth management company, understands the value of service, safety and stability in today's uncertain world. Visit firstrepublic.com to learn more. And GZERO World also has a message for you from our friends at Foreign Policy. COVID-19 changed life as we know it, but as the world reopens, this moment also presents an opportunity. On Global Reboot, Foreign Policy looks at old problems in new ways. From US-China relations to gender inequality and racial discrimination, each week, Ravi Agrawal speaks to policy experts and world leaders and thinks through solutions to our world's toughest challenges. Check out Global Reboot wherever you get your podcasts.

Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
