
Azeem Azhar explores the future of AI

Azeem Azhar, founder of Exponential View, an author and analyst, and a GZERO AI guest columnist, is seen here at the Digital Life Design innovation conference.

Matthias Balk/dpa via Reuters Connect

AI was all the rage at Davos this year – and for good reason. As we’ve covered in our weekly GZERO AI newsletter, artificial intelligence is affecting everything from regulatory debates and legal norms to climate change, disinformation, and identity theft. GZERO Media caught up with Azeem Azhar, founder of Exponential View, an author and analyst, and a GZERO AI guest columnist, for his insights on the many issues facing the industry.


GZERO: Whether The New York Times’ copyright lawsuit against OpenAI is settled or decided for or against OpenAI, do you think large language models are less feasible in the long term?

Azeem Azhar: Copyright has always been a compromise. The compromise has been over how many rights should be afforded to creators – and ultimately, of course, what that really means is the big publishers who accumulate those rights and have the legal teams.

And harm is being done to research, to the free exchange of knowledge, and to cultural expression by creating these enclosures around our intellectual space. This compromise, which worked reasonably well perhaps 100 years ago, doesn't really work that well right now.

And now we have to say, “Well, we've got this new technology that could provide incredibly broad gains in human welfare, and when copyright was first imagined, those were not the fundamental axioms of the world.”

GZERO: Can you give me an example of something that could be attained by reforming copyright laws?

Azhar: Take Zambia. Zambia doesn't have very many doctors per capita. And because they don't have many doctors, they can't train many doctors. So you could imagine a situation where you have widespread personalized AI tutoring to improve primary, secondary, and tertiary educational outcomes for billions of people.

And those systems will use large language models that depend on a vast variety of material falling under the traditional frame of copyright.

GZERO: AI is great at finding places to be more efficient. Do you think there's a future in which AI is used to decrease the world's net per capita energy consumption?

Azhar: No, we won't decrease energy consumption, because energy is health and energy is prosperity and energy is welfare. Over the next 30 years, energy use will grow more, and at a faster rate, than it has over the last 30, and at the same time, we will entirely decarbonize our economy.

Effectively, you cannot find any country that you would want to live in – one that is safe and has good human outcomes – that doesn't use lots of energy.

But how can AI help? Well, look at an example from DeepMind. DeepMind released this thing called GNoME at the end of last year, which helps identify thermodynamically stable materials.

And DeepMind’s system delivered the equivalent of 60 years’ worth of stable, producible materials, along with their physical properties, in just one shot. Now that's really important because a lot of the climate transition and the materiality question is about how we produce all the stuff for your iPods and your door frames and your water pipes in ways that are thermodynamically more efficient. That's going to require new materials, and AI can absolutely help us do that.

GZERO: In 2024, we are facing over four dozen national-level elections in a completely changed disinformation environment. Are you more bullish or bearish on how governments might handle the challenge of AI-driven disinformation?

Azhar: It does take time for bad actors to actually make use of these technologies, so I don't think deepfake video will play a significant role this year, because it's just a little bit too soon.

But distribution of disinformation, particularly through social media, matters a great deal and so too do the capacities and the behaviors of the media entities and the political class.

If you remember, in Gaza there was an explosion at a hospital, and one of the newswires reported within a few minutes that 500 people had been killed. There's no way that within a few minutes one can count 500 bodies. But other organizations, which are normally quite reputable, then picked it up.

That wasn't AI-driven disinformation. The trouble is that the lie travels halfway around the world before the truth gets its trousers on. Do media companies need to put up a verification unit as the goalkeeper? Or do you embed the defense of truth, veracity, and factuality throughout the culture of the organization?

GZERO: You made me think of an app that's become very popular in Taiwan over the last few months called Auntie Meiyu, which lets you add Auntie Meiyu as a chatbot to a big group chat – a family chat, for example. And when Grandpa sends some crazy article, Auntie Meiyu jumps in and says, “Hey, this is BS, and here’s why.”

She’s not preventing you from reading it. She's just giving you some additional information, and it's coming from a third party, so no family member has to take the blame for making Grandpa feel foolish.

Azhar: That is absolutely brilliant because, when you look back at the data from the US 2016 election, it wasn't the Instagram, TikTok, YouTube teens who were likely to be core spreaders of political misinformation. It was the over-60s, and I can testify to that with some of my experience with my extended family as well.

GZERO: As individuals are thinking about risks that AI might pose to them – elderly relatives being scammed or someone generating fake nude images of real people – is there anything an individual can do to protect themselves from some of the risks that AI might pose to their reputation or their finances?

Azhar: Wow, that's a really hard question. Have really nice friends.

I am much more careful now than I was five years ago, and I'm still vulnerable. When I have to make transactions and payments, I will always confirm by making my own outbound call to a number that I can verify through a couple of other sources.

I very rarely click on links that are sent to me. I try to double-check when things come in, but this is, to be honest, just classic infosec hygiene that everyone should have.

With my elderly relatives, the general rule is you don't do anything with your bank account ever unless you've got one of your kids with you. Because we’ve found ourselves, all of us, in the digital equivalent of that Daniel Day-Lewis film “Gangs of New York,” where there are a lot of hoodlums running around.
