

Tracking anti-Navalny bot armies

In an exclusive investigation into the disinformation surrounding the online reaction to Alexei Navalny's death, GZERO asks whether it is possible to track the birth of a bot army. Was Navalny's tragic death accompanied by a massive online propaganda campaign? We investigated, with the help of a company called Cyabra.

Alexei Navalny knew he was a dead man the moment he returned to Moscow in January 2021. Vladimir Putin had already tried to kill him with the nerve agent Novichok, an attack that sent Navalny to Germany for treatment. The poison is one of Putin's signatures, like pushing opponents out of windows or shooting them in the street. Navalny knew Putin would try again.

Still, he came home.


Al Gore's take on American democracy, climate action, and "artificial insanity"

Listen: In this episode of the GZERO World podcast, Ian Bremmer sits down with former US Vice President Al Gore on the sidelines of Davos in Switzerland. Gore, no stranger to contested elections, shares his perspective on the current landscape of American politics and, naturally, discusses his renowned work on climate action.

While the mainstage discussions at the World Economic Forum delved into topics such as artificial intelligence, the conflicts in Ukraine and the Middle East, and climate change, much of the behind-the-scenes discourse centered on profound concerns about the upcoming 2024 US election and the state of American democracy. The US presidential election presents substantial risks, particularly with Donald Trump on the path to securing the GOP nomination.


Azeem Azhar, founder of Exponential View, an author and analyst, and a GZERO AI guest columnist, is seen here at the Digital Life Design innovation conference.

Matthias Balk/dpa via Reuters Connect

Azeem Azhar explores the future of AI

AI was all the rage at Davos this year – and for good reason. As we've covered in our weekly GZERO AI newsletter, artificial intelligence is impacting everything from regulatory debates and legal norms to climate change, disinformation, and identity theft. GZERO Media caught up with Azeem Azhar, founder of Exponential View, an author and analyst, and a GZERO AI guest columnist, for his insights on the many issues facing the industry.

GZERO: Whether The New York Times' lawsuit against OpenAI on copyright grounds is settled, or decided for or against OpenAI, do you think large language models will be less feasible in the long term?

Azeem Azhar: Copyright has always been a compromise. The compromise has been about how many rights should be afforded to creators, and ultimately, of course, what that really means is the big publishers who accumulate those rights and have the legal teams.

And harm is being done to research, the free exchange of knowledge, and cultural expression by creating these enclosures around our intellectual space. This compromise, which worked reasonably well perhaps 100 years ago, doesn't really work that well right now.

And now we have to say, “Well, we've got this new technology that could deliver incredibly broad gains in human welfare, and when copyright was first imagined, those were not the fundamental axioms of the world.”

GZERO: Can you give me an example of something that could be attained by reforming copyright laws?

Azhar: Take Zambia. Zambia doesn't have very many doctors per capita. And because they don't have many doctors, they can't train many doctors. So you could imagine a situation where you can have widespread personalized AI tutoring to improve primary, secondary, and tertiary educational outcomes for billions of people.

And those will use large language models dependent on a vast variety of material that will fall under the sort of traditional frame of copyright.

GZERO: AI is great at finding places to be more efficient. Do you think there's a future in which AI is used to decrease the world's net per capita energy consumption?

Azhar: No, we won't decrease energy consumption, because energy is health, energy is prosperity, and energy is welfare. Over the next 30 years, energy use will grow more, and at a higher rate, than it has over the last 30, and at the same time, we will entirely decarbonize our economy.

Effectively, you cannot find any countries that don't use lots of energy but that you would want to live in, that are safe, and that have good human outcomes.

But how can AI help? Well, look at an example from DeepMind. DeepMind released this thing called GNoME at the end of last year, which helps identify thermodynamically stable materials.

And DeepMind's system delivered the equivalent of 60 years' worth of stable, producible materials, along with their physical properties, in just one shot. Now that's really important because a lot of the climate transition and the materiality question is about how we produce all the stuff for your iPods and your door frames and your water pipes in ways that are thermodynamically more efficient. That's going to require new materials, and AI can absolutely help us do that.

GZERO: In 2024, we are facing over four dozen national-level elections in a completely changed disinformation environment. Are you more bullish or bearish on how governments might handle the challenge of AI-driven disinformation?

Azhar: It does take time for bad actors to actually make use of these technologies, so I don't think deepfake video will play a significant role this year because it's just a little bit too soon.

But the distribution of disinformation, particularly through social media, matters a great deal, and so too do the capacities and behaviors of media entities and the political class.

If you remember, in Gaza there was an explosion at a hospital, and one of the newswires reported within a few minutes that 500 people had been killed. There's no way that one can count 500 bodies within a few minutes. But other organizations, which are normally quite reputable, then picked it up.

That wasn't AI-driven disinformation. The trouble is that the lie travels halfway around the world before the truth gets its trousers on. Do media companies need to set up a verification unit as a goalkeeper? Or do you embed the defense of truth, veracity, and factuality throughout the culture of the organization?

GZERO: You made me think of an app that's become very popular in Taiwan over the last few months called Auntie Meiyu, which you add as a chatbot to a big group chat, maybe a family chat, for example. And when Grandpa sends some crazy article, Auntie Meiyu jumps in and says, “Hey, this is BS, and here's why.”

She’s not preventing you from reading it. She's just giving you some additional information, and it's coming from a third party, so no family member has to take the blame for making Grandpa feel foolish.

Azhar: That is absolutely brilliant because, when you look back at the data from the 2016 US election, it wasn't the Instagram, TikTok, and YouTube teens who were the likely core spreaders of political misinformation. It was the over-60s, and I can testify to that from my own experience with my extended family.

GZERO: As individuals are thinking about risks that AI might pose to them – elderly relatives being scammed or someone generating fake nude images of real people – is there anything an individual can do to protect themselves from some of the risks that AI might pose to their reputation or their finances?

Azhar: Wow, that's a really hard question. Have really nice friends.

I am much more careful now than I was five years ago, and I'm still vulnerable. When I have to make transactions and payments, I will always confirm them by making my own outbound call to a number that I can verify through a couple of other sources.

I very rarely click on links that are sent to me. I try to double-check when things come in, but this is, to be honest, just classic infosec hygiene that everyone should have.

With my elderly relatives, the general rule is you don't do anything with your bank account ever unless you've got one of your kids with you. Because we’ve found ourselves, all of us, in the digital equivalent of that Daniel Day-Lewis film “Gangs of New York,” where there are a lot of hoodlums running around.

Democratic presidential candidate US Representative Dean Phillips greets supporters at a campaign event ahead of the New Hampshire presidential primary election in Rochester, New Hampshire, on Jan. 21, 2024.

REUTERS/Faith Ninivaggi

AI has entered the race to primary Joe Biden

For a brief moment this week, there were two Dean Phillips – the man and the bot. The human is a congressman from Minnesota who’s running for the Democratic nomination for president, hoping to rise above his measly 7% poll numbers to displace sitting President Joe Biden as the party’s nominee.

But there was also an AI chatbot version of the 55-year-old congressman.


Graphic Truth: Davos doomsdayers

The World Economic Forum asked 1,490 experts from the worlds of academia, business, and government, as well as the international community and civil society, to assess the evolving global risk landscape.

These leaders hailed from 113 countries, and the results show a deteriorating global outlook over the next 10 years: the share of respondents who see “global catastrophic risks” looming jumps from 3% on a two-year horizon to 17% on a 10-year horizon.

But after a year of lethal conflicts from Gaza and Ukraine to Sudan, record-breaking heat accompanied by droughts and wildfires, and rising polarization, can you blame them for being worried?

Podcast: Talking AI: Sociologist Zeynep Tufekci explains what's missing in the conversation


Listen: In this edition of the GZERO World podcast, Ian Bremmer speaks with sociologist and all-around brilliant person Zeynep Tufekci. Tufekci has been prescient on a number of issues, from Covid to misinformation online. Ian caught up with her outside, on the sidelines of the Paris Peace Forum, so pardon the traffic noise. They discuss what people are missing when they talk about artificial intelligence today. Listen to find out why her answer surprised Ian, even though it seems so obvious in retrospect.


Can watermarks stop AI deception?

Is it a real or AI-generated photo? Is it Drake’s voice or a computerized track? Was the essay written by a student or by ChatGPT? In the age of AI, provenance is paramount – a fancy way of saying we need to know where the media we consume comes from.

While generative AI promises to transform industries – from health care to entertainment to finance, just to name a few – it might also cast doubt on the origins of everything we see online. Experts have spent years warning that AI-generated media could disrupt elections and cause social unrest, so the stakes couldn’t be higher.


A Beatles superfan holds the first copy of the newly released last Beatles song, "Now and Then," at HMV Liverpool, on Nov. 3, 2023.

PA Images via Reuters Connect

Hard Numbers: Beatles drop "new" tune, OpenAI's fortunes, Britain's supercomputer, Voters' misinformation fears

1995: Last week, the Beatles released their first song since 1995. The group’s two remaining members, Paul McCartney and Ringo Starr, and their producers relied on machine-learning technology to isolate vocal and piano tracks from a poor-quality cassette recording of a song John Lennon partially recorded decades ago. McCartney and Starr provided fresh instrumentals and finished the song, called “Now and Then.”

100 million: At OpenAI’s developer conference on Monday, the company announced that its popular chatbot ChatGPT has 100 million weekly users. It also said 2 million developers are building on its platform – including 92% of Fortune 500 companies.

$273 million: During its big AI extravaganza last week, the British government announced it would invest $273 million in a new AI supercomputer built by Hewlett Packard Enterprise using chips made by NVIDIA – two American firms.

58%: A new poll by the Associated Press and the University of Chicago shows 58% of Americans think AI will amplify the spread of misinformation around the 2024 presidential election. Last week, we wrote about candidates taking a pledge not to use AI deceptively in their campaigning. The same poll found that 62% of Republicans and 70% of Democrats support a pledge for candidates to avoid the technology altogether in their electioneering.
