
Social media's AI wave: Are we in for a “deepfakification” of the entire internet?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines the phenomenon he terms the "deepfakification" of social media. He traces the evolution of our social feeds, which began primarily as platforms for sharing updates with friends and are now inundated with content generated by artificial intelligence.


A view of the Georgia State Capitol in Atlanta, Georgia, U.S., May 11, 2021. Picture taken May 11, 2021.

REUTERS/Linda So

Deepfake recordings make a point in Georgia

A Georgia lawmaker used a novel approach to help pass legislation banning deepfakes in politics: he used a deepfake. Republican state representative Brad Thomas played an AI-generated recording of two of his bill's opponents, state senator Colton Moore and activist Mallory Staples, endorsing the bill.

Voters beware: Elections and the looming threat of deepfakes

With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.

Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.

“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.

“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.

Watch the full conversation here: How to protect elections in the age of AI

Deepfakes and dissent: How AI makes the opposition more dangerous

Former US National Security Council advisor Fiona Hill has plenty of experience dealing with dangerous dictators – but 2024 is even throwing her some curveballs.

After Imran Khan upset the Pakistani establishment in February’s elections by using AI to rally voters from behind bars, she thinks authoritarians will have to reconsider their strategies for suppressing dissent.

CFOTO/Sipa USA via Reuters Connect

Hard Numbers: NVIDIA rising, the magician’s assistant, indefensible budget lags, Make PDFs sexy again

3: NVIDIA is now the third-most valuable company in the U.S. after reporting rosy financial returns. The AI-focused chipmaker’s market capitalization is now $1.812 trillion, surpassing Google parent company Alphabet, and trailing only Microsoft and Apple. How things change: just one year ago, NVIDIA’s market cap was a paltry $580 billion.

Tech accord on AI & elections will help manage the ‘new reality,’ says Microsoft’s Brad Smith

At the Munich Security Conference, leading tech companies unveiled a new accord that committed them to combating AI-generated content that could disrupt elections.

During a Global Stage panel on the sidelines of this year’s conference, Microsoft Vice Chair and President Brad Smith said the accord would not completely solve the problem of deceptive AI content but would help “manage this new reality in a way that will make a difference and really serve all of the elections… between now and the end of the year.”

As Smith explains, the accord is designed to bring the tech industry together to preserve the “authenticity of content,” including via the creation of content credentials. The industry will also work to detect deepfakes and provide candidates with a mechanism to report them, says Smith, while also taking steps to “promote transparency and education.”

The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technological trends shaping our world.

Watch the full conversation here: How to protect elections in the age of AI

Deepfakes are ‘fraud,’ says Microsoft's Brad Smith

The rapid rise of AI has presented a wide array of challenges, particularly in terms of finding a balance between protecting the right to free expression and safeguarding democracy from the corrosive effects of misinformation.

But Microsoft Vice Chair and President Brad Smith says freedom of expression does not apply to deepfakes — fake images or videos created via AI, which can involve using someone else’s face and/or voice without their permission. During a Global Stage panel on AI and elections at the Munich Security Conference, Smith unequivocally decried deepfakes as a form of “fraud.”

“The right to free expression gives me the right to stand up and say what is on my mind,” says Smith, adding, “I do not have the right to steal and use your voice. Your voice belongs to you and you alone… Let's give people the right to say what they think. Let's not steal their voice and put words in their mouth.”


Watch the full conversation here: How to protect elections in the age of AI

FILE PHOTO: Smoke and steam billows from the coal-fired power plant owned by Indonesia Power, next to an area for Java 9 and 10 Coal-Fired Steam Power Plant Project in Suralaya, Banten province, Indonesia, July 11, 2020.

REUTERS/Willy Kurniawan/File Photo

Hard Numbers: It’s electric, OpenAI’s billions, AI-related legislation, Fred Trump ‘returns,’ Multiplication problems

1,300: Training a large language model is estimated to use about 1,300 megawatt-hours of electricity, roughly the annual electricity consumption of 130 US homes. But that figure is for the previous generation of LLMs, like OpenAI’s GPT-3; the potential electricity usage for GPT-4, the current model, and beyond could be much, much greater.
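The household comparison is easy to sanity-check. A minimal back-of-envelope sketch, assuming (this figure is not in the article) that a typical US home uses about 10 MWh of electricity per year:

```python
# Back-of-envelope check: LLM training energy vs. US household consumption.
# Assumption (not from the article): a typical US home uses roughly
# 10 MWh of electricity per year.
llm_training_mwh = 1_300          # estimated for a GPT-3-class model
mwh_per_home_per_year = 10        # assumed average annual household use

homes_equivalent = llm_training_mwh / mwh_per_home_per_year
print(f"~{homes_equivalent:.0f} US homes for one year")  # ~130
```

At ~10 MWh per household, 1,300 MWh works out to about 130 home-years, matching the article's comparison.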

