AI policy formation must include voices from the global South

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she explains the need to incorporate diverse and inclusive perspectives in formulating policies and regulations for artificial intelligence. Narrowing the focus primarily to the three major policy blocs—China, the US, and Europe—would overlook crucial opportunities to address risks and concerns unique to the global South.

This is GZERO AI from Stanford's campus, where we just hosted a two-day conference on AI policy around the world. And when I say around the world, I mean truly around the world, including many voices from the Global South, from multilateral organizations like the OECD and the UN, and from the big leading AI policy blocs like the EU, the UK, the US and Japan that all have AI offices for oversight.

Israel's Lavender: What could go wrong when AI is used in military operations?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines the Israeli Defence Forces' use of an AI system called Lavender to target Hamas operatives. Lavender reportedly suffers from the same hallucination problems familiar from AI systems like ChatGPT, but the cost of its errors on the battlefield is incomparably severe.

OpenAI is risk-testing Voice Engine, but the risks are clear

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she says that while OpenAI is testing its new Voice Engine model to identify its risks, we have already seen the clear dangers of voice impersonation technology. What we need is more independent assessment of these new technologies, applied equally to companies that want to tread carefully and to those that want to race ahead in developing and deploying them.

Social media's AI wave: Are we in for a “deepfakification” of the entire internet?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.

Should we regulate generative AI with open or closed models?

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, fresh from a workshop hosted by Princeton's Institute for Advanced Study, where discussion centered on whether generative AI models should be open to the public or restricted to a select few, she shares insights into the potential workings, effectiveness, and drawbacks of each approach.

AI and Canada's proposed Online Harms Act

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government's Online Harms Act, which seeks to hold social media companies responsible for harmful content, much of it generated by artificial intelligence.

Voters beware: Elections and the looming threat of deepfakes

With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.

Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.

“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.

“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.

Watch the full conversation here: How to protect elections in the age of AI

Deepfakes and dissent: How AI makes the opposition more dangerous

Former US National Security Council advisor Fiona Hill has plenty of experience dealing with dangerous dictators, but 2024 is throwing even her some curveballs.

After Imran Khan upset the Pakistani establishment in February's elections by using AI to rally his voters from behind bars, she thinks authoritarians will have to reconsider their strategies for suppressing dissent.
