GZERO AI Video
GZERO AI is our weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution.
Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our weekly video series on the AI revolution. In this episode, she argues that while OpenAI is testing its new Voice Engine model to identify its risks, we have already seen the clear dangers of voice impersonation technology. What we need is independent assessment of these new technologies, applied equally to the companies that want to tread carefully and to those that want to race ahead in developing and deploying them.
About a year ago, I was part of a small meeting where I was asked to read a paragraph of text that seemed random to me. But before I knew it, I heard my own voice, very convincingly, saying things through the speakers of the conference room that I had never said and would never say.
It was a real goosebump moment, because I realized that generative AI used for voice was already very convincing. What I had heard was a prototype of Voice Engine, the new OpenAI product which, The New York Times reports, the company is choosing to release only to a limited set of users while it is still testing for risky uses.
I don't think this testing with a limited set of users is needed to understand the risks. We've already heard of fraudulent robocalls impersonating President Biden. We've heard of criminals trying to deceive parents with voice messages that sound like their children, claiming to be in trouble and asking the parent to send money, which of course ends up with the criminal group, not the children.
So the risks of voice impersonation are clear. Companies will also point to opportunities, such as helping people who have lost their voice through illness or disability, and that is an important opportunity to explore. But we cannot be naive about the risks. In response to the political robocalls, the Federal Communications Commission at least drew a line and said that AI cannot be used for them, so there are some restrictions. All in all, though, we need more independent assessment of these new technologies, and a level playing field for all companies: not just those that choose to pace the release of their new models, but also those that want to race ahead. Because sooner or later, one company or another will, and we will all potentially be confronted with widely accessible voice-generating artificial intelligence.
It is a tricky moment. The race to bring these technologies to market and their rapid development also carry a lot of risk and harm, an ongoing dynamic in the AI space. So I hope that, as discussions about regulation and guardrails take place around the world, the full spectrum of use cases that we know of and can anticipate will be on the table, with the aim of keeping people free from crime and our democracy safe, while making sure that people in minority and disabled communities who stand to benefit from this technology can do so as well.