OpenAI is risk-testing Voice Engine, but the risks are clear

Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she says that while OpenAI is testing its new Voice Engine model to identify its risks, we have already experienced the clear dangers of voice impersonation technology. What we need is more independent assessment of these new technologies, applied equally to companies that want to tread carefully and to those that want to race ahead in developing and deploying the technology.

About a year ago, I was part of a small meeting where I was asked to read a paragraph of text that seemed fairly random to me. But before I knew it, I heard my own voice, very convincingly, coming through the speakers of the conference room and saying things I had never said and would never say.

And it was a real goosebump moment, because I realized that generative AI used for voice was already very convincing. That was a prototype of Voice Engine, the new OpenAI product that, as the New York Times now reports, the company is choosing to release only to a limited set of users while it is still testing for risky uses.

And I don't think this testing with a limited set of users is needed to understand the risks. We've already heard of fraudulent robocalls impersonating President Biden. We've heard of criminals trying to deceive parents with voice messages that sound like their children, claiming to be in trouble and asking the parent to send money, which then, of course, benefits the criminal group, not their children.

So the risks of voice impersonation are clear. Of course, companies will also point to opportunities to help people who may have lost their voice through illness or disability, which I think is an important opportunity to explore. But we cannot be naive about the risks. In response to the political robocalls, the Federal Communications Commission at least drew a line and said that AI cannot be used for these calls, so there is some kind of restriction. But all in all, we need to see more independent assessment of these new technologies and a level playing field for all companies, not just those that choose to pace the release of their new models but also those that want to race ahead. Because sooner or later, one company or another will, and we will all potentially be confronted with widely accessible, voice-generating artificial intelligence.

So it is a tricky moment: the race to bring these technologies to market and their rapid development also incur a lot of risk and harm, and that is an ongoing dynamic in the AI space. I hope that in the discussions around regulation and guardrails happening around the world, the full spectrum of use cases that we know of and can anticipate will be on the table, with the aim of keeping people free from crime and keeping our democracy safe, while making sure that people in minority and disabled communities who stand to benefit from this technology can do so as well.
