First, OpenAI tackled text with ChatGPT, then images with DALL-E. Next, it announced Sora, its text-to-video platform. But what might come next is perhaps the most pernicious technology yet: text-to-voice. Not just audio, but specific voices.

A group of OpenAI clients is reportedly testing a new tool called Voice Engine, which can mimic a person’s voice from just a 15-second recording, according to The New York Times. From there, it can translate that voice into other languages.

The report outlined a series of potential abuses: spreading disinformation, allowing criminals to impersonate people online or over phone calls, or even breaking voice-based authenticators used by banks.

In a blog post on its own site, OpenAI seems all too aware of the potential for misuse. Its usage policies mandate that anyone using Voice Engine obtain consent before impersonating someone else and disclose that the voices are AI-generated. OpenAI also says it is watermarking all audio so third parties can detect it and trace it back to the original maker.

But the company is also using this opportunity to warn everyone else that this technology is coming, including urging financial institutions to phase out voice-based authentication.

AI voices have already wreaked havoc in American politics. In January, thousands of New Hampshire residents received a robocall with a voice impersonating President Joe Biden, urging them not to vote in the Democratic primary election. It was generated using simple AI tools and paid for by an ally of Dean Phillips, the Biden primary challenger who has since dropped out of the race.

In response, the Federal Communications Commission clarified that AI-generated robocalls are illegal, and New Hampshire’s legislature passed a law on March 28 that requires disclosures for any political ads using AI.

So what makes this so much more dangerous than other AI-generated media? The imitations are convincing. The Voice Engine demonstrations shared with the public so far sound indistinguishable from the original human recordings, even in foreign languages. But even the Biden robocall, which its maker admitted cost only $150 to make using technology from the company ElevenLabs, was a convincing enough imitation to fool listeners.

But the real danger lies in the absence of other indicators that the audio is fake. With every other form of AI-generated media, there are clues for the discerning viewer or reader. AI text can feel clumsily written, hyper-organized, and chronically unsure of itself, often refusing to give real recommendations. AI images often have a cartoonish or sci-fi sheen, depending on their maker, and are notorious for getting human features wrong: extra teeth, extra fingers, and ears without lobes. AI video, still relatively primitive, is endlessly glitchy.

It’s conceivable that each of these generative AI applications will improve to the point of being indistinguishable from the real thing, but for now, AI voices are the only medium that feels like it could become utterly undetectable without proper safeguards. And even if OpenAI, often the first to market, acts responsibly, that doesn’t mean every actor will.

The announcement of Voice Engine, which doesn’t have a set release date, feels less like a product launch and more like a warning shot.
