What is “safe” superintelligence?

Ilya Sutskever, co-founder and chief scientist of OpenAI, speaks during a talk at Tel Aviv University in Tel Aviv, Israel, June 5, 2023.
REUTERS/Amir Cohen

OpenAI co-founder and former chief scientist Ilya Sutskever has announced a new startup called Safe Superintelligence. You might remember Sutskever as one of the board members who unsuccessfully tried to oust Sam Altman last November. He has since apologized and stayed on at OpenAI before departing in May.

Little is known about the new company, including how it’s funded, but its name has inspired debate about what’s involved in building a safe superintelligent AI system. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever said. (“Trust and safety” is typically what internet companies call their content moderation teams.)

Sutskever said that he won’t actually build products en route to superintelligence — so no ChatGPT competitor is coming your way.

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

Sutskever also hasn’t said what exactly he wants this superintelligence to do, though he wants it to be more than a smart conversationalist and to help people with more ambitious tasks. But building the underlying tech and keeping it “safe” seems to be his only stated priority.

Sutskever’s view is still rather existentialist, as in: Will the AI kill us all or not? Is it still a safe system if it perpetuates racial bias, hallucinates answers, or deceives users? Surely there should be better safeguards than, “Keep the AI away from our nukes!”
