AI superintelligence is coming. Should we be worried?
Are AI companies recklessly racing toward artificial superintelligence, or can we avoid a worst-case scenario? On GZERO World, Ian Bremmer sits down with Daniel Kokotajlo, co-author of AI 2027, a new report that forecasts how artificial intelligence might progress over the next few years. As AI approaches human-level intelligence, AI 2027 predicts its impact will “exceed that of the Industrial Revolution,” but it warns of a future where tech firms race to develop superintelligence, safety rails are ignored, and AI systems go rogue, wreaking havoc on the global order. Kokotajlo, a former OpenAI researcher, left the company last year, warning that it was ignoring safety concerns and avoiding oversight in its race to develop ever more powerful AI. He joins Bremmer to talk about the race to superhuman AI, its existential risks, and what policymakers and tech firms should be doing right now to prepare for an AI future that experts warn is only a few short years away.
“One of the unfortunate situations that we're in as a species right now is that humanity in general mostly fixes problems after they happen,” Kokotajlo says. “Unfortunately, the problem of losing control of your army of superintelligences is a problem that we can't afford to wait and see how it goes and then fix it afterwards.”
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
Ilya Sutskever, co-founder and chief scientist of OpenAI, speaks during a talk at Tel Aviv University in Tel Aviv, Israel, on June 5, 2023.
What is “safe” superintelligence?
OpenAI co-founder and former chief scientist Ilya Sutskever has announced a new startup called Safe Superintelligence. You might remember Sutskever as one of the board members who unsuccessfully tried to oust CEO Sam Altman last November. He has since apologized and stayed on at OpenAI before departing in May.
Little is known about the new company — including how it's funded — but its name has inspired debate about what's involved in building a safe superintelligent AI system. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever said. (‘Trust and safety’ is typically what internet companies call their content moderation teams.)
Sutskever said the company won't actually build products en route to superintelligence, so no ChatGPT competitor is coming your way.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
Sutskever also hasn't said what exactly he wants this superintelligence to do, though he said he wants it to be more than a smart conversationalist and to help people with more ambitious tasks. But building the underlying tech and keeping it “safe” seems to be his only stated priority.
Sutskever's view of safety is still rather existential: Will the AI kill us all or not? Is a system still safe if it perpetuates racial bias, hallucinates answers, or deceives users? Surely there should be better safeguards than, “Keep the AI away from our nukes!”