
OpenAI vs. catastrophic AI

OpenAI, the much-hyped startup behind the chatbot ChatGPT and the image generator DALL-E, announced this week that it is forming an internal preparedness team to curb the “catastrophic risks” posed by AI. So, what exactly are they worried about? Just “chemical, biological, radiological, and nuclear” threats. That’s it? Well, there’s also “autonomous replication and adaptation,” by which models could become dangerously powerful by gathering resources and making copies of themselves.


OpenAI CEO Sam Altman, meanwhile, has embarked on a global tour calling for AI regulation while lobbying for rules that align with his own vision.

OpenAI should be applauded for trying to prevent its own products from becoming a nuclear threat — something most companies don’t need to worry about. But what’s also happening here is that one of the world’s highest-profile AI firms is signaling to global regulators both that it’s a responsible actor and that it’s already regulating itself.

While Eurasia Group senior analyst Nick Reiners believes Altman is serious about his commitment to averting disaster, he doesn’t expect self-regulation by companies to deter governments from adopting their own AI rules. The Biden administration, for one, “started with getting companies like OpenAI to sign up to a series of voluntary commitments in July, so if [they] believed in a purely voluntary approach, then they would have left it at that,” Reiners said. “Instead they are turning these commitments into legal obligations, and they are doing so at an impressive speed by government standards.”
