![An image of OpenAI CEO Sam Altman is seen on a mobile device screen in this illustration.](https://www.gzeromedia.com/media-library/an-image-of-openai-ceo-sam-altman-is-seen-on-a-mobile-device-screen-in-this-illustration.jpg?id=52334521&width=1200&height=748)
OpenAI announced that it is training a new generative AI model to eventually replace GPT-4, the industry-standard model that powers ChatGPT and Microsoft Copilot.
But OpenAI’s board of directors also said it is forming a new Safety and Security Committee to advise it on the risks posed by powerful AI. In November 2023, the previous board abruptly fired CEO Sam Altman for not being candid with them, prompting OpenAI staffers and lead investor Microsoft to pressure the board to rehire him. It worked: Altman rejoined the company, and most of the old board members resigned.
OpenAI has sought to be an industry leader in generative AI while staying in the good graces of regulators looking to rein in its ambitions. The company signed onto the Biden administration’s voluntary pledge to mitigate AI risks in July 2023, and Altman recently joined the Department of Homeland Security’s new Artificial Intelligence Safety and Security Board.
The US has done little to curb the ambitions of its most prominent AI firms, but that goodwill depends on the appearance of being a reliable and trustworthy actor — one that will propel Silicon Valley ahead of other global tech hubs while building AI that can help humanity, not harm it.