It’s rare for the head of a cutting-edge tech firm to ask for more regulation, but that’s precisely what happened Tuesday when Sam Altman, CEO of OpenAI, appeared before Congress to talk about the powers and risks of artificial intelligence.
In the months since OpenAI released ChatGPT late last year, attention to AI’s possibilities — utopian, dystopian, and ridiculous — has ballooned. Who among us has not already created “a painting of Biggie Smalls being chauffeured across the Brooklyn Bridge in a lime green limousine driven by a capybara, in the style of Van Gogh”?
But the serious concerns fall into a few categories: AI causing potentially vast net job losses, learning and then replicating harmful human biases, or being used to generate deepfakes so convincing that, as one disillusioned AI pioneer has warned, “no one will know what is true anymore.”
Altman, who said he fears AI could cause “significant harm,” suggested a government agency tasked with licensing large AI platforms and holding them to certain standards. Lawmakers say they are keen to regulate AI early — a contrast with social media, which became a jungle before Congress could grab a machete. But progress will be slow as they try to strike a delicate balance: preventing the worst harms without stifling the best innovations.