
An OpenAI insider warns of the reckless race to AI dominance

Are AI companies being reckless and ignoring safety concerns in the race to develop superintelligence? On GZERO World, Ian Bremmer is joined by Daniel Kokotajlo, an OpenAI whistleblower and now executive director of the AI Futures Project, to discuss new developments in artificial intelligence and his concerns that big tech companies like OpenAI and DeepMind are too focused on beating each other to create new, powerful AI systems and not focused enough on safety guardrails, oversight, and existential risk. Kokotajlo left OpenAI last year over deep concerns about the direction of its AI development and argues that tech companies are dangerously unprepared for the arrival of superintelligent AI. If he’s right, humanity is barreling toward an era of unprecedented power without a safety net, one where the future of AI is decided not by careful planning, but by who gets there first.

“OpenAI and other companies are just not giving these issues the investment they need,” Kokotajlo warns. “We need to make sure that the control over the army of superintelligences is not something one man or one tiny group of people gets to have.”

GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).

New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).

OpenAI whistleblower Daniel Kokotajlo on superintelligence and existential risk of AI

Listen: How much could our relationship with technology change by 2027? In the last few years, new artificial intelligence tools like ChatGPT and DeepSeek have transformed how we think about work, creativity, even intelligence itself. But tech experts are ringing alarm bells that powerful new AI systems rivaling human intelligence are being developed faster than regulation, or even our understanding, can keep up. Should we be worried? On the GZERO World Podcast, Ian Bremmer is joined by Daniel Kokotajlo, a former OpenAI researcher and executive director of the AI Futures Project, to discuss AI 2027, a new report that forecasts AI’s progression, in which tech companies race to beat each other to develop superintelligent AI systems, and the existential risks ahead if safety guardrails are ignored. AI 2027 reads like science fiction, but Kokotajlo’s team has direct knowledge of current research pipelines, which is exactly why it’s so concerning. How will artificial intelligence transform our world, and how do we avoid the most dystopian outcomes? What happens when the line between man and machine disappears altogether?

Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform to receive new episodes as soon as they're published.


What is artificial general intelligence?

Artificial General Intelligence (AGI) is the holy grail of AI research and development. What exactly does AGI mean, and how will we know when we’ve achieved it? On Ian Explains, Ian Bremmer breaks down one of the most exciting (and terrifying) discussions happening in artificial intelligence right now: the race to build AGI, machines that don’t just mimic human thinking but match and then far surpass it. AGI is still a little hard to define. Some say it’s when a computer can accomplish any cognitive task a human can; others say it’s about transfer learning. Researchers have been predicting AGI’s arrival for decades, but lately, as new AI tools like ChatGPT and DeepSeek become more and more powerful, there is a consensus that achieving true general intelligence in computers isn’t a matter of if, but when. And when it does arrive, they say it will transform almost everything about the way humans live their lives. But is society ready for the huge changes experts warn are only a few years away? What happens when the line between man and machine disappears altogether?

GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).

New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).


Why Meta opened up

Last week, Meta CEO Mark Zuckerberg announced his intention to build artificial general intelligence, or AGI, a standard whereby AI will have human-level intelligence in all fields, and said Meta will have 350,000 high-powered NVIDIA graphics chips by the end of the year.

Zuckerberg isn’t alone in his intentions: Meta joins a long list of tech firms trying to build a super-powered AI. But he is alone in saying he wants to make Meta’s AGI open-source. “Our long-term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit,” Zuckerberg said. Um, everyone?

Critics have serious concerns about the advent of the still-hypothetical AGI, and publishing such technology on the open web is a whole other story. “In the wrong hands, technology like this could do a great deal of harm. It is so irresponsible for a company to suggest it,” University of Southampton professor Wendy Hall, who advises the UN on AI issues, told The Guardian. She added that it is “really very scary” for Zuckerberg to even consider it.

