

AI for Good depends on global cooperation, says ITU's Doreen Bogdan-Martin

“Connectivity is an enabler, but it’s not evenly distributed,” says Doreen Bogdan-Martin, Secretary-General of the ITU.

In a conversation with GZERO’s Tony Maciulis at the 2025 AI for Good Summit in Geneva, Bogdan-Martin lays out the urgent global challenge: a widening digital divide in AI access, policy, and infrastructure. “Only 32 countries have meaningful compute capacity. And 85% don’t have an AI strategy.”


Demonstration of AI innovation at the AI for Good Summit in Geneva, Switzerland, on July 7, 2025.

Photo courtesy of ITU

What is Artificial Intelligence “good” for?

Since ChatGPT burst onto the scene in late 2022, it’s been nearly impossible to attend a global conference — from Davos to Delhi — without encountering a slew of panels and keynote speeches on artificial intelligence. Will AI make our lives easier, or will it destroy humanity? Can it be a force for good? Can AI be regulated without stifling innovation?

At the ripe old age of eight, the AI for Good Summit is now a veteran voice in this rapidly evolving dialogue. It kicks off today in Geneva, Switzerland, for what promises to be its most ambitious edition yet.


How AI for Good is tackling the digital divide

“AI is too important to be left to the experts,” says Frederic Werner, co-founder of the AI for Good Summit and head of strategic engagement at ITU (International Telecommunication Union), the United Nations' agency for digital technologies.

Speaking with GZERO's Tony Maciulis on the eve of the 2025 AI for Good Summit in Geneva, Werner reflects on how artificial intelligence has rapidly evolved from early promise to real-world applications—from disaster response to healthcare. But with 2.6 billion people still offline, he warns of a growing digital divide and urges leaders to build inclusive systems from the ground up. “It’s not about connectivity for the sake of it—it’s about unlocking local solutions for local problems,” he says.


US President Donald Trump’s X page is seen displayed on a smartphone with a TikTok logo in the background.

Avishek Das / SOPA Images via Reuters Connect

Where we get our news - and why it changes everything

In August 1991, a handful of high-ranking Soviet officials launched a military coup to halt what they believed (correctly) was the steady disintegration of the Soviet Union. Their first step was to seize control of the flow of information across the USSR by ordering state television to begin broadcasting a Bolshoi Theatre production of Swan Lake on a continuous loop until further notice. (Click that link for some prehistoric GZERO coverage of that event.)


The Graphic Truth: The majors least likely to get you a job out of college

A rising number of US college graduates are having trouble securing jobs. The Class of 2025 is up against the toughest labor market in four years, with the unemployment rate for recent graduates sitting nearly two percentage points above the national average of 4%. Trade tensions are also raising fears of a global recession.

On top of these short-term economic factors is a major long-term one: experts say that many entry-level positions – particularly in the tech sector – are being displaced by artificial intelligence. Here’s a look at the majors least likely to lead to a job after college.

Maybe majoring in history was not such a bad idea after all.


Elon Musk steps down from Trump administration

Elon Musk’s exit from his role at DOGE marks a turning point in the Trump administration.

In this Quick Take, Ian Bremmer breaks down Elon Musk’s departure from the White House, noting, “The impact of DOGE turns out to be one of the less successful experiments of the administration.”

With Musk stepping away to focus on Tesla, SpaceX, and his AI ventures, Ian explores the broader implications, including missed opportunities in government reform, civil service cuts, and the political optics ahead of the US midterm elections.


An OpenAI insider warns of the reckless race to AI dominance

Are AI companies being reckless and ignoring safety concerns in the race to develop superintelligence? On GZERO World, Ian Bremmer is joined by Daniel Kokotajlo, former OpenAI whistleblower and executive director of the AI Futures Project, to discuss new developments in artificial intelligence. Kokotajlo worries that big tech companies like OpenAI and DeepMind are so focused on beating each other to build new, powerful AI systems that they are neglecting safety guardrails, oversight, and existential risk. He left OpenAI last year over deep concerns about the direction of its AI development and argues that tech companies are dangerously unprepared for the arrival of superintelligent AI. If he’s right, humanity is barreling toward an era of unprecedented power without a safety net, one where the future of AI is decided not by careful planning but by who gets there first.

“OpenAI and other companies are just not giving these issues the investment they need,” Kokotajlo warns. “We need to make sure that the control over the army of superintelligences is not something one man or one tiny group of people gets to have.”

GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).

New digital episodes of GZERO World are released every Monday on YouTube. Don’t miss an episode: subscribe to GZERO’s YouTube channel and turn on notifications (🔔).


AI superintelligence is coming. Should we be worried?

Are AI companies recklessly racing toward artificial superintelligence, or can we avoid a worst-case scenario? On GZERO World, Ian Bremmer sits down with Daniel Kokotajlo, co-author of AI 2027, a new report that forecasts how artificial intelligence might progress over the next few years. As AI approaches human-level intelligence, AI 2027 predicts its impact will “exceed that of the Industrial Revolution,” but it warns of a future in which tech firms race to develop superintelligence, safety rails are ignored, and AI systems go rogue, wreaking havoc on the global order. Kokotajlo, a former OpenAI researcher, left the company last year, warning that it was ignoring safety concerns and avoiding oversight in its race to develop ever more powerful AI. He joins Bremmer to talk about the race to superhuman AI, the existential risks, and what policymakers and tech firms should be doing right now to prepare for an AI future that experts warn is only a few short years away.

