GZERO AI Video
GZERO AI is our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution.
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, reflects on the growing excitement around artificial intelligence. At a recent AI conference, Owen observed that while startups and government officials emphasized AI's economic potential, prominent AI researcher Yoshua Bengio voiced serious concerns about its existential risks. Bengio, who was instrumental in developing the technology, stresses the importance of cautious public policy, warning that current AI research tends to prioritize power over safety.
A couple of weeks ago, I was at this big AI conference in Montreal called All In. It was all a bit over the top. There were smoke machines, loud music, and food trucks. It's clear that AI has come a long way from the quiet labs it was developed in. I'm still skeptical of some of the hype around AI, but there's just no question we're in a moment of great enthusiasm. There were dozens of startup founders there talking about how AI was going to transform this industry or that, and government officials promising that AI was going to supercharge our economy.
And then there was Yoshua Bengio. Bengio is widely considered one of the world's most influential computer scientists. In 2018, he and two colleagues won the Turing Award, often called the Nobel Prize of computing, for their work on deep learning, which forms the foundation of many of our current AI models. In 2022, he was the most cited computer scientist in the world. It's really safe to say that AI, as we currently know it, might not exist without Yoshua Bengio.
And I recently got the chance to talk to Bengio for my podcast, "Machines Like Us." I wanted to find out what he thinks about AI now, about the current moment we're in, and I learned three really interesting things. First, Bengio has had an epiphany of sorts, as has been widely discussed in the media. He now believes that, left unchecked, AI has the potential to pose an existential threat to humanity. And so he's asking us: even if there's only a small chance of this, why not proceed with tremendous caution?
Second, he actually thinks that the divide over this existential risk, which seems to exist in the scientific community, is being overplayed. He and Meta's Yann LeCun, for example, with whom he shared the Turing Award, differ on the timeframe of this risk and on industry's ability to contain it. But Bengio argues they agree on the possibility of it. And in his mind, it's this possibility that should create clarity in our public policy. Without certainty about the risk, he thinks the precautionary principle should lead, particularly when the risk is so potentially grave.
Third, and really interestingly, he's concerned about the incentives being prioritized in this moment of AI commercialization. This extends from executives like LeCun potentially downplaying risk and overstating industry's ability to contain it, right down to the academic research labs where a majority of the work is currently focused on making AI more powerful, not safer. This is a real warning that I think we need to heed. There's just no doubt that Yoshua Bengio's research contributed greatly to the current moment of AI we're in, but I sure hope his work on risk and safety shapes the next. I'm Taylor Owen and thanks for watching.
More from GZERO AI Video
Europe’s AI deepfake raid
March 04, 2025
How is AI shaping culture in the art world?
July 02, 2024
How AI models are grabbing the world's data
June 18, 2024
Can AI help doctors act more human?
June 04, 2024
How neurotech could enhance our brains using AI
May 21, 2024
OpenAI is risk-testing Voice Engine, but the risks are clear
April 03, 2024
Should we regulate generative AI with open or closed models?
March 20, 2024
AI and Canada's proposed Online Harms Act
March 05, 2024
Gemini AI controversy highlights AI racial bias challenge
February 29, 2024
When AI makes mistakes, who can be held responsible?
February 20, 2024
AI & human rights: Bridging a huge divide
February 16, 2024
Taylor Swift AI images & the rise of the deepfakes problem
February 06, 2024
Will Taylor Swift's AI deepfake problems prompt Congress to act?
February 01, 2024
ChatGPT on campus: How are universities handling generative AI?
January 23, 2024
Davos 2024: AI is having a moment at the World Economic Forum
January 16, 2024
AI in 2024: Will democracy be disrupted?
December 20, 2023
New AI toys spark privacy concerns for kids
December 12, 2023