Ian Bremmer and Amina Mohammed on the promise and peril of AI
In a GZERO Global Stage discussion at the 79th UN General Assembly, Ian Bremmer and Amina Mohammed emphasized the potential of artificial intelligence (AI) to drive progress towards the Sustainable Development Goals (SDGs) and address global inequities.
Bremmer noted that AI could be the key to achieving goals like clean water access and reducing hunger, pointing out the transformative power AI could bring in the coming years.
"AI is your opportunity," Bremmer said. He highlighted the importance of capacity building, standard setting, and ensuring that the Global South has a seat at the table in AI governance efforts, noting that AI has the potential to move the world towards meaningful progress by 2030.
Amina Mohammed echoed this sentiment but urged caution, emphasizing the need for responsible deployment of AI. "It's really exciting, it's scary, and we're not ready," she said, stressing the importance of investments and proper infrastructure to ensure AI benefits humanity as a whole. Mohammed underscored the responsibility of global leaders to ensure checks and balances are in place as AI continues to evolve.
Bremmer and Mohammed spoke during GZERO’s Global Stage livestream, “Live from the United Nations: Securing our Digital Future,” an event produced in partnership between the Complex Risk Analytics Fund, or CRAF’d, and GZERO Media’s Global Stage series, sponsored by Microsoft. The Global Stage series convenes heads of state, business leaders, and technology experts from around the world for critical debates about the geopolitical and technology trends shaping our world.
AI's existential risks: Why Yoshua Bengio is warning the world
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, reflects on the growing excitement around artificial intelligence. At a recent AI conference, Owen observed that while startups and officials emphasized AI's economic potential, prominent AI researcher Yoshua Bengio voiced serious concerns about its existential risks. Bengio, who was central to the development of the technology, stresses the importance of cautious public policy, warning that current AI research tends to prioritize power over safety.
A couple of weeks ago, I was at this big AI conference in Montreal called All In. It was all a bit over the top. There were smoke machines, loud music, and food trucks. It's clear that AI has come a long way from the quiet labs it was developed in. I'm still skeptical of some of the hype around AI, but there's just no question we're in a moment of great enthusiasm. There were dozens of startup founders there talking about how AI was going to transform this industry or that, and government officials promising that AI was going to supercharge our economy.
And then there was Yoshua Bengio. Bengio is widely considered one of the world's most influential computer scientists. In 2018, he and two colleagues won the Turing Award, the Nobel Prize of computing, for their work on deep learning, which forms the foundation of many of our current AI models. In 2022, he was the most cited computer scientist in the world. It's safe to say that AI, as we currently know it, might not exist without Yoshua Bengio.
And I recently got the chance to talk to Bengio for my podcast, "Machines Like Us." I wanted to find out what he thinks about AI now, about the current moment we're in, and I learned three really interesting things. First, Bengio has had an epiphany of sorts, as has been widely covered in the media. Bengio now believes that, left unchecked, AI has the potential to pose an existential threat to humanity. And so he's asking us: even if there's a small chance of this, why not proceed with tremendous caution?
Second, he actually thinks that the divide over this existential risk, which seems to exist in the scientific community, is being overplayed. He and Meta's Yann LeCun, for example, with whom he won the Turing Award, differ on the timeframe of this risk and on the ability of industry to contain it. But Bengio argues they agree on the possibility of it. And in his mind, it's this possibility that should create clarity in our public policy. Without certainty about the risk, he thinks the precautionary principle should lead, particularly when the risk is so potentially grave.
Third, and really interestingly, he's concerned about the incentives being prioritized in this moment of AI commercialization. This extends from executives like LeCun potentially downplaying risk and overstating industry's ability to contain it, right down to the academic research labs where a majority of the work is currently focused on making AI more powerful, not safer. This is a real warning that I think we need to heed. There's just no doubt that Yoshua Bengio's research contributed greatly to the current moment of AI we're in, but I sure hope his work on risk and safety shapes the next. I'm Taylor Owen and thanks for watching.
UN’s first global framework for AI governance
Artificial intelligence is rapidly transforming global industries and societies, and the United Nations has taken a bold step to address its governance. During its 79th General Assembly, the UN adopted a landmark pact at its Summit of the Future. Ian Bremmer, a member of the UN's high-level advisory panel on AI, highlighted the UN's efforts to create a global framework for AI governance.
The newly released report, Governing AI for Humanity, represents the first truly global approach to addressing the governance challenges posed by AI. Carme Artigas, co-chair of the panel, reinforced why the UN is uniquely positioned to lead this effort. As AI transcends borders and industries, no single nation can manage its potential harms, such as bias, discrimination, and lack of inclusivity, alone. By bringing together nations, particularly those from the Global South, the UN aims to foster collaboration, encourage responsible AI development, and ensure that human rights remain at the forefront of innovation.
As AI continues to evolve, global governance of this transformative technology will become increasingly important in ensuring equity and minimizing risks.
UN Secretary-General António Guterres on AI, Security Council reform, and global conflicts
UN Secretary-General António Guterres joins Ian Bremmer on the GZERO World Podcast for an exclusive conversation from the sidelines of the General Assembly at a critical moment for the world and the UN itself. Amid so many ongoing crises, is meaningful reform at the world’s largest multilateral institution possible? Between ongoing wars in Ukraine and Gaza, the climate crisis threatening the lives of millions, and a broken Security Council, there’s a lot to discuss. But there are some reasons for optimism. This year could bring the UN into a new era by addressing one of the biggest challenges facing our society: artificial intelligence and the growing digital divide. This year, the UN will hold its first-ever Summit of the Future, where members will vote on a Global Digital Compact, agreeing to shared principles for AI and digital governance. In a wide-ranging conversation, Guterres lays out his vision for the future of the UN and why he believes now is the time to reform our institutions to meet today’s political and economic realities.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.
Ian Explains: Why is the UN's Summit of the Future so important?
Will the United Nations be able to adapt to address problems of the modern era, like artificial intelligence and the growing digital divide? On Ian Explains, Ian Bremmer looks at the challenges of multilateralism in an increasingly fragmented world.
In the face of crises like Russia’s invasion of Ukraine, the war in Gaza, and a rapidly warming planet, the UN’s goals of peace and security feel like a failure. But this year’s Summit of the Future during the General Assembly could be a turning point for the 78-year-old institution. UN members will vote on a Global Digital Compact to regulate AI, fight misinformation, and connect the whole world to the internet. Bremmer is one of 39 experts on the UN’s High-Level Advisory Body who have been studying the issue of global AI governance for the past year to better understand what that Compact should include. This week, the group released a report called “Governing AI for Humanity” with recommendations for creating a global regulatory framework for AI that is safe, inclusive, and equitable. Rather than the patchwork of regulation that has emerged so far, concentrated in wealthy countries, can the UN lead the global AI conversation?
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
Breaking: The UN unveils plan for AI
Overnight, and after months of deliberation, a United Nations advisory body studying artificial intelligence released its final report. Aptly called “Governing AI for Humanity,” it is a set of findings and policy recommendations for the international organization and an update to the interim report the group released in December 2023.
“As experts, we remain optimistic about the future of AI and its potential for good. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place,” the report’s authors wrote. “The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action.”
Before we dive in, a quick humblebrag and editorial disclosure: Ian Bremmer, founder and president of both Eurasia Group and GZERO Media, served as a rapporteur for the UN High-Level Advisory Body on Artificial Intelligence, the group in charge of the report.
The HLAB-AI report asks the UN to begin working on a “globally inclusive” system for AI governance, calls on governments and stakeholders to develop AI in a way that protects human rights, and makes seven recommendations. Let’s dive into each:
- An international scientific panel on AI: A new group of volunteer experts would issue an annual report on AI risks and opportunities. They’d also contribute regular research on how AI could help achieve the UN’s Sustainable Development Goals, or SDGs.
- Policy dialogue on AI governance: A twice-yearly policy dialogue with governments and stakeholders on best practices for AI governance. It’d have an emphasis on “international interoperability” of AI governance.
- AI standards exchange: This effort would develop common definitions and standards for evaluating AI systems. It’d also create a process for identifying gaps in these definitions and standards and for filling them.
- Capacity development network: A network of new development centers that would provide researchers and social entrepreneurs with expertise, training data, and computing power. It’d also develop online educational resources for university students and a fellowship program for individuals to spend time in academic institutions and tech companies.
- Global fund for AI: A new fund that would collect donations from public and private groups and disburse money to “put a floor under the AI divide,” focused on countries with fewer resources to fund AI.
- Global AI data framework: An initiative to set common standards and best practices governing AI training data and its provenance. It’d hold a repository of data sets and models to help achieve the SDGs.
- AI office within the Secretariat: This new office would see through the proposals in this report and advise the Secretary-General on all matters relating to AI.
The authors conclude by remarking that if the UN can chart the right path forward, “we can look back in five years at an AI governance landscape that is inclusive and empowering for individuals, communities, and States everywhere.”
To learn more, Ian will host a UN panel conversation on Saturday, Sept. 21, which you can watch live here. And if you miss it, we’ll have a recap in our GZERO AI newsletter on Tuesday. You can also check out the full report here.
Can the UN get the world to agree on AI safety?
Artificial intelligence has the power to transform our world, but it’s also an existential threat. There's been a patchwork of efforts to regulate AI, but they’ve been concentrated in wealthy countries, while those in the Global South, who stand to benefit most from AI’s potential, have been left out. Can the United Nations come together at this year’s General Assembly to agree on standards for a safe, equitable, and inclusive AI future?
Tomorrow, the UN’s High-Level Advisory Body on AI will release a report called “Governing AI for Humanity,” with recommendations for global AI governance that will be a roadmap for safeguarding our digital future and making sure AI will truly benefit everyone in the world. Ian Bremmer is one of the 39 experts on the AI Advisory Body, and he sat down with UN Secretary-General António Guterres for an exclusive GZERO World interview on the sidelines of the General Assembly to discuss the report and why Guterres believes the UN is the only organization capable of creating a truly global, inclusive framework for AI.
“The United Nations has one important characteristic: its legitimacy. It's a platform where everybody can be together,” Guterres says. “Others have the power, others have the money, but not the legitimacy or the convening power the UN has.”
The exclusive conversation begins airing nationally on GZERO World with Ian Bremmer on public television this Friday, Sept. 20. Everything you need to know about the Advisory Body’s final report will be dissected and analyzed in the GZERO Daily, landing in inboxes tomorrow (Sept. 19) at 7 am. Sign up here.
How neurotech could enhance our brains using AI
So earlier this year, Elon Musk's other company, Neuralink, successfully installed a brain implant in Noland Arbaugh, a 29-year-old quadriplegic man. Last week, the story got a ton of attention when Neuralink announced that part of the implant had malfunctioned. But I think this news cycle and all the hyperbole around it misses the bigger picture.
Let me explain. So first, this Neuralink technology is really remarkable. It allowed Arbaugh to play chess with his mind, which he showed in his demo. But the potential beyond this really is vast. It's pretty early days for this technology, but there are signs that it might help us eliminate painful memories, repair lost bodily functions, and maybe even allow us to communicate with each other telepathically.
Second, this brain implant is part of a wider range of neurotech. A second category isn't implanted in your body; instead, it sits on or near your body and picks up electrical signals from your brain. These devices, which are being developed by Meta and Apple, among many others, are more akin to health-tracking devices, except they open up access to our thoughts.
Third, this tech is an example of a technology adjacent to AI being turbocharged by recent advances in AI. One of the challenges of neurotech has been how to make sense of all of this data coming from our brains, and here is where AI becomes really powerful. We increasingly have the ability to give this data from our minds meaning. The result is that these technologies, and the corporations developing them, have access to the most private data we have: data about what we think. Which of course brings up the bigger point: we're on the cusp of getting access to our brain data, and the potential for abuse really is vast. And it's already happening.
Governments are already using neurotech to try to read their citizens' minds, and corporations are working on ways to advertise to potential customers in their dreams. And finally, I think this shows very clearly that we need to be thinking about regulation, and fast. Nita Farahany, who recently wrote a book about the future of neurotechnology called “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” thinks we have a year to figure out the governance of this tech. A year. It's moving that fast. While so many in the AI debate are discussing the existential risks of AI, we might want to pay some attention to the technologies that are adjacent to AI and being empowered by it, as they likely present a far more immediate challenge.
I'm Taylor Owen, and thanks for watching.