Gemini AI controversy highlights AI racial bias challenge
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini software controversy. Importantly, should these companies be the ones designing and deciding what that representation looks like?
This was a week full of AI-related stories. Again, the one that stood out to me was Google's effort to correct for bias and discrimination in its generative AI model, and its utter failure to do so. We saw Gemini, the name of the model, coming up with synthetically generated images of very ethnically diverse Nazis. And of all political ideologies, this white supremacist group, of course, had few, if any, people of color in it historically. And that remains the case, unfortunately, as the movement continues to exist, albeit in smaller form, today.
And so, lots of questions, embarrassing rollbacks by Google of their new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, who, with her new book out called “Unmasking AI” and her previous research “Coded Bias,” has well established how models by the largest and most popular companies are still so flawed, with harmful and illegal consequences.
So, it begs the question: how much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong while the product was being put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. And indeed, with so much power over these models and so many questions about how controllable they are, we should really ask ourselves, you know, when are these products ready to go to market, and what should be the consequences when people are discriminated against? Not just because there is a revelation of an embarrassing flaw in the model, but, you know, because this could have real-world consequences: misleading notions of history, and treating people in ways that violate protections against discrimination.
So, even if there was a lot of outcry, and sometimes even a sort of entertainment, about how poorly this model performed, I think there are bigger lessons about AI governance to be learned from the examples we saw from Google's Gemini this past week.
AI & human rights: Bridging a huge divide
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, reflects on the missing connection between human rights and AI as she prepares for her keynote at the Human Rights in AI conference at the Mila Quebec Institute for Artificial Intelligence. GZERO AI is our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution.
I'm in the hallway of the Mila Quebec Institute for Artificial Intelligence, where there's a conference that deals with human rights and artificial intelligence. And I'm really happy that we focus on this uniquely today and also tomorrow, because too often the thinking about, the analysis of, and the agenda for human rights in the context of AI governance are an afterthought.
And so it's great to hear the various ways in which human rights are at stake, from facial recognition systems to, you know, making sure that there is representation in governance from marginalized communities, for example. But what I still think is missing is a deeper connection between those people who speak AI, if you will, and those people who speak human rights. Because still the worlds of policy and politics and the worlds of artificial intelligence, and within those, the people who care about human rights, tend to speak in parallel universes. And so what I'll try to do in my closing keynote today is to bring people's minds to a concrete, positive political agenda for change in thinking about how we can frame human rights for a broader audience, making sure that we use the tools that are there, the laws that apply both internationally and nationally, and doubling down on enforcement. Because so often the seeds for meaningful change are already in the laws, but they are not enforced forcefully enough to hold anyone to account.
And so we have a lot of work ahead of us. But I think the conference was a good start. And I'll be curious to see the different tone and the focus on geopolitics as I go to the Munich Security Conference with lots of the GZERO team as well.
- Siddhartha Mukherjee: CRISPR, AI, and cloning could transform the human race ›
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- New AI toys spark privacy concerns for kids ›
- Emotional AI: More harm than good? ›
- Singapore sets an example on AI governance ›
- UK AI Safety Summit brings government leaders and AI experts together ›
- Making rules for AI … before it’s too late ›
Are leaders asking the right questions about AI?
The official theme of the 2024 World Economic Forum held recently in Davos, Switzerland, was “Rebuilding Trust” in an increasingly fragmented world. But unofficially, the hottest topic on the icy slopes was artificial intelligence.
Hundreds of private sector companies convened to pitch new products and business solutions powered by AI, and nearly two dozen panel discussions featured “AI” in their titles. There was even an “AI House” on the main promenade, just blocks from the Congress Center, where world leaders and CEOs gathered.
So, there were many conversations about the rapidly evolving technology. But were they the right ones?
GZERO’s Tony Maciulis spoke to Marietje Schaake, a former member of the EU parliament who now leads an AI policy program at Stanford. Their conversation focused on the human side of AI and what it could mean for jobs and the workforce.
A recent study from the International Monetary Fund (IMF) revealed that as many as 40% of jobs worldwide could be adversely impacted by AI. Schaake said that kind of upheaval could lead to political unrest and a further rise in populism, and she encouraged corporate and public-sector leaders alike to find solutions now, before the inequality gap widens further.
Watch the full GZERO World episode: Al Gore on US elections & climate change
Catch GZERO World with Ian Bremmer every week at http://gzeromedia.com/gzeroworld or on US public television. Check local listings.
Will Taylor Swift's AI deepfake problems prompt Congress to act?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she talks about how Taylor Swift's traumatic experience with AI deepfake porn could be the turning point in passing laws that protect individuals from harmful Generative AI practices, thanks to the pop star's popularity.
Today I want to talk about Taylor Swift, and that may suggest that we are going to have a lighthearted episode, but that's not the case. On the contrary, because the pop icon has been the subject of one of the most traumatizing experiences that anyone can live through online in relation to AI and new technology.
Taylor Swift was the victim of the creation of non-consensual sexually explicit content, or a pornographic deepfake. Now, the term deepfake may ring a bell because we've talked about the more convincing messages that generative AI can create in the context of election manipulation and disinformation. And that is indeed a grave concern of mine. But when you look at the numbers, the vast majority of deepfakes online are of a pornographic nature. And when those are non-consensual, imagine, for example, that it's not a pop icon that everybody knows and can come to the rescue for, but a young teenager who is faced with a deepfake porn image of themselves, with classmates sharing it. You can well imagine the deep trauma and stress this causes, and we know that this kind of practice has unfortunately led to self-harm among young people as well.
So, it is high time that tech companies do more and take more responsibility for preventing this kind of terrible non-consensual use of their products and the ensuing sharing and virality online. So, if there's one silver lining to this otherwise very depressing experience of Taylor Swift's, it is that she and her followers may be able to do what few have managed to: move Congress to pass legislation. There seems to be bipartisan movement, and all I can hope is that it will lead to better protection of people from the worst practices of generative AI.
- Making rules for AI … before it’s too late ›
- Can watermarks stop AI deception? ›
- Deepfake porn targets high schoolers ›
- Regulate AI, but how? The US isn’t sure ›
- Taylor Swift AI images & the rise of deepfakes problem - GZERO Media ›
- Voters beware: Elections and the looming threat of deepfakes - GZERO Media ›
AI regulation means adapting old laws for new tech: Marietje Schaake
Why did Eurasia Group list "Ungoverned AI" as one of the top risks for 2024 in its annual report? Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, discussed the challenges of developing effective AI regulation, emphasizing that politicians and policymakers must recognize that not every challenge posed by AI and other emerging technologies is novel; many merely require proactive approaches for resolution. She spoke during GZERO's Top Risks of 2024 livestream conversation, focused on Eurasia Group's report outlining the biggest global threats for the coming year.
"We didn't need AI to understand that discrimination is illegal. We didn't need AI to know that antitrust rules matter in a fair economy. We didn't need AI to know that governments have a key responsibility to safeguard national security," Schaake argues. "And so, those responsibilities have not changed. It's just that the way in which these poor democratic principles are at stake has changed."
For more:
- Watch the full livestream discussion, moderated by GZERO's publisher Evan Solomon and featuring the authors of the report, Eurasia Group & GZERO President Ian Bremmer and Eurasia Group Chairman Cliff Kupchan.
- Read the full report on The Top Risks of 2024.
- And don't miss Marietje Schaake's updates as co-host of our video series GZERO AI.
- A world of conflict: The top risks of 2024 ›
- UK AI Safety Summit brings government leaders and AI experts together ›
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake ›
- Singapore sets an example on AI governance ›
- AI and Canada's proposed Online Harms Act - GZERO Media ›
- Yuval Noah Harari: AI is a “social weapon of mass destruction” to humanity - GZERO Media ›
AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake
The 2024 World Economic Forum in Davos was dominated by conversations about AI and its potential as well as possible pitfalls for society. GZERO’s Tony Maciulis spoke to former European Union parliamentarian Marietje Schaake about the current regulatory landscape, a recent report from the International Monetary Fund (IMF) saying as many as 40% of jobs globally could be lost or impacted by AI, and how that might give rise to unrest as we head into a critical year of elections.
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. Sign up for the GZERO AI weekly newsletter to keep up with all things AI and find out when new episodes are published.
For more about AI at this year's World Economic Forum, watch our Global Stage discussion, Making AI Work for the World.
- This year's Davos is different because of the AI agenda, says Charter's Kevin Delaney ›
- How is the world tackling AI, Davos' hottest topic? ›
- Is the EU's landmark AI bill doomed? ›
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect ›
- UK AI Safety Summit brings government leaders and AI experts together ›
- Singapore sets an example on AI governance ›
- ChatGPT on campus: How are universities handling generative AI? - GZERO Media ›
- AI regulation means adapting old laws for new tech: Marietje Schaake - GZERO Media ›
- Ian Explains: How will AI impact the workplace? - GZERO Media ›
- How AI is changing the world of work - GZERO Media ›
Davos 2024: AI is having a moment at the World Economic Forum
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, Schaake is live from the World Economic Forum meeting in Davos, where AI is one of the dominant themes. Interestingly, she says, the various conversations about AI have been nuanced: it has been acknowledged as much for being one of the year's top risks as for its immense potential.
Hi, my name is Marietje Schaake. We are in Davos at the World Economic Forum, where AI really is one of the key topics that people are talking about. And I think what stands out, and what I've heard referenced in various meetings, is that the WEF's risk report of this year has signaled that disinformation, especially as a result of the uptake of emerging technologies, is considered one of the key risks that people see this year.
Of course, this being a year in which many elections around the world will take place, but, you know, disinformation about health and about geopolitics is also factoring in there. So, there is more emphasis on risk as a result of that report than I would normally expect here, where companies are the dominant voices, companies that normally sell, you know, all the great visions that they have for what AI can achieve. And what's interesting is that while there are a lot of panels and other sessions around artificial intelligence focusing on global governance, with the role of the United Nations, for example, on trust and elections, on healthcare and AI, geopolitics and AI, you know, AI on the frontlines, these discussions seem to be happening in parallel universes, where there are those who are focusing very much on their concerns for civil liberties and the risk of state surveillance.
There are others who are saying, well, scientific breakthroughs are going to save the world. So what I hope will happen, either here or in the coming year, is that the analysis of what we must expect from AI will start leading to much more concrete policies and enforceable action, because otherwise we're going to continue to see this rapidly changing technology have deep and wide impact on people all around the world without consequences. And I think we need to make sure that there are guardrails, that these are firm, and that, yes, opportunities can be reaped, but certainly risks can be prevented. And hopefully the report and the discussions here in Davos, with people coming into these mountains from around the world, can actually be meaningful and have an impact in the coming year.
- Russian war crimes exhibit at Davos reveals civilian toll in Ukraine ›
- A pinch of the Davos "secret sauce"? ›
- The AI power paradox: Rules for AI's power ›
- Davos 2024: China, AI & key topics dominating at the World Economic Forum ›
- Regulate AI, but how? The US isn’t sure ›
- Ukraine pushes to stay top of mind at Davos 2024 - GZERO Media ›
- How is the world tackling AI, Davos' hottest topic? - GZERO Media ›
- This year's Davos is different because of the AI agenda, says Charter's Kevin Delaney - GZERO Media ›
Podcast: Trouble ahead: The top global risks of 2024
Listen: In a special edition of the GZERO podcast, we're diving into our expectations for the topsy-turvy year ahead. The war in Ukraine is heading into a stalemate and possible partition. Israel's invasion of Gaza has amplified region-wide tensions that threaten to spill over into an even wider, even more disastrous, even ghastlier conflict. And in the United States, the presidential election threatens to rip apart the feeble tendrils holding together American democracy.
All those trends and more topped Eurasia Group's annual Top Risks project for 2024, which takes the view from 30,000 feet to summarize the most dangerous and looming unknowns in the coming year. Everything from out-of-control AI to China's slow-rolling economy made this year's list.
GZERO Publisher Evan Solomon sat down with Eurasia Group Founder and President Ian Bremmer and Chairman Cliff Kupchan to work through their list of Top Risks for 2024 alongside Susan Glasser, staff writer at The New Yorker and co-author of "The Divider: Trump in the White House, 2017-2021"; Zeid Ra'ad Al Hussein, CEO & President of the International Peace Institute and former United Nations High Commissioner for Human Rights; and Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence. The big throughline this year? Events spiral out of control even against the wishes of major players. Whether it's possible escalation between Israel and Iranian proxies, Chinese retaliation to the result of the Taiwanese election, or central banks finding themselves squeezed into a corner by persistent inflation, the sheer number of moving parts presents a risk in and of itself.
Take a deep dive with the panel in our full discussion, recorded live on January 8.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform, to receive new episodes as soon as they're published.