Gemini AI controversy highlights AI racial bias challenge


Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini controversy. Importantly, should these companies be the ones designing and deciding what fair representation looks like?

This was a week full of AI-related stories. Once again, the one that stood out to me was Google's attempt to correct for bias and discrimination in its generative AI model, and its utter failure to do so. We saw Gemini, the name of the model, coming up with synthetically generated images of very ethnically diverse Nazis. Of all political ideologies, this white supremacist movement, of course, historically had few, if any, people of color in its ranks. And that remains true, unfortunately, as the movement continues to exist, albeit in smaller form, today.

And so, there were lots of questions, embarrassing rollbacks by Google of its new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, whose new book, “Unmasking AI,” and whose earlier research, featured in “Coded Bias,” established how models from the largest and most popular companies are still deeply flawed, with harmful and even illegal consequences.

So, it begs the question: how much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong when the product had already been put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. Indeed, with so much power over these models and so many questions about how controllable they are, we should really ask ourselves when these products are ready to go to market and what the consequences should be when people are discriminated against. Not just because there is a revelation of an embarrassing flaw in the model, but because, you know, this could have real-world consequences: misleading notions of history, and treatment of people that violates protections against discrimination.

So, even if there was a lot of outcry, and sometimes even a sort of entertainment, about how poorly this model performed, I think there are bigger lessons about AI governance to be learned from the examples we saw from Google's Gemini this past week.
