Gemini AI controversy highlights AI racial bias challenge


Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini software controversy. Importantly, should these companies be the ones designing and deciding what that representation looks like?

This was a week full of AI-related stories. Once again, the one that stood out to me was Google's effort to correct for bias and discrimination in its generative AI model, and utterly failing at it. We saw Gemini, the model in question, producing synthetically generated images of ethnically diverse Nazis. Of all political ideologies, this white supremacist movement, of course, historically included few, if any, people of color, and that unfortunately remains true of the movement as it continues to exist, albeit in smaller form, today.

And so there were lots of questions, embarrassing rollbacks by Google of its new model, and big questions, I think, about what we can expect in terms of corrections here. Because the problem of bias and discrimination has been well researched by people like Joy Buolamwini, whose new book “Unmasking AI” is out now and whose earlier research, featured in “Coded Bias,” well established how models from the largest and most popular companies remain deeply flawed, with harmful and even illegal consequences.

So it begs the question: how much grip do the engineers developing these models really have on what the outcomes can be, and how could this have gone so wrong while the product was put onto the market? There are even those who say it is impossible to be fully representative in a fair way. And it is a big question whether companies should be the ones designing and deciding what that representation looks like. Indeed, with so much power over these models and so many questions about how controllable they are, we should really ask ourselves when these products are ready to go to market and what the consequences should be when people are discriminated against. Not just because there is a revelation of an embarrassing flaw in the model, but because this could have real-world consequences: misleading notions of history, and people mistreated in violation of protections against discrimination.

So even if there was a lot of outcry, and sometimes even a degree of entertainment, about how poorly this model performed, I think there are bigger lessons about AI governance to be learned from what we saw from Google's Gemini this past week.
