Gemini AI controversy highlights AI racial bias challenge

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she questions whether big tech companies can be trusted to tackle racial bias in AI, especially in the wake of Google's Gemini software controversy. More importantly, should these companies be the ones designing and deciding what diverse representation looks like?

This was a week full of AI-related stories. The one that stood out to me was Google's effort to correct for bias and discrimination in its generative AI model, and how utterly it failed. We saw Gemini, the name of the model, coming up with synthetically generated images of very ethnically diverse Nazis. Of all political ideologies, this white supremacist group, of course, historically had few, if any, people of color in it. And that remains true, unfortunately, as the movement continues to exist, albeit in smaller form, today.

AI & human rights: Bridging a huge divide
AI & human rights: Bridging a huge divide | Marietje Schaake | GZERO AI

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, reflects on the missing connection between human rights and AI as she prepares for her keynote at the Human Rights in AI conference at the Mila Quebec Institute for Artificial Intelligence. GZERO AI is our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution.

Will Taylor Swift's AI deepfake problems prompt Congress to act?
AI Porn: Is Taylor Swift the messiah we've been waiting for? | GZERO AI

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she discusses how Taylor Swift's traumatic experience with AI deepfake porn could, thanks to the pop star's popularity, prove the turning point in passing laws that protect individuals from harmful generative AI practices.

AI regulation means adapting old laws for new tech: Marietje Schaake
AI regulation & policy: How to adapt old laws for new tech | GZERO AI
It's not only about adopting new regulations for AI; it's really also about enforcing existing principles and laws in new contexts, says AI expert Marietje Schaake.
Davos 2024: AI is having a moment at the World Economic Forum
Davos 2024: AI is having a moment at the World Economic Forum | GZERO AI

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, Schaake reports live from the World Economic Forum meeting in Davos, where AI is one of the dominant themes. Interestingly, she says, the various conversations about AI have been nuanced: it has been acknowledged as much as a top risk for the year as for its immense potential.

Hi, my name is Marietje Schaake. We are in Davos at the World Economic Forum, where AI really is one of the key topics that people are talking about. And I think what stands out, and what I've heard referenced in various meetings, is that the WEF's risk report of this year has signaled that disinformation, especially as a result of the uptake of emerging technologies, is considered one of the key risks that people see this year.

AI in 2024: Will democracy be disrupted?
2024 in AI: Democracy in the spotlight | GZERO AI

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she shares her reflection on AI in 2023.

Hello, this is GZERO AI. My name is Marietje Schaake. It's the end of the year, and so it's the time for lists. As we see so many top fives, top threes, top tens of the key developments in AI, I thought I would just share a couple of reflections. Not list them, just look back on this year, which was remarkable in so many ways.

We saw a huge explosion of discussion around AI governance. Are companies the ones that can take on all this responsibility of assessing risk, or deciding when to push new research onto the market, or, as illustrated by the dramatic saga at OpenAI, are companies not in a good position to make all these decisions themselves and to design checks and balances all in-house? Governments seem to agree: I don't think they want to leave these decisions to the big companies, and so they are really stepping up across the board and across the globe. Only recently, in the last days of this year, we saw the political agreement around the EU AI Act, a landmark law that will really set a standard in the democratic world for governing AI in a binding fashion. But there were also a lot of voluntary codes of conduct, as we saw at the G7, statements that came out of the AI Safety Summit such as the Bletchley Declaration, and the White House's executive order, to add to the many initiatives taken in an attempt to make sure that AI developments at least respect the laws that are on the books, if not make new ones where needed.

Singapore sets an example on AI governance
AI governance: Singapore is having a critical discussion | GZERO AI

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she reviews the Singapore government's latest AI policy agenda, how to govern AI, as discussed at the Singapore Conference on Artificial Intelligence.

Hello. My name is Marietje Schaake. I'm in Singapore this week, and this is GZERO AI. There is a lot of AI activity going on here at a conference organized by the Singaporean government that is looking at how to govern AI, the key question, the million-dollar question, the billion-dollar question, that is on the agendas of politicians, whether in cities, countries, or multilateral organizations. And what I like about the approach of the government here in Singapore is that they've brought together a group of experts from multiple disciplines and multiple countries around the world to help them tackle the question of what we should be asking ourselves, and how experts can inform what Singapore should do with regard to its AI policy. This sort of listening mode, inviting experts first, is, I think, a great approach, and hopefully more governments will do the same, because such well-informed thinking is necessary, especially while there is so much going on already. Singapore is thinking very clearly and strategically about what its unique role can be in a world full of AI activities.

Is the EU's landmark AI bill doomed?
Is the EU's landmark AI bill doomed? | GZERO AI | GZERO Media

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she talks about the potential pitfalls of the imminent EU AI Act and the sudden resistance that could jeopardize it altogether.

After a weekend full of drama around OpenAI, it is now time to shift to another potentially dramatic conclusion of an AI challenge, namely the EU AI Act, which is entering its final phase. This week, the Member States of the EU will decide on their position, and there is sudden resistance, coming from France and Germany in particular, to including foundation models in the EU AI Act. I think that is a mistake. I think it is crucial for a safe but also competitive and democratically governed AI ecosystem that foundation models are actually part of the EU AI Act, which would be the most comprehensive AI law that the democratic world has put forward. So the world is watching, and it is important that EU leaders understand that time is really of the essence, given the speed of development of artificial intelligence and, in particular, generative AI.

