AI & human rights: Bridging a huge divide

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, reflects on the missing connection between human rights and AI as she prepares for her keynote at the Human Rights in AI conference at the Mila Quebec Institute for Artificial Intelligence. GZERO AI is our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution.

I'm in the hallway of the Mila Quebec Institute for Artificial Intelligence, where a conference is underway on human rights and artificial intelligence. And I'm really happy that we're focusing on this exclusively today and tomorrow, because too often the thinking about, the analysis of, and the agenda for human rights in the context of AI governance are an afterthought.

And so it's great to hear the various ways in which human rights are at stake, from facial recognition systems to making sure that marginalized communities are represented in governance, for example. But what I still think is missing is a deeper connection between the people who speak AI, if you will, and the people who speak human rights. The worlds of policy and politics and the worlds of artificial intelligence, and within them the people who care about human rights, still tend to operate in parallel universes. So what I'll try to do in my closing keynote today is turn people's minds to a concrete, positive political agenda for change: thinking about how we can frame human rights for a broader audience, making sure we use the tools that already exist, the laws that apply both internationally and nationally, and doubling down on enforcement. Because so often the seeds of meaningful change are already in the laws, but they are not enforced forcefully enough to hold anyone to account.

And so we have a lot of work ahead of us, but I think the conference was a good start. I'll be curious to see the different tone and the focus on geopolitics as I head to the Munich Security Conference, along with much of the GZERO team.
