AI regulation means adapting old laws for new tech: Marietje Schaake
Why did Eurasia Group list "Ungoverned AI" as one of the top risks for 2024 in its annual report? Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and a former Member of the European Parliament, discussed the challenges of developing effective AI regulation. She emphasized that politicians and policymakers must recognize that not every challenge posed by AI and other emerging technologies is novel; many simply require proactive application of existing law. She spoke during GZERO's Top Risks of 2024 livestream conversation, focused on Eurasia Group's report outlining the biggest global threats for the coming year.
"We didn't need AI to understand that discrimination is illegal. We didn't need AI to know that antitrust rules matter in a fair economy. We didn't need AI to know that governments have a key responsibility to safeguard national security," Schaake argues. "And so, those responsibilities have not changed. It's just that the way in which these core democratic principles are at stake has changed."

For more:
- Watch the full livestream discussion, moderated by GZERO's publisher Evan Solomon and featuring the authors of the report, Eurasia Group & GZERO President Ian Bremmer and Eurasia Group Chairman Cliff Kupchan.
- Read the full report on The Top Risks of 2024.
- And don't miss Marietje Schaake's updates as co-host of our video series GZERO AI.