Exclusive: How to govern the unknown – a Q&A with MEP Eva Maydell
The European Parliament passed the Artificial Intelligence Act on March 13, making the EU the world’s first major government to adopt comprehensive regulations for the emerging technology. The vote capped a five-year effort to manage AI and its potential to disrupt every industry and raise geopolitical tensions.
The AI Act, which takes effect later this year, places basic transparency requirements on generative AI models such as OpenAI’s GPT-4, mandating that their makers share some information about how they are trained. There are more stringent rules for more powerful models or ones that will be used in sensitive sectors, such as law enforcement or critical infrastructure. As with the EU’s data privacy law, there are steep penalties for companies that violate the new AI legislation – up to 7% of their annual global revenue.
GZERO spoke with Eva Maydell, a Bulgarian member of the European Parliament on the Committee on Industry, Research, and Energy, who negotiated the details of the AI Act. We asked her about the imprint Europe is leaving on global AI regulation.
GZERO: What drove you to spearhead work on AI in the European Parliament?
MEP Eva Maydell: It’s vital that we tackle not only the challenges and opportunities of today but also those of tomorrow. That way, we can ensure that Europe is at its most resilient and prepared. One of the most interesting and challenging aspects of being a politician who works on tech policy is trying to strike the right balance between enabling innovation and competitiveness and ensuring we have the right protections and safeguards in place. Artificial intelligence has the potential to change the world we live in, and having the opportunity to work on such an impactful piece of law was a privilege and a responsibility.
How do you think the AI Act balances regulation with innovation? Can Europe become a standard-setter for the AI industry while also encouraging development and progress within its borders?
Maydell: I fought very hard to ensure that innovation remained a strong feature of the AI Act. However, the proof of the pudding is in the eating. We must acknowledge that Europe has some catching up to do. AI take-up by European companies is 11%. Europeans rely on foreign countries for 80% of digital products and services. We also have to tackle inflation and stagnating growth. AI has the potential to be the engine for innovation, creativity, and prosperity, but only if we ensure that we keep working on all the other important pieces of the puzzle, such as a strong single market and greater access to capital.
AI is evolving rapidly. Does the AI Act set Europe up to be responsive to unforeseen advancements in technology?
Maydell: One of the most difficult aspects of regulating technology is trying to regulate the unknown. That is why it’s essential to stick to principles rather than over-prescription: for example, taking a risk-based approach and, where possible, aligning with international standards. This gives you the ability to adapt. It is also why the success of the AI Office and AI Forum will be so important. The guidance that we offer businesses and organizations in the coming months on how to implement the AI Act will be key to its long-term success. Beyond the pages of the AI Act, we need to think about technological foresight. This is why I launched an initiative at the 60th annual Munich Security Conference – the “Council on the Future.” It aims to bridge the foresight and collaboration gap between the public and private sectors with a view toward enabling the thoughtful stewardship of technology.
Europe is the first mover on AI regulation. How would you like to see the rest of the world follow suit and pass their own laws? How can Europe be an example to other countries?
Maydell: I hope we’re an example to others in the sense that we have tried to take a responsible approach to the development of AI. We are already seeing nations around the world take important steps toward shaping their own governance structures for AI: the US has its Executive Order, and the UK hosted the AI Safety Summit. It is vital that like-minded nations work together to ensure broader coherence around the values associated with the development and use of our technologies. Deeper collaboration through the G7, the UN, and the OECD is something we must continue to pursue.
Is there anything the AI Act doesn't do that you'd like to turn your attention to next?
Maydell: The AI Act is not a silver bullet, but it is an important piece of a much bigger puzzle. We have adopted an unprecedented amount of digital legislation in the last five years. With these strong regulatory foundations in place, my hope is that we now focus on the perhaps less newsworthy but equally important issue of good implementation. This means cutting red tape, reducing excess bureaucracy, and removing frictions or barriers between different EU laws in the digital space. The more clarity and certainty we can offer companies, the more likely it is that Europe will attract inward investment and be the birthplace of some of the biggest names in global tech.
AI vs. truth: Battling deepfakes amid 2024 elections
With nearly half of the globe heading to the polls this year amid lightning-speed developments in generative AI, fears are running rampant over tech-driven disinformation campaigns.
During a Global Stage panel at the Munich Security Conference, Bulgarian politician and European Parliament member Eva Maydell said she fears we will soon be unable to separate fact from deepfake fiction.
While acknowledging the important developments AI and emerging tech offer, Maydell warned that we also “need to be very sober” about how they are threatening the “very fabric of our democratic societies” and eroding trust.
While the EU is trying to push voluntary measures and legislative proposals, Maydell points out that political conversations often revolve around the sense that “we'll probably never be as good as those that are trying to deceive society.”
“But you still have to give it a try, and you need to do it in a very prepared way,” she adds.
Watch the full conversation: How to protect elections in the age of AI
Watch more Global Stage coverage on the 2024 Munich Security Conference.
How to protect elections in the age of AI
Half of the world’s population will have the chance to head to the polls this year in dozens of critical elections worldwide. These votes, which will shape policy and democracy for years to come, come amid light-speed development in artificial intelligence. As Eurasia Group noted in its 2024 Top Risk entitled “Ungoverned AI,” generative AI could be used by domestic and foreign actors – we’re looking at you, Russia – to impact campaigns and undermine trust in democracy.
To meet the moment, GZERO Media, on the ground at the 2024 Munich Security Conference, held a Global Stage discussion on Feb. 17 entitled “Protecting Elections in the Age of AI.” We spoke with Brad Smith, vice chair and president of Microsoft; Ian Bremmer, president and founder of Eurasia Group and GZERO Media; Fiona Hill, senior fellow for the Center on the United States and Europe at Brookings; Eva Maydell, an EU parliamentarian and a lead negotiator of the EU Chips Act and Artificial Intelligence Act; Kersti Kaljulaid, the former president of Estonia; with European correspondent Maria Tadeo moderating. The program also featured interviews with Kyriakos Mitsotakis, Greece’s prime minister, and Benedikt Franke, CEO and vice-chair of the Munich Security Conference. These thought leaders and experts discussed the implications of the rapid rise of AI amid this historic election year.
The group began by delving into what Bremmer has called the “Voldemort” of election years, looking at how election interference and disinformation have evolved since 2016.
“This is the year that people have been very concerned about, but have kind of hoped that they could push off. It's not just because there are elections all over the world and trust in institutions is deteriorating, it's also because the most powerful country in the world, and it's not becoming less powerful, is also one of the most politically dysfunctional,” says Bremmer, referring to the US.
The 2024 US presidential election “is maximally distrust-laden,” says Bremmer, adding that it’s “really hard to have a free and fair election in the US that all of its population” believes is legitimate.
And the worry is that AI could complicate the landscape even further.
Hill agreed that there’s cause for concern but underscored that people should not “panic” to a point where they’re “paralyzed” and “not taking action.”
“Panic is not an option given the stakes,” says Hill, adding, “There are negative aspects of all of this, but there’s also a question we have to grapple with: when legitimate competitors or opposition movements that are otherwise beleaguered decide to use AI tools, that then also has an impact.”
There’s no doubt that AI can be used for nefarious purposes. Deepfakes can fool even the most discerning eye. Disinformation has already been rampant across the internet in recent election cycles and helped sow major divisions in many countries well before AI tools — far more sophisticated than your average meme — were widely available.
“With new tools and products that use generative AI, including from a company like ours, somebody can create a very realistic video, audio, or image. Just think about the different ways it can be used. Somebody can use it and they can make a video of themself, and they can make clear in the video that this is AI generated. That is one way a political candidate, even one who is in prison, can speak,” says Smith, alluding to former Pakistani Prime Minister Imran Khan’s recent use of AI from behind bars.
Along these lines, there are many serious, valid concerns about the impact AI can have on elections and democracy more generally — particularly at a time when people are exhibiting rising levels of distrust in key institutions.
“It's very important to acknowledge a lot of the important developments that AI and emerging tech can bring to support our economic development,” says Maydell, adding, “but at the same time, especially this year, we need to be very sober about some of those threats that are in a way threatening the very fabric of our democratic societies.”
As Maydell noted, this evolving new technology can be harnessed for good and bad. Can AI be used as a tool to protect candidates and the integrity of the electoral process?
A number of major tech companies, including Microsoft, signed an accord at the Munich Security Conference on Friday to help thwart and combat AI-related election interference.
“It's all about trying to put ourselves in a position, not to solve this problem completely, I don't think that's possible, but to manage this new reality in a way that will make a difference,” says Smith. The Microsoft president says the accord brings the tech sector together to preserve the authenticity of content, including by working to detect deepfakes and providing candidates with a mechanism to report any that are created about them.
“We'll work together to promote transparency and public education. This clearly is going to require a lot of work with civil society, with others around the world to help the public be ready,” says Smith.
But is enough being done?
“It's good that both politicians and the companies and society as a whole now has a better understanding where this is all leading us and we are collectively taking actions,” says Kaljulaid, but this is just a “first step” and “next steps need to follow.”
A balance will need to be found between legislating the challenges presented by AI and giving tech companies space to collaborate, innovate and address problems on their own.
“Democracy is always in jeopardy. Every generation has to answer the call to defend it,” says Smith, adding, “Now it's our turn. It's our turn as a generation of people to say that technology always changes, but democracy is a value that we hold timeless. So let's do what it takes to defend it, to preserve and promote it.”
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
AI's potential to impact election is cause for concern - EU's Eva Maydell
EU Parliamentarian Eva Maydell says AI's potential impact on the world's biggest year of elections keeps her up at night. And it's a valid worry—AI's ability to create and disseminate deceptive content at lightning speed means our society can be divided and radicalized faster than ever.
Speaking in a GZERO Global Stage discussion from the 2024 World Economic Forum in Davos, Switzerland, EU Parliamentarian Eva Maydell shares her concerns about the weaponization of AI and other emerging technologies in such a massive global election year.
“I'm worried about deceptive content that can be created faster, can be disseminated faster, and it can divide, and it can radicalize our society,” she said.
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: How is the world tackling AI, Davos' hottest topic?
How is the world tackling AI, Davos' hottest topic?
It’s the big topic at Davos: What the heck are we going to do about artificial intelligence? Governments just can’t seem to keep up with the pace of this ever-evolving technology—but with dozens of elections scheduled for 2024, the world has no time to lose.
GZERO and Microsoft brought together folks who are giving the subject a great deal of thought for a Global Stage event on the ground in Switzerland, including Microsoft’s Brad Smith, EU Member of Parliament Eva Maydell, the UAE’s AI Minister Omar Sultan al Olama, the UN Secretary-General’s envoy on technology Amandeep Singh Gill, and GZERO Founder & President Ian Bremmer, moderated by CNN’s Bianna Golodryga.
The opportunities presented by AI could revolutionize healthcare, education, scientific research, engineering – just about every human activity. But the technology threatens to flood political discourse with disinformation, victimize people through scams or blackmail, and put people out of work. A poll of over 2,500 GZERO readers found that a 45% plurality wants to see international cooperation to develop a regulatory framework.
The world made great strides in AI regulation in 2023, perhaps most prominently in the European Union’s AI Act. But implementation and enforcement are a different game, and with every passing month, AI gets more powerful and more difficult to rein in.
So where do these luminaries see the path forward? Tune in to our full discussion from the World Economic Forum in Davos, Switzerland, above.