Singapore sets an example on AI governance
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, recorded at the Singapore Conference on Artificial Intelligence, she reviews the Singapore government's latest AI policy agenda: how to govern AI.
Hello. My name is Marietje Schaake. I'm in Singapore this week, and this is GZERO AI. Again, a lot of AI activities going on here at a conference organized by the Singaporean government that is looking at how to govern AI, the key question, the million-dollar question, the billion-dollar question on the agendas of politicians, whether in cities, countries, or multilateral organizations. And what I like about the approach of the government here in Singapore is that they've brought together a group of experts from multiple disciplines and multiple countries around the world to help them tackle the question: what should we be asking ourselves, and how can experts inform what Singapore should do with regard to its AI policy? This sort of listening mode, inviting experts first, is a great approach, and hopefully more governments will do the same, because such well-informed thinking is necessary, especially while there is so much going on already. Singapore is thinking very clearly and strategically about what its unique role can be in a world full of AI activities.
Speaking of the world full of AI activities, the EU will hold the last, at least the last planned, negotiating round on the EU AI Act, where the most difficult points will have to come to the table. There are outstanding differences between Member States and the European Parliament around national security uses of AI and the extent to which human rights protections will be covered, but also a critical discussion, surfacing more and more, around foundation models: whether they should be regulated, how they should be regulated, and how that can be done in a way that does not disadvantage European companies compared to, for example, US leaders in the generative AI space in particular. So it's a pretty intense political fight, even after it looked like there was political consensus until about a month ago. But of course that is not unusual. Negotiations always have to tackle the most difficult points at the end, and that is where we are. So it's a space to watch, and I wouldn't be surprised if an additional negotiating round were planned after the one this week.
Then there will be the first physical meeting of the UN AI Advisory Body, of which I'm a member and which I'm looking forward to. It is going to happen in New York City, and it will really be the first opportunity for all of us to get together and discuss, after online working sessions have taken place and a flurry of activity has already taken off since we were appointed roughly a month ago. So the UN is moving at breakneck speed this time, and hopefully it will lead to important questions and answers with regard to the global governance of AI, the unique role of the United Nations, and the application of the UN Charter, international human rights law, and international law at this critical moment for the global governance of artificial intelligence.
Is the EU's landmark AI bill doomed?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she talks about the potential pitfalls of the imminent EU AI Act and the sudden resistance that could jeopardize it altogether.
After a weekend full of drama around OpenAI, it is now time to shift to another potentially dramatic conclusion of an AI challenge, namely the EU AI Act, which is entering its final phase. This week, the Member States of the EU will decide on their position, and there is sudden resistance, coming from France and Germany in particular, to including foundation models in the EU AI Act. I think that is a mistake. It is crucial for a safe but also competitive and democratically governed AI ecosystem that foundation models are actually part of the EU AI Act, which would be the most comprehensive AI law the democratic world has put forward. So the world is watching, and it is important that EU leaders understand that time is really of the essence, given the speed of development of artificial intelligence and, in particular, generative AI.
And actually, that speed of development is what's now catching up with the negotiators, because in the initial phase, the European Commission had designed the law to be risk-based, looking at the outcomes of AI applications. So if AI is used to decide whether to hire someone or give them access to education or social benefits, the consequences for the individual can be significant, and so, proportionate to the risk, mitigating measures should be in place. The law was designed to cover everything from very low or no-risk applications to high-risk and unacceptable applications, a social credit scoring system being an example of the unacceptable. But then, when generative AI products started flooding the market, the European Parliament, which was taking its position, decided: we need to look at the technology as well; we cannot just look at the outcomes. And I think that is critical, because foundation models are so fundamental. They really form the basis of so much downstream use that if there are problems at that initial stage, they ripple through like an earthquake into many, many applications. And if you don't want startups or downstream users to be confronted with liability or very high compliance costs, then it's also important to start at the roots and make sure that the core ingredients of these AI models are properly governed and that they are safe to use.
So, when I look ahead to December, when the European Commission, the European Parliament, and the Member States come together, I hope negotiators will look at the way foundation models can be regulated: not a yes or no to regulation, but a progressive, tiered approach that attaches the strongest mitigating and scrutiny measures to the most powerful players, the way that has been done in many other sectors. It would be very appropriate for AI foundation models as well. There's a lot of debate going on. Open letters are being penned, experts are speaking out in op-eds, and I'm sure there is a lot of heated debate between Member States of the European Union. I just hope the negotiators appreciate that the world is watching, many people with great hope as to how the EU can once again regulate on the basis of its core values, and that with what we now know about how generative AI is built upon these foundation models, it would be a mistake to overlook them in the most comprehensive EU AI law.
British Prime Minister Rishi Sunak reshuffled his ministerial team on Monday, including bringing back former leader David Cameron, seen here, as foreign minister.
Sunak’s desperate cabinet reshuffle is unlikely to pay off
British Prime Minister Rishi Sunak engaged in a stunning game of political musical chairs on Monday, unexpectedly breathing new life into the career of David Cameron – who, as prime minister, enabled the Brexit referendum.
Sunak sacked Suella Braverman as home secretary, shifting James Cleverly — who was foreign secretary — into the role, and now, seven years after leaving Downing Street, Cameron returns as the UK’s top diplomat.
Not an easy gig: Cameron becomes foreign secretary amid an array of global crises, with Russia’s war against Ukraine, growing tensions between the West and China, and the Israel-Hamas war topping the list. He has his work cut out for him, but with a strong record of support for Ukraine and Israel, he’s unlikely to shift the government’s approach in a drastic way.
What this means: It’s curious that Sunak would choose Cameron — a former leader who resigned after failing to get Brits to reject Brexit — as a top cabinet member, particularly with a national election looming before January 2025. That said, Cameron is a moderate with years of political and diplomatic experience. It could be a signal that Sunak is pushing his government toward the center ahead of the general election, as polling shows Conservatives trailing far behind Labour.
But, as things stand, creating distance from Braverman while pulling in Cameron is probably not enough to save the Tories. A snap YouGov poll found that 57% of British adults believe Sunak was right to sack Braverman, while just 24% said it was a good decision to appoint Cameron as foreign secretary.
Some analysts think Sunak’s move smacks of political desperation. The cabinet reshuffle “shows a government running on empty,” tweeted Mujtaba Rahman, managing director for Europe at Eurasia Group.
UK AI Safety Summit brings government leaders and AI experts together
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she takes you behind the scenes of the first-ever UK AI Safety Summit.
Last week, the AI Summit took place, and I'm sure you've read all the headlines, but I thought it would be fun to also take you behind the scenes a little bit. I arrived early in the morning of the day the summit started. Everybody was made to go through security between 7 and 8 AM, so pretty early, and the program only started at 10:30. What that led to was a long reception over coffee, where old friends and colleagues met, new people were introduced, and participants from business, government, civil society, and academia really started to mingle.
And maybe that was part of the success of the summit, which then opened formally with remarkably global representation. There had been some discussion about whether it was appropriate to invite the Chinese government, but indeed a Chinese minister attended, as did ministers from India and Nigeria, there to underline that the challenges governments have to deal with around artificial intelligence are global ones. And I think that was an important symbol that the UK government sought to project. Now, there was a little surprise in the opening when Secretary Raimondo of the United States announced that the US would also initiate an AI Safety Institute, right after the UK government had announced its own. It did make me wonder: why not just work together globally? But I guess they each want their own institute.
And those were perhaps the more concrete, tangible outcomes of the conference. Other than that, it was more a statement of intent to look further into the risks of AI safety. Ahead of the conference, there had been a lot of discussion about whether the UK government was taking too narrow a focus on AI safety, whether it had been leaning too much towards the effective altruism, existential risk camp. But in practice, the program gave a lot of room, and I thought this was really important, to discussions of the known and current-day risks that AI presents. For example, to civil rights, when we think about discrimination, or to human rights, when we think about the threats to democracy, both from disinformation, which generative AI can put on steroids, and from the real question of how to govern it at all when companies have so much power and there is such a lack of transparency. So civil society leaders who were worried they would not be sufficiently heard in the program will hopefully feel a little more reassured; I spoke to a wide variety of civil society representatives who were a key part of the participants alongside government, business, and academic leaders.
So, when I talked to some of the first generation of thinkers and researchers in the field of AI, for them it was a significant moment, because they had never thought they would be part of a summit next to government leaders. For a long time they were mostly in their labs researching AI, and suddenly here they were, being listened to at the podium alongside government representatives. In a way they were a little starstruck, and I thought that was funny, because it was probably the same the other way around, certainly for the Prime Minister, who really looked like a proud student when he was interviewing Elon Musk. And that was another surprising development: shortly after the press conference had taken place, his moment to shine in the media with the outcomes of the summit, Prime Minister Sunak decided to spend the airtime, and certainly the social media coverage, interviewing Elon Musk, who then predicted that AI would eradicate lots and lots of jobs. Remarkably, that was a topic that barely got mentioned at the summit, so maybe it was a good thing that it became part of the discussion after all, albeit in an unusual way.
Rishi Sunak's first-ever UK AI Safety Summit: What to expect
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she previews what to expect from the UK's upcoming AI Summit.
This week takes us to the AI Summit in the UK. This is a little preview, so I don't know yet what the results of the summit will be. But what we can say is that it's a prestige project for the Sunak government. It's been hastily put together; a month ago, the invitation wasn't even in my inbox yet. The government is looking to see what it can deliver short of calling for regulation, because that is definitely something it wants to stay away from. We've seen speculation that it will come up with something like an IPCC for AI, modeled after the successful Intergovernmental Panel on Climate Change, where existing research might be compared and lessons can be learned about, again, the narrow focus on the safety of AI.
I'll be looking specifically at how representative the attendance at the summit will be. In the past, we've seen governments lean very heavily on bringing CEOs into meetings to talk about AI, but I think it's very important to have multidisciplinary, multistakeholder groups of people thinking about the future of AI: civil society representatives, academics with different views, and other experts who can speak to the lived experience of AI in the here and now. Because frankly, it's not only catastrophic risk that people should be concerned about, but the much more present and clear problems that AI causes now.
Discrimination and bias are well-known problems of AI systems, but these systems are also very harmful to the environment, and we don't talk about that enough. There are concerns about antitrust in the AI context, and of course about the ease with which convincing disinformation and manipulation can be produced through synthetic media, and how that will harm democracy. I hope there will be a focus on that as well.
We do know that for the UK, attracting investment and getting companies to settle in the United Kingdom, being welcoming to the economic development of AI, is important, especially after Brexit. The country has tried to tell the world it's open for business, even if its economy is not doing too well. And as such, it is going somewhat against the current of, for example, the EU, which is focusing on hard regulation: a comprehensive AI law in the EU AI Act. Even today, we saw an executive order on AI announced by the White House and a code of conduct presented by the G7.
So, it almost looks as if there are a lot of people who want to steal the UK's thunder, but it's too early to tell. The summit is still to take place, and of course, we will keep you posted on how that goes.
Is King Charles III the "Wolf" of Buckingham Palace?
Britain's King Charles III was only four years old when his mother was crowned in 1953. But at 74, he's now the oldest person to be crowned in British history, Ian Bremmer explains on GZERO World.
He hasn't spent the past 50 years just sitting around, though: he's transformed his private estate, the Duchy of Cornwall, into a billion-pound business empire.
In 2021, it was worth over a billion pounds, and Charles had received £23 million from it.
While the family does bring in a lot of money for the UK economy, some are questioning the Windsors' ballooning personal fortune in a time of economic crisis.
US debt limit: default unlikely, dysfunction probable
Ian Bremmer shares his insights on global politics this week on World In :60.
Is the United States at real risk of default over the debt limit?
I say no. More importantly, the markets say no. Investors certainly aren't concerned about it. But of course, the fact that investors aren't concerned is part of the reason the politicians will get closer to breaking the debt limit without an agreement. It's good that Biden and McCarthy are finally talking to each other, but in the near term, if June is really the X date, the date when you would hit a default, then since there's not enough time to really agree to anything, what looks more likely is that they punt for a few months with a very short-term extension, and then you're still in the same soup. And some level of credit crisis is probably required to make the deal painful enough that McCarthy feels he can get away with it and not lose his job, and that Biden can get away with it and not lose political support in the election. So that's the dysfunction of Washington around the debt limit.
The "godfather of AI "says we may be approaching a "nightmare scenario." What should we do about it?
This is the fellow, Hinton, who just resigned from Google and is now coming out and speaking pretty strongly. The issue is what we can do about it. In the near term, everyone I talk to who's involved in developing AI is spending all of their time ensuring they are not left behind. So it's a massive amount of investment and a massive amount of effort with the gas pedal down, full steam ahead, lest the Chinese win, lest your competitors in the United States win. And there aren't a lot of people in the US government who really understand the issue. There aren't a lot of people who would agree on what regulation should look like, and there's no one serious who controls and releases these tools who's prepared to do a pause or accept any real limitations. So I hate to say it, but I think what we should do is get as educated as humanly possible and get government leaders up to speed, so that when the initial major crisis hits, as it will, there's more capacity to respond and to start formulating what some of the institutional responses and constraints will be. But until that crisis hits, the likelihood that you slow this down is virtually zero. And also, I'm a huge enthusiast for all the upside that comes from AI. So it's not like I'm thinking, "Oh my God, this is all dystopian." No, it's great, and it's driving a lot of progress and a lot of efficiency, which is why there's so much money coming into it. But it's also why we're not going to slow it down until something pretty bad hits.
As the coronation of King Charles approaches, what's the state of the United Kingdom?
Well, I mean, better, in the sense that Rishi Sunak is a credible, capable, solid pair of hands, at least in terms of his economic stewardship of the UK, as well as his engagement with the EU, his engagement with Macron in France, and the resolution of the Northern Ireland-Ireland border issue. He is, therefore, the person who can finally put a stake in Brexit and get the country moving on. You'd still be betting on Labour and Keir Starmer as the next prime minister in a general election, but it's no longer inconceivable that the Tories can be competitive, and Sunak deserves a lot of credit for that. So I have to say, I suspect he's going to have quite a positive trip to the United States next month, and I look forward to spending a little time with him when I'm in the UK too.
Gang members wait to be taken to their cells after 2,000 gang members were transferred to the Terrorism Confinement Center in Tecoluca, El Salvador. Handout distributed March 15, 2023.
What We’re Watching: El Salvador’s lingering state of emergency, Northern Ireland on alert, Alibaba’s breakup, Greek election matters
El Salvador’s state of emergency one year later
This week marks one year since El Salvador’s bullish millennial president, Nayib Bukele, introduced a state of emergency, enabling his government to deal with the scourge of gang violence that has long made his country one of the world’s most dangerous.
Quick recap: To crack down on the country’s 70,000 gang members, Bukele’s government denied alleged criminals the right to know why they were detained and access to legal counsel. The arrest blitz has seen nearly 2% of the adult population locked up.
Despite these draconian measures and Bukele’s efforts to circumvent a one-term limit, he enjoys a staggering 91% approval rating.
Bukele has also sought to distinguish himself as an anti-corruption warrior, which resonates with an electorate disillusioned by years of corrupt politicians (Bukele’s three predecessors have all been charged with corruption. One is in prison; two are on the run.)
Externally, relations with the Biden administration have been icy under Bukele, with San Salvador refusing to back a US-sponsored UN resolution condemning Russia’s war in Ukraine.
What matters most to Salvadorans is the dropping crime rate, which is why Bukele will likely cruise to reelection next year.
Fears of domestic terror attack in Northern Ireland
Britain's MI5 intelligence agency has raised the domestic terror threat level in Northern Ireland from “substantial” to “severe” amid fears of an imminent attack in the British-run region. This follows a series of attacks by Irish nationalist groups in recent months, mainly against police.
The New Irish Republican Army, a paramilitary group with roots in the original militant group of the same name, has taken responsibility for a series of crimes against law enforcement and journalists.
For context, the IRA that was dominant in the 20th century disbanded with the signing of the Good Friday Agreement in 1998, which put an end to decades of violence between pro-British unionists who wanted to stay part of the UK and Irish nationalists calling for the unification of Northern Ireland with Ireland.
This warning comes as US President Joe Biden is preparing to travel to Belfast next month to mark the 25th anniversary of the peace deal, which put an end to the conflict, known as the Troubles.
Indeed, tensions have risen since Brexit, which revived age-old questions about the status of Northern Ireland’s borders. The threat level in Britain, meanwhile, remains “substantial,” meaning that an attack is still a strong possibility, according to authorities.
Alibaba breaks up … itself
Now we know the real reason Alibaba founder Jack Ma resurfaced in China this week. On Tuesday, the Chinese e-commerce giant announced it would spin off its different businesses into six units with separate CEOs under a single holding company. Each unit will be allowed to seek outside capital or go public independently.
Alibaba claims that the Chinese government did not order the restructuring, but it's an open secret that Xi Jinping thought the company had become too rich and powerful. The restructuring plan was unveiled the day after Ma made his first public appearance in the country since late 2020 to boost confidence in the tech company and within the broader sector. (His public criticism of regulators set off a broader crackdown against China's tech sector that hit Alibaba hard.)
Politics aside, Alibaba is just following in the footsteps of its main rivals, Tencent and JD.com, which showed earlier they got the memo from Xi: Break yourself up before you become too big to fail, or it'll be worse if we have to do it for you. The question is, would this ever happen in the US to curb the power of Big Tech?
Greek PM calls spring election
PM Kyriakos Mitsotakis, whose popularity has dipped in the wake of a train disaster last month that killed 57, has called a general election for May 21. The train crash sparked national protests and strikes as angry Greeks blamed the government for poor transport-sector investment and regulation.
In this election, Greece is transitioning to a proportional representation system, making it harder for any party to enjoy an outright win.
Mitsotakis, whose term was set to end in July, has been dogged by protests and by allegations that security forces wiretapped political opponents. His reputational dent, combined with his New Democracy Party’s declining numbers – though they remain slightly ahead of the opposition Syriza Party – raises the likelihood that Greece will soon be ruled by a coalition.
Syriza, meanwhile, says that even if it wins an outright majority, it will form a "government of cooperation." But the left-wingers have ruled out the possibility of working in a coalition with Mitsotakis’s conservatives.