
Singapore sets an example on AI governance


Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she reviews the Singapore government's latest AI policy agenda, how to govern AI, as discussed at the Singapore Conference on Artificial Intelligence.

Hello. My name is Marietje Schaake. I'm in Singapore this week, and this is GZERO AI. Again, a lot of AI activities going on here at a conference organized by the Singaporean government that is looking at how to govern AI, the key question, million-dollar question, billion-dollar question that is on agendas for politicians, whether it is in cities, countries, or multilateral organizations. And what I like about the approach of the government here in Singapore is that they've brought together a group of experts from multiple disciplines, multiple countries around the world, to help them tackle the question of, what should we be asking ourselves? And how can experts inform what Singapore should do with regard to its AI policy? And this sort of listening mode and inviting experts first, I think is a great approach and hopefully more governments will do that, because I think it's necessary to have such well-informed thoughts, especially while there is so much going on already. Singapore is thinking very, very clearly and strategically about what its unique role can be in a world full of AI activities.


“Like asking the butcher how to test his meat”: Q&A on the OpenAI fiasco and the need for regulation

AI-generated art courtesy of Midjourney

The near-collapse of OpenAI, the world’s foremost artificial intelligence company, shocked the world earlier this month. Its nonprofit board of directors fired its high-profile and influential CEO, Sam Altman, on Friday, Nov. 17, for not being “consistently candid” with them. But the board never explained its rationale. Altman campaigned to get his job back and was joined in his pressure campaign by OpenAI lead investor Microsoft and 700 of OpenAI’s 770 employees. Days later, multiple board members resigned, new ones were installed, and Altman returned to his post.

To learn more about what the blowup means for global regulation, we spoke to Marietje Schaake, a former member of the European Parliament who serves as the international policy director of the Cyber Policy Center at Stanford University and as president of the Cyber Peace Institute. Schaake is also a host of the GZERO AI video series.

The interview has been edited for clarity and length.


Illustration of the NVIDIA logo.

REUTERS/Dado Ruvic

Hard Numbers: Delayed chip exports, Three-day workweek, Tim Cook’s view on regulation, Concern vs. excitement, Security pact

1.9%: NVIDIA is building new computer chips to sell to China that are compliant with updated US export regulations. But the California-based company recently announced a delay in the release of those chips until Q1 2024, citing technical problems. In response, NVIDIA’s high-flying stock, which took the company’s valuation north of $1 trillion this year, fell 1.9% on Friday.


An illustration of AI atop a computer motherboard.

Dado Ruvic/Illustration/Reuters

EU AI regulation efforts hit a snag

Europe has spent two years trying to adopt comprehensive AI regulation. The AI Act, first introduced by the European Commission in 2021, aspires to regulate AI models based on different risk categories.

The proposed law would ban dangerous models outright, such as those that might manipulate humans, and mandate strict oversight and transparency for powerful models that carry the risk of harm. For lower-risk models, the AI Act would require simple disclosures. The makers of generative AI models, like the one powering ChatGPT, would have to submit to safety checks and publish summaries of the copyrighted material they’re trained on. In May, the European Parliament approved the legislation, but the three bodies of the European legislature are still hammering out the final text.

Is the EU's landmark AI bill doomed?

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she talks about the potential pitfalls of the imminent EU AI Act and the sudden resistance that could jeopardize it altogether.

After a weekend full of drama around OpenAI, it is now time to shift to another potentially dramatic conclusion of an AI challenge, namely the EU AI Act, that's entering its final phase. And this week, the Member States of the EU will decide on their position. And there is sudden resistance coming from France and Germany in particular, to including foundation models in the EU AI Act. And I think that is a mistake. I think it is crucial for a safe but also competitive and democratically governed AI ecosystem that foundation models are actually part of the EU AI Act, which would be the most comprehensive AI law that the democratic world has put forward. So, the world is watching, and it is important that EU leaders understand that time is really of the essence if we look at the speed of development of artificial intelligence and in particular, generative AI.


Sen. Amy Klobuchar (D-MN) speaks to media near the Senate Chamber during a vote at the US Capitol, in Washington, D.C., on Wednesday, Nov. 15, 2023.

Graeme Sloan/Sipa USA via Reuters

Senators push bipartisan AI bill for increased transparency, accountability

GOP Sen. John Thune of South Dakota and Democratic Sen. Amy Klobuchar of Minnesota last Wednesday unveiled bipartisan legislation – the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 – that aims to establish basic safeguards surrounding the use of AI systems and tools.


India's Prime Minister Narendra Modi arrives at the Bharat Mandapam to inaugurate the Indian Mobile Congress 2023, in New Delhi, India on Oct. 27, 2023.

Kabir Jhangiani/NurPhoto via Reuters

Should India roll the dice on AI regulation?

The United Kingdom and Japan recently hosted influential AI summits, spurring global conversations about regulatory solutions. Now, India wants in on the act, and it is set to host a major conference next month aimed at boosting trust in and adoption of artificial intelligence. But as concerns over the safety of AI grow, New Delhi faces a choice between taking a leading role in the growing international consensus on AI regulation and striking out on its own to nurture innovation with light regulatory touches.


Traders work on the floor at the New York Stock Exchange in New York City.

REUTERS/Brendan McDermid

Hard Numbers: A soured stock sale, a European agreement, copyright complaints, and a secretive summit

$86 billion: Sam Altman’s ouster from OpenAI calls into question an employee stock sale that would have valued the company at $86 billion. The sale was supposed to close as early as next month, according to The Information. With Altman’s departure and the expected mass exodus of OpenAI staff, possibly to Microsoft, expect that valuation to take a serious hit — if the stock sale happens at all. Microsoft stock, meanwhile, reached a record-high close on Monday.

