Singapore sets an example on AI governance
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she reviews the Singapore government's latest agenda in its AI policy: How to govern AI, at the Singapore Conference on Artificial Intelligence.
Hello. My name is Marietje Schaake. I'm in Singapore this week, and this is GZERO AI. Again, a lot of AI activities going on here at a conference organized by the Singaporean government that is looking at how to govern AI, the key question, the million-dollar question, billion-dollar question that is on agendas for politicians, whether it is in cities, countries, or multilateral organizations. And what I like about the approach of the government here in Singapore is that they've brought together a group of experts from multiple disciplines, multiple countries around the world, to help them tackle the question of, what should we be asking ourselves? And how can experts inform what Singapore should do with regard to its AI policy? And this sort of listening mode and inviting experts first, I think, is a great approach, and hopefully more governments will do that, because I think it's necessary to have such well-informed thoughts, especially while there is so much going on already. Singapore is thinking very, very clearly and strategically about what its unique role can be in a world full of AI activities.
Speaking of the world full of AI activities, the EU will have the last, at least the last planned, negotiating round on the EU AI Act, where the most difficult points will have to come to the table. There are outstanding differences between Member States and the European Parliament around national security uses of AI, or the extent to which human rights protections will be covered, but also the critical discussion that is surfacing more and more around foundation models, whether they should be regulated, how they should be regulated, and how that can be done in a way that European companies are not disadvantaged compared to, for example, US leaders in the generative AI space in particular. So it's a pretty intense political fight, even after it looked like there was political consensus until about a month ago. But of course that is not unusual. Negotiations always have to tackle the most difficult points at the end, and that is where we are. So it's a space to watch, and I wouldn't be surprised if an additional negotiating round were planned after the one this week.
Then there will be the first physical meeting of the UN AI Advisory Body, of which I'm a member, and which I'm looking forward to. This is going to happen in New York City, and it will really be the first opportunity for all of us to get together and discuss, after online working sessions have taken place and a flurry of activities has already taken off since we were appointed roughly a month ago. So the UN is moving at breakneck speed this time, and hopefully it will lead to important questions and answers with regard to the global governance of AI, the unique role of the United Nations, and the application of the UN Charter, international human rights, and international law at this critical moment for the global governance of artificial intelligence.
“Like asking the butcher how to test his meat”: Q&A on the OpenAI fiasco and the need for regulation
AI-generated art courtesy of Midjourney
The near-collapse of OpenAI, the world’s foremost artificial intelligence company, shocked the world earlier this month. Its nonprofit board of directors fired its high-profile and influential CEO, Sam Altman, on Friday, Nov. 17, for not being “consistently candid” with them. But the board never explained its rationale. Altman campaigned to get his job back and was joined in his pressure campaign by OpenAI lead investor Microsoft and 700 of OpenAI’s 770 employees. Days later, multiple board members resigned, new ones were installed, and Altman returned to his post.
To learn more about what the blowup means for global regulation, we spoke to Marietje Schaake, a former member of the European Parliament who serves as the international policy director of the Cyber Policy Center at Stanford University and as president of the Cyber Peace Institute. Schaake is also a host of the GZERO AI video series.
The interview has been edited for clarity and length.
GZERO: What are you taking away from the OpenAI debacle?
Schaake: This incident makes it crystal clear that companies alone are not the legitimate or most fit stakeholder to govern over powerful AI. The confrontation between the board and the executive leadership at OpenAI seems to have at least included disagreement about the impact of next-generation models on society. To weigh what is and is not an acceptable risk, there needs to be public research and scrutiny, based on public policy. I am hoping the soap opera we watched at OpenAI underlines the need for democratic governance, not corporate governance.
Was there any element that was particularly concerning to you?
The governance processes seem underdeveloped in light of the stakes. And there are probably many other parts of OpenAI that lack the maturity to deal with the many impacts their products will have around the world. I am even more concerned than I was two weeks ago.
Microsoft exerted its power by pressuring OpenAI's nonprofit board to partially resign and reinstate Altman. Should we be concerned about Microsoft's influence in the AI industry?
I do not like the fact that with the implosions of OpenAI's governance, the entire notion of giving less power to investors may now lose support. For Microsoft to throw around the weight of its financial resources is not surprising, but also hardly reassuring. Profit motives all too often clash with the public interest, and the competition between companies investing in AI is almost as fierce as that between the developers of AI applications. The drive to outgame competitors rather than to consider multiple stakeholders and factors in society is a perverse one. But instead of looking at the various companies in the ecosystem, we need to look to government to assert itself, and to develop a mechanism of independent oversight.
Sam Altman has been an incredibly visible ambassador for this technology in the US and on the world stage. How would you describe the role he played over the past year with regard to shaping global regulation of AI?
Altman has become the face of the industry, for better and worse. He has made conflicting statements on how he sees regulation as impacting the company. In the same week, he encouraged Congress to adopt regulation, and threatened that OpenAI would leave the EU because of the EU AI Act – regulation. It is a reminder for anyone who needs it that a brilliant businessman should not be the one in charge of deciding on regulation. This anecdote also shows we need a more sophisticated debate about regulation. Just claiming to be in favor or against means little; what matters are the specific objectives of a given piece of regulation, the trade-offs, and the enforcement.
In your view, has his lobbying been successful? Was his message more successful with certain regulators as opposed to others? Did politicians listen to him?
He cleverly presented himself as an ally to regulators, when he appeared before Congress. That is a lesson he may well have learned from Microsoft. In that sense, Altman got a much more friendly reception than Mark Zuckerberg ever got. It seems members of Congress listened and even asked him for advice on how AI should be regulated. It is like asking the butcher how to test his meat. I hope politicians stop asking CEOs for advice and rather feel empowered to consider many more experts and people impacted by the rollout of AI, to serve the public interest, and to prevent harms, protect rights, competition, and national security.
Given what you know now, do you think Altman will continue being the poster boy for AI and an active player in shaping AI regulation?
There are already different camps with regard to what success or danger looks like around AI. There will surely be tribes that see Altman as having come out stronger from this episode. Others will underline the very cynical dealings we saw on display. We should not forget that there is a lot of detail we do not even know about what went down.
I feel like everyone is the meme of Michael Jackson eating popcorn, fascinated by this bizarre series of events, desperately trying to understand what's going on. What are you hoping to learn next? What answers do the people at the center of this ordeal owe to the public?
Actually, we should not be distracted by the entertainment aspect of this soap opera of a confrontation, complete with cliffhangers and plot twists. Instead, if the board, which had a mandate emphasizing the public good, has concerns about OpenAI’s new models, they should speak out. Even if the steps taken appeared hasty and haphazard, we must assume there were reasons behind their concerns.
If you were back in the European Parliament, how would you be responding?
I would work on regulation, before, during, and after this drama. In other words, I would not have changed my activities because of it.
What final message would you like to leave us with?
Maybe just to repeat that this saga underlines the key problems of a lack of transparency, of democratic rules, and of independent oversight over these companies. If anyone needed a refresher of why those are urgently needed, we can thank the OpenAI board and Sam Altman for sounding the alarm bell once more.
Illustration of the NVIDIA logo.
Hard Numbers: Delayed chip exports, Three-day workweek, Tim Cook’s view on regulation, Concern vs. excitement, Security pact
1.9%: NVIDIA is building new computer chips to sell to China that are compliant with updated US export regulations. But the California-based company recently announced a delay in the release of those chips until Q1 2024, citing technical problems. In response, NVIDIA’s high-flying stock, which took the company’s valuation north of $1 trillion this year, fell 1.9% on Friday.
3: Microsoft co-founder Bill Gates doesn’t think AI is going to take everyone’s job, but he does think it could lead to a three-day workweek. “I don't think AI's impact will be as dramatic as the Industrial Revolution,” Gates told Trevor Noah on the comedian’s podcast, “but it certainly will be as big as the introduction of the PC.”
18: Apple CEO Tim Cook thinks that generative AI needs “rules of the road and some regulation,” which he expects will come in the next 18 months. “I think most governments are a little behind the curve today,” Cook said on a podcast with the pop singer Dua Lipa. “I think the US, the UK, the EU, and several countries in Asia are quickly coming up to speed.”
52%: More Americans, some 52%, are concerned about the use of AI than they are excited about it, according to a Pew Research Center survey. Ten percent are more excited than concerned, and 36% have mixed feelings.
18: A group of 18 countries, headlined by the US and UK, announced on Sunday that they had signed a pact to ensure AI systems are safe from cybersecurity threats. The commitments are voluntary but offer guidelines to companies developing AI systems at a time when governments are still in the early stages of crafting regulation to rein in the emerging technology.
An illustration of AI atop a computer motherboard.
EU AI regulation efforts hit a snag
Europe has spent two years trying to adopt comprehensive AI regulation. The AI Act, first introduced by the European Commission in 2021, aspires to regulate AI models based on different risk categories.
The proposed law would ban dangerous models outright, such as those that might manipulate humans, and mandate strict oversight and transparency for powerful models that carry the risk of harm. For lower-risk models, the AI Act would require simple disclosures. The makers of generative AI models, like the one powering ChatGPT, would have to submit to safety checks and publish summaries of the copyrighted material they’re trained on. In May, the European Parliament approved the legislation, but the three bodies of the European legislature are still hammering out the final text.
Bump in the road: Last week, France, Germany, and Italy dealt the AI Act a setback by reaching an agreement that supports “mandatory self-regulation through codes of conduct" for AI developers building so-called foundation models. These are the models that are trained on massive sets of data and can be used for a wide range of applications, including OpenAI’s GPT-4, the large language model that powers ChatGPT. This surprise deal represents a desire to bolster European AI firms at the expense of the effort to hold them legally accountable for their products.
The view of these countries, three of the most powerful in the EU, is that the application of AI should be regulated, not the technology itself, which is a departure from the EU’s existing plan to regulate foundation models. While the tri-country proposal would require developers to publish information about safety tests, it doesn’t impose penalties for withholding that information — though it suggests that sanctions could be introduced later.
A group of tech companies, including Apple, Ericsson, Google, and SAP, signed a letter backing the proposal: “Let's not regulate [AI] out of existence before they get a chance to scale, or force them to leave," the group wrote.
But it angered European lawmakers who favor the AI Act. “This is a declaration of war," one member of the European Parliament told Politico, which suggested that this “power grab” could even end progress on the AI Act altogether. A fifth round of European trilogue discussions is set for Dec. 6, 2023.
EU regulators have grown hungry to regulate AI, a counter to the more laissez-faire approach of the United Kingdom under Prime Minister Rishi Sunak, whose recent Bletchley Declaration, signed by countries including the US and China, was widely considered nonbinding and light-touch. Now, the three largest economies in Europe, France, Germany, and Italy, have brought that thinking to EU negotiations — and they need to be appeased. Europe’s Big Three not only carry political weight but can form a blocking minority in the Council of the EU if there’s a vote, says Nick Reiners, senior geotechnology analyst at Eurasia Group.
Reiners says this wrench thrown into the works by the three makes it unlikely that the AI Act’s text will be agreed upon by the original target date of Dec. 6. But there is still strong political will on both sides, he says, to reach a compromise before next June’s European Parliament elections.
Is the EU's landmark AI bill doomed?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she talks about the potential pitfalls of the imminent EU AI Act and the sudden resistance that could jeopardize it altogether.
After a weekend full of drama around OpenAI, it is now time to shift to another potentially dramatic conclusion of an AI challenge, namely the EU AI Act, that's entering its final phase. And this week, the Member States of the EU will decide on their position. And there is sudden resistance coming from France and Germany in particular, to including foundation models in the EU AI Act. And I think that is a mistake. I think it is crucial for a safe but also competitive and democratically governed AI ecosystem that foundation models are actually part of the EU AI Act, which would be the most comprehensive AI law that the democratic world has put forward. So, the world is watching, and it is important that EU leaders understand that time is really of the essence if we look at the speed of development of artificial intelligence and in particular, generative AI.
And actually, that speed of development is what's kind of catching up now with the negotiators, because in the initial phase, the European Commission had designed the law to be risk-based when we look at the outcomes of AI applications. So, if AI is used to decide on whether to hire someone or give them access to education or social benefits, the consequences for the individual impacted can be significant, and so, proportionate to the risk, mitigating measures should be in place. And the law was designed to cover anything from very low- or no-risk applications to high-risk and unacceptable-risk applications, with a social credit scoring system as an example of the unacceptable. But then when generative AI products started flooding the market, the European Parliament, which was taking its position, decided, “We need to look at the technology as well. We cannot just look at the outcomes.” And I think that that is critical because foundation models are so fundamental. Really, they form the basis of so much downstream use that if there are problems at that initial stage, they ripple through like an earthquake in many, many applications. And if you don't want startups or downstream users to be confronted with liability or very high compliance costs, then it's also important to start at the roots and make sure that the core ingredients of these AI models are properly governed and that they are safe and okay to use.
So, when I look ahead at December, when the European Commission, the European Parliament, and Member States come together, I hope negotiators will look at the way in which foundation models can be regulated, that it is not a yes or no to regulation, but a tiered approach that attaches the strongest mitigating and scrutiny measures to the most powerful players, the way that has been done in many other sectors. It would be very appropriate for AI foundation models as well. There's a lot of debate going on. Open letters are being penned, experts are speaking out in op-eds, and I'm sure there is a lot of heated debate between Member States of the European Union. I just hope that the negotiators appreciate that the world is watching, many people with great hope as to how the EU can once again regulate on the basis of its core values, and that with what we now know about how generative AI is built upon these foundation models, it would be a mistake to overlook them in the most comprehensive EU AI law.
Sen. Amy Klobuchar (D-MN) speaks to media near the Senate Chamber during a vote at the US Capitol, in Washington, D.C., on Wednesday, Nov. 15, 2023.
Senators push bipartisan AI bill for increased transparency, accountability
GOP Sen. John Thune of South Dakota and Democratic Sen. Amy Klobuchar of Minnesota last Wednesday unveiled bipartisan legislation – the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 – that aims to establish basic safeguards surrounding the use of AI systems and tools.
The bill would provide definitions for different AI systems — including “critical impact” systems — and direct the Commerce Department to develop a five-year plan for testing and certifying critical-impact AI, per an overview from the lawmakers. Companies or organizations employing critical impact systems would be required to self-certify compliance with standards determined by the Commerce Department.
The legislation would require the National Institute of Standards and Technology to conduct research with the goal of creating guidelines for providing information on the authenticity and origin of online content. The bill would also require large internet platforms to let users know when they are interacting with content produced via generative AI.
Light-touch: In comments to Politico in September, Thune referred to the legislation as a “light-touch” approach to governing AI that avoids what he described as harmful, heavy-handed regulation. Last week, he explained that the bill was designed to help identify “basic rules of the road” to “protect consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention.”
Congress, which is not exactly known for being particularly tech-savvy, has ramped up efforts over the past year to address the rapid development and expansion of AI. Senate Majority Leader Chuck Schumer has held a series of “AI Insight Forums” aimed at educating lawmakers, for example.
Between challenges in understanding the technology and deeply entrenched political divisions, however, Congress seems unlikely to pass any major AI laws in the near future — particularly as the country enters an election year.
India's Prime Minister Narendra Modi arrives at the Bharat Mandapam to inaugurate the Indian Mobile Congress 2023, in New Delhi, India on Oct. 27, 2023.
Should India roll the dice on AI regulation?
The United Kingdom and Japan recently hosted influential AI summits, spurring global conversations about regulatory solutions. Now, India wants in on the act, and it is set to host a major conference next month aimed at boosting trust in and adoption of artificial intelligence. But as concerns over the safety of AI grow, New Delhi faces a choice between taking a leading role in the growing international consensus on AI regulation and striking out on its own to nurture innovation with light regulatory touches.
India’s government has flip-flopped on the issue. In April, it said it would not regulate AI at all, giving entrepreneurs the leeway they need to build up a world-leading innovation environment. But just two months later, the Ministry of Electronics and Information Technology said India would roll out broad rules after all through the Digital India Act, a major overhaul of decades-old laws governing the tech sector that is still being drafted.
You can see the temptation for India to give the market free rein: AI is expected to add nearly a trillion dollars to India’s GDP by 2035. Its $23 billion market for the semiconductor chips that power AI is expected to nearly quadruple by 2028. The country also has a thriving tech startup culture – 80,000 firms bloomed between 2012 and 2020 – and world-class engineering schools, including the famously competitive and rigorous Indian Institute of Technology system. Major domestic players like Tata and Reliance have attracted investment and partnerships with NVIDIA, the world’s foremost semiconductor designer, eager to build up new markets to replace Chinese business lost to US export controls.
So why shouldn’t India press its advantage? Well, it’s not as if New Delhi is immune to the disruptions and dangers AI potentially poses. The same concerns about malicious actors using the technology to spread disinformation or conduct cyberattacks apply to India, and being the odd country out when even China is joining efforts to set global rules of the road may not be the best look. We’ll get a better sense of India’s preferred policy direction when it hosts the annual summit of the OECD’s Global Partnership on Artificial Intelligence on Dec. 12-14.
Traders work on the floor at the New York Stock Exchange in New York City.
Hard Numbers: A soured stock sale, a European agreement, copyright complaints, and a secretive summit
$86 billion: Sam Altman’s ouster from OpenAI calls into question an employee stock sale that would have valued the company at $86 billion. The sale was supposed to close as early as next month, according to The Information. With Altman’s departure and the expected mass exodus of OpenAI staff, possibly to Microsoft, expect that valuation to take a serious hit — if the stock sale happens at all. Microsoft’s stock, meanwhile, reached a record-high close on Monday.
3: Three major European countries have come to an agreement about how AI should be regulated. France, Germany, and Italy have agreed to "mandatory self-regulation through codes of conduct," but without any punitive sanctions, at least for now. The move will further weaken European efforts to pass the Artificial Intelligence Act owing to disagreements over how strenuously to regulate the technology.
10,000: Shira Perlmutter, the US register of copyrights, the country’s top copyright official, said her office has received 10,000 comments about artificial intelligence in recent months. Artists have urged federal officials like Perlmutter to take a stance against AI for fear that it inherently violates their copyrights. Meanwhile, a litany of lawsuits alleging copyright violations is making its way through federal courts.
100: More than 100 people gathered last week in the mountains of Utah for an invite-only conference called the AI Security Summit. Bloomberg called the event “secretive” but reported that speakers included multiple OpenAI executives — this was just days before the Altman ouster — as well as multiple US military officials. Among the topics discussed were Biden’s AI executive order, the threat from China, and the state of the semiconductor industry.