Singapore sets an example on AI governance
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she reviews the Singapore government's latest agenda in its AI policy: How to govern AI, at the Singapore Conference on Artificial Intelligence.
Hello. My name is Marietje Schaake. I'm in Singapore this week, and this is GZERO AI. Again, a lot of AI activity is going on here at a conference organized by the Singaporean government that is looking at how to govern AI, the key question, the million-dollar question, the billion-dollar question on the agendas of politicians, whether in cities, countries, or multilateral organizations. And what I like about the approach of the government here in Singapore is that they've brought together a group of experts from multiple disciplines and multiple countries around the world to help them tackle the questions: What should we be asking ourselves? And how can experts inform what Singapore should do with regard to its AI policy? This listening mode, inviting experts first, is a great approach, and hopefully more governments will do the same, because such well-informed thinking is necessary, especially while there is already so much going on. Singapore is thinking very clearly and strategically about what its unique role can be in a world full of AI activity.
Speaking of the world full of AI activity, the EU will hold the last, or at least the last planned, negotiating round on the EU AI Act, where the most difficult points will have to come to the table. There are outstanding differences between the Member States and the European Parliament around national security uses of AI and the extent to which human rights protections will be covered, but also the critical discussion surfacing more and more around foundation models: whether they should be regulated, how they should be regulated, and how that can be done in a way that does not disadvantage European companies compared to, for example, US leaders in the generative AI space in particular. So it's a pretty intense political fight, even after it looked like there was political consensus until about a month ago. But of course that is not unusual. Negotiations always have to tackle the most difficult points at the end, and that is where we are. So it's a space to watch, and I wouldn't be surprised if an additional negotiating round is planned after the one this week.
Then there will be the first physical meeting of the UN AI Advisory Body, of which I'm a member and to which I'm looking forward. This is going to happen in New York City, and it will really be the first opportunity for all of us to get together and discuss, after online working sessions have taken place and a flurry of activity has already taken off since we were appointed roughly a month ago. So the UN is moving at breakneck speed this time, and hopefully it will lead to important questions and answers with regard to the global governance of AI, the unique role of the United Nations, and the application of the UN Charter, international human rights law, and international law at this critical moment for the global governance of artificial intelligence.
The OpenAI-Sam Altman drama: Why should you care?
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen takes a look at the OpenAI-Sam Altman drama.
Hi, I'm Taylor Owen. This is GZERO AI. So if you're watching this video, then like me, you're probably glued to your screen over the past week, watching the psychodrama play out at OpenAI, a company literally at the center of the current AI moment we're in.
Sam Altman, the CEO of OpenAI, was kicked out of his company by his own board of directors. Less than a week later, he was back as CEO, and all but one of those board members were gone. All of this would be amusing, and it certainly was in a glib sort of way, if the consequences weren't so profound. I've been thinking a lot about how to make sense of all this, and I keep coming back to a profound sense of deja vu.
First, though, a quick recap. We don't know all of the details, but it really does seem that at the core of this conflict was a tension between two different views of what OpenAI was and will be in the future. Remember, OpenAI was founded in 2015 as a nonprofit, a nonprofit because it chose a mission of building technologies to benefit all of humanity over a private corporate mission of increasing value for shareholders. When they started running out of money a couple of years later, though, they embedded a for-profit entity within this nonprofit structure so that they could capitalize on the commercial value of the products the nonprofit was building. This is where the tension lay: between the incentives of a for-profit engine and the values and mission of a nonprofit board structure.
All of this can seem really new. OpenAI was building legitimately groundbreaking technologies, technologies that could transform our world. But I think the problem, and the wider problem here, is not a new one. This is where I was getting deja vu. Back in the early days of Web 2.0, there was also a huge amount of excitement over a new disruptive technology, in this case the power of social media. In some ways, events like the Arab Spring were very similar to the emergence of ChatGPT: a seismic event that demonstrated to broader society the power of an emerging technology.
Now, I've spent the last 15 years studying the emergence of social media, and in particular how we as societies can balance the immense benefits and upside of these technologies with the clear downside risks as they emerge. I actually think we got a lot of that balance wrong. It's times like this, when a new technology emerges, that we need to think carefully about what lessons we can learn from the past. I want to highlight three.
First, we need to be really clear-eyed about who has power in the technological infrastructure we're deploying. In the case of OpenAI, it seems very clear that the profit incentives won out over the broader social mandate. Power is also about who controls the infrastructure, though. In this case, Microsoft played a big role: they controlled the compute infrastructure, and they wielded this power to come out on top in this turmoil.
Second, we need to bring the public into this discussion. Ultimately, a technology will only be successful if it has legitimate citizen buy-in, if it has a social license. What are citizens supposed to think when they hear the very people building these technologies disagreeing over their consequences? Ilya Sutskever, for example, said just a month ago, when talking about the future of AI, "If you value intelligence over all human qualities, you're going to have a bad time." This kind of comment, coming from the very people building the technologies, only exacerbates an already deep insecurity many people feel about the future. Citizens need to be enabled and empowered to weigh in on the conversation about the technologies that are being built on their behalf.
Finally, we simply need to get the governance right this time. We didn't last time. For over 20 years, we've largely left the social web unregulated, and it's had disastrous consequences. This means not being distracted by technical or systemic complexity that masks lobbying efforts. It means applying existing laws and regulations first ... in the case of AI, things like copyright, online safety rules, data privacy rules, and competition policy ... before we get too bogged down in big, large-scale AI governance initiatives. We just can't let the perfect be the enemy of the good. We need to iterate and experiment, and countries need to learn from each other as they step into this complex new world of AI governance.
Unfortunately, I worry we're repeating some of the same mistakes of the past. Once again, we're moving fast and we're breaking things. If the new board of OpenAI is any indication of how they're thinking about governance, and how the AI world in general values and thinks about governance, there's even more to worry about: three white men calling the shots at a tech company that could very well transform our world. We've been here before, and it doesn't end well. Our failure to adequately regulate social media had huge consequences. While the upside of AI is undeniable, it looks like we're making many of the same mistakes, only this time the consequences could be even more dire.
I'm Taylor Owen, and thanks for watching.
Is the EU's landmark AI bill doomed?
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she talks about the potential pitfalls of the imminent EU AI Act and the sudden resistance that could jeopardize it altogether.
After a weekend full of drama around OpenAI, it is now time to shift to another potentially dramatic conclusion of an AI challenge, namely the EU AI Act, which is entering its final phase. And this week, the Member States of the EU will decide on their position. And there is sudden resistance, coming from France and Germany in particular, to including foundation models in the EU AI Act. And I think that is a mistake. I think it is crucial for a safe but also competitive and democratically governed AI ecosystem that foundation models are actually part of the EU AI Act, which would be the most comprehensive AI law that the democratic world has put forward. So, the world is watching, and it is important that EU leaders understand that time is really of the essence if we look at the speed of development of artificial intelligence and, in particular, generative AI.
And actually, that speed of development is what's catching up with the negotiators now, because in the initial phase, the European Commission had designed the law to be risk-based, looking at the outcomes of AI applications. So, if AI is used to decide whether to hire someone or give them access to education or social benefits, the consequences for the individual impacted can be significant, and so, proportionate to the risk, mitigating measures should be in place. The law was designed to cover anything from very low-risk or no-risk applications to high-risk and unacceptable-risk applications, with a social credit scoring system, for example, deemed unacceptable. But then, when generative AI products started flooding the market, the European Parliament, which was taking its position, decided, “We need to look at the technology as well. We cannot just look at the outcomes.” And I think that is critical, because foundation models are so fundamental. Really, they form the basis of so much downstream use that if there are problems at that initial stage, they ripple through like an earthquake in many, many applications. And if you don't want startups or downstream users to be confronted with liability or very high compliance costs, then it's also important to start at the roots and make sure that the core ingredients of these AI models are properly governed and that they are safe to use.
So, when I look ahead to December, when the European Commission, the European Parliament, and the Member States come together, I hope negotiators will look at the way in which foundation models can be regulated, so that it is not a yes or no to regulation but a progressive, tiered approach that attaches the strongest mitigating and scrutiny measures to the most powerful players. That is how it has been done in many other sectors, and it would be very appropriate for AI foundation models as well. There's a lot of debate going on. Open letters are being penned, experts are speaking out in op-eds, and I'm sure there is a lot of heated debate between Member States of the European Union. I just hope that the negotiators appreciate that the world is watching, many people with great hope as to how the EU can once again regulate on the basis of its core values, and that with what we now know about how generative AI is built upon these foundation models, it would be a mistake to overlook them in the most comprehensive EU AI law.
AI agents are here, but is society ready for them?
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen takes a look at the rise of AI agents.
Today I want to talk about a recent big step towards the world of AI agents. Last week, OpenAI, the company behind ChatGPT, announced that users can now create their own personal chatbots. Prior to this, tools like ChatGPT were primarily useful because they could answer users' questions, but now they can actually perform tasks. They can do things instead of just talking about them. I think this really matters for a few reasons. First, AI agents are clearly going to make some things in our life easier. They're going to help us book travel, make restaurant reservations, manage our schedules. They might even help us negotiate a raise with our boss. But the bigger news here is that private corporations are now able to train their own chatbots on their own data. So a medical company, for example, could use personal health records to create virtual health assistants that could answer patient inquiries, schedule appointments or even triage patients.
Second, this, I think, could have a real effect on labor markets. We've been talking for years about AI disrupting labor, and it might actually happen soon. If you have a triage chatbot, for example, you might not need a big triage center, and therefore you'd need fewer nurses and fewer medical staff. But having AI in the workplace could also lead to fruitful collaboration. AI is becoming better than humans at breast cancer screening, for example, but humans will still be a real asset when it comes to making high-stakes life-or-death decisions or delivering bad news. The key point here is that there's a difference between technology that replaces human labor and technology that supplements it. We're at the very early stages of figuring out that balance.
And third, AI safety researchers are worried about these new kinds of chatbots. Earlier this year, the Center for AI Safety listed autonomous agents as one of its catastrophic AI risks. Imagine a chatbot programmed with incorrect medical data triaging patients in the wrong order. This could quite literally be a matter of life or death. These new agents are a clear demonstration of the growing disconnect between the pace of AI development, the speed with which new tools are being developed and let loose on society, and the pace of AI regulation to mitigate the potential risks. At some point, this disconnect could catch up with us. The bottom line, though, is that AI agents are here. As a society, we had better start preparing for what this might mean.
I'm Taylor Owen, and thanks for watching.
AI and data regulation in 2023 play a key role in democracy
Artificial intelligence and data have dominated the headlines this year, even at the just-concluded 78th United Nations General Assembly.
According to Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation, their impact and use will continue to soar and play a pivotal role in determining critical elections.
However, Dhar suggests that the people and communities at the heart of data collection should be in the driving seat as key regulators.
“Instead of thinking only about short-term risks and long-term risks, thinking about the middle where we build prosocial applications of these tools that really bring together incredible data sets, but say we will learn what the risks are as we deploy them with communities as co-architects,” Dhar, who leads efforts in the AI and data solutions space, said during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
The discussion was moderated by Nicholas Thompson of The Atlantic. It was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
AI plus existing technology: A recipe for tackling global crisis
When a country experiences a natural disaster, satellite technology and artificial intelligence can be used to rapidly gather data on the damage and initiate an effective response, according to Microsoft Vice Chair and President Brad Smith.
But to actually save lives “it's high-tech meets low-tech,” he said during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
He gave the example of SEEDS, an Indian NGO that dispatches local teens to distribute life-saving aid during heatwaves. He said the program exemplifies the effective combination of “artificial intelligence, technology, and people on the ground.”
The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
How AI can be used in public policy: Anne Witkowsky
There are some pretty sharp people all around the world trying to craft policy, but their best efforts are often limited by poor data. Anne Witkowsky, Assistant Secretary of State at the Bureau of Conflict and Stabilization Operations, says that’s about to change.
“Data-driven, evidence-driven decision-making by policymakers is going to be more successful” with the help of artificial intelligence, she said during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
Witkowsky said the focus needs to be on inclusion and partnership with governments in developing countries to use new technology to “build resilience” against the unrelenting pressure such states face.
The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
Staving off "the dark side" of artificial intelligence: UN Deputy Secretary-General Amina Mohammed
Artificial Intelligence promises revolutionary advances in the way we work, live and govern ourselves, but is it all a rosy picture?
United Nations Deputy Secretary-General Amina Mohammed says that while the potential benefits are enormous, “so is the dark side.” Without thoughtful leadership, the world could lose a precious opportunity to close major social divides. She spoke during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly. The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
She says it will take a “transformative mindset” and an eagerness to tackle more and bigger problems to pull off the transition, and she emphasizes the severe mismatch between capable leadership and positions of power.
"Where there is leadership, there's not much power. And where there is power, that leadership is struggling,” she said.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?