The OpenAI-Sam Altman drama: Why should you care?
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen takes a look at the OpenAI-Sam Altman drama.
Hi, I'm Taylor Owen. This is GZERO AI. So if you're watching this video, then like me, you've probably been glued to your screen over the past week, watching the psychodrama play out at OpenAI, a company literally at the center of the current AI moment.
Sam Altman, the CEO of OpenAI, was kicked out of his company by his own board of directors. Less than a week later, he was back as CEO, and all but one of those board members was gone. All of this would be amusing, and in a glib sort of way it certainly was, if the consequences weren't so profound. I've been thinking a lot about how to make sense of all this, and I keep coming back to a profound sense of deja vu.
First, though, a quick recap. We don't know all of the details, but it really does seem that at the core of this conflict was a tension between two different views of what OpenAI was and what it will be in the future. Remember, OpenAI was founded in 2015 as a nonprofit, a nonprofit because it chose a mission of building technologies to benefit all of humanity over a private corporate mission of increasing value for shareholders. When they started running out of money a couple of years later, though, they embedded a for-profit entity within this nonprofit structure so that they could capitalize on the commercial value of the products the nonprofit was building. This is where the tension lay: between the incentives of a for-profit engine and the values and mission of a nonprofit board structure.
All of this can seem really new. OpenAI was building legitimately groundbreaking technologies, technologies that could transform our world. But I think the wider problem here is not a new one. This is where I was getting deja vu. Back in the early days of Web 2.0, there was also a huge amount of excitement over a new disruptive technology, in that case the power of social media. In some ways, events like the Arab Spring were very similar to the emergence of ChatGPT: a seismic event that demonstrated to broader society the power of an emerging technology.
Now, I've spent the last 15 years studying the emergence of social media, and in particular how we as societies can balance the immense benefits and upside of these technologies with the clear downside risks as they emerge. I actually think we got a lot of that balance wrong. It's at times like this, when a new technology emerges, that we need to think carefully about what lessons we can learn from the past. I want to highlight three.
First, we need to be really clear-eyed about who has power in the technological infrastructure we're deploying. In the case of OpenAI, it seems very clear that the profit incentives won out over the broader social mandate. Power also lies, though, with whoever controls the infrastructure. In this case, Microsoft played a big role: it controlled the compute infrastructure, and it wielded that power to come out on top in this turmoil.
Second, we need to bring the public into this discussion. Ultimately, a technology will only be successful if it has legitimate citizen buy-in, if it has a social license. What are citizens supposed to think when they hear the very people building these technologies disagreeing over their consequences? Ilya Sutskever, for example, said just a month ago, "If you value intelligence over all human qualities, you're going to have a bad time," when talking about the future of AI. This kind of comment, coming from the very people who are building the technologies, only exacerbates an already deep insecurity many people feel about the future. Citizens need to be enabled and empowered to weigh in on the conversation about the technologies that are being built on their behalf.
Finally, we simply need to get the governance right this time. We didn't last time. For over 20 years, we've largely left the social web unregulated, and it's had disastrous consequences. This means not letting technical or systemic complexity mask lobbying efforts. It means applying existing laws and regulations first ... in the case of AI, things like copyright, online safety rules, data privacy rules, and competition policy ... before we get too bogged down in big, large-scale AI governance initiatives. We just can't let the perfect be the enemy of the good. We need to iterate and experiment, and countries need to learn from each other as they step into this complex new world of AI governance.
Unfortunately, I worry we're repeating some of the same mistakes of the past. Once again, we're moving fast and breaking things. If OpenAI's new board is any indication of how the company, and the AI world in general, values and thinks about governance, there's even more to worry about: three white men calling the shots at a tech company that could very well transform our world. We've been here before, and it doesn't end well. Our failure to adequately regulate social media had huge consequences. While the upside of AI is undeniable, it looks like we're making many of the same mistakes, only this time the consequences could be even more dire.
I'm Taylor Owen, and thanks for watching.
AI agents are here, but is society ready for them?
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen takes a look at the rise of AI agents.
Today I want to talk about a recent big step towards the world of AI agents. Last week, OpenAI, the company behind ChatGPT, announced that users can now create their own personal chatbots. Prior to this, tools like ChatGPT were primarily useful because they could answer users' questions, but now they can actually perform tasks. They can do things instead of just talking about them. I think this really matters for a few reasons. First, AI agents are clearly going to make some things in our life easier. They're going to help us book travel, make restaurant reservations, manage our schedules. They might even help us negotiate a raise with our boss. But the bigger news here is that private corporations are now able to train their own chatbots on their own data. So a medical company, for example, could use personal health records to create virtual health assistants that could answer patient inquiries, schedule appointments or even triage patients.
Second, this could, I think, have a real effect on labor markets. We've been talking for years about AI disrupting labor, but it may now actually start to happen. If you have a triage chatbot, for example, you might not need a big triage center, and therefore you'd need fewer nurses and less medical staff. But having AI in the workplace could also lead to fruitful collaboration. AI is becoming better than humans at breast cancer screening, for example, but humans will still be a real asset when it comes to making high-stakes, life-or-death decisions or delivering bad news. The key point here is that there's a difference between technology that replaces human labor and technology that supplements it. We're at the very early stages of figuring out that balance.
And third, AI safety researchers are worried about these new kinds of chatbots. Earlier this year, the Center for AI Safety listed autonomous agents as one of its catastrophic AI risks. Imagine a chatbot programmed with incorrect medical data triaging patients in the wrong order. This could quite literally be a matter of life or death. These new agents are a clear demonstration of the growing disconnect between the pace of AI development, the speed with which new tools are being built and let loose on society, and the pace of AI regulation meant to mitigate the potential risks. At some point, this disconnect could catch up with us. The bottom line, though, is that AI agents are here. As a society, we had better start preparing for what that might mean.
I'm Taylor Owen, and thanks for watching.
AI's role in the Israel-Hamas war so far
Artificial intelligence is changing the world, and our new video series GZERO AI explores what it all means for you—from disinformation and regulation to the economic and political impact. Co-hosted by Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, and by Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, this weekly video series will help you keep up and make sense of the latest news on the AI revolution.
In the first episode of the series, Taylor Owen takes a look at how artificial intelligence is shaping the war between Israel and Hamas.
As the situation in the Middle East continues to escalate, today we're asking: how is artificial intelligence shaping the war between Israel and Hamas? The short answer is: not in the ways many expected it might. I think there are two cautions about the power of AI here, and one place where AI has been shown to really matter. The first caution is on the value of predictive AI. For years, many have argued that AI might not just help us understand the world as it is but might actually help us predict future events. Nowhere has this been more the case than in the worlds of national security and policing.
Now, Gaza happens to be one of the most surveilled regions in the world. The use of drones, facial recognition, border checkpoints, and phone tapping has allowed the Israeli government to collect vast amounts of data about the Gazan population. Add to this the fact that the director of the Israeli Defense Ministry has said that Israel is about to become an AI superpower, and one would think the government might have had the ability to predict such events. But on October 7th, this was notably not the case. The government, the military, and Israeli citizens themselves were taken by surprise by the attack.
The reality, of course, is that however powerful the AI might be, it is only as good as the data fed into it; if that data is biased or just plain wrong, so will be its predictions. So I think we need to be really cautious, particularly about the sales pitches being made by the companies selling these predictive tools to our policing and national security services. The certainty with which they're doing so, I think, needs to be questioned.
The second caution I would add is on the role that AI plays in the creation of misinformation. Don't get me wrong, there's been a ton of it in this conflict, but it hasn't really been the synthetic media or the deepfakes that many feared would be a big problem in events like this. Instead, the misinformation has been low tech. It's been photos and videos from other events taken out of context and presented as if they were from this one. It's been cheap fakes, not deepfakes. There have even been cases where AI deepfake-detection tools, rolled out in response to the problem of deepfakes, have falsely flagged real images as AI-generated. In this case, the threat of deepfakes is causing more havoc than the deepfakes themselves.
Finally, though, I think there is a place where AI is causing real harm in this conflict, and that is on social media. Our Twitter, Facebook, and TikTok feeds are being shaped by artificially intelligent algorithms, and more often than not, these algorithms reinforce our biases and fuel our collective anger. The world seen through content that only makes us angry is just fundamentally a distorted one. More broadly, I think calls for reining in social media, whether by the companies themselves or through regulation, are being replaced with opaque and ill-defined notions of AI governance. And don't get me wrong, AI policy is important, but it is the social media ecosystem that is still causing real harm. We can't take our eye off of that policy ball.
I'm Taylor Owen, and thanks for watching.
GZERO AI launches October 31st
There is no more disruptive or more remarkable technology than AI, but let’s face it, it is incredibly hard to keep up with the latest developments. Even more importantly, it’s almost impossible to understand what the latest AI innovations actually mean. How will AI affect your job? What do you need to know? Who will regulate it? How will it disrupt work, the economy, politics, war?
That's where our new weekly GZERO AI newsletter comes in. GZERO AI will give you the key insights you need to know, putting perspective on the hype and context on the AI doomers and dreamers. Featuring the world-class analysis that is the hallmark of GZERO and its founder, Ian Bremmer, himself a leading voice in the AI space, GZERO AI is the essential weekly read on the AI revolution.
Our goal is to deliver understanding as well as news, to turn information into perspective and data into insights. GZERO AI will feature some of the world’s most important voices on technology, including our weekly data columnist Azeem Azhar and our video columnists Marietje Schaake and Taylor Owen. GZERO AI is your essential tool for understanding the technology that...is understanding you!
Sign up now for GZERO AI (along with GZERO's other newsletters).