Avoiding extinction: A Q&A with Gladstone AI’s Jeremie Harris
In November 2022, the US Department of State commissioned a comprehensive report on the risks of artificial intelligence. The government turned to Gladstone AI, a four-person firm founded the year before to write such reports and brief government officials on matters concerning AI safety.
Gladstone AI interviewed more than 200 people working in and around AI about what risks keep them up at night. Their report, titled “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI,” was released to the public on March 11.
The short version? It’s pretty dire: “The recent explosion of progress in advanced artificial intelligence has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like and WMD-enabling catastrophic risks.” Next to the words “catastrophic risks” is a particularly worrying footnote: “By catastrophic risks, we mean risks of catastrophic events up to and including events that would lead to human extinction.”
With all that in mind, GZERO spoke to Jeremie Harris, co-founder and CEO of Gladstone AI, about how this report came to be and how we should rewire our thinking about the risks posed by AI.
This interview has been edited for clarity and length.
GZERO: What is Gladstone and how did the opportunity to write this report come about?
Jeremie Harris: After GPT-3 came out in 2020, we assessed that the key principle behind it might be extensible enough that we should expect a radical acceleration in AI capabilities. Our views were shaped by our technical expertise in AI (we'd founded a now-acquired AI company in 2016), and by our conversations with friends at the frontier labs, including OpenAI itself.
By then, it was already clear that a ChatGPT moment was coming and that the US government needed to be brought up to speed. We briefed a wide range of stakeholders, from cabinet secretaries to working-level action officers, on the new AI landscape. A year before ChatGPT was released, we happened upon a team at the State Department that recognized the importance of AI scaling up with larger, more powerful models. They decided to commission an assessment of that risk set a month before ChatGPT launched, and we were awarded the contract.
You interviewed 200 experts. How did you determine who to talk to and who to take most seriously?
Harris: We knew who the field's key contributors were, and had spoken to many of them personally.
Our approach was to identify and engage all of the key pockets of informed opinion on these issues, from leadership to AI risk skeptics to concerned researchers. We spoke to members of the executive, policy, safety, and capabilities teams at top labs. In addition, we held on-site engagements with researchers at top academic institutions in the US and UK, as well as with AI auditing companies and civil society groups.
We also knew that we needed to account for the unique perspective of the US government's national security community, which has a long history of dealing with emerging technologies and WMD-like risks. We held unprecedented workshops that brought together representatives and WMD experts from across the US interagency to discuss AI and its national security risks, and had them red-team our recommendations and analysis.
What do you want the average person to know about what you found?
Harris: AI has already helped us make amazing breakthroughs in fields like materials science and medicine. The technology’s promise is real. Unfortunately, the same capabilities that create that promise also create risks, and although we can't be certain, a significant and growing body of data suggests that these risks could lead to WMD-scale effects if they're not properly managed. The question isn't how we stop AI development, but rather how we can implement the common-sense safeguards that AI researchers themselves are often calling for, so that we can reap the immense benefits.
Our readership is (hopefully) more informed than the average person about AI. What should they take away from the report?
Harris: Top AI labs are currently locked in a race toward human-level AI, or AGI. This competitive dynamic erodes the margins they might otherwise be investing in developing and implementing safety measures, at a time when we lack the technical means to ensure that AGI-level systems can be controlled or prevented from being weaponized. Compounding this challenge is the geopolitics of AI development, as other countries develop their own domestic AI programs.
This problem can be solved. The action plan lays out a way to stabilize the racing dynamics playing out at the frontier of the field; strengthen the US government's ability to detect and respond to AI incidents; and scale AI development safely domestically and internationally.
We suggest leveraging existing authorities, identifying requirements for new legal regimes when appropriate, and highlighting new technical options for AI governance that make domestic and international safeguards much easier to implement.
What is the most surprising—or alarming—thing you encountered in putting this report together?
Harris: From speaking to frontier researchers, it was clear that labs are under significant pressure to accelerate their work and build more powerful systems, and this increasingly involves hiring staff who are more interested in pushing capabilities forward than in addressing risks. But this has also created a significant opportunity: many frontier lab executives and staff want to take a more balanced approach. As a result, the government has a window to introduce common-sense safeguards that would be welcomed not only by the public, but by important elements within the frontier labs themselves.
Have anything to make us feel good about where things are headed?
Harris: Absolutely. If we can solve for the risk side of the equation, AI offers enormous promise. And there really are solutions to these problems. They require bold action, but that's not unprecedented: we've had to deal with catastrophic national security risks before, from biotechnology to nuclear weapons.
AI is a different kind of challenge, but it also comes with technical levers that can make it easier to secure and assure. On-chip governance protocols offer new ways to verify adherence to international treaties, and fine-grained software-enabled safeguards can allow for highly targeted regulatory measures that place the smallest possible burden on industry.
Biden preaches AI safety
The group includes large tech companies like Amazon, Meta, and Microsoft; AI-focused startups like Anthropic and OpenAI; along with government contractors, advocacy groups, research labs, and universities.
The Biden administration, which is working to implement the many provisions of the executive order, previously secured voluntary commitments from major AI firms to mitigate the worst potential harms of AI development.
While the government is slow to pass laws and implement executive action, engaging with the private sector directly can be a productive first step toward rolling out a new regulatory regime to rein in this emerging set of technologies. The administration recently met a series of deadlines from the wide-ranging order and has begun to offer updates, such as the new know-your-customer rules for AI firms.
Grown-up AI conversations are finally happening, says expert Azeem Azhar
“The thing that’s surprised me most is how well CEOs are [now] articulating generative AI, this technology that’s only been public for a year or so,” Azhar says. “I’ve never experienced that in my life and didn’t realize how quickly they’ve moved.”
Azhar and Bremmer also discuss the underlying technology that’s allowed generative AI tools like ChatGPT-4 to advance so quickly and where conversations about applications of artificial intelligence go from here. Whereas a year ago, experts were focused on the macro implications of existential risk, Azhar is excited this year to hear people focus on practical things like copyright and regulation—the small yet impactful things that move the economy and change how we live our lives.
Catch Azeem Azhar's full conversation with Ian Bremmer in next week's episode of GZERO World on US public television. Check local listings.
One big thing missing from the AI conversation | Zeynep Tufekci
When deployed cheaply and at scale, artificial intelligence will be able to infer things about people, places, and entire nations, which humans alone never could. This is both good and potentially very, very bad.
If you were to think of some of the most overlooked stories of 2023, artificial intelligence would probably not make your list. OpenAI's ChatGPT has changed how we think about AI, and you've undoubtedly read plenty of quick takes about how AI will save or destroy the planet. But according to Princeton sociologist Zeynep Tufekci, there is a super important implication of AI that not enough people are talking about.
"Rather than looking at what happens between you and me if we use AI," Tufekci said to Ian on the sidelines of the Paris Peace Forum, "What I would like to see discussed is what happens if it's used by a billion people?" In a short but substantive interview for GZERO World, Tufekci breaks down just how important it is to think about the applications of AI "at scale" when its capabilities can be deployed cheaply. Tufekci cites the example of how AI could change hiring practices in ways we might not intend, like weeding out candidates with clinical depression or with a history of unionizing. AI at scale will demonstrate a remarkable ability to infer things that humans cannot, Tufekci explains.
Watch the GZERO World with Ian Bremmer episode: Overlooked stories in 2023
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
New AI toys spark privacy concerns for kids
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks at a new phenomenon in the AI industry: interactive toys powered by AI. But that interactivity comes with a host of privacy concerns, and according to Owen, it doesn't end there.
So, it's that time of year when I start thinking, admittedly far too late, about my holiday shopping. And because I have a ten-year-old child, I am seeing a lot of ads for new kids’ toys. Kids have had interactive toys for decades. Remember Tickle Me Elmo?
But now these interactive toys are being powered by AI. For example, for $1,500, you can buy your kid a Moxie robot. [Moxie: “My name is Moxie. I am a new robot. What is your name?”] Moxie is sort of like a robotic best friend. When your kid talks to it, Moxie records those conversations and then uses technology powered by OpenAI to analyze those interactions and react back.
Embodied, the company that makes Moxie, says that this helps kids regulate their emotions, provides them with companionship, and boosts their self-esteem. All of which sounds great, but toys like this should also give us pause. Let me explain. A toy like this comes with a whole host of privacy concerns. Moxie records video and audio of your child and then analyzes that data to create facial expression and user image data.
Now, they say they don't store the audio and video recordings, but they do keep the metadata about your child's facial expressions and how they're interacting with the toy. Embodied says it's ultimately parents’ responsibility to ensure that their child isn't giving out personal data. But I don't know, that seems unlikely for a toy that's designed to be your child's digital best friend.
These types of privacy concerns, of course, aren't new. Home assistants like Amazon Alexa and other smart appliances also record and mine your data. And big tech companies aren't likely to move away from this kind of practice, as data collection is essential to their market power. It's pretty clear we're extending this collection practice into the lives of our children.
While privacy concerns with toys like these are well-established, there's another issue that I think requires some thought. How will toys like these affect childhood development? There's a chance these toys could become a powerful tool in helping kids learn and grow. Embodied claims that 71% of the kids that use Moxie saw improved social skills. But this also represents a pretty radical new frontier in childhood development.
What happens when kids are being socialized with robots instead of with other kids? It's often said that AI is going to transform our society, but this may not be a binary event. Sometimes the effect of AI is going to creep into our lives slowly. Kids’ toys, slowly but surely becoming agents, may be one way this happens.
I'm Taylor Owen and thanks for watching.
The OpenAI-Sam Altman drama: Why should you care?
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen takes a look at the OpenAI-Sam Altman drama.
Hi, I'm Taylor Owen. This is GZERO AI. So if you're watching this video, then like me, you've probably been glued to your screen over the past week, watching the psychodrama play out at OpenAI, a company literally at the center of the current AI moment we're in.
Sam Altman, the CEO of OpenAI, was kicked out of his company by his own board of directors. Less than a week later, he was back as CEO, and all but one of those board members were gone. All of this would be amusing, and it certainly was in a glib sort of way, if the consequences weren't so profound. I've been thinking a lot about how to make sense of all this, and I keep coming back to this profound sense of deja vu.
First, though, a quick recap. We don't know all of the details, but it really does seem to be the case that at the core of this conflict was a tension between two different views of what OpenAI was and will be in the future. Remember, OpenAI was founded in 2015 as a nonprofit, and a nonprofit because it chose a mission of building technologies to benefit all of humanity over a private corporate mission of increasing value for shareholders. When they started running out of money a couple of years later, though, they embedded a for-profit entity within this nonprofit structure so that they could capitalize on the commercial value of the products that the nonprofit was building. This is where the tension lay: between the incentives of a for-profit engine and the values and mission of a nonprofit board structure.
All of this can seem really new. OpenAI was building legitimately groundbreaking technologies, technologies that could transform our world. But I think the problem here, and the wider problem, is not a new one. This is where I was getting deja vu. Back in the early days of Web 2.0, there was also a huge amount of excitement over a new disruptive technology, in this case the power of social media. In some ways, events like the Arab Spring were very similar to the emergence of ChatGPT: a seismic event that demonstrated to broader society the power of an emerging technology.
Now, I've spent the last 15 years studying the emergence of social media, and in particular how we as societies can balance the immense benefits and upside of these technologies with the clear downside risks as they emerge. I actually think we got a lot of that balance wrong. It's at times like this, when a new technology emerges, that we need to think carefully about what lessons we can learn from the past. I want to highlight three.
First, we need to be really clear-eyed about who has power in the technological infrastructure we're deploying. In the case of OpenAI, it seems very clear that the profit incentives won out over the broader social mandate. Power is also, though, about who controls infrastructure. In this case, Microsoft played a big role. They controlled the compute infrastructure, and they wielded this power to come out on top in this turmoil.
Second, we need to bring the public into this discussion. Ultimately, a technology will only be successful if it has legitimate citizen buy-in, if it has a social license. What are citizens supposed to think when they hear the very people building these technologies disagreeing over their consequences? Ilya Sutskever, for example, said just a month ago, "If you value intelligence over all human qualities, you're going to have a bad time," when talking about the future of AI. This kind of comment, coming from the very people who are building the technologies, just exacerbates an already deep insecurity many people feel about the future. Citizens need to be allowed, enabled, and empowered to weigh in on the conversation about the technologies that are being built on their behalf.
Finally, we simply need to get the governance right this time. We didn't last time. For over 20 years, we've largely left the social web unregulated, and it's had disastrous consequences. This means not being fooled when technical or systemic complexity is used to mask lobbying efforts. It means applying existing laws and regulations first ... in the case of AI, things like copyright, online safety rules, data privacy rules, and competition policy ... before we get too bogged down in big, large-scale AI governance initiatives. We just can't let the perfect be the enemy of the good. We need to iterate and experiment, and countries need to learn from each other as they step into this complex new world of AI governance.
Unfortunately, I worry we're repeating some of the same mistakes of the past. Once again, we're moving fast and breaking things. If the new board of OpenAI is any indication of how the company is thinking about governance, and of how the AI world in general values and thinks about governance, there's even more to worry about: three white men calling the shots at a tech company that could very well transform our world. We've been here before, and it doesn't end well. Our failure to adequately regulate social media had huge consequences. While the upside of AI is undeniable, it looks like we're making many of the same mistakes, only this time the consequences could be even more dire.
I'm Taylor Owen, and thanks for watching.
AI agents are here, but is society ready for them?
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen takes a look at the rise of AI agents.
Today I want to talk about a recent big step towards the world of AI agents. Last week, OpenAI, the company behind ChatGPT, announced that users can now create their own personal chatbots. Prior to this, tools like ChatGPT were primarily useful because they could answer users' questions, but now they can actually perform tasks. They can do things instead of just talking about them. I think this really matters for a few reasons. First, AI agents are clearly going to make some things in our life easier. They're going to help us book travel, make restaurant reservations, manage our schedules. They might even help us negotiate a raise with our boss. But the bigger news here is that private corporations are now able to train their own chatbots on their own data. So a medical company, for example, could use personal health records to create virtual health assistants that could answer patient inquiries, schedule appointments or even triage patients.
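To make that concrete, here is a minimal, illustrative sketch of what building such a custom, data-grounded chatbot can look like in code, using OpenAI's Assistants API from its Python SDK roughly as it existed at launch in late 2023. The clinic scenario, assistant name, instructions, and model string are assumptions for illustration, not details from this segment.

```python
# Illustrative sketch only: a task-specific assistant grounded in an
# organization's own documents, via OpenAI's Assistants API (late-2023 beta).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical clinic-facing assistant; name, instructions, and model are assumptions.
assistant = client.beta.assistants.create(
    name="Clinic Front Desk",
    instructions=(
        "Help patients book appointments and answer questions using only "
        "the clinic documents provided to you."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],  # lets the assistant search uploaded files
)

# Each patient conversation lives in a thread of messages.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Can I book a check-up for next Tuesday morning?",
)

# Kick off a run; a real app would poll its status and then read the reply.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```

The point of the sketch is simply that the heavy lifting, grounding the bot in private data and letting it act on requests, is now a short configuration exercise rather than a research project, which is why the labor-market and safety questions below follow so quickly.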
Second, this, I think, could have a real effect on labor markets. We've been talking for years about AI disrupting labor, but it might actually start happening soon. If you have a triage chatbot, for example, you might not need a big triage center, and therefore you'd need fewer nurses and fewer medical staff. But having AI in the workplace could also lead to fruitful collaboration. AI is becoming better than humans at breast cancer screening, for example, but humans will still be a real asset when it comes to making high-stakes, life-or-death decisions or delivering bad news. The key point here is that there's a difference between technology that replaces human labor and technology that supplements it. We're at the very early stages of figuring out that balance.
And third, AI safety researchers are worried about these new kinds of chatbots. Earlier this year, the Center for AI Safety listed autonomous agents as one of its catastrophic AI risks. Imagine a chatbot programmed with incorrect medical data triaging patients in the wrong order. This could quite literally be a matter of life or death. These new agents are a clear demonstration of the growing disconnect between the pace of AI development, the speed with which new tools are being developed and let loose on society, and the pace of AI regulation to mitigate the potential risks. At some point, this disconnect could catch up with us. The bottom line, though, is that AI agents are here. As a society, we had better start preparing for what that might mean.
I'm Taylor Owen, and thanks for watching.