AI policy formation must include voices from the global South

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she explains the need to incorporate diverse and inclusive perspectives in formulating policies and regulations for artificial intelligence. Narrowing the focus primarily to the three major policy blocs—China, the US, and Europe—would overlook crucial opportunities to address risks and concerns unique to the global South.

This is GZERO AI from Stanford's campus, where we just hosted a two-day conference on AI policy around the world. And when I say around the world, I mean truly around the world, including many voices from the Global South, from multilateral organizations like the OECD and the UN, and from the leading AI policy blocs like the EU, the UK, the US, and Japan, all of which have AI offices for oversight.


British Prime Minister Rishi Sunak speaks during a news conference at the AI Safety Summit in Milton Keynes, near London, last November.

Kyodo via Reuters Connect

The UK is plotting to regulate AI

Six months after British Prime Minister Rishi Sunak hosted a global summit on artificial intelligence at Bletchley Park, the United Kingdom is making moves to start regulating AI.

Israel's Lavender: What could go wrong when AI is used in military operations?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines the Israeli Defence Forces' use of an AI system called Lavender to target Hamas operatives. While it reportedly suffers from the hallucination issues familiar from AI systems like ChatGPT, the cost of errors on the battlefront is incomparably severe.
Midjourney

Biden pushes forward on AI

Joe Biden is starting to walk the talk on artificial intelligence. Federal agencies have until December to get a handle on how to use — and minimize the risks from — AI, thanks to new instructions from the White House Office of Management and Budget. The policies mark the next step along the path laid out by Biden’s October AI executive order, adding specific goals after a period of evaluation.

What’s new

Federal agencies will need to “assess, test, and monitor” the impact of AI, “mitigate the risks of algorithmic discrimination,” and provide “transparency into how the government uses AI.”

It’s unclear to what extent AI currently factors into government work. The Defense Department already has key AI investments, while other agencies may only be toying with the new technology. Under Biden’s new rules, agencies seeking to use AI must create an “impact assessment” for the tools they use, conduct real-world testing before deployment, obtain independent evaluation from an oversight board or another body, carry out regular monitoring and risk assessments, and work to mitigate any associated risks.

Adam Conner, vice president of technology policy at the Center for American Progress, says that the OMB guidance is “an important step in articulating that AI should be used by federal agencies in a responsible way.”

The OMB policy isn’t solely aimed at protecting against AI’s harms. It mandates that federal agencies name a Chief AI Officer charged with implementing the new standards. These new government AI czars are meant to work across agencies, coordinate the administration’s AI goals, and remove barriers to innovation within government.

What it means

Dev Saxena, director of Eurasia Group's geo-technology practice, said the policies are “precedent-setting,” especially in the absence of comprehensive artificial intelligence legislation like the law the European Union recently passed.

Saxena noted that the policies will move the government further along than industry in terms of safety and transparency standards for AI, since there’s no federal law governing the technology specifically. While many industry leaders have cooperated with the Biden administration and signed a voluntary pledge to manage the risks of AI, the new OMB policies could also serve as a form of “soft law” that forces higher standards of testing, risk assessment, and transparency on private companies that want to sell their technology and services to the federal government.

However, there’s a notable carveout for national security and defense agencies, which could be targets for the most dangerous and insidious uses of AI. We’ve previously written about America’s AI militarization and its goal of maintaining a strategic advantage over rivals such as China. While those agencies are exempted from these new rules, a separate track of defense and national security guidelines is expected to come later this year.

Fears and concerns

Still, public interest groups are concerned about the ways in which citizens’ liberties could be curtailed when the government uses AI. The American Civil Liberties Union called on governments to do more to protect citizens from AI. “OMB has taken an important step, but only a step, in protecting us from abuses by AI. Federal uses of AI should not be permitted to undermine rights and safety, but harmful and discriminatory uses of AI by national security agencies, state governments, and more remain largely unchecked,” wrote Cody Venzke, ACLU senior policy counsel, in a statement.

Of course, the biggest risk to the implementation of these policies is the upcoming presidential election. Former President Donald Trump, if reelected, might keep some of the policies aimed at China and other political adversaries, Saxena says, but could significantly pull back from the rights- and safety-focused protections.

Beyond the uncertainty of election season, the Biden administration has a real challenge going from zero to full speed. “The administration should be commended on its work so far,” Conner says, “but now comes the hard part: implementation.”


OpenAI is risk-testing Voice Engine, but the risks are clear

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she says that while OpenAI is testing its new Voice Engine model to identify its risks, we've already experienced the clear dangers of voice impersonation technology. What we need is more independent assessment of these new technologies, applied equally to companies that want to tread carefully and to those that want to race ahead in developing and deploying the technology.

Social media's AI wave: Are we in for a “deepfakification” of the entire internet?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.

Midjourney

Avoiding extinction: A Q&A with Gladstone AI’s Jeremie Harris

In November 2022, the US Department of State commissioned a comprehensive report on the risks of artificial intelligence. The government turned to Gladstone AI, a four-person firm founded the year before to write such reports and brief government officials on matters concerning AI safety.

Gladstone AI interviewed more than 200 people working in and around AI about what risks keep them up at night. Their report, titled “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI,” was released to the public on March 11.

The short version? It’s pretty dire: “The recent explosion of progress in advanced artificial intelligence has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like and WMD-enabling catastrophic risks.” Next to the words “catastrophic risks” is a particularly worrying footnote: “By catastrophic risks, we mean risks of catastrophic events up to and including events that would lead to human extinction.”

With all that in mind, GZERO spoke to Jeremie Harris, co-founder and CEO of Gladstone AI, about how this report came to be and how we should rewire our thinking about the risks posed by AI.

This interview has been edited for clarity and length.

GZERO: What is Gladstone and how did the opportunity to write this report come about?

Jeremie Harris: After GPT-3 came out in 2020, we assessed that the key principle behind it might be extensible enough that we should expect a radical acceleration in AI capabilities. Our views were shaped by our technical expertise in AI (we'd founded a now-acquired AI company in 2016), and by our conversations with friends at the frontier labs, including OpenAI itself.

By then, it was already clear that a ChatGPT moment was coming, and that the US government needed to be brought up to speed. We briefed a wide range of stakeholders, from cabinet secretaries to working-level action officers, on the new AI landscape. A year before ChatGPT was released, we happened upon a team at the State Department that recognized the importance of AI scaling up with larger, more powerful models. They decided to commission an assessment of that risk set a month before ChatGPT launched, and we were awarded the contract.

You interviewed 200 experts. How did you determine who to talk to and who to take most seriously?

Harris: We knew who the field's key contributors were, and had spoken to many of them personally.

Our approach was to identify and engage all of the key pockets of informed opinion on these issues, from leadership to AI risk skeptics to concerned researchers. We spoke to members of the executive, policy, safety, and capabilities teams at top labs. In addition, we held on-site engagements with researchers at top academic institutions in the US and UK, as well as with AI auditing companies and civil society groups.

We also knew that we needed to account for the unique perspective of the US government's national security community, which has a long history of dealing with emerging technologies and WMD-like risks. We held unprecedented workshops that brought together representatives and WMD experts from across the US interagency to discuss AI and its national security risks, and had them red-team our recommendations and analysis.

What do you want the average person to know about what you found?

Harris: AI has already helped us make amazing breakthroughs in fields like materials science and medicine. The technology’s promise is real. Unfortunately, the same capabilities that create that promise also create risks, and although we can't be certain, a significant and growing body of data suggests that these risks could lead to WMD-scale effects if they're not properly managed. The question isn't how to stop AI development, but rather how to implement the common-sense safeguards that AI researchers themselves are often calling for, so that we can reap the immense benefits.

Our readership is (hopefully) more informed than the average person about AI. What should they take away from the report?

Harris: Top AI labs are currently locked in a race on the path to human-level AI, or AGI. This competitive dynamic erodes the margins they might otherwise invest in developing and implementing safety measures, at a time when we lack the technical means to ensure that AGI-level systems can be controlled or prevented from being weaponized. Compounding this challenge is the geopolitics of AI development, as other countries develop their own domestic AI programs.

This problem can be solved. The action plan lays out a way to stabilize the racing dynamics playing out at the frontier of the field; strengthen the US government's ability to detect and respond to AI incidents; and scale AI development safely domestically and internationally.

We suggest leveraging existing authorities, identifying requirements for new legal regimes when appropriate, and highlighting new technical options for AI governance that make domestic and international safeguards much easier to implement.

What is the most surprising—or alarming—thing you encountered in putting this report together?

Harris: From speaking to frontier researchers, it was clear that labs are under significant pressure to accelerate their work and build more powerful systems, and this increasingly involves hiring staff who are more interested in pushing capabilities forward than in addressing risks. This has created a significant opportunity: many frontier lab executives and staff want to take a more balanced approach. As a result, the government has a window to introduce common-sense safeguards that would be welcomed not only by the public, but by important elements within frontier labs themselves.

Have anything to make us feel good about where things are headed?

Harris: Absolutely. If we can solve for the risk side of the equation, AI offers enormous promise. And there really are solutions to these problems. They require bold action, but that's not unprecedented: we've had to deal with catastrophic national security risks before, from biotechnology to nuclear weapons.

AI is a different kind of challenge, but it also comes with technical levers that can make it easier to secure and assure. On-chip governance protocols offer new ways to verify adherence to international treaties, and fine-grained software-enabled safeguards can allow for highly targeted regulatory measures that place the smallest possible burden on industry.

A view of the Georgia State Capitol in Atlanta, Georgia, U.S., May 11, 2021.

REUTERS/Linda So

Deepfake recordings make a point in Georgia

A Georgia lawmaker used a novel approach to help pass legislation to ban deepfakes in politics: he used a deepfake. Republican state representative Brad Thomas used an AI-generated recording of two of the bill's opponents, state senator Colton Moore and activist Mallory Staples, endorsing the bill.

