
Commerce Secretary Gina Raimondo arrives for a Senate Appropriations Subcommittee on Commerce, Justice, Science, and Related Agencies hearing on expanding broadband access on Capitol Hill in Washington, D.C., U.S., February 1, 2022.

Sarah Silbiger/Pool via REUTERS

National safety institutes — assemble!

The Biden administration announced that it will host a global safety summit on artificial intelligence on Nov. 20-21 in San Francisco. The International Network of AI Safety Institutes, which was formed at the AI Seoul Summit in May, will bring together safety experts from each member's AI safety institute. The current members are Australia, Canada, the European Union, France, Japan, Kenya, Singapore, South Korea, the United Kingdom, and the United States.

The aim? “Strengthening international collaboration on AI safety is critical to harnessing AI technology to solve the world’s greatest challenges,” Secretary of State Antony Blinken said in a statement.

Commerce Secretary Gina Raimondo, co-hosting the event with Blinken, said that the US is committed to “pulling every lever” on AI regulation. “That includes close, thoughtful coordination with our allies and like-minded partners.”

Chinese and U.S. flags flutter outside the building of an American company in Beijing, China, January 21, 2021.

REUTERS/Tingshu Wang

American and Chinese companies set new standards

It’s not every day that companies from the United States and China work together. But on Sept. 6, a new coalition of big tech companies representing both global powers announced that they had joined forces to develop new security standards for large language models.

Midjourney

How the Department of Homeland Security’s WMD office sees the AI threat

The US Department of Homeland Security is preparing for the worst possible outcomes from the rapid progression of artificial intelligence technology. What if powerful AI models are used to help foreign adversaries or terror groups build chemical, biological, radiological, or nuclear weapons?

The department’s Countering Weapons of Mass Destruction office, led by Assistant Secretary Mary Ellen Callahan, issued a report to President Joe Biden that was released to the public in June, with recommendations about how to rein in the worst threats from AI. Among other things, the report recommends building consensus across agencies, developing safe harbor measures to incentivize reporting vulnerabilities to the government without fear of prosecution, and developing new guidelines for handling sensitive scientific data.

We spoke to Callahan about the report, how concerned she actually is, and how her office is using AI to further its own goals while trying to outline the risks of the technology.


Ilya Sutskever, co-founder and chief scientist of OpenAI, speaks during a talk at Tel Aviv University in Tel Aviv, Israel, June 5, 2023.

REUTERS/Amir Cohen

What is “safe” superintelligence?

OpenAI co-founder and former chief scientist Ilya Sutskever has announced a new startup called Safe Superintelligence. You might remember Sutskever as one of the board members who unsuccessfully tried to oust Sam Altman last November. He has since apologized and stayed on at OpenAI before departing in May.

Little is known about the new company, including how it’s funded, but its name has inspired debate about what’s involved in building a safe superintelligent AI system. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever said. (‘Trust and safety’ is typically what internet companies call their content moderation teams.)

Midjourney

Avoiding extinction: A Q&A with Gladstone AI’s Jeremie Harris

In November 2022, the US Department of State commissioned a comprehensive report on the risks of artificial intelligence. The government turned to Gladstone AI, a four-person firm founded the year before to write such reports and brief government officials on matters concerning AI safety.

Gladstone AI interviewed more than 200 people working in and around AI about what risks keep them up at night. Their report, titled “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI,” was released to the public on March 11.

The short version? It’s pretty dire: “The recent explosion of progress in advanced artificial intelligence has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like and WMD-enabling catastrophic risks.” Next to the words “catastrophic risks” is a particularly worrying footnote: “By catastrophic risks, we mean risks of catastrophic events up to and including events that would lead to human extinction.”

With all that in mind, GZERO spoke to Jeremie Harris, co-founder and CEO of Gladstone AI, about how this report came to be and how we should rewire our thinking about the risks posed by AI.


President Joe Biden walks across the stage to sign an executive order about artificial intelligence at the White House on Oct. 30, 2023.

REUTERS/Leah Millis/File Photo

Biden preaches AI safety

The Biden administration has created a new body to tackle the threats of AI: the US AI Safety Institute Consortium. The group of 200 AI “stakeholders,” led by the Commerce Department and the National Institute of Standards and Technology, is tasked with the “development and deployment of safe and trustworthy artificial intelligence.” The group will advise on many of the priorities of Biden’s October 2023 executive order on AI, including “red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”

Grown-up AI conversations are finally happening, says expert Azeem Azhar

Artificial intelligence dominated the conversation at this year’s World Economic Forum in Davos, but what is the business world getting right vs. wrong about how it will affect our lives? On GZERO World, Ian Bremmer sat down with AI expert and writer Azeem Azhar, who is optimistic that the conversation around generative AI has shifted from existential risk to practical applications over the last year. Unlike previous flash-in-the-pan technologies like crypto and blockchain, Azhar notes, AI is just getting started, and almost every CEO he spoke with has integrated it into their business in some way.

One big thing missing from the AI conversation | Zeynep Tufekci

When deployed cheaply and at scale, artificial intelligence will be able to infer things about people, places, and entire nations that humans alone never could. This is both good and potentially very, very bad.

If you were to think of some of the most overlooked stories of 2023, artificial intelligence would probably not make your list. OpenAI's ChatGPT has changed how we think about AI, and you've undoubtedly read plenty of quick takes about how AI will save or destroy the planet. But according to Princeton sociologist Zeynep Tufekci, there is a super important implication of AI that not enough people are talking about.

