
GZERO AI

Biden pushes forward on AI


Joe Biden is starting to walk the talk on artificial intelligence. Federal agencies have until December to get a handle on how to use — and minimize the risks from — AI, thanks to new instructions from the White House Office of Management and Budget. The policies mark the next step along the path laid out by Biden’s October AI executive order, adding specific goals after a period of evaluation.
What’s new

Federal agencies will need to “assess, test, and monitor” the impact of AI, “mitigate the risks of algorithmic discrimination,” and provide “transparency into how the government uses AI.”

It’s unclear to what extent AI currently factors into government work. The Defense Department already has key AI investments, while other agencies may only be toying with the new technology. Under Biden’s new rules, agencies seeking to use AI must create an “impact assessment” for the tools they use, conduct real-world testing before deployment, obtain independent evaluation from an oversight board or another body, conduct regular monitoring and risk assessment, and work to mitigate any associated risks.

Adam Conner, vice president of technology policy at the Center for American Progress, says that the OMB guidance is “an important step in articulating that AI should be used by federal agencies in a responsible way.”

The OMB policy isn’t solely aimed at protecting against AI’s harms. It mandates that federal agencies name a Chief AI Officer charged with implementing the new standards. These new government AI czars are meant to work across agencies, coordinate the administration’s AI goals, and remove barriers to innovation within government.

What it means

Dev Saxena, director of Eurasia Group's geo-technology practice, said the policies are “precedent-setting,” especially in the absence of comprehensive artificial intelligence legislation like the law the European Union recently passed.

Saxena noted that the policies will move the government further along than industry in terms of safety and transparency standards for AI, since there’s no federal law governing this technology specifically. While many industry leaders have cooperated with the Biden administration and signed a voluntary pledge to manage the risks of AI, the new OMB policies could also serve as a form of “soft law,” forcing higher standards of testing, risk assessment, and transparency on private-sector companies that want to sell their technology and services to the federal government.

However, there’s a notable carveout for the national security and defense agencies, which could be targets for the most dangerous and insidious uses of AI. We’ve previously written about America’s AI militarization and its goal of maintaining a strategic advantage over rivals such as China. While these agencies are exempted from the new rules, a separate track of defense and national-security guidelines is expected to come later this year.

Fears and concerns

Still, public interest groups are concerned about the ways in which citizens’ liberties could be curtailed when the government uses AI. The American Civil Liberties Union called on governments to do more to protect citizens from AI. “OMB has taken an important step, but only a step, in protecting us from abuses by AI. Federal uses of AI should not be permitted to undermine rights and safety, but harmful and discriminatory uses of AI by national security agencies, state governments, and more remain largely unchecked,” wrote Cody Venzke, ACLU senior policy counsel, in a statement.

Of course, the biggest risk to the implementation of these policies is the upcoming presidential election. Former President Donald Trump, if reelected, might keep some of the policies aimed at China and other political adversaries, Saxena says, but could significantly pull back from the rights- and safety-focused protections.

Beyond the uncertainty of election season, the Biden administration has a real challenge going from zero to full speed. “The administration should be commended on its work so far,” Conner says, “but now comes the hard part: implementation.”
