Biden pushes forward on AI


Joe Biden is starting to walk the talk on artificial intelligence. Federal agencies have until December to get a handle on how to use — and minimize the risks from — AI, thanks to new instructions from the White House Office of Management and Budget. The policies mark the next step along the path laid out by Biden’s October AI executive order, adding specific goals after a period of evaluation.

What’s new

Federal agencies will need to “assess, test, and monitor” the impact of AI, “mitigate the risks of algorithmic discrimination,” and provide “transparency into how the government uses AI.”

It’s unclear to what extent AI currently factors into government work. The Defense Department already has key AI investments, while other agencies may only be toying with the new technology. Under Biden’s new rules, agencies seeking to use AI must create an “impact assessment” for the tools they use, conduct real-world testing before deployment, obtain independent evaluation from an oversight board or another body, perform regular monitoring and risk assessments, and work to mitigate any associated risks.

Adam Conner, vice president of technology policy at the Center for American Progress, says that the OMB guidance is “an important step in articulating that AI should be used by federal agencies in a responsible way.”

The OMB policy isn’t solely aimed at protecting against AI’s harms. It mandates that federal agencies name a Chief AI Officer charged with implementing the new standards. These new government AI czars are meant to work across agencies, coordinate the administration’s AI goals, and remove barriers to innovation within government.

What it means

Dev Saxena, director of Eurasia Group's geo-technology practice, said the policies are “precedent-setting,” especially in the absence of comprehensive artificial intelligence legislation like the law the European Union recently passed.

Saxena noted that the policies will move the government further along than industry on safety and transparency standards for AI, since no federal law specifically governs the technology. While many industry leaders have cooperated with the Biden administration and signed a voluntary pledge to manage the risks of AI, the new OMB policies could also serve as a form of “soft law,” forcing higher standards of testing, risk assessment, and transparency on private companies that want to sell their technology and services to the federal government.

However, there’s a notable carveout for national security and defense agencies, which could be targets for the most dangerous and insidious uses of AI. We’ve previously written about America’s AI militarization and its goal of maintaining a strategic advantage over rivals such as China. While those agencies are exempt from these new rules, a separate track of defense and national security guidelines is expected later this year.

Fears and concerns

Still, public interest groups are concerned about the ways citizens’ liberties could be curtailed when the government uses AI. The American Civil Liberties Union called on governments to do more to protect citizens from AI. “OMB has taken an important step, but only a step, in protecting us from abuses by AI. Federal uses of AI should not be permitted to undermine rights and safety, but harmful and discriminatory uses of AI by national security agencies, state governments, and more remain largely unchecked,” wrote Cody Venzke, ACLU senior policy counsel, in a statement.

Of course, the biggest risk to the implementation of these policies is the upcoming presidential election. Former President Donald Trump, if reelected, might keep some of the policies aimed at China and other political adversaries, Saxena says, but could significantly pull back from the rights- and safety-focused protections.

Beyond the uncertainty of election season, the Biden administration has a real challenge going from zero to full speed. “The administration should be commended on its work so far,” Conner says, “but now comes the hard part: implementation.”
