Image credit: Midjourney

Biden pushes forward on AI

Joe Biden is starting to walk the talk on artificial intelligence. Federal agencies have until December to get a handle on how to use — and minimize the risks from — AI, thanks to new instructions from the White House Office of Management and Budget. The policies mark the next step along the path laid out by Biden’s October AI executive order, adding specific goals after a period of evaluation.

What’s new

Federal agencies will need to “assess, test, and monitor” the impact of AI, “mitigate the risks of algorithmic discrimination,” and provide “transparency into how the government uses AI.”

It’s unclear to what extent AI currently factors into government work. The Defense Department has already made key AI investments, while other agencies may only be toying with the new technology. Under Biden’s new rules, agencies seeking to use AI must create an “impact assessment” for the tools they use, conduct real-world testing before deployment, obtain an independent evaluation from an oversight board or another body, carry out regular monitoring and risk assessments, and work to mitigate any associated risks.

Adam Conner, vice president of technology policy at the Center for American Progress, says that the OMB guidance is “an important step in articulating that AI should be used by federal agencies in a responsible way.”

The OMB policy isn’t solely aimed at protecting against AI’s harms. It mandates that federal agencies name a Chief AI Officer charged with implementing the new standards. These new government AI czars are meant to work across agencies, coordinate the administration’s AI goals, and remove barriers to innovation within government.

What it means

Dev Saxena, director of Eurasia Group's geo-technology practice, said the policies are “precedent-setting,” especially in the absence of comprehensive artificial intelligence legislation like the law the European Union recently passed.

Saxena noted that the policies will move the government further along than industry in terms of safety and transparency standards for AI, since there’s no federal law specifically governing this technology. While many industry leaders have cooperated with the Biden administration and signed a voluntary pledge to manage the risks of AI, the new OMB policies could also serve as a form of “soft law,” forcing higher standards of testing, risk assessment, and transparency on private-sector companies that want to sell their technology and services to the federal government.

However, there’s a notable carveout for the national security and defense agencies, which could be targets for the most dangerous and insidious uses of AI. We’ve previously written about America’s AI militarization and its goal of maintaining a strategic advantage over rivals such as China. While these agencies are exempted from the new rules, a separate track of defense and national-security guidelines is expected later this year.

Fears and concerns

Still, public interest groups are concerned about the ways in which citizens’ liberties could be curtailed when the government uses AI. The American Civil Liberties Union called on governments to do more to protect citizens from AI. “OMB has taken an important step, but only a step, in protecting us from abuses by AI. Federal uses of AI should not be permitted to undermine rights and safety, but harmful and discriminatory uses of AI by national security agencies, state governments, and more remain largely unchecked,” wrote Cody Venzke, ACLU senior policy counsel, in a statement.

Of course, the biggest risk to the implementation of these policies is the upcoming presidential election. Former President Donald Trump, if reelected, might keep some of the policies aimed at China and other political adversaries, Saxena says, but could significantly pull back from the rights- and safety-focused protections.

Beyond the uncertainty of election season, the Biden administration has a real challenge going from zero to full speed. “The administration should be commended on its work so far,” Conner says, “but now comes the hard part: implementation.”
