GZERO AI

Inside the fight to shape Trump’s AI policy


The Trump White House has received thousands of recommendations for its upcoming AI Action Plan, a roadmap that will define how the US government will approach artificial intelligence for the remainder of the administration.

The plan was first mandated by President Donald Trump in his January executive order that scrapped the AI rules of his predecessor, Joe Biden. While Silicon Valley tech giants have put forth their plans for industry-friendly regulation and deregulation, many civil society groups have taken the opportunity to warn of the dangers of AI. Ahead of the March 15 deadline set by the White House to answer a request for information, Google and OpenAI were some of the biggest names to propose measures they’d like to see in place at the federal level.


What Silicon Valley wants

OpenAI urged the federal government to allow AI companies to train their models on copyrighted material without restriction, shield them from state-level regulations, and implement additional export controls against Chinese competitors.

“While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans,” OpenAI’s head of global policy, Christopher Lehane, wrote in a memo. Google meanwhile called for weakened copyright restrictions on training AI and “balanced” export controls that would protect national security without strangling American companies.

Xiaomeng Lu, the director of geo-technology at the Eurasia Group, said invoking Chinese AI models was a “competitive play” from OpenAI.

“OpenAI is threatened by DeepSeek and other open-source models that put pressure on the company to lower prices and innovate better,” she said. “Sam [Altman] likely wants the US government’s aid in wider access to data, export restrictions, and government procurement to boost its own market position.”

Laura Caroli, a senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies, agreed. “Despite DeepSeek’s problems in safety and privacy, the real point is … OpenAI feels threatened by DeepSeek’s ability to build powerful open-source models at lower costs,” she said. “They use the national security narrative to advance their commercial goals.”

Civil liberties and national security concerns

Civil liberties groups painted a more dire picture of what could happen if Trump pursues an AI strategy that does not attempt to place guardrails on the development of this technology.

“Automating important decisions about people is reckless and dangerous,” said Corynne McSherry, legal director at the Electronic Frontier Foundation. The group submitted its own response to the government on March 13. McSherry told GZERO it criticized tech companies for ignoring “serious and well-documented risks of using AI tools for consequential decisions about housing, employment, immigration, access to benefits” and more.

There are also important national security measures that might be ignored by the Trump administration if it removes all regulations governing AI.

“I agree that maintaining US leadership in AI is a national security imperative,” said Cole McFaul, research analyst at Georgetown University's Center for Security and Emerging Technology, which also submitted a response that focused on securing American leadership in AI while mitigating risks and better competing with China. “OpenAI’s RFI response includes a call to ban the use of PRC-trained models. I agree with a lot of what they proposed, but I worry that some of Washington’s most influential AI policy advocates are also those with the most to gain.”

But even with corporate influence in Washington, it’s a confusing time to try to navigate the AI landscape with so many nascent regulations in Europe, plus changing signals from the White House.

Mia Rendar, an attorney at the law firm Pillsbury Winthrop Shaw Pittman, noted that while the government is figuring out how to regulate this emerging technology, businesses are caught in the middle. “We’re at a similar inflection point that we were when GDPR was being put in place,” Rendar said, referring to the European privacy law. “If you’re a multinational company, AI laws are going to follow a similar model – you’ll need to set and maintain standards that meet the most stringent set of obligations.”

How influential is Silicon Valley?

With close allies like Tesla CEO Elon Musk and investor David Sacks in Trump’s orbit, the tech sector’s influence has been hard to ignore. Thus, the final AI Action Plan, expected in July, will show whether Silicon Valley really has pull with the Trump administration — and, specifically, which firms have what kind of sway.

While the administration has already signaled that it will be hands-off in regulating AI, it’s unclear what path Trump will take in helping American-made AI companies, sticking it to China, and signaling to the rest of the world that the United States is, in fact, the global leader on AI.
