Trump wants a White House AI czar
If appointed, the AI czar would be the White House official tasked with coordinating the federal government’s use of the emerging technology and its policies toward it. And while the role will not go to Elon Musk, the billionaire tech CEO whom Trump has tapped to run a government efficiency commission, Musk will have input on who gets the job.
The Trump administration has promised a deregulatory attitude toward artificial intelligence, including undoing President Joe Biden’s 2023 executive order on AI.
That order tasked federal departments and agencies not only with evaluating how to regulate the technology under their statutory authority but also with exploring how to use it to further their own goals. Under Biden, each agency was required to name a chief AI officer. If Trump keeps those positions, the White House AI czar would likely coordinate with these officials across the executive branch.
Will Donald Trump let AI companies run wild?
Days are numbered for Biden’s executive order
Trump hasn’t given many details about how exactly he’ll rejigger the regulatory approach to AI, but he has promised to repeal President Joe Biden’s executive order on AI, which tasked every executive department and agency with developing common-sense rules to rein in AI while also exploring how they can use the technology to further their work. At a December 2023 campaign rally in Iowa, Trump promised to “cancel” the executive order and “ban the use of AI to censor the speech of American citizens on day one.” (It’s unclear what exactly Trump was referring to, but AI has long been used by social media companies for content moderation.)
The states will be in charge of regulating AI
Megan Shahi, director of technology policy at the Center for American Progress, a liberal think tank, said that a deregulatory approach by the Trump administration would leave regulation to the states, creating a patchwork of rules that will be difficult for AI companies to comply with.
“This can be beneficial for some Americans living in states willing to pass regulation, but harmful for others without it,” she said. “The hope is that states set a national standard that AI companies seek to universally comply with, but that is unlikely to be a reality right away at least.”
While Trump himself is likely to be hands-off, she expects him to “entrust a team of his trusted allies” — such as Tesla and X CEO Elon Musk — “to do much of the agenda setting, decision making, and execution of the tech agenda.”
Will Trump reverse Biden’s chip crackdown?
Matt Mittelsteadt, a research fellow at the libertarian Mercatus Center at George Mason University, said he expects export controls on chips aimed at curbing China’s ability to compete on AI to continue. And while he thinks it’s a harmful idea, he believes a unified Republican government could enact controls on AI software — especially following reports that China used Meta’s open-source Llama models for military purposes.
The biggest change is Trump’s proposed tariffs on China. “For AI, the use of tariffs to either attempt to ‘punish China’ or reshore industry could be an industry killer,” Mittelsteadt said. “AI hardware depends on materials either not found or manufactured in the United States and no amount of trade protection will ‘reshore’ what cannot be reshored. The only possible result here will be financial strain that is bound to tighten the belts of Silicon Valley and yield a resulting decrease in research and development spend.”
This could give China a strategic advantage: “At this critical moment in the ‘AI race’ with China, such restrictions could represent a generational leapfrog opportunity for China’s tech sector.”
In the coming weeks, Trump will announce his Cabinet selections — the earliest indication of how he’ll handle AI and a litany of other crucial policy areas. Personnel is policy, after all. How quickly he can get them confirmed will impact how quickly he can unwind Biden’s orders and chart a new path, especially with a first 100 days agenda that’s likely to be jam-packed. Will AI make the cut or fall by the wayside? Trump hasn’t even been sworn in yet, but the clock is already ticking.
Gov. Gavin Newsom vetoes California’s AI safety bill
California Gov. Gavin Newsom on Sunday vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, the AI safety bill passed by the state’s legislature in August.
Newsom has signed other AI-related bills into law, such as two recent measures protecting performers from AI deepfakes of their likenesses, but vetoed this one over concerns about the focus of the would-be law.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” Newsom wrote in a letter on Sept. 29. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
Democratic state Sen. Scott Wiener, who sponsored the bill, called the veto a “setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.” Wiener hasn’t disclosed the next steps but vowed to continue pushing the envelope on AI regulation in the state. “California will continue to lead in that conversation — we are not going anywhere.”
National safety institutes — assemble!
The Biden administration announced that it will host a global safety summit on artificial intelligence on Nov. 20-21 in San Francisco. The International Network of AI Safety Institutes, which was formed at the AI Safety Summit in Seoul in May, will bring together safety experts from each member’s AI safety institute. The current members are Australia, Canada, the European Union, France, Japan, Kenya, Singapore, South Korea, the United Kingdom, and the United States.
The aim? “Strengthening international collaboration on AI safety is critical to harnessing AI technology to solve the world’s greatest challenges,” Secretary of State Antony Blinken said in a statement.
Commerce Secretary Gina Raimondo, co-hosting the event with Blinken, said that the US is committed to “pulling every lever” on AI regulation. “That includes close, thoughtful coordination with our allies and like-minded partners.”
What do Democrats want for AI?
At last week’s Democratic National Convention, the Democratic Party and its newly minted presidential candidate, Vice President Kamala Harris, made little reference to technology policy or artificial intelligence. But the party’s platform and a few key mentions at the DNC show how a Harris administration would handle AI.
In the official party platform, there are three mentions of AI: First, it says Democrats will support historic federal investments in research and development, break “new frontiers of science,” and create jobs in artificial intelligence, among other sectors. It also says the party will invest in “technology and forces that meet the threats of the future,” including artificial intelligence and unmanned systems.
Lastly, the Dems’ platform calls for regulation to bridge “the gap between the pace of innovation and the development of rules of the road governing the most consequential domains of technology.”
“Democrats will avoid a race to the bottom, where countries hostile to democratic values shape our future,” it notes.
Harris echoed that final point in her DNC keynote address. “I will make sure that we lead the world into the future on space and artificial intelligence,” she said. “That America, not China, wins the competition for the 21st century, and that we strengthen, not abdicate our global leadership.”
The Republican Party platform, by contrast, promises to repeal Biden’s 2023 executive order on AI, calling it “dangerous” and accusing it of hindering innovation and imposing “radical left-wing ideas” on the technology. “In its place, Republicans support AI development rooted in free speech and human flourishing,” it says. (The platform doesn’t go into specifics about how the executive order is harmful or what a free speech-oriented AI policy would entail.) In his RNC address, Donald Trump didn’t mention artificial intelligence or tech policy but talked at length about beating back China economically.
GZERO asked Don Beyer, the Virginia Democratic congressman going back to school to study artificial intelligence, what he thought of his party’s platform and Harris’ remarks on AI. Beyer said that Harris has struck the right balance between promoting American competitiveness and outlining guardrails to minimize the technology’s risks. “The vice president has been personally involved in many of the administration’s efforts to ensure American leadership in AI, from establishing the US AI Safety Institute to launching new philanthropic initiatives for public interest AI, and I expect her future administration to continue that leadership,” he said.
California wants to prevent an AI “catastrophe”
The Golden State may be close to passing AI safety regulation — and Silicon Valley isn’t pleased.
The proposed AI safety bill, SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aims to establish “common sense safety standards” for powerful AI models.
The bill would require companies developing high-powered AI models to implement safety measures, conduct rigorous testing, and provide assurances against “critical harms,” such as the use of models to execute mass-casualty events or cyberattacks causing at least $500 million in damages. The California attorney general could take civil action against violators, though the rules would apply only to models that cost at least $100 million to train and exceed a certain computing threshold.
A group of prominent academics, including AI pioneers Geoffrey Hinton and Yoshua Bengio, published a letter last week to California’s political leaders supporting the bill. “There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers,” they wrote, arguing that regulations are necessary not only to rein in the potential harms of AI but also to restore public confidence in the emerging technology.
Critics, including many in Silicon Valley, argue the bill is overly vague and could stifle innovation. In June, the influential startup incubator Y Combinator wrote a public letter outlining its concerns. It said that liability should lie with those who abuse AI tools, not with developers; that the threshold for inclusion under the law is arbitrary; and that a requirement that developers include a “kill switch” allowing them to turn off the model would be a “de facto ban on open-source AI development.”
Steven Tiell, a nonresident senior fellow with the Atlantic Council's GeoTech Center, thinks the bill is “a good start” but points to “some pitfalls.” He appreciates that it only applies to the largest models but has concerns about the bill’s approach to “full shutdown” capabilities – aka the kill switch.
“The way SB 1047 talks about the ability for a ‘full shutdown’ of a model – and derivative models – seems to assume foundation models would have some ability to control derivative models,” Tiell says. He warns this could “materially impact the commercial viability of foundation models across wide swaths of the industry.”
Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation, acknowledges the tech industry’s concerns. “AI is changing rapidly, so it’s hard to know whether — even with the flexibility in the bill — the regulation it’s proposing will age well with the industry,” she says.
“The whole idea of open-source is that you’re making a tool for people to use as they see fit,” she says, emphasizing the burden on open-source developers. “And it’s both harder to make that assurance and also less likely that you’ll be able to deal with penalties in the bill because open-source projects are often less funded and less able to spend money on compliance.”
State Sen. Scott Wiener, the bill’s sponsor, told Bloomberg he has heard the industry’s criticisms and adjusted the bill’s language to clarify that open-source developers aren’t entirely liable for all the ways their models are adapted, but he stood by the bill’s intentions. “I’m a strong supporter of AI. I’m a strong supporter of open source. I’m not looking in any way to impede that innovation,” Wiener said. “But I think it’s important, as these developments happen, for people to be mindful of safety.” Spokespeople for Wiener did not respond to GZERO’s request for comment.
In the past few months, Utah and Colorado have passed their own AI laws, but both focus on consumer protection rather than liability for catastrophic results of the technology. California, home to many of the biggest companies in AI, has broader ambitions. But while California has been able to lead the nation — and the federal government — on data privacy, it might need industry support to get its AI bill fully approved in the legislature and signed into law. California’s Senate passed the bill last month, and the Assembly is set to vote on it before the end of August.
California Gov. Gavin Newsom hasn’t signaled whether or not he’ll sign the bill should it pass both houses of the legislature, but in May, he publicly warned against over-regulating AI and ceding America’s advantage to rival nations: “If we over-regulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”
The FEC kicks AI down the road
The Federal Election Commission has opted not to write new rules for AI in political ads, which means the job of keeping deepfakes out of those ads will largely fall to tech platforms and AI developers. Tech companies signed an agreement at the Munich Security Conference in February, vowing to take “reasonable precautions” to prevent their AI tools from being used to disrupt elections. The task could also fall in part to broadcasters: The Federal Communications Commission is still considering new rules for AI-generated content in political ads on broadcast television and radio stations. That has caused tension between the two agencies: The FEC doesn’t believe the FCC has the statutory authority to act, but the FCC maintains that it does.
After a deepfake of Joe Biden’s voice was used in a robocall ahead of the New Hampshire Democratic primary, intended to trick voters into staying home, the FCC asserted that AI-generated robocalls were illegal under existing law. But the clock is ticking for further action, since other AI-manipulated media may not currently be covered under the law. At this point, serious regulation from either agency seems likely to come only after Donald Trump and Kamala Harris square off in November — and perhaps only if Harris wins, as another Trump presidency might mean a further rollback of election rules.
Apple signs Joe Biden’s pledge
Apple signed on to the Biden administration’s voluntary pledge for artificial intelligence companies on July 26.
President Joe Biden and Vice President Kamala Harris first announced that they had secured commitments from seven major AI developers — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — a year ago, in what the administration says laid the groundwork for its executive order on AI adopted in October. The voluntary commitments included safety testing, information sharing on safety risks (with government, academia, and civil society groups), cybersecurity investments, watermarking systems for AI-generated content, and a general agreement to “develop and deploy advanced AI systems to help address society’s greatest challenges.”
Until now, Apple hadn’t been on the list. As Apple prepares to release new AI-enabled iPhones (powered by OpenAI’s systems as well as its own), the Cupertino-based tech giant is playing nice with the Biden administration, signaling that it will be a responsible actor even without formal legislation on the books.