America’s first data security executive order ... underwhelms

FILE PHOTO: U.S. President Joe Biden looks on before speaking during a roundtable discussion on public safety at the State Dining Room at the White House in Washington, U.S., February 28, 2024. REUTERS/Tom Brenner

President Joe Biden issued an executive order last week targeting entities that affect every web user, whether they realize it or not. The order empowers the Justice Department to stop companies known as data brokers from selling Americans’ personal data to “countries of concern” like China, Russia, Iran, North Korea, and Cuba.

What data brokers do: Compile massive amounts of sensitive user data (browsing history, biometric scans, geolocation) and sell it to advertisers. One study found that Facebook received personal data on a single user from 48,000 different companies, a reflection of how exhaustively the social media giant tracks every detail of a potential consumer’s lifestyle and habits.

Why that’s dangerous: As AI improves, bad actors’ ability to sift through vast amounts of this data to track and pry into the personal lives of Americans — including service members and government officials — will also improve. The Biden administration is hoping to prevent “intrusive surveillance, scams, blackmail, and other violations of privacy.”

What’s missing: Concrete regulations, like Europe’s GDPR framework, which requires explicit documentation of how all EU citizens’ data is used and stored. Instead, the executive order empowers bureaucrats to begin a complex, months-long rulemaking process. Only once that process ends will we know the details of how the order will be enforced.

When it comes to data, Americans are still living in the Wild Wild West. While this order aims to prevent privacy violations by some of America’s adversaries, there’s nothing stopping other countries, companies, or the federal government itself from doing the exact same thing.

More from GZERO Media

Elon Musk in an America Party hat.
Jess Frampton

Life comes at you fast. Only five weeks after vowing to step back from politics and a month after accusing President Donald Trump of being a pedophile, Elon Musk declared his intention to launch a new political party offering Americans an alternative to the Republicans and Democrats.

Chancellor of the Exchequer Rachel Reeves (right) crying as Prime Minister Sir Keir Starmer speaks during Prime Minister’s Questions in the House of Commons, London, United Kingdom, on July 2, 2025.
PA Images via Reuters Connect

UK Prime Minister Keir Starmer has struggled during his first year in office, an ominous sign for centrists in Western democracies.


“We wanted to be first with a flashy AI law,” says Kai Zenner, digital policy advisor in the European Parliament. Speaking with GZERO's Tony Maciulis at the 2025 AI for Good Summit in Geneva, Zenner explains the ambitions and the complications behind Europe’s landmark AI Act. Designed to create horizontal rules for all AI systems, the legislation aims to set global standards for safety, transparency, and oversight.

More than 60% of Walmart suppliers are small businesses.* Through a $350 billion investment in products made, grown, or assembled in the US, Walmart is helping these businesses expand, create jobs, and thrive. This effort is expected to support the creation of over 750,000 new American jobs by 2030, empowering companies like Athletic Brewing, Bon Appésweet, and Milo’s Tea to grow their teams, scale their production, and strengthen the communities they call home. Learn more about Walmart's commitment to US manufacturing. *See website for additional details.

Last month, Microsoft released its 2025 Responsible AI Transparency Report, demonstrating the company’s sustained commitment to earning trust at a pace that matches AI innovation. The report outlines new developments in how we build and deploy AI systems responsibly, how we support our customers, and how we learn, evolve, and grow. It highlights our strengthened incident response processes, enhanced risk assessments and mitigations, and proactive regulatory alignment. It also covers new tools and practices we offer our customers to support their AI risk governance efforts, as well as how we engage stakeholders around the world to advance governance approaches that build trust. You can read the report here.