Section 230 won’t be a savior for Generative AI


In the US, Section 230 of the Communications Decency Act has been called the law that “created the internet.” It provides legal liability protections to internet companies that host third-party speech, such as social media platforms that rely on user-generated content or news websites with comment sections. Essentially, it prevents companies like Meta or X from being on the hook when their users defame one another, or commit certain other civil wrongs, on their sites.

In recent years, 230 has become a lightning rod for critics on both sides of the political aisle seeking to punish Big Tech for perceived bad behavior.

But Section 230 likely does not apply to generative AI services like ChatGPT or Claude. While this is still untested in the US courts, many legal experts believe that the output of such chatbots is first-party speech, meaning someone could reasonably sue a company like OpenAI or Anthropic over output, especially if it plays fast and loose with the truth.

Supreme Court Justice Neil Gorsuch suggested during oral arguments last year that AI chatbots would not be protected by Section 230. “Artificial intelligence generates poetry,” Gorsuch said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.”

Without those protections, University of North Carolina professor Matt Perault noted in an essay in Lawfare, the companies behind LLMs are in a “compliance minefield.” They might be forced to dramatically narrow the scope and scale of how their products work if any “company that deploys [a large language model] can be dragged into lengthy, costly litigation any time a user prompts the tool to generate text that creates legal risk.”

We’ve already seen similar forces at play in the court of public opinion. Facing criticism over political misinformation, racist images, and deepfakes of politicians, many generative AI companies have limited what their programs are willing to generate – in some cases banning political or controversial content entirely.

Lawyer Jess Miers of the industry trade group Chamber of Progress, however, argues in Techdirt that 230 should protect generative AI. She says that because the output depends “entirely upon whatever query or instructions its users may provide, malicious or otherwise,” the users should be the ones left holding the legal bag. But proving that in court would be an uphill battle, she concedes, in part because defendants would have the onerous task of explaining to judges how these technologies actually work.

The picture gets even more complex: Courts will also have to decide whether only the creators of LLMs receive Section 230 protections, or whether companies using the tech on their own platforms are also covered, as Washington Post writer Will Oremus pondered on X last week.

In other words, is Meta liable if users post legally problematic AI-generated content on Facebook? Or what about a platform like X, which incorporates the AI tool Grok for its premium users?

Mark Lemley, a Stanford Law School professor, told GZERO that the liability holder depends on the law but that, generally speaking, the liability falls to whoever deploys the technology. “They may in turn have a claim against the company that designed [or] trained the model,” he said, “but a lot will depend on what, if anything, the deploying company does to fine-tune the model after they get it.”

These are all important questions for the courts to decide, but the liability issue for generative AI won’t end with Section 230. The next battle, of course, is copyright law. Even if tech firms are afforded some protections over what their models generate, Section 230 won’t shield them if courts find that generative AI companies are illegally using copyrighted works.
