Call to crack down on terrorist content
OpenAI and Anthropic, two of AI’s biggest startups, signed on to the Christchurch Call to Action at a summit in Paris on Friday, pledging to suppress terrorist content. The perpetrator of the 2019 Christchurch shooting was reportedly radicalized by far-right content on Facebook and YouTube, and he livestreamed the attack on Facebook.
While the companies have agreed to “regular and transparent public reporting” about their efforts, the commitment is voluntary, meaning they won’t face real consequences for failing to comply. Still, it’s a strong signal that the battle against online extremism, which started with social media companies, is now coming for AI companies.

Under US law, internet companies are generally shielded from liability for user-generated content by Section 230 of the Communications Decency Act. The Supreme Court sidestepped the question last year in two terrorism-related cases, ruling that the plaintiffs’ claims against Google and Twitter failed under US anti-terrorism laws. But a rich debate is brewing over whether Section 230 protects AI chatbots like ChatGPT, a question that’s bound to wind up in court. Sen. Ron Wyden, one of the authors of Section 230, has called AI “uncharted territory” for the law.