
Former New Zealand Prime Minister Jacinda Ardern, special envoy for the Christchurch Call, arrives at the 5th Christchurch Call Summit, which she co-chaired with French President Emmanuel Macron at the Elysee Palace in Paris, on Nov. 10, 2023.
OpenAI and Anthropic, two of AI’s biggest startups, signed on to the Christchurch Call to Action at a summit in Paris on Friday, pledging to work to suppress terrorist and violent extremist content. The perpetrator of the 2019 Christchurch mosque shootings was reportedly radicalized by far-right content on Facebook and YouTube, and he livestreamed the attack on Facebook.
While the companies have agreed to “regular and transparent public reporting” about their efforts, the commitment is voluntary — meaning they won’t face real consequences for any failures to comply. Still, it’s a strong signal that the battle against online extremism, which started with social media companies, is now coming for AI companies.
In the United States, internet companies are generally shielded from legal liability for user-posted content by Section 230 of the Communications Decency Act. The Supreme Court sidestepped the issue last year in two terrorism-related cases, ruling that the plaintiffs’ claims against Google and Twitter under US anti-terrorism laws could not proceed. But a rich debate is brewing over whether Section 230 protects AI chatbots like ChatGPT, a question that’s bound to wind up in court. Sen. Ron Wyden, one of the authors of Section 230, has called AI “uncharted territory” for the law.