Meta’s AI is being used by Chinese military researchers

The Meta logo is seen on a mobile phone with the Chinese flag in the background in this photo illustration.
Photo by Jaap Arriens / Sipa USA via Reuters

Meta, the parent company of Facebook and Instagram, has taken a different approach to the AI boom than many of its Silicon Valley peers. Instead of keeping its large language models proprietary, Meta has championed open-source models that anyone can download and use for free. (That said, some open-source advocates argue the models aren't truly open-source because Meta attaches usage rules to them.)

But because of Meta’s openness, Chinese researchers were able to develop their own AI model for military use on top of one of Meta’s Llama models, according to a paper they published in June that Reuters first reported on Nov. 1.

Chinese university researchers, some of whom have ties to the People's Liberation Army, developed a model called ChatBIT using Llama 2, released in 2023. (Meta’s top model is Llama 3.2, released in September 2024.) In the paper reviewed by Reuters, the researchers said they built a chatbot “optimized for dialogue and question-answering tasks in the military field.” They said it could be used for “intelligence analysis, … strategic planning, simulation training, and command decision-making.”

Llama’s acceptable use policy prohibits using the models for “military, warfare, nuclear industries or applications [and] espionage.” Meta told Reuters that this use violated its terms, said it had taken unspecified action against the developers, and dismissed the discovery as insignificant. “In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI,” Meta said.

Open-source development has already become a hot-button issue for regulators and tech advocates. For example, the California AI safety bill, which was vetoed by Gov. Gavin Newsom, became controversial for requiring developers to include a “kill switch” to shut off their models, something that isn't feasible for open-source developers once their code is published. With an open-source model in China’s hands, even an old one, regulators may have the fodder they need to try to crack down on open-source AI the next time they attempt to pass AI rules.
