Meta’s AI is being used by Chinese military researchers

The Meta logo is seen on a mobile phone with the Chinese flag in the background in this photo illustration.
Photo by Jaap Arriens / Sipa USA via Reuters

Meta, the parent company of Facebook and Instagram, has taken a different approach to the AI boom than many of its Silicon Valley peers. Instead of developing proprietary large language models, Meta has championed open-source models that are free for anyone to download and use. (That said, some open-source advocates argue the models aren’t truly open-source because Meta attaches usage restrictions.)

But because of Meta’s openness, Chinese researchers were able to develop their own AI model for military use by building on one of Meta’s Llama models, according to a paper they published in June that Reuters first reported on Nov. 1.

Chinese university researchers, some of whom have ties to the People’s Liberation Army, developed a model called ChatBIT using Llama 2, which Meta released in July 2023. (Meta’s top model is Llama 3.2, released in September 2024.) In the paper reviewed by Reuters, the researchers said they built a chatbot “optimized for dialogue and question-answering tasks in the military field.” It could be used for “intelligence analysis, … strategic planning, simulation training, and command decision-making,” the paper said.

Llama’s acceptable use policy prohibits using the models for “military, warfare, nuclear industries or applications [and] espionage.” Meta told Reuters that the use violated its terms and that it took unspecified action against the developers, but it also downplayed the discovery’s significance. “In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI,” Meta said.

Open-source development has already become a hot-button issue for regulators and tech advocates. For example, the California AI safety bill, which Gov. Gavin Newsom vetoed, drew controversy for mandating that developers include a “kill switch” to shut off models, something that’s not possible for open-source developers who publish their code. With an open-source model in China’s hands, even an outdated one, regulators may have the fodder they need to crack down on open-source AI the next time they try to enact AI rules.

More from GZERO Media

30: Hurricane Melissa, which was upgraded over the weekend to a Category 5 storm, is expected to hit Jamaica on Monday, bringing 30 inches of rain and 165-mph winds in what will be one of the most intense storms ever to hit the island.

US President Donald Trump signed a raft of trade deals on Sunday at the ASEAN summit in Malaysia, but the main event of his Asia trip will be his meeting with Chinese President Xi Jinping on Thursday.

Argentina’s President Javier Milei celebrated after his La Libertad Avanza party won the Oct. 26 midterm elections, a vote seen as crucial for his administration after US President Donald Trump warned that future support for Argentina would depend on a strong showing by Milei’s party.

On GZERO World with Ian Bremmer, Tristan Harris of the Center for Humane Technology warns that tech companies are racing to build powerful AI models while ignoring mental health risks and other consequences for society and humanity.

Tristan Harris, co-founder of the Center for Humane Technology, joins Ian Bremmer on the GZERO World Podcast to talk about the risks of recklessly rolling out powerful AI tools without guardrails as big tech firms race to build “god in a box.”

The next leap in artificial intelligence is physical. On Ian Explains, Ian Bremmer breaks down how robots and autonomous machines will transform daily life, if we can manage the risks that come with them.