China has reportedly developed a new AI tool, named ChatBIT, potentially for military use, by building on Meta’s open-source Llama 13B model. According to three research papers reviewed by Reuters, the tool was developed with contributions from two institutions connected to the Chinese military and was designed to gather and process military intelligence data. ChatBIT could eventually be used by the Chinese military for training or analysis, a signal of how quickly artificial intelligence is moving into military settings.
Meta has policies restricting Llama’s use for military, warfare, and espionage applications, but enforcing those limitations outside the U.S. remains a challenge. In a statement, Meta said that “any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy.” It’s also unclear how the Chinese researchers gained access to Llama 13B, whether through direct authorization or alternative channels, given that Meta restricted access to research use when it released the model in early 2023.
Despite the potential for military applications, Joelle Pineau, Meta’s vice president of AI research, suggested that ChatBIT’s capabilities may be limited by a relatively small training dataset, reportedly just 100,000 military dialogue records. By comparison, leading large language models are trained on trillions of tokens, raising questions about ChatBIT’s effectiveness. Pineau also noted that the Llama version underpinning ChatBIT is already outdated, even as China’s rapidly advancing AI sector works to surpass U.S.-developed models.
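To make the dataset-size gap concrete, the sketch below shows how a small domain corpus might be used to fine-tune an open Llama-family checkpoint with Hugging Face transformers. This is a generic illustration under stated assumptions, not the reported system: the model name, the toy records, and all hyperparameters are placeholders.

```python
# Minimal fine-tuning sketch (illustrative only, not the reported ChatBIT
# system). Assumes access to an open Llama-family checkpoint on the HF Hub.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL = "meta-llama/Llama-2-13b-hf"  # placeholder; gated model on the Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

# Stand-in for the ~100,000 domain records reported by Reuters; general-purpose
# LLMs, by contrast, are pretrained on trillions of tokens.
records = ["Q: ... A: ...", "Q: ... A: ..."]

class TextDataset(torch.utils.data.Dataset):
    """Tokenizes raw text records for causal language modeling."""
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, max_length=512,
                             padding="max_length", return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        mask = self.enc["attention_mask"][i]
        labels = ids.clone()
        labels[mask == 0] = -100  # exclude padding positions from the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-sketch",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=TextDataset(records),
)
trainer.train()
```

At this scale, a fine-tune mostly adapts tone and domain vocabulary; it cannot add the breadth of knowledge that pretraining on a vastly larger corpus provides, which is the substance of Pineau’s skepticism.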
The development reflects the broader tech rivalry between the U.S. and China, with both countries investing heavily in AI, chip manufacturing, and technology restrictions. While the U.S. has sanctioned parts of China’s tech and AI sectors to limit access to advanced semiconductors and AI technology, China continues to make progress, often finding alternative channels to acquire advanced AI chips and resources.
Some experts argue that open-source AI like Meta’s Llama promotes open innovation and equitable access, though it also raises the risk of misuse. Openly accessible models let developers worldwide, including those with military intentions, build on these tools, fueling concern over unintended applications. The broader risks of open-source AI misuse have already surfaced: AI technologies have been used to create political deepfakes, sway public opinion, and influence elections around the world.