Sheejith's Personal Site

Researchers find just 250 malicious documents can leave LLMs vulnerable to backdoors

It doesn't take much for bad actors to influence an AI model during pretraining.

Artificial intelligence companies have been working at breakneck speeds to develop the best and most powerful tools, but that rapid development hasn't always been coupled with clear understandings of AI's limitations or weaknesses. Today, Anthropic released a report on how attackers can influence the development of a large language model.

The study centered on a type of attack called poisoning, where an LLM is pretrained on malicious content intended to make it learn dangerous or unwanted behaviors. The key finding from this study is that a bad actor doesn't need to control a percentage of the pretraining materials to get the LLM to be poisoned. Instead, the researchers found that a small and fairly constant number of malicious documents can poison an LLM, regardless of the size of the model or its training materials. The study was able to successfully backdoor LLMs based on using only 250 malicious documents in the pretraining data set, a much smaller number than expected for models ranging from 600 million to 13 billion parameters.

"We’re sharing these findings to show that data-poisoning attacks might be more practical than believed, and to encourage further research on data poisoning and potential defenses against it," the company said. Anthropic collaborated with the UK AI Security Institute and the Alan Turing Institute on the research.

Posted on: 10/10/2025 4:00:51 AM


Talkbacks

You must be logged in to enter talkback comments.