Sheejith's Personal Site

AI can design toxic proteins. They’re escaping through biosecurity cracks.

In October 2023, two scientists at Microsoft discovered a startling vulnerability in a safety net intended to prevent bad actors from using artificial intelligence tools to concoct hazardous proteins for warfare or terrorism.

Those gaping security holes and how they were discovered were kept confidential until Thursday, when a report in the journal Science detailed how researchers generated thousands of AI-engineered versions of 72 toxins that escaped detection. The research team, a group of leading industry scientists and biosecurity experts, designed a patch for the flaw, which was found in four different screening methods. But they warn that experts will have to keep searching for future breaches in this safety net.
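The article does not describe how the four screening methods work internally. As a purely illustrative sketch (not any real biosecurity tool), the failure mode can be pictured as a naive similarity filter: a screen that flags sequences closely matching a known toxin can miss an AI-reworked variant that drops below a fixed identity threshold. All sequences, names, and thresholds below are hypothetical.

```python
# Toy illustration only -- NOT a real screening method. Shows how a naive
# sequence-identity filter can miss a reworked variant of a known toxin.

# Hypothetical reference database of known hazardous sequences.
KNOWN_TOXINS = {
    "toxA": "MKTLLVAGSGSGKDE",  # made-up sequence for illustration
}

def identity(a: str, b: str) -> float:
    """Fraction of positions that match between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def flagged(query: str, threshold: float = 0.8) -> bool:
    """Flag the query if it is at least `threshold` identical to a known toxin."""
    return any(
        identity(query, ref) >= threshold
        for ref in KNOWN_TOXINS.values()
        if len(ref) == len(query)
    )

original = KNOWN_TOXINS["toxA"]
variant = "MRTLIVSGAGAGRDE"  # hypothetical AI-reworked variant (9/15 identity)

print(flagged(original))  # True: an exact match is caught
print(flagged(variant))   # False: the variant slips under the 0.8 threshold
```

In this toy model, the "patch" would amount to screening on something more robust than raw sequence identity (for example, predicted structure or function), which is the kind of fix the hedge in the article implies the team pursued; the real methods and patch are not detailed here.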

“This is like a Windows update model for the planet. We will continue to stay on it and send out patches as needed, and also define the research processes and best practices moving forward to stay ahead of the curve as best we can,” Eric Horvitz, chief scientific officer of Microsoft and one of the leaders of the work, said at a press briefing.

The team considered the incident the first AI and biosecurity “zero day” — borrowing a cybersecurity term for vulnerabilities that software developers don’t yet know about, leaving systems open to attack.

In recent years, researchers have been using AI to design bespoke proteins. The work has opened up vast potential across many fields of science. With these tools, scientists can create proteins to degrade plastic pollution, fight disease or make crops more resilient.

But with possibility comes risk. That’s why in October 2023, the Microsoft scientists embarked on an initial “adversarial” pilot study, in advance of a protein engineering biosecurity conference. The researchers never manufactured any of the proteins but created digital versions as part of the study.

Outside biosecurity experts applauded the study and the patch, but said that this is not an area where one single approach to biosecurity is sufficient.

“What’s happening with AI-related science is that the front edge of the technology is accelerating much faster than the back end … in managing the risks,” said David Relman, a microbiologist at Stanford University School of Medicine. “It’s not just that we have a gap — we have a rapidly widening gap, as we speak. Every minute we sit here talking about what we need to do about the things that were just released, we’re already getting further behind.”

Posted on: 10/4/2025 6:32:47 AM
