2 Comments

Matt Lubin:

In theory, I'm totally on board with the monitor-and-respond approach vs. "no biology for everyone because we are afraid that maybe some evil person will do evil things." But I don't think your footnote 1 is relevant to the claim you make about AI overall. It may be that general purpose LLMs like Claude Opus 4.5 can't design a novel pathogen, but biological design tools like AlphaFold and Evo can design novel proteins predicted to have the same biological targets as known toxins,[1] and Evo has designed an entire working phage genome from scratch.[2]

Also, it seems like the dataset usage test approach still targets capabilities, when we really want to be targeting usage. Hopefully the intent is closer to verifying whose key is accessing the dataset, and then being able to trace that access if more red flags show up from the same source, like a systematic "know your customer" approach.
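The "know your customer" idea above can be sketched in code: attribute every dataset access to a verified key, accumulate red flags per key, and escalate once one identity crosses a threshold. This is a minimal illustration, not a real screening system; the class name, threshold, and flag mechanism are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timezone

class AccessLedger:
    """Hypothetical KYC-style ledger: every dataset access is attributed
    to a verified key, and red flags accumulate per key so repeat
    offenders can be traced and escalated for human review."""

    def __init__(self, flag_threshold: int = 3):
        self.accesses = []                 # (timestamp, key_id, dataset)
        self.red_flags = defaultdict(int)  # key_id -> flag count
        self.flag_threshold = flag_threshold

    def record_access(self, key_id: str, dataset: str) -> None:
        self.accesses.append((datetime.now(timezone.utc), key_id, dataset))

    def record_red_flag(self, key_id: str) -> int:
        self.red_flags[key_id] += 1
        return self.red_flags[key_id]

    def needs_review(self, key_id: str) -> bool:
        # Escalate once a single verified identity accumulates enough flags.
        return self.red_flags[key_id] >= self.flag_threshold

    def trace(self, key_id: str) -> list:
        # Pull the full access history for a flagged identity.
        return [a for a in self.accesses if a[1] == key_id]
```

The point of the design is that access is never anonymous: because each request is tied to a key, later red flags can be joined back against the earlier access history instead of triggering a blanket ban on the dataset.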

[1] https://www.science.org/content/article/made-order-bioweapon-ai-designed-toxins-slip-through-safety-checks-used-companies

[2] https://www.biorxiv.org/content/10.1101/2025.09.12.675911v1

Rainbow Roxy:

Hey, great read as always. What specific "what if" scenarios do you have in mind?
