Dario Amodei's "The Adolescence of Technology"
A risk-side companion to “Machines of Loving Grace” and a biosecurity primer on AI safety for people who tuned out.
As far as AIxBio goes, I tend to bookmark essays1 that do one of two things: either they expand my mental model of what is possible in biology and AI, or they force me to be more honest about the edge cases that make those possibilities dangerous. Dario Amodei’s “The Adolescence of Technology” does the second, and it does it in a way that complements his earlier “Machines of Loving Grace.” I posted a few thoughts on Bluesky, but with the whirlwind of news surrounding Anthropic and the Pentagon over the last 48 hours, I wanted to expand on a few things here.
If “Machines” is the vision of a world where powerful AI is channeled toward human flourishing, “Adolescence” is the other side, the tense “middle” chapter: the period where capability races ahead of governance, norms, and institutional maturity.2
I appreciated the essay’s insistence on specificity. Amodei refers to the class of systems he worries about as something like a “country of geniuses in a datacenter”: agentic, fast, scalable, and equipped with the interfaces a human knowledge worker already uses. Once you adopt that frame, a lot of debates stop being abstract. The question is no longer “can models answer biology questions”; it’s what happens when systems can iterate, plan, and troubleshoot over weeks, at scale.
For biosecurity, the key point is the way AI changes the relationship between motive and ability. Historically, the most catastrophic forms of biological misuse were constrained by expertise, tacit knowledge, and operational friction. A sufficiently capable model can compress that friction by coaching a malicious (or merely reckless) actor through the messy parts, not just reciting facts. That shifts the defense posture away from “information control” alone and toward layered mitigation: robust refusal under adversarial pressure, biorisk-specific evaluation, monitoring, and boring but crucial complements like gene synthesis screening and faster detection and response.
You don’t have to agree with every timeline to find this essay worth reading. It’s a pragmatic look at what it would mean to take AI risk seriously without drifting into either fatalism or hand-wavy optimism.
I’ve long admired the writing of Niko McCarty and others at Asimov Press. Niko occasionally posts about great essays in biology on his newsletter and social media. I’m working on curating my own collection and will share soon.
My friend and colleague Alexander Titus has been writing on biosecurity, innovation, and regulation at The Connected Ideas Project. See this post and those that follow it for more.
