Recommended Essays
I’ve been learning in public and writing about it for nearly 20 years, and one of the best parts of that journey has been stumbling across essays that shift how I think about something. These are the essays I bookmark, revisit, and recommend to others. The pieces that stick with me tend to be the ones that articulate an idea I’ve been wrestling with, introduce a perspective I hadn’t considered, or simply nail the explanation of something complex. This page collects those essays. Some are about science, programming, or AI. Others venture into education, careers, or how we approach learning itself. What they share is that they made me think differently, and I hope they’ll do the same for you.
I. Artificial Intelligence
Machines of Loving Grace (Dario Amodei, 2024)
Anthropic CEO Dario Amodei argues that powerful AI could compress 50-100 years of progress in biology and medicine into 5-10 years, potentially eliminating most diseases and doubling human lifespan. The essay introduces a useful framework of “marginal returns to intelligence” to analyze where being smarter actually accelerates progress versus where physical constraints, data limitations, or human factors become the bottleneck. While acknowledging serious challenges around inequality and governance, Dario makes a compelling case that the same basic human intuitions about fairness and cooperation that led to democracy and rule of law will guide us toward broadly sharing AI’s benefits.
Situational Awareness (Leopold Aschenbrenner, 2024)
Leopold Aschenbrenner’s “Situational Awareness” argues that AGI could arrive by 2027 through continued scaling of compute and algorithmic improvements, followed rapidly by an intelligence explosion leading to superintelligence. The precise timeline isn’t the point: the essay’s national security implications are profound regardless of whether its dates prove accurate. Aschenbrenner contends that current AI lab security is woefully inadequate for protecting what will become America’s most critical defense secrets, with algorithmic breakthroughs and model weights currently vulnerable to state-level espionage that could allow adversaries like China to leapfrog years of research. His call for Manhattan Project-style government involvement reflects the reality that superintelligence would confer decisive military advantage, potentially enabling everything from unhackable systems to novel weapons that render current arsenals obsolete. That makes this fundamentally a question of whether democratic or authoritarian powers control the technology that may define this century.
The Adolescence of Technology (Dario Amodei, 2026)
“The Adolescence of Technology” is the risk-side companion to “Machines of Loving Grace”, linked above: less utopia, more battle plan. If you’ve tuned out AI safety, this is a crisp map of what can go wrong and what to do next. Amodei slices risk into buckets: autonomy, misuse, power grabs, economic shock, and weird second-order effects. That taxonomy is handy for teaching and for building actual safety roadmaps. For the biosecurity folks: the sharpest section is misuse-for-destruction. AI breaks the old coupling between motive and ability by turning “average-but-malicious” into “guided expert”. Guardrails need to assume sustained, adversarial pressure. The best part is practicality: layered controls (training + classifiers + monitoring), plus transparency rules to avoid safety theater, and targeted policy levers (e.g. synthesis screening, incident reporting). Takeaway: treat safety as infrastructure, not PR. Measure uplift, publish system cards, monitor in the wild, and legislate transparency first so policy can tighten as evidence gets clearer. Even if you disagree on timelines, the agenda is valuable: build interpretability, robust evals, and lightweight, evidence-driven regulation that can ratchet as risk signals strengthen.
II. Biology
Shock Doctrine in the Life Sciences (Alexander Titus, 2025)
Titus is a long-time colleague and friend. I constantly refer back to and cite this essay in my own writing on AI and biosecurity. This essay argues that the life sciences community repeatedly overestimates catastrophic risks from new biotechnologies while undervaluing their benefits, drawing parallels to Naomi Klein’s “Shock Doctrine” concept. He traces this pattern from the 1975 Asilomar Conference on recombinant DNA through GMO controversies and AI-assisted drug design to the recent warnings about “mirror life” organisms, noting that predicted disasters have consistently failed to materialize while technologies like synthetic insulin and gene therapies have delivered substantial benefits. The piece makes a compelling case that Europe’s precautionary principle approach to GMOs resulted in lost scientific leadership and economic opportunities without corresponding safety gains, and that the current calls to ban mirror life research before it even begins may repeat this mistake. The historical perspective is particularly valuable for computational biologists navigating ongoing debates about AI safety and synthetic biology, showing how fear-driven policy can throttle innovation in ways that carry their own ethical costs, especially for vulnerable populations who stand to benefit most from biotechnological advances.
III. Miscellaneous
Maker’s Schedule, Manager’s Schedule (Paul Graham, 2009)
Paul Graham argues that there are two fundamentally different ways people use time: a manager’s schedule built around hour-long blocks and frequent context switching, and a maker’s schedule that depends on uninterrupted half-day chunks for deep work like writing or programming. Meetings are cheap for managers but expensive for makers, because even a single meeting can fracture a day into pieces too small for sustained concentration, and the mere awareness of an upcoming meeting can sap momentum and ambition. The real cost of a meeting is not the hour itself but the cognitive “tax” it imposes on everything around it, shrinking the space where creative or analytical work can actually happen. The essay also offers a practical lesson I try to adhere to: protect maker time by batching meetings, office-hours style, so collaboration doesn’t crowd out the work that requires depth.
