Wet Bags of Atoms Teaching Sand to Think
On emergence, and why the lazy dismissal of AI as “just next-token prediction” is a thought-terminating cliche we need to abandon.
I see a particular type of AI dismissal permeating social media, especially on Bluesky, that deserves closer examination. Variations on the theme: AI is “just a glorified search engine,” “fancy autocomplete,” “just next-token prediction,” or my personal favorite, “teaching sand to think.”
The technical descriptions aren’t entirely wrong. Large language models do, at a mechanical level, predict the next token in a sequence. The silicon chips they run on are, in fact, made from sand. This dismissive but gets-you-likes-on-Bluesky framing reveals a deep confusion about how complex systems work, and I’d argue it reflects willful ignorance of one of the most important concepts in science: emergence.
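If “next-token prediction” sounds abstract, here is a deliberately tiny Python sketch of the mechanism. The vocabulary and probabilities are invented for illustration; a real model conditions on the whole context and computes its distribution with a neural network over tens of thousands of tokens. The outer loop is the same, though: score the candidates, sample one, append it, repeat.

```python
import random

# Toy next-token predictor. Each entry maps the previous token to a
# hand-invented probability distribution over possible next tokens.
# A real LLM conditions on the entire context, not just one token,
# and computes these probabilities with a neural network.
NEXT_TOKEN_PROBS = {
    "teaching": {"sand": 0.7, "silicon": 0.3},
    "sand": {"to": 0.9, "castles": 0.1},
    "silicon": {"to": 1.0},
    "to": {"think": 0.8, "compute": 0.2},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("teaching"))  # e.g. "teaching sand to think"
```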
Let’s start with you. You, dearest gentle reader, are “just” a collection of atoms. Carbon, oxygen, hydrogen, nitrogen, and a smattering of trace elements, all dutifully obeying the laws of physics. Nothing more. Every thought you’ve ever had, every emotion, every decision, every memory of your grandma’s kitchen or your first day of graduate school —1 these are, at the most fundamental level, just particles interacting according to the Standard Model of particle physics. You have no free will (at least not in the way most people think about it). At a fundamental physical level, consciousness itself is nothing more than electrochemical signaling between neurons.
You are, if we’re being maximally reductive, a wet bag of atoms.
That description is technically accurate, and it’s also entirely useless. The phone you’re reading this on is a collection of silicon, rare earth metals, and glass. The chair you’re sitting in is a configuration of carbon and other atoms. In some narrow, reductive sense, there are no chairs, no phones, no brains. There’s just the quantum wave function of the universe evolving according to the Schrödinger equation.2 And yet we find it useful to talk about chairs, phones, human brains, memories, love, blog posts, and yes, AI systems, because these are real patterns that exist at a higher level of description.
This is emergence. Physicist/cosmologist Sean Carroll defines emergence3 as a relationship between theories at different levels: a micro-level theory (like particle physics) and a macro-level theory (like biology or psychology). The macro theory uses its own vocabulary and its own rules, but those rules are compatible with (and in principle derivable from) the lower-level description. You can describe the motion of the Earth around the Sun using just its mass, center-of-mass position, and velocity. You don’t need to know the position of every atom on the planet. The higher-level description works because it captures patterns that are implicit in the lower-level dynamics.
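To make the coarse-graining concrete, here is a toy numerical sketch (every particle count and scale in it is invented for illustration). Average over a million simulated “atoms” and the only numbers that Newtonian orbital mechanics needs fall right out of the micro description:

```python
import numpy as np

# Micro state: a million "atoms", each with its own mass, position,
# and velocity. (A real planet has on the order of 1e50 atoms; the
# counts and scales here are invented for illustration.)
rng = np.random.default_rng(0)
n = 1_000_000
masses = rng.uniform(1e-26, 1e-25, size=n)               # kg
positions = rng.normal(0.0, 6.4e6, size=(n, 3))          # m, planet-sized blob
velocities = 3.0e4 + rng.normal(0.0, 500.0, size=(n, 3)) # m/s, jitter around
                                                         # orbital speed

# Macro state: the only quantities orbital mechanics actually uses.
total_mass = masses.sum()
com_position = masses @ positions / total_mass   # mass-weighted average
com_velocity = masses @ velocities / total_mass

# Seven numbers now stand in for seven million: that's the macro theory.
print(total_mass, com_position, com_velocity)
```

The macro rules that act on those seven numbers are compatible with, and derivable from, the particle-level dynamics; you just never need the full particle list to use them.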
Phil Anderson, the Nobel Prize-winning condensed matter physicist, captured the core idea in a 1972 paper titled “More Is Different.” His point was that knowing the fundamental laws of physics doesn’t automatically give you an understanding of superconductivity, or fluid dynamics, or evolutionary biology. Each level of complexity introduces genuinely new concepts, new vocabularies, new explanatory frameworks. These are real patterns in the world, not just illusions or conveniences.
So when someone dismissively refers to a frontier LLM as “just next-token prediction,” they’re making the same thoughtless error as saying you are “just a collection of atoms obeying the laws of physics.” Sure, it’s a technically defensible statement, but it deliberately strips away every interesting feature of the system being described.
Yes, Claude Opus 4.6 “just” predicts the next token. Your human brain “just” propagates action potentials across synaptic gaps. What happens at the lowest level of description is boring compared to the patterns and capabilities that emerge from it at higher levels. And IMHO those patterns are interesting, useful, or important enough to deserve their own vocabulary.
None of this means we should be uncritical about AI. There are real limitations, real failure modes, and real risks worth discussing seriously. But “it’s just autocomplete” is not a serious critique. It’s a profoundly unscientific thought-terminating cliche gussied up as technical sophistication. It takes a legitimate fact about the mechanism and uses it to shut down any discussion of what emerges from that mechanism.
The most complex structure in the known universe is (arguably) the human brain. It’s just atoms. It’s also more than just atoms. Both things are true, and you can, in fact, hold both in your cerebral bag of atoms simultaneously. Doing so is the beginning of understanding emergence. We should extend the same courtesy to other complex systems, even the ones made of sand.
1. The em dash here and elsewhere in this newsletter reflects my stylistic preference and should not be interpreted as evidence of AI-assisted text generation.
2. Sean Carroll’s Something Deeply Hidden explains this at a level that a non-theoretical physicist can really latch onto.
3. I really like how Sean Carroll explains emergence, here on his podcast, and here in his “biggest ideas in the universe” playlist.
