Incomplete Nature Pt 3: Entropy and Absence
Why we should listen for the notes the universe doesn't play
This is part 3 of a series on Terrence Deacon’s Incomplete Nature. If you haven’t already, read part 1 and part 2 first.
A quick recap: in part 2, we introduced the terms “orthograde” and “contragrade”, which Deacon uses to refer to changes that happen spontaneously, on their own, and changes that push against that spontaneous tendency, respectively. We also reviewed equilibrium and the mechanisms that move every system toward it in every moment, and discussed the fact that these processes can occur at any scale and with any components, from atoms to buyers and sellers in a market. We left off pondering how constraints could ever become consistent and self-reproducing—as they must for ententionality to emerge—in a universe ultimately driven by random chaos.
Entropy in Negative Terms
The relationship between energy and shapes in physics is described by a concept called entropy. Introductions to entropy often gloss it as a measure of the disorder of a system. This framing carries the unfortunate implication that entropy is primarily about the arrangement of matter, which leaves the many cases of spontaneous order we’re going to encounter somewhat mystifying.
A better frame is that entropy is really about energy. Energy is, more or less by definition, the thing that causes orthograde change. If a lot of energy is gathered in one place, it will necessarily spread out. Entropy gives us a way to measure that spread. It describes the number of ways the energy in a system can be distributed to make the same overall shape.
If the system is in a shape that requires energy to be concentrated in one specific way, it has low entropy. In a low entropy state, the motion of the fast particles carries energy outward until it either bounces off a boundary or gets pulled back in by neighboring particles. Either interaction transfers energy to other particles, which will in turn spread it further, increasing entropy. The Second Law of Thermodynamics tells us that, in an isolated system, this is the result that will always ensue.
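For readers who want the textbook anchor (this is standard statistical mechanics notation, not Deacon’s own), the counting definition of entropy is Boltzmann’s formula, and the Second Law is the statement that the count never shrinks in an isolated system:

```latex
S = k_B \ln W, \qquad \Delta S \geq 0 \;\text{ for an isolated system}
```

Here W is the number of microscopic arrangements consistent with the same overall shape, and k_B is Boltzmann’s constant. More ways to make the same shape means higher entropy.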
Liquid water contains a lot of energy, so it’s in constant motion. At room temperature, water molecules form and break hydrogen bonds with their neighbors at a rate of something like 100 billion to 1 trillion times per second. In our glass of water, the particular arrangement of molecules relative to each other is constantly changing. Despite that constant change, they always form the same overall shape: the shape of the glass. As they move, the water molecules are randomly cycling through all of the possible ways they can make the shape of the glass.
Because the molecules are moving randomly, there’s a chance they could make a different shape. They could bulge up on one side of the glass just a little bit, say. In order to get there, though, the water molecules in that bulge would need more energy than the other molecules. They have to resist gravity and obtain some potential energy, and they have to overcome the resistance of the other water molecules pulling them down. It just isn’t very likely for the energy to come together in one place by random chance.
If it does happen, the potential energy now concentrated in the bulge corresponds to a stronger downward pull on that water, which causes the system to move back toward equilibrium. The observation that the system always moves toward equilibrium is therefore conceptually interchangeable with the Second Law of Thermodynamics, which tells us that the entropy of an isolated system always increases over time. Being at equilibrium maximizes entropy, and entropy always increases, so systems always move toward equilibrium.
There’s something interesting about this interchangeability. We started by treating equilibrium processes as a scale-neutral meta-process that applied just as much to trading in markets as it does to water molecules. It seems like a useful analogy, but is it any more than that? Deacon argues that it is. A key implication of his framework is that it is literally the Second Law of Thermodynamics that causes markets to find their equilibrium price. Every equilibrium process is the same process, regardless of scale or composition.
To make that universality clearer, Deacon applies his concept of a constraint to redefine entropy in negative terms. Recall that Deacon’s definition of a constraint is simply anything that prevents an interaction from occurring. Since interactions are the path to equilibrium, constraints are the obstacles that currently stand in the way of equilibrium. The intermolecular forces in an ice cube constrain the distribution of energy (keeping energy away from the water molecules in the center) until the cube melts.
Deacon restates the Second Law in these terms: within an isolated system, every constraint that can be dissipated will be dissipated. Higher temperature systems have higher entropy because they have the energy to overcome more constraints. The only constraints that persist will be those that require more energy to overcome than the system possesses. On a hot day, the ice cube will melt, but the water molecules will remain intact.
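One conventional way to put numbers on “the energy to overcome more constraints” (again, textbook physics rather than anything in Deacon’s text) is the Boltzmann factor: the chance that random thermal motion musters the energy E_b needed to dissipate a given constraint at temperature T scales as

```latex
p \propto e^{-E_b / (k_B T)}
```

At room temperature k_B T is roughly 0.025 eV. A hydrogen bond between water molecules costs on the order of 0.2 eV to break, close enough that thermal motion dissipates those constraints constantly; a covalent O–H bond costs roughly 5 eV, so the molecules themselves persist.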
The Notes You Don’t Play
Defining entropy in negative terms is a significant shift in frame, one that unlocks another core concept for Deacon: the causal role of absence.
Entropy describes the distribution of energy and matter. Distribution is just as much about where things aren’t as where they are. When a fast particle hits a slow particle, we can describe the collision in two ways. We can follow the energy to where it is now, like we did before. But we could also follow the trajectory the fast particle would have taken and say that the slow particle prevented that outcome from occurring. By interrupting the fast particle’s orthograde motion, the collision created an absence: all the future positions the fast particle will now never occupy.
That absence is necessary for the system to reach equilibrium; as long as individual particles carry large amounts of energy, that energy remains concentrated in an unlikely arrangement, and it needs to spread out to maximize entropy. Increasing entropy is just as comprehensible framed as the dispersal of new absences into areas of high concentration as it is framed as the dispersal of presences into areas of low concentration.
For a conserved quantity like energy, focusing on absence rather than presence is an arbitrary perceptual shift. The only way to cause an absence of energy is to cause a presence of energy somewhere else; it has to be somewhere at all times. The advantage of the absence-based (in Deacon’s coinage, absential) framing is that it still works when we’re talking about structures that aren’t conserved. Unlike energy, a molecule’s position can be constrained in two ways. There can be a physical barrier that causes it to bounce back into the system, as with energy. But there can also be a chemical reaction that destroys the molecule entirely by transforming it into something else. These two events create different presences, but they create the same absence, and the absence is what determines the overall distribution of the molecule.
At the risk of jumping ahead, we can already glimpse the payoff of Deacon’s reasoning here. As structures scale up in complexity, they contain more and more energy in less and less likely distributions, which makes it easier to generate corresponding absences. Like a tack popping a balloon (an example we’ll come back to), small, inert constraints can create a pattern of absences in far more complex structures simply by releasing the high concentrations of energy those structures contain. As things become more complex, presence and absence become increasingly asymmetrical.
It might seem like this asymmetry makes order a fragile thing, but it’s quite the opposite. The increasing entropic instability of more complex arrangements is precisely what lets them recapitulate the same equilibrium-seeking pattern of orthograde randomness pushing against absence-generating constraints at more complex and slower-moving scales of organization.
For instance, think about the species composition of an ecological community. There are no maple trees in west Texas, not because a physical barrier prevents their seeds from arriving there, but because environmental constraints prevent them from surviving when they do. The lack of sufficient moisture generates an absence, which prevents a certain combination of plants from occurring. Species have an orthograde tendency to expand outward, and only absence-generating constraints on their survival can explain the limited and predictable combinations of species we actually observe.
Species, in turn, are just consistent bundles of smaller components. The conditions that generate absences of maples in west Texas therefore generate absences of their leaves, their stems, their roots, and of the characteristic molecules and gene sequences they contain. Because there are absence-generating constraints that affect the species as a whole, every possible combination of those organs and molecules and genes with the ones that do occur in west Texas is prevented. We know that those combinations are otherwise possible, because we can water a maple tree and grow it in west Texas.
Because this single constraint precludes so many combinations of lower-level objects simultaneously, it dramatically reduces the number of arrangements that can be explored before equilibrium is reached, and thus the number of interactions that must occur to reach it. This asymmetry allows larger, more complex systems to move toward equilibrium even though they contain fewer objects that move much more slowly. Water remains at equilibrium because its particles interact 100 billion times per second and there are moles and moles of them, so any statistically likely behavior is essentially guaranteed to occur nigh-instantaneously. Markets, ecosystems, cultures, and genes involve far smaller numbers of far more complex objects that take far longer to interact, but those interactions are far more tightly constrained, so movement toward equilibrium remains the relevant causal logic.
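A toy calculation (my own illustration, with made-up numbers, not Deacon’s) makes the pruning concrete: if we describe a patch of habitat by which of n lower-level components are present, one constraint that excludes k of them together (everything a maple tree is made of, say) shrinks the space of combinations left to explore by a factor of 2^k.

```python
# Toy illustration (not from Deacon): how one high-level constraint
# collapses the space of lower-level combinations a system can explore.

def n_states(n_components: int) -> int:
    """Each component is either present or absent: 2**n combinations."""
    return 2 ** n_components

n = 40  # lower-level components: organs, molecules, gene sequences...
k = 12  # components that one constraint excludes all at once

unconstrained = n_states(n)    # every combination is on the table
constrained = n_states(n - k)  # the k excluded components are pinned absent

print(f"without the constraint: {unconstrained:,} combinations")
print(f"with the constraint:    {constrained:,} combinations")
print(f"share eliminated:       {1 - constrained / unconstrained:.4%}")
```

With these arbitrary numbers, a single constraint eliminates about 99.98% of the possible combinations at a stroke; that is the asymmetry doing the work in the paragraph above.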
Systems of more complex patterns differ from simple thermodynamic systems in another important way: they can move toward equilibria that aren’t practically reachable. Under the right conditions, they can stay in a state of constant motion forever, always moving toward equilibrium but never coming close to attaining it. This happens when the act of moving toward equilibrium changes the constraints that set that equilibrium. For instance, in the classic lynx-hare predator-prey cycle, the lynx population increases because there are lots of hares—equilibrium is higher than the population, so it moves toward it by growing. But more lynx kill more hares, so the equilibrium decreases, and keeps decreasing until it’s below the current lynx population, which means the lynx population starts to decrease. Populations in such a scenario never reach a stable value because the equilibrium necessarily changes faster than the populations can move toward it.
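The standard mathematical sketch of this cycle is the Lotka-Volterra model. Deacon doesn’t invoke it, and the parameters below are illustrative rather than fitted to real lynx and hare data, but it captures the moving-target dynamic just described: the populations orbit their joint equilibrium forever without settling onto it.

```python
# Minimal Lotka-Volterra predator-prey sketch. Parameters are illustrative,
# not fitted to real data. Each population's momentary equilibrium depends
# on the other's current size, so the target moves as fast as they chase it.

a, b = 1.0, 0.1    # hare birth rate; hare deaths per lynx encounter
c, d = 1.5, 0.075  # lynx death rate; lynx births per hare eaten

def deriv(h, l):
    dh = a * h - b * h * l  # hares grow, minus losses to lynx
    dl = d * h * l - c * l  # lynx grow with food supply, minus deaths
    return dh, dl

def rk4_step(h, l, dt):
    """One fourth-order Runge-Kutta step, to keep the orbit from drifting."""
    k1 = deriv(h, l)
    k2 = deriv(h + dt / 2 * k1[0], l + dt / 2 * k1[1])
    k3 = deriv(h + dt / 2 * k2[0], l + dt / 2 * k2[1])
    k4 = deriv(h + dt * k3[0], l + dt * k3[1])
    h += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    l += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return h, l

# The fixed point the system forever circles but never reaches:
print(f"equilibrium: hares = {c / d:.1f}, lynx = {a / b:.1f}")

h, l, dt = 10.0, 5.0, 0.01  # start away from equilibrium
for step in range(1, 4001):
    h, l = rk4_step(h, l, dt)
    if step % 800 == 0:
        print(f"t = {step * dt:5.1f}: hares = {h:6.2f}, lynx = {l:6.2f}")
```

The equilibrium exists on paper, but any trajectory that starts away from it keeps circling it: growing toward the equilibrium is exactly what displaces the equilibrium.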
This kind of non-equilibrium stability is the building block out of which Deacon constructs his model of life. In our next post, we’ll discuss these processes in more detail and see how they can paradoxically apply consistent constraints by increasing entropy.