Bobby Azarian is a cognitive neuroscientist, a science journalist and the author of the book “The Romance of Reality: How the Universe Organizes Itself to Create Life, Consciousness and Cosmic Complexity.”
Perhaps the most depressing scientific idea that has ever been put forth is the infamous “heat death hypothesis.” It is a theory about the future of the universe based on the second law of thermodynamics, which in its most well-known form states that entropy, a complicated and confusing term commonly understood to simply mean “disorder,” tends to increase over time in a closed system. Therefore, if we consider that the universe is itself a closed system, the law seems to suggest that the cosmos is becoming increasingly disorganized. It has also been described by many as “winding down.”
As such, the second law appears to hold a chilling prophecy for humanity in the very long term. Essentially, it would seem to imply that life is doomed — not just life on Earth, but life anywhere in the cosmos. Consciousness, creativity, love — all of these things are destined to disappear as the universe becomes increasingly disordered and dissolves into entropy. Life would merely be a transient statistical fluctuation, one that will fade away, along with all dreams of our existence having some kind of eternal meaning, purpose or permanence. The heat death prophecy foretells a future in which all pattern and organization have ceased to be. In this cosmological model, everything must come to an end. There is simply no possibility of continual existence.
Fortunately, the gloomiest theory of all time may just be a speculative assumption based on a misunderstanding of the second law of thermodynamics. For one thing, the law may not be applicable to the universe as a whole, because the types of systems on which it has been empirically tested have well-defined boundaries. The expanding universe does not. Secondly, depending on how one interprets the second law, the inevitable increase in entropy may not correspond to an increase in cosmic disorder.
In fact, some leading scientists are beginning to think that the cosmos is becoming increasingly complex and organized over time as a result of the laws of physics and the evolutionary dynamics that emerge from them. Seth Lloyd, Eric Chaisson and Freeman Dyson are among the well-known names who have questioned whether “disorder” is increasing in the cosmos. Outside of physics, complexity theorist Stuart Kauffman, neuroscientist Christof Koch and Google’s director of engineering Ray Kurzweil all believe that the universe is not destined to grow more disorganized forever, but more complex and rich with information. Many of them have a computational view of the universe, in which life plays a special role.
As Paul Davies, a prolific author and a highly respected theoretical physicist, wrote: “We now see how it is possible for the universe to increase both organization and entropy at the same time. The optimistic and pessimistic arrows of time can coexist: The universe can display creative unidirectional progress even in the face of the second law.” In other words, if we understand the second law better, we can see that it does not actually prohibit the continual growth of complexity and order in nature.
This is the cosmic narrative that the theoretical physicist and author Julian Barbour proposes in his new book “The Janus Point: A New Theory of Time,” which has received praise from some trusted names in the physics world, such as Martin Rees, Sean Carroll and Lee Smolin. Barbour believes that the second law — at least as it is popularly interpreted — does not apply to the universe as a whole, since it is always expanding due to the mysterious force known as dark energy. The old story of increasing cosmic disorder, Barbour concludes, may turn out to be the complete opposite of what is actually happening. Because the universe is not a bounded system, order can continue to increase indefinitely.
Barbour is not alone. David Deutsch, the father of quantum computation, has expressed a similar view in his bestselling mindbender “The Beginning of Infinity,” in which he argues that there are no fundamental limits to knowledge creation. This is a much stronger claim than Barbour’s, because it specifically suggests that life in the universe need not come to an end.
Life is a crucial part of the cosmic story because the growth of complexity and organization enters a new phase when biology emerges. Life is a special form of complexity: It has the ability to create more complexity and to maintain organization against the tendency toward disorder. In a universe expanding without limit, the ability of intelligent life to continually construct complex order may not be limited by the laws of thermodynamics in the way once imagined.
This story of continual complexification would seem to go against the second law, a rock-solid pillar of physics. Remember, though, that both the first and second laws of thermodynamics were conceived before we knew the universe was expanding. To understand if these laws are applicable to the universe as a whole — and not just systems inside the universe — we must briefly explore the history of thermodynamics and understand its relationship with the phenomenon we call life.
In the two fields of thermodynamics — classical and statistical — there are subtly different versions of the second law. The former emerged about half a century before the latter, and it was concerned with the flow of heat and energy. Statistical thermodynamics attempted to explain the findings of classical thermodynamics in terms of the behavior of ensembles of molecules and atoms, and it was more concerned with how configurations of particles evolve over time.
You could say that the original version of the second law, the classical version, was about the spreading out of energy, where the statistical version was more about ordered configurations of particles becoming more disordered. While the two versions are intimately related and in many instances become equivalent, they do not have the same cosmic implications.
The ideas that would become the second law can be traced back to the work of the French engineer Sadi Carnot in the early 1800s. Carnot wanted to understand how to make steam engines more efficient by analyzing how they used energy. He recognized that heat would spontaneously flow from hotter to colder systems, but never in reverse. We all experience this phenomenon on a daily basis, whenever a hot bath or cup of coffee inevitably cools to room temperature as heat is lost to the surrounding air. Carnot pointed out that this flow of heat creates a motive force, which can be harnessed to power machines. Through a cycle of heating and cooling steam inside a chamber (known as a cylinder) with a movable wall on one side (known as a piston), you can create a force of motion that can power an engine.
What Carnot astutely noticed about this process was that it couldn’t be made 100% efficient. This is the basis for the original second law of thermodynamics. The conversion of thermal energy into mechanical energy always involves the loss of some useful energy to the environment in the form of heat. Once this useful energy is dissipated, meaning it gets spread and lost to its surroundings, it can no longer be harnessed to do physical work. The lost energy still technically exists somewhere out there in the universe, but it can’t be extracted to do anything useful, like sustaining an engine or some other machine. Since life is a machine of sorts, this has implications for how long it can persist in the universe.
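Carnot's insight was later made quantitative, using the absolute temperature scale, as the familiar textbook bound: no heat engine can convert more than a fraction 1 − T_cold/T_hot of the heat it absorbs into work. A minimal sketch of that bound (the temperatures below are illustrative, not taken from the text):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of absorbed heat convertible to work.

    Temperatures are in kelvin; the rest of the heat is necessarily
    dumped into the cold reservoir, i.e. dissipated to the environment.
    """
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require 0 < t_cold_k < t_hot_k")
    return 1.0 - t_cold_k / t_hot_k

# A steam engine with a boiler at 473 K (200 C) exhausting at 293 K (20 C)
# can never do better than about 38% efficiency, even in principle.
eta = carnot_efficiency(473.0, 293.0)
print(f"Carnot limit: {eta:.1%}")
```

Real engines fall well short of even this ideal limit, since the bound assumes a perfectly reversible cycle.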
Because Carnot was an engineer, his insights were largely unknown or ignored by the physics community for decades, until two giants in the field — Lord Kelvin and Rudolf Clausius — explained their significance and relevance to the emerging science of thermodynamics.
The new field proposed two major laws that, when put together, seem to have cosmic implications. The first law says that energy is conserved. That means it cannot be created or destroyed — implying that the total amount is fixed — though it can be transformed from one form to another. The second essentially says that there is “free energy” — or energy available to do work — but as that energy is used for mechanical work, some of it inevitably gets dissipated as it is converted into heat, a form of energy that is no longer useful. Once energy is dispersed in this way, it becomes impossible to be used to do mechanical work, like creating a force that could power a system.
In 1852, Lord Kelvin wrote a paper with what is considered to be the first statement of the second law, which he described as a universal tendency toward the dissipation of mechanical energy. The term “entropy,” introduced by Clausius in 1865, was originally defined as a measure of the energy in a system that is no longer available for work. Entropy, then, referred to dissipated energy, not structural disorder.
Essentially, these discoveries suggested that a limited supply of free energy was always spreading out and dissipating, so there would come a time when no further mechanical work could be done, including the work required to sustain the biological machinery that we call “life.” One by one, the stars that supply the energy that powers biology would radiate away their usable energy, and life would cease to be.
This sad story isn’t just local; all the stars throughout the cosmos will eventually burn out, causing any biosphere, anywhere, to degrade. Even if some form of life could develop the technology to explore the cosmos, eventually all useful energy in the universe would be converted into heat, leaving no energetic fuel for advanced forms of sentience to consume.
At least, that was the assumption in the second half of the 19th century. This scenario became known as the “heat death” of the universe, and it seemed to be the nail in the coffin for any optimistic cosmology that promised, or even allowed, eternal life and consciousness. For example, one of the most popular cosmological models of the time was put forth by the evolutionary theorist Herbert Spencer, a contemporary of Charles Darwin who was actually more famous than Darwin in their day. Spencer believed that the flow of energy through the universe was organizing it. He argued that biological evolution was just part of a larger process of cosmic evolution, and that life and human civilization were the current products of a process of continual cosmic complexification, which would ultimately lead to a state of maximal complexity, integration and balance among all things.
When the prominent Irish physicist John Tyndall told Spencer about the heat death hypothesis in a letter in 1858, Spencer wrote him back to say it left him “staggered”: “Indeed, not seeing my way out of the conclusion, I remember being out of spirits for some days afterwards. I still feel unsettled about the matter.”
Things got even gloomier when the Austrian physicist Ludwig Boltzmann put forward a new statistical interpretation of the second law in the latter half of the 19th century. That was when the idea that the universe is growing more disordered came into the picture. Boltzmann took the classical version of the second law — that useful energy inevitably dissipates — and tried to give it a statistical explanation on the level of molecules colliding and spreading out. He used one of the simplest models possible: a gas confined to a box.
How does the evolution of a gas in a box explain the dissipation of useful energy? First, it should be understood that a gas is a collection of molecules moving around rapidly and chaotically, particles that Boltzmann assumed were like little billiard balls following fixed trajectories. Since the great Scottish physicist James Clerk Maxwell had recently shown that the kinetic energy of a molecule is determined by how fast it is moving, Boltzmann assumed the dissipation of usable energy described by Lord Kelvin was caused by pockets of excited molecular motion spreading out in space due to random collisions between neighboring molecules.
For example, if a pocket of highly excited gas molecules starts out in some orderly configuration — let’s say the molecules are bunched together in one corner of the box — over time, the ensemble of particles will evolve to become increasingly spread out, or “disordered.” When an ordered pocket of excited molecular motion exists, there is an energy gradient in the system and the potential to do some work, but as these molecules interact with their neighbors and that excited motion gets dispersed, the gradient disappears. This dissipation of molecular order and free energy continues until the gas approaches a state of maximum entropy and disorder known as thermodynamic equilibrium. Paradoxically, this state of “total disorder” looks like a uniform distribution of gas molecules.
The gas molecules spread out in this way due to a simple statistical reason: There are many more ways for the gas molecules to be arranged in a disordered mess than in some orderly configuration. In other words, an orderly arrangement of particles moving around randomly will naturally become more disorganized. It is just like in pool: The balls start off in an ordered formation but spread out and mix as collisions occur.
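The counting argument can be made concrete with a toy model (my own illustration, not Boltzmann's actual calculation): imagine each of N molecules sitting independently in the left or right half of the box, and count how many microscopic arrangements realize each coarse configuration.

```python
from math import comb

# Toy model: N molecules, each in the left or right half of a box.
# A coarse "configuration" is just how many molecules sit on the left.
N = 100

# The number of microstates with k molecules on the left is C(N, k).
all_left = comb(N, N)        # exactly 1 way: the perfectly "ordered" case
half_half = comb(N, N // 2)  # astronomically many ways: the "disordered" case

print(f"microstates with all {N} molecules in one half: {all_left}")
print(f"microstates with a 50/50 split: {float(half_half):.2e}")
```

With only 100 molecules, the even split can be realized in roughly 10^29 more ways than the all-in-one-corner arrangement, which is why a randomly evolving gas overwhelmingly ends up spread out.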
Boltzmann, like Clausius and Kelvin before him, tried to apply his version of the second law to the entire universe — which, he assumed, must be a giant closed system of atoms and molecules bouncing around chaotically, not all that different from his gas in a box. According to his version of the second law of thermodynamics, the entire universe — as a system composed of atoms moving according to physical laws — must eventually tend toward a more disordered and random configuration, just like his box of gas molecules. To explain why there was so much complexity and order in the universe around him, he suggested that the universe must have started out in an extremely ordered state that had since evolved into what we see today, or that the ordered state of affairs we see in our neck of the cosmic woods was the result of a temporary statistical fluctuation away from the general trend toward disorder.
Of course, there were many problems with comparing Boltzmann’s gas-in-a-box model to the universe. The order-to-disorder transition only occurs when the particles in the system do not become statistically correlated with each other over time. Boltzmann’s H-theorem, on which the idea of a natural tendency toward disorder is based, assumes “molecular chaos.” But molecular and chemical forces often cause atoms and molecules to clump together into larger, more complex structures — meaning a gas evolving in a box is not an accurate representation of all the dynamics in nature.
Boltzmann’s model also ignored the influence of gravity, which is often described as an anti-entropic force due to its clumping effects on matter. Gravity’s effects on small objects like gas molecules are so tiny as to be negligible for all practical purposes, meaning you can leave the force out of the model and still make accurate predictions about the state of the system. But at the scale of the universe, the effects of gravity become extremely important to the evolving structure of the system. Gravity is one factor driving the growth of order in the cosmos, and a good example of why the evolution of the universe looks very different from a gas spreading out in a box.
Of course, the attractive force of gravity doesn’t explain the emergence of life, which has been defying Boltzmann’s tendency toward disorder for about four billion years. Not only does life represent the formation of complexity, it constructs more of it. What explains this paradox? How does the biosphere grow more complex and organized if there’s a tendency for organized systems to fall apart? If cosmic complexity is to grow continuously, the process would then seem to curiously depend on life, the only form of complexity that can create more organization and actively sustain itself.
The quantum physicist Erwin Schrödinger explained this paradox in his 1944 book “What Is Life?” What Schrödinger noticed was that instead of drifting toward thermodynamic equilibrium — which for life means a state of death and decay — biological organisms maintain their ordered living state by consuming free energy from the environment (which he called “negative entropy”). Boltzmann’s law of increasing disorder only applies to closed systems, and life on Earth is an open system. It is constantly receiving usable energy from the sun, which drives it away from thermodynamic equilibrium.
Of course, without a steady supply of incoming energy, equilibrium ensues and life perishes. But by feasting on the free energy in the environment, ordered systems can pay the physical price of staying organized and functional, just like burning more coal will allow a steam engine to continue to function. The cost is the dissipation of free energy and the production of thermal entropy, in the form of heat, which is constantly being released into the environment.
Therefore, the continual growth of complexity in the form of biological and technological organization — in other words, the biosphere and the layer of industry and technology that sits on top of it — does not violate the classical version of the second law of thermodynamics. Because the biosphere is an open system that is continually getting energy from the sun, it can continuously build and maintain order. Local reductions in configurational entropy (disorder) are paid for by the simultaneous increase in thermal entropy (heat) caused by life’s constant use of free energy. As long as free energy continues to be used and dispersed, the total amount of entropy in the universe increases, and the classical version of the second law remains intact.
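This entropy bookkeeping can be sketched numerically (the figures below are illustrative assumptions, not measured values): a local entropy decrease is allowed so long as the heat exported to the surroundings raises entropy there by at least as much, using Clausius's definition that heat Q delivered at temperature T adds Q/T of entropy.

```python
# Illustrative bookkeeping for an open system that builds local order.
delta_s_sys = -50.0  # J/K: configurational entropy the organism removes locally
q = 30_000.0         # J: heat it dumps into the environment in the process
t_env = 300.0        # K: temperature of the surroundings

# Clausius: heat q delivered to a reservoir at t_env adds q / t_env of entropy.
delta_s_env = q / t_env
delta_s_total = delta_s_sys + delta_s_env

print(f"environment entropy gain: {delta_s_env:.0f} J/K")
print(f"total entropy change: {delta_s_total:.0f} J/K")
assert delta_s_total >= 0  # the classical second law is satisfied overall
```

The local decrease of 50 J/K is more than paid for by the 100 J/K gained by the environment, so total entropy still rises even as local order grows.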
However, it is important to note that the production of heat is not the same as the creation of structural disorder. Energy gets more dispersed as the universe organizes itself, and that is all the second law requires in this context. One could say that energetic disorder increases as structural order grows.
What this means is that the universe can grow increasingly organized through the spread of intelligent life, as long as it can find the free energy it needs to build and maintain the cosmic organization it constructs. Luckily, the universe offers a vast ocean of exploitable energy to beings that are intelligent enough to know how to extract it. In theory, a hyperintelligent civilization could spread through the cosmos, transforming all the matter in its midst into exotic forms of biological and computational machinery. This scenario might be hard to visualize, but it would not be very different from how life went from existing at just a single point on the Earth, not even visible with the naked eye, to covering the entire planet.
But how long could this go on? The great science fiction writer Isaac Asimov posed it as “The Last Question” in a critically acclaimed short story about the fate of life in the universe. The story questions the prevailing view of the second law’s applicability to the entire universe, an assumption made by a series of characters in the story: “However it may be husbanded, however stretched out, the energy once expended is gone and cannot be restored. Entropy must increase to the maximum.” Asimov’s skepticism may have been one of his most prescient insights. In his 1964 biographical sketch of Clausius, Asimov called the heat death hypothesis the “scientific analog of the Last Judgement” and noted that “its validity is less certain now than it was a century ago. Though the laws of thermodynamics stand as firmly as ever, cosmologists are far less certain that the laws, as deduced in this small segment of the universe, necessarily apply to the universe as a whole and there is a certain willingness to suspend judgment on the matter of the heat-death.”
In the 1960s, the Harvard cosmologist David Layzer pointed out that although the entropy of the universe will continue to increase in accord with the second law of thermodynamics — that is, an expanding intelligence will always be converting more free energy into thermal entropy — the maximum possible entropy of the expanding universe will presumably increase at a faster rate than the actual entropy increase, allowing for the continual growth of order and complexity. He called this an “entropy gap” — the difference between the universe’s actual entropy and its maximum possible entropy. As long as that gap exists, the universe will not be in thermodynamic equilibrium, and that means there will be energy gradients that life can extract work from.
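Layzer's argument can be caricatured with a toy model (the growth curves below are assumptions chosen purely to illustrate the idea, not cosmological results): if the maximum possible entropy outpaces the actual entropy, the gap between them widens even though both are increasing.

```python
# Toy "entropy gap": both curves rise, but the maximum rises faster.
# The functional forms here are arbitrary illustrative assumptions.

def s_actual(t: float) -> float:
    """Actual entropy of the universe at time t (assumed linear growth)."""
    return 10.0 * t

def s_max(t: float) -> float:
    """Maximum possible entropy at time t (assumed faster, quadratic growth)."""
    return t ** 2

for t in (10.0, 100.0, 1000.0):
    gap = s_max(t) - s_actual(t)
    print(f"t = {t:6.0f}  entropy gap = {gap:.0f}")
```

The printed gap grows without bound, which is the point: so long as actual entropy stays below the maximum, the system is out of equilibrium and work-yielding gradients remain.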
Now we know not only that the universe is expanding, which Edwin Hubble confirmed in 1929, but that the expansion is accelerating due to the mysterious force known as “dark energy,” whose presence was discovered just before the turn of the millennium. These developments give us reason to believe that the entropy gap will persist into the future, such that the universe may never come to the state of equilibrium predicted by the heat death hypothesis.
In his 2016 book “Humanity in a Creative Universe,” the complexity theorist Stuart Kauffman explained the significance of this: “[W]e do not have to worry about enough free energy. As the universe becomes larger, its maximum entropy increases faster than the loss of free energy by the second law, so there is always more than enough free energy to do work.”
But where does this seemingly unlimited free energy come from, if the first law of thermodynamics suggests that nature has a fixed and finite amount?
Well, it turns out that the first law of thermodynamics may also not apply to the universe as a whole, as was long assumed, even though conservation of energy applies to systems within the universe. Challenges to our traditional notion of the first law are not uncommon in modern physics. For example, cosmic inflation theory — the leading cosmological model for how the universe became filled with all its energy and matter — proposes that during the early period of expansion, minuscule fractions of a second after the Big Bang, new matter and energy were being continuously created from nothing. In fact, the theory of cosmic inflation suggests more and more universes are being created, so in the totality of reality envisioned by this model, matter creation never ends.
The only way cosmic inflation theory can coexist with the first law is if we divide all the energy in the universe into two opposing categories: positive and negative. The so-called “positive energy” associated with new matter is balanced out by the “negative energy” of the gravitational field associated with that matter. According to this model, the sum total of energy of the universe is zero. It may seem like a desperate attempt by cosmologists to salvage the first law, but it works out mathematically. For this reason, Alan Guth calls the universe “the ultimate free lunch.” In principle, new energy can be continuously created, as long as the positive and negative contributions remain balanced. While the implications of this concept are foggy, it is clear that applying the first and second laws of thermodynamics to the cosmos as a whole can get very tricky.
In his 2011 book “The Beginning of Infinity,” Deutsch speculates about whether life could harness dark energy directly to power computation forever: “Depending on what dark energy turns out to be, it may well be possible to harness it in the distant future, to provide energy for knowledge-creation to continue forever.”
Some physicists have since argued that in theory, it is possible that dark energy could be used as a power source. A conference paper published by the American Astronomical Society proposes that “simple machines could, in theory, extract local power from the gravitationally repulsive cosmological constant,” even if “the amount of energy that could be liberated in a local setting is many orders of magnitude too small to be useful or even detectable.”
Whatever dark energy turns out to be, the cosmic expansion it is driving serves to keep the universe out of thermodynamic equilibrium, and a system not in equilibrium is a system that still has some energy and the capacity to do work.
At his blog Preposterous Universe, Sean Carroll writes: “If there exists a maximal entropy (thermal equilibrium) state, and the universe is eternal, it’s hard to see why we aren’t in such an equilibrium state — and that would be static, not constantly evolving. This is why I personally believe that there is no such equilibrium state, and that the universe evolves because it can always evolve.”
If there’s no inevitable equilibrium state, then there seems to be no reason to assume that an evolving intelligence must necessarily come to an end. In his 2006 book “Programming the Universe,” MIT’s Seth Lloyd speculates along these lines: “By scavenging farther and farther afield, our descendants will collect more and more matter and extract its energy. Some fraction of this energy will inevitably be wasted or lost in transmission. Some cosmological models allow the continued collection of energy ad infinitum, but others do not.”
While some cosmologists believe dark energy and the accelerating expansion will ultimately dilute the matter and energy in the universe to such a degree that life must come to an end, a popular new theory known as quintessence suggests that the accelerating expansion may begin to slow, creating even more uncertainty around any predictions for life’s future. Perhaps the dynamics of the universe’s expansion are what they need to be to allow for the continual growth of cosmic complexity? In a 2020 Nature article about quintessence, Carroll is quoted saying, “We’re back to a situation where we have zero idea about how the universe is going to end.”
If Isaac Asimov were alive today, I believe he would be delighted to know that his “last question” is still open. The increase in entropy in the universe is not equivalent to increasing cosmic disorganization. Complexity and entropy can grow together, and perhaps even without limit. I like to believe that this means that the universe is on our side.