The Bayesian Second Law of Thermodynamics

Entropy increases. Closed systems become increasingly disordered over time. So says the Second Law of Thermodynamics, one of my favorite notions in all of physics.

At least, entropy usually increases. If we define entropy by first defining “macrostates” — collections of individual states of the system that are macroscopically indistinguishable from each other — and then taking the logarithm of the number of microstates per macrostate, as portrayed in this blog’s header image, then we don’t expect entropy to always increase. According to Boltzmann, the increase of entropy is just really, really probable, since higher-entropy macrostates are much, much bigger than lower-entropy ones. But if we wait long enough — really long, much longer than the age of the universe — a macroscopic system will spontaneously fluctuate into a lower-entropy state. Cream and coffee will unmix, eggs will unbreak, maybe whole universes will come into being. But because the timescales are so long, this is just a matter of intellectual curiosity, not experimental science.

That’s what I was taught, anyway. But since I left grad school, physicists (and chemists, and biologists) have become increasingly interested in ultra-tiny systems, with only a few moving parts. Nanomachines, or the molecular components inside living cells. In systems like that, the occasional downward fluctuation in entropy is not only possible, it’s going to happen relatively frequently — with crucial consequences for how the real world works.

Accordingly, the last fifteen years or so has seen something of a revolution in non-equilibrium statistical mechanics — the study of statistical systems far from their happy resting states. Two of the most important results are the Crooks Fluctuation Theorem (by Gavin Crooks), which relates the probability of a process forward in time to the probability of its time-reverse, and the Jarzynski Equality (by Christopher Jarzynski), which relates the change in free energy between two states to the average amount of work done on a journey between them. (Professional statistical mechanics are so used to dealing with inequalities that when they finally do have an honest equation, they call it an “equality.”) There is a sense in which these relations underlie the good old Second Law; the Jarzynski equality can be derived from the Crooks Fluctuation Theorem, and the Second Law can be derived from the Jarzynski Equality. (Though the three relations were discovered in reverse chronological order from how they are used to derive each other.)
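
To get a concrete feel for the Jarzynski equality, here is a minimal numerical sketch (my own toy setup, not anything from those papers): a particle in thermal equilibrium in a harmonic potential whose stiffness is suddenly quenched from k0 to k1. The work done in each realization is W = ½(k1 − k0)x², the free-energy change is ΔF = (1/2β) log(k1/k0), and the average of e^(−βW) should reproduce e^(−βΔF) up to sampling noise, even though the average work itself exceeds ΔF.

```python
# Minimal Monte Carlo check of the Jarzynski equality <exp(-beta*W)> = exp(-beta*dF)
# for an instantaneous quench of a harmonic potential k0 -> k1 (a toy example only).
import numpy as np

rng = np.random.default_rng(0)
beta, k0, k1 = 1.0, 1.0, 4.0
n_samples = 1_000_000

# Sample initial positions from the thermal (Gaussian) distribution for stiffness k0.
x = rng.normal(loc=0.0, scale=np.sqrt(1.0 / (beta * k0)), size=n_samples)

# Work done on the system by the sudden quench: W = 0.5*(k1 - k0)*x^2.
work = 0.5 * (k1 - k0) * x**2

lhs = np.mean(np.exp(-beta * work))      # <exp(-beta W)>, estimated from samples
dF = 0.5 * np.log(k1 / k0) / beta        # exact free-energy difference for the quench
rhs = np.exp(-beta * dF)                 # exp(-beta dF) = sqrt(k0/k1)

print(f"<exp(-beta W)> = {lhs:.4f}, exp(-beta dF) = {rhs:.4f}")
# The ordinary Second Law, <W> >= dF, follows by Jensen's inequality:
print(f"<W> = {np.mean(work):.4f} >= dF = {dF:.4f}")
```

The instantaneous quench is chosen purely because both sides can be computed exactly, so the agreement is not an accident of the sampling.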

Still, there is a mystery lurking in how we think about entropy and the Second Law — a puzzle that, like many such puzzles, I never really thought about until we came up with a solution. Boltzmann’s definition of entropy (logarithm of number of microstates in a macrostate) is very conceptually clear, and good enough to be engraved on his tombstone. But it’s not the only definition of entropy, and it’s not even the one that people use most often.

Rather than referring to macrostates, we can think of entropy as characterizing something more subjective: our knowledge of the state of the system. That is, we might not know the exact position x and momentum p of every atom that makes up a fluid, but we might have some probability distribution ρ(x,p) that tells us the likelihood the system is in any particular state (to the best of our knowledge). Then the entropy associated with that distribution is given by a different, though equally famous, formula:

S = - \int \rho \log \rho.

That is, we take the probability distribution ρ, multiply it by its own logarithm, and integrate the result over all the possible states of the system, to get (minus) the entropy. A formula like this was introduced by Boltzmann himself, but these days is often associated with Josiah Willard Gibbs, unless you are into information theory, where it’s credited to Claude Shannon. Don’t worry if the symbols are totally opaque; the point is that low entropy means we know a lot about the specific state a system is in, and high entropy means we don’t know much at all.
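
If the integral looks forbidding, a discrete version of the same formula — a sum over a handful of states, with made-up probabilities — shows the idea in a few lines of Python:

```python
# Gibbs/Shannon entropy of a discrete distribution: S = -sum_i p_i log p_i.
# A sharply peaked distribution has low entropy; a spread-out one has high entropy.
import numpy as np

def entropy(p):
    """Shannon/Gibbs entropy in nats, ignoring zero-probability states."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

peaked = [0.97, 0.01, 0.01, 0.01]   # we're nearly certain of the state: low entropy
spread = [0.25, 0.25, 0.25, 0.25]   # we know almost nothing: maximal entropy, log(4)

print(entropy(peaked))   # ~0.17 nats
print(entropy(spread))   # ~1.39 nats = log(4)
```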

In appropriate circumstances, the Boltzmann and Gibbs formulations of entropy and the Second Law are closely related to each other. But there’s a crucial difference: in a perfectly isolated system, the Boltzmann entropy tends to increase, but the Gibbs entropy stays exactly constant. In an open system — allowed to interact with the environment — the Gibbs entropy will go up, but it will only go up. It will never fluctuate down. (Entropy can decrease through heat loss, if you put your system in a refrigerator or something, but you know what I mean.) The Gibbs entropy is about our knowledge of the system, and as the system is randomly buffeted by its environment we know less and less about its specific state. So what, from the Gibbs point of view, can we possibly mean by “entropy rarely, but occasionally, will fluctuate downward”?
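
Here is a toy illustration of that one-way behavior, under the simplifying assumption of no net heat flow: I model the environmental buffeting by a doubly stochastic transition matrix of my own invention, for which the Gibbs/Shannon entropy of the distribution provably never decreases from one step to the next.

```python
# Toy model of an open system: a 3-state probability distribution evolving under a
# doubly stochastic transition matrix (random buffeting by the environment).
# For such dynamics the Gibbs/Shannon entropy is non-decreasing at every step.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Doubly stochastic matrix: rows and columns each sum to 1.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

p = np.array([0.95, 0.04, 0.01])   # start with fairly precise knowledge of the state
for step in range(10):
    print(f"step {step}: S = {entropy(p):.4f}")
    p = p @ T                      # knowledge degrades; entropy creeps up toward log(3) ~ 1.0986
```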

I won’t hold you in suspense. Since the Gibbs/Shannon entropy is a feature of our knowledge of the system, the way it can fluctuate downward is for us to look at the system and notice that it is in a relatively unlikely state — thereby gaining knowledge.

But this operation of “looking at the system” doesn’t have a ready implementation in how we usually formulate statistical mechanics. Until now! My collaborators Tony Bartolotta, Stefan Leichenauer, Jason Pollack, and I have written a paper formulating statistical mechanics with explicit knowledge updating via measurement outcomes. (Some extra figures, animations, and codes are available at this web page.)

The Bayesian Second Law of Thermodynamics
Anthony Bartolotta, Sean M. Carroll, Stefan Leichenauer, and Jason Pollack

We derive a generalization of the Second Law of Thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter’s knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically-evolving system degrades over time. The Bayesian Second Law can be written as ΔH(ρ_m, ρ) + ⟨Q⟩_{F|m} ≥ 0, where ΔH(ρ_m, ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_m, and ⟨Q⟩_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the Second Law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of the Jarzynski equality. We demonstrate the formalism using simple analytical and numerical examples.

The crucial word “Bayesian” here refers to Bayes’s Theorem, a central result in probability theory. Bayes’s theorem tells us how to update the probability we assign to any given idea, after we’ve received relevant new information. In the case of statistical mechanics, we start with some probability distribution for the system, then let it evolve (by being influenced by the outside world, or simply by interacting with a heat bath). Then we make some measurement — but a realistic measurement, which tells us something about the system but not everything. So we can use Bayes’s Theorem to update our knowledge and get a new probability distribution.
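
A minimal sketch of that update step, with illustrative numbers of my own rather than anything from the paper: the posterior over a few coarse-grained states is just the prior reweighted by the likelihood of the observed measurement outcome.

```python
# Bayesian update of a probability distribution after a noisy, partial measurement:
# posterior(state) ∝ likelihood(outcome | state) * prior(state)
import numpy as np

prior = np.array([0.5, 0.3, 0.2])          # our distribution before the measurement
likelihood = np.array([0.1, 0.4, 0.9])     # P(observed outcome | each state)

posterior = likelihood * prior
posterior /= posterior.sum()               # normalize: Bayes's Theorem

print(posterior)   # knowledge has shifted toward the states that fit the outcome
```

States that make the observed outcome unlikely get suppressed; that is the sense in which looking at the system can push our distribution toward a narrower, lower-entropy one.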

So far, all perfectly standard. We go a bit farther by also updating the initial distribution that we started with — our knowledge of the measurement outcome influences what we think we know about the system at the beginning of the experiment. Then we derive the Bayesian Second Law of Thermodynamics, which relates the original (un-updated) distribution at initial and final times to the updated distribution at initial and final times.
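
Continuing the toy sketch (again with made-up numbers and a made-up transition matrix T, not the paper's setup): the measurement outcome m at the final time updates the initial-time distribution as well, via P(x₀|m) ∝ ρ₀(x₀) Σ_{x₁} T(x₀ → x₁) P(m|x₁).

```python
# Updating both the final-time and the initial-time distributions after a
# measurement made at the end of the experiment (toy numbers only).
import numpy as np

rho0 = np.array([0.6, 0.3, 0.1])          # initial (un-updated) distribution
T = np.array([[0.7, 0.2, 0.1],            # T[i, j] = P(end in state j | start in state i)
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
likelihood = np.array([0.05, 0.3, 0.9])   # P(measurement outcome m | final state j)

rho1 = rho0 @ T                           # un-updated final-time distribution

# Update the final-time distribution with Bayes's Theorem.
rho1_m = likelihood * rho1
rho1_m /= rho1_m.sum()

# Update the initial-time distribution:
# P(x0 | m) ∝ rho0(x0) * sum_j T[x0, j] * P(m | j).
rho0_m = rho0 * (T @ likelihood)
rho0_m /= rho0_m.sum()

print("updated final distribution:  ", np.round(rho1_m, 3))
print("updated initial distribution:", np.round(rho0_m, 3))
```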

That relationship makes use of the cross entropy between two distributions, which you actually don’t see that often in information theory. Think of how much you would expect to learn by being told the specific state of a system, when all you originally knew was some probability distribution. If that distribution is sharply peaked around some value, you don’t expect to learn very much — you basically already know what state the system is in. But if it’s spread out, you expect to learn a bit more. Indeed, we can think of the Gibbs/Shannon entropy S(ρ) as “the average amount we expect to learn by being told the exact state of the system, given that it is described by a probability distribution ρ.”

By contrast, the cross-entropy H(ρ, ω) is a function of two distributions: the “assumed” distribution ω, and the “true” distribution ρ. Now we’re imagining that there are two sources of uncertainty: one because the actual distribution has a nonzero entropy, and another because we’re not even using the right distribution! The cross entropy between those two distributions is “the average amount we expect to learn by being told the exact state of the system, given that we think it is described by a probability distribution ω but it is actually described by a probability distribution ρ.” And the Bayesian Second Law (BSL) tells us that this lack of knowledge — the amount we would learn on average by being told the exact state of the system, given that we were using the un-updated distribution — is always larger at the end of the experiment than at the beginning (up to corrections because the system may be emitting heat). So the BSL gives us a nice information-theoretic way of incorporating the act of “looking at the system” into the formalism of statistical mechanics.
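
In the discrete case the cross entropy is H(ρ, ω) = −Σ ρ log ω, and it is never smaller than the ordinary entropy S(ρ); the gap between them is the relative entropy, also known as the Kullback–Leibler divergence. A quick check with made-up distributions:

```python
# Cross entropy H(rho, omega) = -sum_i rho_i * log(omega_i): the average surprise
# when the true distribution is rho but we assign probabilities using omega.
import numpy as np

def entropy(rho):
    rho = rho[rho > 0]
    return -np.sum(rho * np.log(rho))

def cross_entropy(rho, omega):
    mask = rho > 0
    return -np.sum(rho[mask] * np.log(omega[mask]))

rho   = np.array([0.7, 0.2, 0.1])   # the "true" distribution
omega = np.array([0.2, 0.3, 0.5])   # the distribution we mistakenly assume

S = entropy(rho)
H = cross_entropy(rho, omega)
print(f"S(rho) = {S:.4f}, H(rho, omega) = {H:.4f}")
# H >= S always (Gibbs' inequality); the difference is D(rho || omega).
print(f"D(rho||omega) = {H - S:.4f}")
```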

I’m very happy with how this paper turned out, and as usual my hard-working collaborators deserve most of the credit. Of course, none of us actually does statistical mechanics for a living — we’re all particle/field theorists who have wandered off the reservation. What inspired our wandering was actually this article by Natalie Wolchover in Quanta magazine, about work by Jeremy England at MIT. I had read the Quanta article, and Stefan had seen a discussion of it on Reddit, so we got to talking about it at lunch. We thought there was more we could do along these lines, and here we are.

It will be interesting to see what we can do with the BSL, now that we have it. As mentioned, occasional downward fluctuations in entropy happen all the time in small systems, and are especially important in biophysics, perhaps even for the origin of life. While we have phrased the BSL in terms of a measurement carried out by an observer, it’s certainly not necessary to have an actual person there doing the observing. All of our equations hold perfectly well if we simply ask “what happens, given that the system ends up in a certain kind of probability distribution?” That final conditioning might be “a bacterium has replicated,” or “an RNA molecule has assembled itself.” It’s an exciting connection between fundamental principles of physics and the messy reality of our fluctuating world.

49 thoughts on “The Bayesian Second Law of Thermodynamics”

  1. Most sci-tech blogs I read tend to satisfy my immediate curiosity, and I quickly move on to the next post or blog.

    Sean’s posts lock down my attention, inspire me to dig through the references and pedagogical texts to understand to at least the hand-waving level, and, when there’s some overlap with my own skills, beyond.

    It seems more like self-imposed Summer School than a blog.

    Thanks!

  2. Dear Professor Carroll,

    A colleague just sent me your 11 August post, and I noticed that your opening line,

    “Entropy increases. Closed systems become increasingly disordered over time. So says the Second Law of Thermodynamics”

    is incorrect, in two ways:

    1. The second law says no such thing. The second law is reproduced below, after my signature. It says nothing about “closed systems, entropy, disorder, and increasingly”. It is simply a summary of the commonly observed natural tendency (the “phenomenon”) of one-way flow, or irreversibility, in ANY system, closed or open, steady or unsteady, and without any specified configuration (such as order or disorder).

    2. Entropy does not increase (in closed systems) over time. It does increase in the highly restricted class called adiabatic closed systems (zero heat transfer), and in the even more restricted class called isolated systems (closed, zero heat transfer, zero work transfer). Clausius, when he wrote about entropy increase (1867), referred to an isolated system (the universe), not to a closed system.

    A system, in general, can exhibit decreasing, increasing, or constant entropy, depending on the flows that cross its boundary. For example, a closed system that is being cooled by contact with its colder environment, and without work transfer, (for example, a hot brick), undergoes a thermodynamic process in which its entropy decreases.

    The use of the term ‘disorder’, which is from late-1800s statistical thermodynamics, happens to be timely today because order (organization, evolution, design change, bio and non bio) is known and obvious to everyone: it is macroscopic, palpable and not requiring a physics education.

    This, the evolution of organization, is a self standing phenomenon and law (the constructal law), entirely distinct from the phenomenon of irreversibility (second law). It is described briefly in these articles and their references:

    http://www.nature.com/srep/2014/140210/srep04017/full/srep04017.html (“Maxwell’s Demons Everywhere: Evolving Design as the Arrow of Time”)

    http://www.nature.com/srep/2012/120821/srep00594/full/srep00594.html (“Why the bigger live longer and travel farther: animals, vehicles, rivers and the winds”)

    Adrian Bejan
    Duke University
    http://www.ae-info.org/ae/User/Bejan_Adrian

    The Second Law:

    Clausius: No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature.

    Kelvin: Spontaneously, heat cannot flow from cold regions to hot regions without external work being performed on the system.

  3. My argument against the claim that entropy makes the origin of life inevitable is that it would mean that we live in a universe that is so finely tuned that we don’t find obvious evidence of life everywhere (other than on our own world). There may be other life in our universe, even in our own solar system in habitats that we haven’t yet sampled (for example in subsurface oceans or reservoirs). But why are conditions so finely tuned that life isn’t omnipresent, or at least not so prolific that we could easily discover it? Why aren’t the conditions just slightly more favorable, so that everywhere we find organics we also find life? Why aren’t conditions favorable enough for us to be able to create life in petri dishes?

  4. Wow, as a non-physicist, I really enjoyed listening in on this thread (thank you) as the experts hash it out. The possible link between occasional “downward slips” in entropy and the origin of life is something I will be pondering for some time. I think, too, most of us non-experts think of the 2nd law of thermodynamics in terms of what Adrian Bejan has to say but the body of your article seems to hint there is far more to consider in terms of entropy. Am I sensing links to quantum uncertainty at least in very tiny very simple systems? Brad Jackson I think your grand-daughter “experiment” is adorable by the way!

  5. The inclusion of Bayesian updating, to me, highlights the importance of a choice of prior for far-from-equilibrium systems or tiny complex systems like cells. This suggests that down fluctuation of entropy corresponds to the notion that our prior distributions underestimate the “unlikely” events. It seems to me that if you began an experiment with an updated Bayesian probability assignment for the initial state of a molecular motor or cell (based on previous experiments of similar preparation), down fluctuations would not occur or at least with less probability. Perhaps I’m not understanding this result fully as I have not spent the time with it that I should, but it seems like in this definition, a down fluctuation corresponds to being ignorant of the prepared (initial) state or of the deterministic evolution law for the microstates of a particular system.

    If you have a really good physical model for these systems, your probability assignments would correspond more accurately to observed frequencies, so a Brownian ratchet wouldn’t look “unlikely” from this kind of a paradigm if these “unlikely” events happen with great frequency in experiments. Seems to me this approach could empirically bypass a good model simply by updating what the initial probability distribution should be for a given system and what the transition probabilities are between its microstates. This would certainly help in formulating models of such complex systems I would imagine, again, if I’m understanding this correctly.

  6. I told my mom once that I couldn’t clean my room, because entropy always increases. Well, she is still not buying it. If you think about it, entropy doesn’t really go along with our real world experiences on the macroscopic level. Most of our everyday lives revolve completely around lowering entropy (working).

    It made me think about life after death. Say billions of years passed by in a wink of an eye until the day you were born, and after death time could pass by just as quickly. Then if you had the rest of eternity to wait and you could bypass it all in a second, wouldn’t it then be inevitable for you to wake up again and experience a new life (or to reincarnate) in an oscillating universe?

  7. Very interesting. Not a mathematician or scientist, this is a subject that greatly interests me, so I’ve spent some time looking into it. I’m not sure Bernard’s Paradox has been sufficiently overcome. Either way, many have recognized a need for what appears to be some sort of reverse or stalled entropy (negative entropy, later coined negentropy), like Schrödinger in his 1944 book titled “What is Life?” But, there seems to be this persistent nagging artifact called “free energy,” noted by Schrödinger and Gibbs. I’m thinking there are deeper issues here, but I’d sure be happy enough with an approximation. Thanks for the post.

  8. Sean,

    Does the fact that entropy can spontaneously decrease, even in very large systems, over macroscopic time scales, imply an expected amount of time until another Big Bang-type event? If so, could you give us a sense of how long that time period is? In your 2004 paper on the subject (The Origin of the Thermodynamic Arrow of Time), you calculated the instantaneous probability of such an occurrence, as approximately 10^-5600. But what does that translate to, in terms of expected number of years? I have seen the figure 10^560 years on Wikipedia, with a citation to that same paper, but the math of how we get from an instantaneous probability of 10^-5600 to an expectation of 10^560 years isn’t clear to me. For that matter, how many “instants” (as in instantaneous probability) are in a year?

    Thank you!

    Kevin

  9. Dr. Carroll,
    An interesting article. I’ve always been very curious about the relationship between the physical ideas of statistical mechanics and the related, but purely mathematical concepts. When I was a math grad student in the early 70’s I was introduced to Bayesian calculations with conditional entropy on sub sigma-fields as topics in ergodic theory and Markov chains (none of my math professors ever mentioned statistical mechanics!). I’m guessing that a sub sigma-field is analogous to what a physicist might call a set of macro states. Do you find the math literature of any help in your investigations of this stuff?

  10. Sean, this is incredible new physics. How are they ever going to divide up the Nobel Prize?

  11. Dear Professor Bejan,

    1. Yes, entropy does not always increase in closed systems; entropy for the {system + surroundings} taken together does, in irreversible processes. I would suppose that the incorrect statement is a blogsome mistake on Sean’s part. (I have not read his paper, nor am I likely to—I am ignorant of the field.)

    2. From whatever I was taught (and learnt!), an isolated system is not a sub-class of a closed system; it falls in an entirely different class all by itself. Basically, I was taught, that there are three kinds of systems: open, closed, and isolated.

    They never much clarified what an isolated system meant, and so, I thought a lot about it. What I write below is, thus, following my own thinking. (Don’t blame my teachers for any lacunae.)

    Following my understanding, a system is a certain specific matter (Lagrangian) or region of space (Eulerian) under focus/study; the surrounding is everything else that exists in the physical universe. A surrounding is a derivative concept; it refers to the concept of a system for its meaning.

    The system may be taken to interact with surroundings. An interaction means an exchange of something: matter, energy, work, heat (is there anything else?). For such—interacting—systems, in thermodynamic theory, we place some “theoretical sentries” that keep a watch on what is being exchanged at the system boundaries, their directions, and amounts.

    Such (i.e. the interacting) systems may be classified as open (the sentry registers an exchange of matter), or closed (the sentry does not register any exchange of matter).

    The idea of interacting systems was developed in reference to the studies of heat engines. The implicit assumption here is that there can be any number of interacting systems present in the universe—each mill had (at least) one heat engine. Theory is built to study each system separately. Since the idea of surroundings is dependent on that of the system, there can be, simultaneously, more than one interacting system present in the universe.

    In contrast, an isolated system is an abstract concept. Its purpose is to bring the entirety of the universe within the purview of thermodynamics.

    Here, the universe, taken in its entirety, itself forms the system; nothing is left for the surroundings. The idea of the surroundings thus becomes in principle meaningless. The surroundings is now, literally and logically, a (perfect) naught. That’s what the definition of the (thermodynamic) universe entails.

    Since everything physical that exists is already captured in the idea of the universe, there can in principle be nothing left for it to interact with. So, the very idea of interaction also is, in principle, to be taken out of the scope.

    As such, the definition of isolated system does not refer to that of the closed system (which in turn refers to the idea of the surrounding and the system boundary, and the interactions that the former has with the latter).

    It’s only an interacting system that requires theoretical sentries to be put at the system boundary. An isolated system drops the very idea of surroundings and system boundaries, and therefore, also of interaction—right in the beginning, right at the stage of defining the concept.

    3. Thus, an isolated system is not a sub-class of a closed system—not when the term is used in the context of the universe.

    Of course, one can always idealize situations badly and more badly, and mix the terms in their various senses, by relying more and further more on context. (To make the bad situation worse, we only implicitly rely on context(s).)

    For instance, we can think of a term like the heat reservoir (heat goes in or goes out, but the temperature everywhere within it remains the same at all times!). This idea is nothing but an idealization that permits us certain mathematical simplifications concerning the boundary conditions. Our mathematical ability is limited—and so are the mathematical tools available at our disposal. Thus, the only combinations of boundary conditions we can easily handle are unrealistic. But calculate, we must. And, we want our system to be realistic. So, what we do is we dump this entire awkwardness into a new un-physical idea of the reservoir. Our mathematics becomes simpler, feasible (or in the monographic terms, “tractable”)—and, as compensation, our students become more desperate.

    Similarly, we are not philosophically clear about what the terms like existence and universe mean. So, we choose to dump the awkwardness of this short-coming at the position of the boundary conditions in the theory. So, we say that there is a boundary, but it’s just that this boundary doesn’t allow an exchange of anything. And since our student has no chance of understanding us here, we go further in compounding our error and even put forth the ideas of the ideal insulator, the ideal impermeable membrane, the ideally rigid boundary (that lives next to the literal nothing!) etc. This line of thinking naturally leads to the idea of an isolated system as a sub-sub-(and possibly further sub-) class of closed systems.

    4. The above argument does not mean that nothing like “closedness” may ever be used in the context of the universe or the isolated system. The term “closed” may indeed be used, but not in a physical sense. It may be used, but only in the logical sense—in the sense of the theoretical closure. An isolated system indeed is closed, but only in the sense that it brings the basics of thermodynamic theory building to a logical closure—that’s all.

    Ummm…. Too long a reply, I know.

    Best,

    –Ajit
    [E&OE]

  12. Dear Ajit,

    Thank you for commenting. Unfortunately, your comments are not correct:

    1. The statement that entropy in a closed system increases is not a blog mistake. One finds it regularly in physics papers that appear in peer reviewed physics journals and books.

    2 & 3. An isolated system is a special case of a closed system. Why? Because any isolated system is closed, while a closed system is not necessarily isolated.

    Here is the hierarchy of system types in thermodynamics. It comes from the physics (the nature) of the boundary of any system.

    A boundary, in general, may be crossed by mass flow (m), heat transfer (Q), and work transfer (W):

    When mass flow (any m) is present, the system is OPEN.

    When mass flow is prohibited by the boundary (m = 0), the system is CLOSED.

    Clearly, the closed systems are a subclass of the open systems.

    Next, any closed system (m = 0) may experience heat transfer (any Q) and work transfer (any W).

    A closed system without interactions (m = 0, Q = 0, and W = 0) is an ISOLATED system.

    Clearly, the isolated systems (zero Q and W) are a subclass of the class of closed systems (any Q and W). Again, any isolated system is closed, while any closed system is not necessarily isolated.

    4. “Context”, engines vs. universe, has nothing to do with it. Thermodynamics owes its immense power—its utmost generality—to the fact that its laws apply to ANY system.

    The terminology of thermodynamics (open, closed, isolated, adiabatic, etc.) is precise and unequivocal, because it must be so, to distinguish between the various kinds of real systems out there, and between the analyses that apply to each of them.

    At bottom, thermodynamics is a discipline. It has precise rules, words, and laws. Any analysis, any discussion, must begin with defining the system unambiguously, and sticking with it. Changing the system in mid-course, to win an argument, is not allowed.

    For those who would like to read more, here is how I teach the discipline:

    A. Bejan, Advanced Engineering Thermodynamics, 3rd ed., Wiley, 2006.

    Yet, my first recommendation to the readers is this article about the “arrow of time” of evolution, which is the phenomenon of actual interest here:

    http://www.nature.com/articles/srep04017

    Adrian Bejan
    Duke University
    http://www.ae-info.org/ae/User/Bejan_Adrian

  13. Dear Prof. Bejan,

    0. Thanks for your reply.

    1. I said I thought that in this case and on Sean’s part, it would be a blogsome error. There were two reasons behind that. (i) First reason: Though I hadn’t read this paper, I had downloaded it, and had done a lex-search on the string: “closed”. It appears exactly twice in this paper (both on p. 34, I now notice). The first usage was correct (or at least it seemed so to me; the authors here talk about reversible processes, even though the exact term they employ is: “reversible system”—system, not process). The second usage (on the same page), I didn’t understand, and so, in good faith, I presumed it to be correct. (ii) Second reason: Sean usually writes quite precisely. For instance, he is well known to accept even the many-worlds interpretation of QM. Yet, even while writing on MWs, his writing remains lucid and precise. He actually makes it easier to expose the scam that is MWI, IMHonestO.

    Yes, I am aware that a lot of physicists, including the well-published among them, (esp. those from California) sometimes seem to carry no clarity on certain basic ideas from thermodynamics. Apart from thinking that the entropy of a closed system always increases, they also think that starting from a non-absolute zero temperature, the absolute zero can be reached in a finite process. (On blogs, I have in the past pointed out both these errors.)

    (BTW, they also think that the universe is a closed system, not an isolated one.)

    Yet, my on-the-fly judgment in this case (about this usage of closed system) was that while it was in error, it was only a blogsome error.

    2. You said: “An isolated system is a special case of closed system. Why, because any isolated system is closed, while a closed system is not necessarily isolated.”

    What are the theoretical grounds to believe that isolated systems are closed?

    Or is it the case that such a definition (of an isolated system) is to be accepted as a matter of axiomatics—as the starting point itself, so that no question may at all be raised towards its analysis?

    And is that the way Clausius (or, earlier, Helmholtz) thought about it?

    3. Knowledge is contextual. All knowledge is so. And, so, any branch or discipline of knowledge. If context—even essential context—is to be dropped, or never to be brought up for discussion, then, well, I cannot be available for such a “discussion.”

    Best,

    –Ajit
    [E&OE]

  14. Ben,

    Regarding entropy / life: late last year or earlier this year a mathematics paper was published in which the author(s) developed some new maths indicating that living systems are better at increasing entropy than non-living systems, and that therefore systems with a steady inflow of energy should naturally tend towards the development of living systems. Not to be taken as definitive by any means, but a very interesting idea to continue looking into.

  15. @darrelle Despite the fact that living systems increase entropy more quickly than non-living systems, there is a wide gap in our understanding of what differentiates living systems from non-living ones. There is no evidence of the conversion of non-living inorganic or organic systems into living biological organic systems. And the key difference of consciousness between living and non-living systems can’t be explained by any quantum-mechanical-chemical system.

  16. I have major problems with entropy on cosmological scales. It’s clear how entropy works in small systems, e.g. an iron rod heated on one end. That’s the easy case. Problems occur when you go up the scale ladder.
    What I find exceptionally misleading is the statement that “entropy is a measure of disorder” (sadly reiterated in the very first link in this article). That too only applies in our small human world. The state of very high entropy in the Universe is a black hole – and that’s actually the state when things are very neatly organized. For black holes, entropy is introduced “ad hoc” as the size of the event horizon. It works, but it’s a hack to make our equations work.
    The notion of entropy increasing in an isolated system also comes with the implicit requirement that the isolated system has constant volume. That too does not hold for the universe. How exactly is entropy increasing in an expanding universe? You have a box initially packed with matter, well mixed and in a state near the highest possible entropy, then the box starts growing so rapidly that the matter no longer fills it and instead starts clumping and occupying states that are far from “maximum possible disorder”, with the “distance” increasing as time goes on. That is, in my opinion, a sign of decreasing entropy on the scale of the Universe, not increasing. Yes, coffee is unmixing and eggs are unbreaking at a high rate in our universe, and the second law of thermodynamics only holds under conditions that are not typical for the universe as a whole.
    I believe we will need to change or extend our understanding of entropy substantially before we can tackle quantum gravity.

  17. Pingback: Outside in - Involvements with reality » Blog Archive » Chaos Patch (#77)

  18. Pingback: Links for August 2015 - foreXiv

  19. -Kasuha

    I think you have a point. The second law of thermodynamics does sound like it would break down at universal scales since the moment of the Big Bang. It has become increasingly likely and widely expected that the universe started out as pure energy. From that, eggs and coffee and apple pies were formed from complete chaos. If they hadn’t, we wouldn’t be able to break eggs or put lumps of sugar and cream in our caffeinated beverages…

    Even the very nature of the expansion itself would seem to go against the second law. It is uniform in all directions and every point appears the same, as though it is the center. Which reminds me that it could probably be said that General Relativity breaks down on universal scales in the same sense. Galaxies moving away from us at speeds close to the speed of light near the edge of the visible universe do not appear to be galactic size black holes due to relativistic mass increase…

    Maybe there is something I am missing there, but it seems like universal expansion doesn’t like to play well with the laws of physics at all, and scientists have been unable to explain it accurately. In the meantime, maybe we shouldn’t try to dwell on it or think about it too much and try to keep the faith that scientists know what they are doing… kappa.

  20. Hey Sean,

    You sucka electrons you think you matter
    Till I shoot my laser and yo masses scatter
    When it comes to photons I got a split personality
    That’ll make your reality wave-particle duality

    -outta Compton

  21. Since I was a little kid I have thought entropy was increasing, except living things make time go backwards locally, by speeding up entropy elsewhere.

    The difference between my childlike model and Sean’s professorial understanding will not make any difference when you are dead.

  22. It’s going to take a lot more than a quick read (on a first visit to your blog), but your and your collaborators’ ideas, and the responses you get back, look really interesting, and the work is conducted at a reasonable scientific standard. I may begin the process that slow people have to subject themselves to in order to get their tiny heads around something they covet to know. In which case the plague will rain down on your blog: daft question busking.

