Boltzmann’s Anthropic Brain

A recent post of Jen-Luc’s reminded me of Huw Price and his work on temporal asymmetry. The problem of the arrow of time — why is the past different from the future, or equivalently, why was the entropy in the early universe so much smaller than it could have been? — has attracted physicists’ attention (although not as much as it might have) ever since Boltzmann explained the statistical origin of entropy over a hundred years ago. It’s a deceptively easy problem to state, and correspondingly difficult to address, largely because the difference between the past and the future is so deeply ingrained in our understanding of the world that it’s too easy to beg the question by somehow assuming temporal asymmetry in one’s purported explanation thereof. Price, an Australian philosopher of science, has made a specialty of uncovering the hidden assumptions in the work of numerous cosmologists on the problem. Boltzmann himself managed to avoid such pitfalls, proposing an origin for the arrow of time that did not secretly assume any sort of temporal asymmetry. He did, however, invoke the anthropic principle — probably one of the earliest examples of the use of anthropic reasoning to help explain a purportedly-finely-tuned feature of our observable universe. But Boltzmann’s anthropic explanation for the arrow of time does not, as it turns out, actually work, and it provides an interesting cautionary tale for modern physicists who are tempted to travel down that same road.

The Second Law of Thermodynamics — the entropy of a closed system will not spontaneously decrease — was understood well before Boltzmann. But it was a phenomenological statement about the behavior of gases, lacking a deeper interpretation in terms of the microscopic behavior of matter. That’s what Boltzmann provided. Pre-Boltzmann, entropy was thought of as a measure of the uselessness of arrangements of energy. If all of the gas in a certain box happens to be located in one half of the box, we can extract useful work from it by letting it leak into the other half — that’s low entropy. If the gas is already spread uniformly throughout the box, anything we could do to it would cost us energy — that’s high entropy. The Second Law tells us that the universe is winding down to a state of maximum uselessness.

Boltzmann suggested that the entropy was really counting the number of ways we could arrange the components of a system (atoms or whatever) so that the arrangement really didn’t matter. That is, the number of different microscopic states that were macroscopically indistinguishable. (If you’re worried that “indistinguishable” is in the eye of the beholder, you have every right to be, but that’s a separate puzzle.) There are far fewer ways for the molecules of air in a box to arrange themselves exclusively on one side than there are for the molecules to spread out throughout the entire volume; the entropy is therefore much higher in the latter case than the former. With this understanding, Boltzmann was able to “derive” the Second Law in a statistical sense — roughly, there are simply far more ways to be high-entropy than to be low-entropy, so it’s no surprise that low-entropy states will spontaneously evolve into high-entropy ones, but not vice versa. (Promoting this sensible statement into a rigorous result is a lot harder than it looks, and debates about Boltzmann’s H-theorem continue merrily to this day.)
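
Boltzmann’s counting can be made concrete with a toy model (a sketch of my own, not anything from the post): N distinguishable molecules, each sitting in either the left or right half of a box. The number of microstates with exactly `n_left` molecules on the left is a binomial coefficient, and the entropy is S = ln W, with Boltzmann’s constant set to 1.

```python
import math

def entropy(n_total, n_left):
    """Boltzmann entropy S = ln W with k_B = 1, where
    W = C(n_total, n_left) counts the microstates that put
    exactly n_left of the molecules in the left half."""
    return math.log(math.comb(n_total, n_left))

N = 100
print(entropy(N, N // 2))  # ~66.8: even 50/50 split, enormously many microstates
print(entropy(N, N))       # 0.0: all molecules on one side, only W = 1 way
```

The entropy peaks at the uniform 50/50 split and drops to zero when every molecule is on one side, which is exactly the counting behind “far more ways to be high-entropy than to be low-entropy.”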

Boltzmann’s understanding led to both a deep puzzle and an unexpected consequence. The microscopic definition explained why entropy would tend to increase, but didn’t offer any insight into why it was so low in the first place. Suddenly, a thermodynamics problem became a puzzle for cosmology: why did the early universe have such a low entropy? Over and over, physicists have proposed one or another argument for why a low-entropy initial condition is somehow “natural” at early times. Of course, the definition of “early” is “low-entropy”! That is, given a change in entropy from one end of time to the other, we would always define the direction of lower entropy to be the past, and higher entropy to be the future. (Another fascinating but separate issue — the process of “remembering” involves establishing correlations that inevitably increase the entropy, so the direction of time that we remember [and therefore label “the past”] is always the lower-entropy direction.) The real puzzle is why there is such a change — why are conditions at one end of time so dramatically different from those at the other? If we do not assume temporal asymmetry a priori, it is impossible in principle to answer this question by suggesting why a certain initial condition is “natural” — without temporal asymmetry, the same condition would be equally natural at late times. Nevertheless, very smart people make this mistake over and over, leading Price to emphasize what he calls the Double Standard Principle: any purportedly natural initial condition for the universe would be equally natural as a final condition.

The unexpected consequence of Boltzmann’s microscopic definition of entropy is that the Second Law is not iron-clad — it only holds statistically. In a box filled with uniformly-distributed air molecules, random motions will occasionally (although very rarely) bring them all to one side of the box. It is a traditional undergraduate physics problem to calculate how often this is likely to happen in a typical classroom-sized box; reassuringly, the air is likely to be nice and uniform for a period much much much longer than the age of the observable universe.
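
That undergraduate exercise fits in a few lines (the molecule count below is an order-of-magnitude assumption of mine, not a figure from the post): each molecule is independently in the left half with probability 1/2, so all N are there simultaneously with probability 2^(-N), which for macroscopic N has to be handled in log space.

```python
import math

def log10_prob_all_one_side(n_molecules):
    """log10 of the probability that every molecule occupies the same
    chosen half of the box at one instant: P = 2**(-n_molecules)."""
    return -n_molecules * math.log10(2)

# A classroom-sized box holds very roughly 10**27 air molecules
# (an illustrative assumption, not a precise count).
print(log10_prob_all_one_side(10**27))  # about -3.0e26
```

A probability of order 10^(-3×10^26) per observation is why the air stays uniform for far longer than the age of the observable universe.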

Faced with the deep puzzle of why the early universe had a low entropy, Boltzmann hit on the bright idea of taking advantage of the statistical nature of the Second Law. Instead of a box of gas, think of the whole universe. Imagine that it is in thermal equilibrium, the state in which the entropy is as large as possible. By construction the entropy can’t possibly increase, but it will tend to fluctuate, every so often diminishing just a bit and then returning to its maximum. We can even calculate how likely the fluctuations are; larger downward fluctuations of the entropy are much (exponentially) less likely than smaller ones. But eventually every kind of fluctuation will happen.
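
The exponential suppression can be sketched numerically (the dip sizes below are arbitrary illustrative numbers of mine, with k_B = 1): a downward fluctuation of depth ΔS below the entropy maximum has relative probability of order e^(-ΔS).

```python
import math

def log_relative_prob(delta_s):
    """Natural log of the relative probability of a downward entropy
    fluctuation of depth delta_s (units of k_B = 1): P ~ exp(-delta_s)."""
    return -delta_s

shallow, deep = 10.0, 20.0  # made-up fluctuation depths, for illustration
ratio = math.exp(log_relative_prob(deep) - log_relative_prob(shallow))
print(ratio)  # ~4.5e-05: doubling the depth makes the dip ~22,000x rarer
```

Whatever the actual depths, each extra unit of ΔS costs another factor of e in probability, which is why large fluctuations are so spectacularly rare.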

Entropy Fluctuations

You can see where this is going: maybe our universe is in the midst of a fluctuation away from its typical state of equilibrium. The low entropy of the early universe, in other words, might just be a statistical accident, the kind of thing that happens every now and then. On the diagram, we are imagining that we live either at point A or point B, in the midst of the entropy evolving between a small value and its maximum. It’s worth emphasizing that A and B are utterly indistinguishable. People living in A would call the direction to the left on the diagram “the past,” since that’s the region of lower entropy; people living at B, meanwhile, would call the direction to the right “the past.”

During the overwhelming majority of such a universe’s history, there is no entropy gradient at all — everything just sits there in a tranquil equilibrium. So why should we find ourselves living in those extremely rare bits where things are evolving through a fluctuation? The same reason why we find ourselves living in a relatively pleasant planetary atmosphere, rather than the forbiddingly dilute cold of intergalactic space, even though there’s much more of the latter than the former — because that’s where we can live. Here Boltzmann makes an unambiguously anthropic move. There exists, he posits, a much bigger universe than we can see; a multiverse, if you will, although it extends through time rather than in pockets scattered through space. Much of that universe is inhospitable to life, in a very basic way that doesn’t depend on the neutron-proton mass difference or other minutiae of particle physics. Nothing worthy of being called “life” can possibly exist in thermal equilibrium, where conditions are thoroughly static and boring. Life requires motion and evolution, riding the wave of increasing entropy. But, Boltzmann reasons, because of occasional fluctuations there will always be some points in time where the entropy is temporarily evolving (there is an entropy gradient), allowing for the existence of life — we can live there, and that’s what matters.

Here is where, like it or not, we have to think carefully about what anthropic reasoning can and cannot buy us. On the one hand, Boltzmann’s fluctuations of entropy around equilibrium allow for the existence of dynamical regions, where the entropy is (just by chance) in the midst of evolving to or from a low-entropy minimum. And we could certainly live in one of those regions — nothing problematic about that. The fact that we can’t directly see the far past (before the big bang) or the far future in such a scenario seems to me to be quite beside the point. There is almost certainly a lot of universe out there that we can’t see; light moves at a finite speed, and the surface of last scattering is opaque, so there is literally a screen around us past which we can’t see. Maybe all of the unobserved universe is just like the observed bit, but maybe not; it would seem the height of hubris to assume that everything we don’t see must be just like what we do. Boltzmann’s goal is perfectly reasonable: to describe a history of the universe on ultra-large scales that is on the one hand perfectly natural and not finely-tuned, and on the other features patches that look just like what we see.

But, having taken a bite of the apple, we have no choice but to swallow. If the only thing that one’s multiverse does is to allow for regions that resemble our observed universe, we haven’t accomplished anything; it would have been just as sensible to simply posit that our universe looks the way it does, and that’s the end of it. We haven’t truly explained any of the features we observed, simply provided a context in which they can exist; but it would have been just as acceptable to say “that’s the way it is” and stop there. If the anthropic move is to be meaningful, we have to go further, and explain why within this ensemble it makes sense to observe the conditions we do. In other words, we have to make some conditional predictions: given that our observable universe exhibits property X (like “substantial entropy gradient”), what other properties Y should we expect to measure, given the characteristics of the ensemble as a whole?

And this is where Boltzmann’s program crashes and burns. (In a way that is ominous for similar attempts to understand the cosmological constant, but that’s for another day.) Let’s posit that the universe is typically in thermal equilibrium, with occasional fluctuations down to low-entropy states, and that we live in the midst of one of those fluctuations because that’s the only place hospitable to life. What follows?

The most basic problem has been colorfully labeled “Boltzmann’s Brain” by Albrecht and Sorbo. Remember that the low-entropy fluctuations we are talking about are incredibly rare, and the lower the entropy goes, the rarer they are. If it almost never happens that the air molecules in a room all randomly zip to one half, it is just as unlikely (although still inevitable, given enough time) that, given that they did end up in one half, they will continue on to collect in one quarter of the room. On the diagram above, points like C are overwhelmingly more common than points like A or B. So if we are explaining our low-entropy universe by appealing to the anthropic criterion that it must be possible for intelligent life to exist, quite a strong prediction follows: we should find ourselves in the minimum possible entropy fluctuation consistent with life’s existence.

And that minimum fluctuation would be a “Boltzmann Brain.” Out of the background thermal equilibrium, a fluctuation randomly appears that collects some degrees of freedom into the form of a conscious brain, with just enough sensory apparatus to look around and say “Hey! I exist!”, before dissolving back into the equilibrated ooze.

You might object that such a fluctuation is very rare, and indeed it is. But so would be a fluctuation into our whole universe — in fact, quite a bit more rare. The momentary decrease in entropy required to produce such a brain is fantastically less than that required to make our whole universe. Within the infinite ensemble envisioned by Boltzmann, the overwhelming majority of brains will find themselves disembodied and alone, not happily ensconced in a warm and welcoming universe filled with other souls. (You know, like ours.)
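
The “overwhelming majority” is just arithmetic on the fluctuation probabilities. The entropy deficits below are rough order-of-magnitude guesses of my own (the universe figure loosely echoes standard estimates; the brain figure is purely illustrative), not numbers from the post:

```python
import math

def log10_fluctuation_prob(delta_s):
    """log10 of P ~ exp(-delta_s) for an entropy dip of depth delta_s (k_B = 1)."""
    return -delta_s * math.log10(math.e)

delta_s_brain = 1e50      # guess: a lone brain condensing out of equilibrium
delta_s_universe = 1e103  # guess: an entire low-entropy early universe

# log10 of how many brain-sized fluctuations occur per universe-sized one:
excess = log10_fluctuation_prob(delta_s_brain) - log10_fluctuation_prob(delta_s_universe)
print(excess)  # ~4.3e102: lone brains utterly swamp full universes
```

Whatever the exact entropies, any gap in ΔS becomes a doubly exponential gap in frequency, which is the heart of the Boltzmann Brain problem.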

This is the general thrust of argument with which many anthropic claims run into trouble. Our observed universe has something like a hundred billion galaxies with something like a hundred billion stars each. That’s an extremely expansive and profligate universe, if its features are constrained solely by the demand that we exist. Very roughly speaking, anthropic arguments would be more persuasive if our universe were minimally constructed to allow for our existence; e.g. if the vacuum energy were small enough to allow for a single galaxy to arise out of a really rare density fluctuation. Instead we have a hundred billion such galaxies, not to mention all of those outside our Hubble radius — an embarrassment of riches, really.

But, returning to Boltzmann, it gets worse, in an interesting and profound way. Let’s put aside the Brain argument for a moment, and insist for some reason that our universe did fluctuate somehow into the kind of state in which we currently find ourselves. That is, here we are, with all of our knowledge of the past, and our observations indicating a certain history of the observable cosmos. But, to be fair, we don’t have detailed knowledge of the microstate corresponding to this universe — the position and momentum of each and every particle within our past light cone. Rather, we know some gross features of the macrostate, in which individual atoms can be safely re-arranged without our noticing anything.

Now we can ask: assuming that we got to this macrostate via some fluctuation out of thermal equilibrium, what kind of trajectory is likely to have gotten us here? Sure, we think that the universe was smaller and smoother in the past, galaxies evolved gradually from tiny density perturbations, etc. But what we actually have access to are the positions and momenta of the photons that are currently reaching our telescopes. And the fact is, given all of the possible past histories of the universe consistent with those photons reaching us, in the vast majority of them the impression that we are observing an even-lower-entropy past is an accident. If all pasts consistent with our current macrostate are equally likely, there are many more in which the past was a chaotic mess, in which a vast conspiracy gave rise to our false impression that the past was orderly. In other words, if we ask “What kind of early universe tends to naturally evolve into what we see?”, the answer is the ordinary smooth and low-entropy Big Bang. But here we are asking “What do most of the states that could possibly evolve into our current universe look like?”, and the answer there is a chaotic high-entropy mess.

Of course, nobody in their right minds believes that we really did pop out of a chaotic mess into a finely-tuned state with false memories about the Big Bang (although young-Earth creationists do believe that things were arranged by God to trick us into thinking that the universe is much older than it really is, which seems about as plausible). We assume instead that our apparent memories are basically reliable, which is a necessary assumption to make sensible statements of any form. Boltzmann’s scenario just doesn’t quite fit together, unfortunately.

Price’s conclusion from all this (pdf) is that we should take seriously the Gold universe, in which there is a low-entropy future collapsing state that mirrors our low-entropy Big Bang in the past. It’s an uncomfortable answer, as nobody knows any reason why there should be low-entropy boundary conditions in both the past and the future, which would involve an absurd amount of fine-tuning of our particular microstate at every instant of time. (Not to mention that the universe shows no sign of wanting to recollapse.) The loophole that Price and many other people (quite understandably) overlook is that the Big Bang need not be the true beginning of the universe. If the Bang was a localized baby universe in a larger background spacetime, as Jennie Chen and I have suggested (paper here), we can comply with the Double Standard Principle by having high-entropy conditions in both the far past and the far future. That doesn’t mean that we have completely avoided the problem that doomed Boltzmann’s idea; it is still necessary to show that baby universes would most often look like what we see around us, rather than (for example) much smaller spaces with only one galaxy each. And this whole “baby universe” idea is, shall we say, a mite speculative. But explaining the difference in entropy between the past and future is at least as fundamental as explaining the horizon and flatness problems with which cosmologists are so enamored, if not more so. If we’re going to presume to talk sensibly and scientifically about the entire history of the universe, we have to take Boltzmann’s legacy seriously.

102 thoughts on “Boltzmann’s Anthropic Brain”

  1. Awesomely lame. What kind of superhero is rendered incompetent by the color yellow? A two year old with a Crayola could take him down and steal his lunch money.

  2. Sean: “According to the Second Law, things are always moving toward greater entropy, in accordance with the laws of physics. If you change the laws of physics (e.g. by removing gravity), everything changes, including what counts as a “high entropy configuration.” Without gravity, the number of degrees of freedom are different, as are the conservation laws.”

    Accordingly: the Area of the future is larger than the present; the present Area is larger than the past.

    Area (expansion_energy) of the future should contain “less” energy signature than a past, as it is dispersed over a greater area?

    Thus, Maximum Entropy of the Physical Universe configures, specifically over Time, and tends to a future direction resolution.

    The only way for physical systems to lose energy (annihilation) is during a Phase Transition?

    Even if the Physical Laws (1st, 2nd..) hold true “now”, there is more Entropy in the future in the form of potential conversion processes, namely Negative Energy?

    Positive Energy is decreasing from the big-bang to “now” , and negative energy is increasing from “now” to the next critical “future” phase.

    The geometric considerations for Gravity in a future “low_thermal_entropy” Universe, is that it ends as a Wavefunction Collapse process, which neatly ties in with the initial early Condition?..

    Question:How much ‘Anti-matter’ was lost during the early Universe phase transition, and is the total quantity responsible for the Arrow of Time?

  3. I was going to put together a long reply to Aaron and Gavin, but luckily Sean said it better first. What I was trying to say was that difficulties about the definition of entropy in the presence of gravity, while real and interesting, don’t really matter to the arrow of time question. Clearly the early Universe was in an incredibly “special” state, and clearly this is the origin of the arrow of time. Whether this special initial state can justly be described by saying that “the entropy of the Universe was initially low” is beside the point. Actually, though, we do know how to measure the entropy of a black hole, which is not a “local” object in the relevant sense, so I would guess that problems defining “the entropy of the Universe” are not insuperable, provided that the Universe is spatially finite — which it obviously is. 🙂

    Gavin: I agree entirely with everything you said about energy. I don’t see why those specific problems with the concept of “gravitational energy” should extend to entropy. There’s no such thing as “gravitational force” in GR, so gravitational energy is bound to be problematic. There’s no analogy with entropy.

  4. I believe the multiple uses of the word entropy in both physics and information theory only add to the confusion in this arena. (at least for me)

    Thanks for the response.

    Elliot

  5. Evidence from independent sources is that c has been slowing. Redshifts are the only evidence of cosmic acceleration, and these are related to c. This is, I understand, not a popular thing to say.

  6. So, when the ratio v/c appears to accelerate, we are not seeing v increase but c decrease. The CMB supports this too. Has this not occurred to anyone?

  7. O. K. at the risk of beating a dead horse…. Sean re: your response in #49, would that still hold if gravity was “dialed” up or down in strength instead of an on/off switch? If gravity was still there but weaker or stronger, the conservation laws and degrees of freedom would not change. But the entropy would vary with the strength of the gravitational field.

    In information theory entropy is strictly defined in terms of the compressibility of the information describing an ensemble. Maximum entropy means you need to describe each item individually, whereas lower entropy allows the number of bits describing the ensemble to be less than a list of every item.

    Sorry if I am pressing too hard on this but it seems there maybe something very fundamental here and I am just trying to understand.

    Thanks,

    Elliot

  8. Yes, it would still hold — you’d still be able to increase the entropy of a system in a new way, namely by expanding the universe. The strength doesn’t really matter.

  9. Sean,

    We know that the universe was in a very special state close to the Big Bang, because it rapidly evolved into something else.

    Special? This only shows that the state is special if we believe that there is a second law of thermodynamics that applies close to the Big Bang, describing some total entropy of the universe. Since I am skeptical about a definition of total entropy that obeys the second law in this situation, the “special” claim seems like a matter of opinion.

    The only thing I see special about the early universe is that it is expanding very rapidly. If I take a tiny piece of rapidly expanding universe and consider all the ways of putting some GUT temperature stuff into it, nearly all of them are smooth. It looks just like what I would see in a furnace set to “GUT.” Smooth does not seem special to me; smooth seems perfectly typical.

    The state rapidly evolves into something else because it is, as I mentioned, expanding very rapidly. I don’t know why it is expanding rapidly, but that seems like a question we can at least define, and possibly answer. The question of why the starting state is “special” doesn’t even seem well defined.

    Jack,

    You don’t see any obstacles to showing that there is a definition of the total entropy of the universe that obeys the second law of thermodynamics. Maybe I am just not that smart, but I see a lot of trouble ahead. However, I’ve learned never to say something can’t be done, so I’ll look for you on the arXiv. Until then we have a difference of opinion, and it seems premature to use second law arguments that may not apply to discuss an entropy that may not exist. (Note: black hole entropy is treated in spaces with asymptotic symmetries, something our universe doesn’t have. In these cases gravitational energy makes good sense as well.)

    This is all very relevant to the arrow of time question. The universe could have started in a rapidly expanding but otherwise very typical state. The rapid expansion throws it out of equilibrium, leading to an opportunity for increase in the local entropy density everywhere, leading to an arrow of time everywhere. No low-entropy, fine-tuned, very special starting point required.

    Gavin

  10. For an individual, life starts with a cell, and then it multiplies while it adapts to the environment. A being is an adaptive semi-autonomous system. It works with a feedback. Schrödinger’s Equation also has a feedback aspect to it:
    i*hbar*(dPsi/dt) = H*Psi
    The universe also seems to have started small. Maybe we are making a mistake by trying to include everything in the beginning into a very small container. If the universe is a being, it would have started from an embryo and eaten its environment to accumulate entropy.

    Consider the mapping of M6 classification of sciences to aspects of being and to elementary particles;

    0 Mathematics … Measure ,,, Higgs
    1 Physics … Move ,,, Light
    2 Chemistry … Feed ,,, Weak
    3 Biology … Sense ,,, Quark
    4 Psychology … Feel ,,, Lepton
    5 Intelligence … Think ,,, Space-Time
    6 Aesthetics … Love ,,, Mass-Gravity

  11. I’d like to follow up on a criticism of Sean’s work raised, very sportingly, by Sean himself. Namely: if the Universe was born as a baby universe from some pre-existing de Sitter space, then why does it contain more than one galaxy? The Boltzmann’s Baby Paradox… is there any known way of attacking this problem?

  12. There are only 7 natural objects with atomic symmetry;

    0 Higgs
    1 Atom
    2 Cell
    3 Planet
    4 Star
    5 Galaxy
    6 Universe

    Assuming Higgs level would also be quantised. Maybe the reason for another level after galaxy is that this atom fractal obeys the same overall symmetry of elementary categories.

  13. Torbjörn Larsson

    Sean:
    “The proto-inflationary state is incredibly finely-tuned; invoking inflation just begs the question.”

    I don’t see that it needs to be question-begging.

    If inflation needs a special initial state to occur, for example for baby-universe creation, endless inflation will impose that state on each new universe or region that succeeds in starting to inflate. The other states will be sorted out by the failure. (It seems to me to be like a Maxwellian demon who opens a gate when a state is acceptable; except that the “inflationary demon” is without memory. 🙂)

    Aaron’s commentary seems to suggest such a special state: “the initial conditions for inflation seem to require uniformity on scales larger than the horizon at the time”.

  14. I’ve been thinking about this concept for some time, but I had no idea anyone else was. I took what I read in “Brief History of Time” and ran with it. I feel less crazy for reading this, though I’m not sure how much I agree with. I do agree that there need not be a corresponding crunch for the big bang; nor must there be a corresponding bang if it’s really a crunch.

    Questions: Does the expansion of the universe affect the rate at which time proceeds? If there is a universal cycle (hypothetically) from bang to crunch, with a corresponding increase and then decrease in entropy, observers on either side of the maximum would perceive time proceeding towards that maximum, no? And “at” the maximum, would the perception of time stop? More accurately, as one approached from either side, would time slow down such that you could never reach the maximum (sort of like falling into a black hole)? Because if so, how can information travel from one side of the max. to the other? And wouldn’t this create the perception of a constantly expanding universe (very simply, rate of expansion r over time t as t goes to zero remains positive; my calculus is beyond rusty, though, which is why I’m asking)? Time for our observers wouldn’t actually slow, though, would it? They’d still see it as one second per second, so how would they know?

    Can the unidirectionality of gravity have any relation to the arrow of time? In a time-reversed view, gravity is always repulsive, right? I’ve heard of lab experiments in “anti-gravity” with superconductors, but from what I’ve read, they have more to do with cancelling the effects of gravity, or shielding objects from them.

    I’ll have to think out loud more on this in my livejournal.

  15. To answer N@: Expansion of Space is indistinguishable from the forward flow of time. There is no known function for t; perhaps one of you smart people will someday figure it out. No special initial state is necessary, at all times we have R = ct and GM = tc^3.

  16. Pingback: Nonoscience / Another Localized Blackout on Blogspot?

  17. Hi Louise

    I liked the wombat picture! Since Island managed to say huh twice without thinking about it, I thought I might add a few words.

    In QG we don’t like to think of the universe as something fixed and objective. There is no universal observer – except the universe itself in some sense. However, if one asks what it would be like to observe a universe (or rather, take a large number of observations that resemble the classical universe) then the identification of time’s arrow with entropy makes perfect sense, as you say. To put it another way: the estimation of cosmological epoch is, like anything, a local measurement. A different class of observer in our present on Earth is quite capable of perceiving itself to inhabit a different epoch, and there is no contradiction here because the universe is not something that’s out there somewhere. The implementation of Mach’s principle cannot be achieved with such a classical view of spacetime observables.

    Nice to know the data agrees, heh?

  18. Louise, by “independently supported”, I meant that the kinematic interpretation of the SN Ia sample provided strong evidence that there was a transition between deceleration and acceleration that they called a “cosmic jerk”.

    And then there’s dark energy…

    Otherwise, Eddington lives!… 😉

  19. Since you mention it, Island, GM=tc^3 also explains Eddington’s large-number hypothesis. One could question the wisdom of calling something a “cosmic jerk,” naming a “dark energy” after what Sith Lords peddled, or christening “Concordance” cosmology after a plane that crashed and doesn’t fly anymore.

    However, it is time to question the “independently supported” mantra. Evidence of cosmic acceleration was published simultaneously by two groups in 1998. One group was headed by Saul Perlmutter of LBL. The other lead author was Adam Riess of Berkeley, whose office was only 500 m from Perlmutter’s. Both groups looked at the same phenomenon, redshifts of Type Ia supernovae. The independence was that Perlmutter was head of one group and Riess part of another.

    Redshifts are the only evidence of cosmic acceleration. The CMB says nothing about acceleration. In fact, the CMB is good evidence that c has changed. Average temperature is the same over large parts of the sky, indicating that large regions were in causal contact. Even at the time of recombination 300,000 years after the Big Bang, c was much greater.

    Additional evidence comes from Active Galactic Nuclei. These massive primordial Black Holes are observed to have formed shortly after the Big Bang. This could only occur if the horizon distance determined by c was much greater. There is even more precise evidence from a nearby star.

    If one knows how to read a Graph, it can be seen that GM = tc^3 precisely predicts redshifts of Type Ia supernovae, even in the transitional “jerk” period. All this has been shown without inferring repulsive ‘dark’ energies.

    Kea, you are right, they never learn. It is good that QG is now considered a subject of research. True independence comes from knowing the data and reaching our own conclusions rather than repeating the herd. Someone has to be the child saying that the emperor has no clothes. (A physicist would say that they were dark energy clothes!)

  20. Okay Louise, I give, but I think that you meant Dirac’s Large Numbers Hypothesis, which I am actually a bigger fan of than you might think for its deep relevance to the anthropic principle. Maybe I’m missing something obvious, but I can’t find anything wrong with your idea, except that it takes the much hated fine-tuning issue to a new extreme. So, yeah, popularity is everything and you’re screwed.

    I don’t find causality in your model though, so I’m guessing that you’re going with a random quantum fluctuation to get all that negative pressure to appear from nowhere.

    In which case, I don’t think that it’s going to be necessarily preferred when all gets said and done.

  21. Brian Greene wrote a terrible book, with a chapter dedicated to this same topic! Shame!

    The reasoning couldn’t be more flawed! Coming from a brilliant mind, it hurts!

    Please don’t let smart people write stupid books on intriguing ideas!

Comments are closed.