Now that we’ve softened you up by explaining a bit about eternal inflation and its puzzles, we’re very happy to host a guest post by Tom Banks in which he really hits on some of these problems hard.

Tom is a professor at Rutgers and UC Santa Cruz, an extremely accomplished researcher in field theory and string theory, and the author of a textbook on quantum field theory. In collaboration with Fischler, Shenker, and Susskind, he proposed the (M)atrix Theory non-perturbative formulation of string theory. Most recently, he (often working with Willy Fischler) has been exploring the connections between holography and cosmology, developing a detailed model of the evolution of the universe that is compatible with the holographic principle. Here is video of a lecture Tom recently gave on holographic cosmology.

This post is at a more technical level than most of our entries here at CV, and we’re going to try to keep the discussion useful for workers in the field. Sincere questions are welcome, but we’ll be deleting any unproductive philosophical gripes or advertisements for anyone’s personal outsider theories.

——————————————————

**Why I Don’t Believe in Eternal Inflation**

A lot of research in high energy theory has been devoted to the topic of eternal inflation. More and more, over the last few years, I’ve come to regard this as an enormous waste of intellectual resources and I’ve chosen *Cosmic Variance* as a very public way to make my objections to this theoretical mistake clear. The theory was developed in the 1980s, when it seemed plausible that quantum field theory in curved space-time was a good approximation to a real theory of quantum gravity whenever the energy densities and curvatures of the background geometry were small in Planck units. This idea is simply wrong. The fact that its falsification came through a back door, the rather philosophical discussion of whether black hole evaporation violates the rules of quantum mechanics, has led to a widespread but unfortunate tendency to ignore this FACT.

There are two other psychological reasons for the widespread interest in Eternal Inflation, which I will discuss below. They have led even the inventors of the holographic resolution of the black hole information paradox to try to find a sensible holographic theory which incorporates the notion of EI. While this attempt is itself subject to a number of objections, I will not go into them here. Instead, I’ll concentrate on evidence from the seminal Coleman-De Luccia (CDL) theory of tunneling in quantum gravity, which is one of the two biggest clues to what the theory of quantum gravity really is.

There are, in my opinion, two serious conceptual errors behind the theory of EI. The first is the notion that space-time geometry is a fluctuating quantum variable. The second is that de Sitter (dS) space is a system with an ever increasing number of quantum degrees of freedom. The increase is supposed to take place as the global dS time coordinate, or the time coordinate in flat coordinates, goes to future infinity. I’ll end this post with a brief discussion of the formalism of Holographic Space Time (HST), in which both of these ideas are seen to be false, in a very explicit manner. The fact that the HST formalism is able to give an approximate description of particle physics in a curved space-time background is by itself enough to falsify any claim that the semi-classical ideas that lead to EI are inevitable consequences of ANY sensible theory of quantum gravity. For this purpose, it’s not even necessary that HST be right, only that it have a limit in which it reduces to QFT in curved space-time.

There are two flavors of EI. The first comes from applying the theory of vacuum tunneling in QFT to individual horizon volumes at late times in an exponentially expanding space-time. In QFT, tunneling proceeds, like the boiling of water, via the formation of bubbles. It is argued that as long as the bubble radius is much smaller than the horizon scale, then bubbles will form independently in each horizon volume, since these are causally separated regions. This is true in QFT in a fixed background dS space, but the question is which background dS space we are talking about. The whole formalism of QFT in curved space-time is predicated on the notion that fluctuations of space-time geometry are small and can be treated as gravitons propagating in a fixed space-time background. This is true even when we “derive” QFT in curved space-time from a formalism in which space-time geometry is assumed to be a fluctuating quantum variable (the Wheeler-DeWitt (WD) equation). In other words, the formalism of QFT in curved space-time is not the proper tool for evaluating the validity of EI, whose fundamental premise is that tunneling events change the asymptotic structure of space-time by changing the cosmological constant (c.c.).

The purpose of the CDL theory is precisely to deal with tunneling events that change the c.c., and the fact that in every case its results are drastically different from those of tunneling in QFT in a fixed space-time background is universally ignored by proponents of EI. Let us begin with the case of a potential on field space, all of whose minima have positive c.c. If there are many minima, there are many instantons (imaginary time classical solutions describing tunneling events from one minimum to another), but they all share several characteristics. They are all compact Euclidean manifolds with negative Euclidean action, *I* < 0. The compactness means that there are no collective coordinates for translation invariance, and the dilute instanton gas approximation (DGA), which is the mathematical basis for the claim that there are multiple bubbles in disjoint space-like regions of the real space-time, does not work. Even when the bubble size is much less than the dS radius, one finds fewer than (*R*_{dS}/*R*_{bubble})^{3} bubbles. This estimate is based on sticking instantons into a fixed dS background, but in fact in the CDL instanton the geometry responds to the instanton, and the fixed background approximation breaks down for multiple instantons. Furthermore, there are important interactions between the bubbles, which are neglected in the DGA. I suspect that they correspond to attractive forces (in the DGA sense) between instanton centers, which means that no multi-instanton solutions exist. The standard theory of EI has well known problems with infinities (THE MEASURE PROBLEM), and one might have thought that these observations about CDL instantons would be relevant to regulating the infinite numbers of bubbles encountered in the conventional treatment. In fact, these obvious facts about CDL instantons are mentioned nowhere in that literature.
The upshot of this is that there is really no theoretical basis for a picture of bubbles nucleating independently in causally disconnected patches of an exponentially expanding universe.

The problem with negative action is well known, and has a very illuminating solution. As in any tunneling problem, one must subtract the action of the meta-stable configuration. In this case that is dS space, whose imaginary time version is just a sphere. The sphere is a compact Euclidean manifold with negative action, and the difference *I*_{inst} − *I*_{F} is positive, so the reciprocal of its exponential can be a decay probability. Here we’ve subtracted the action of the higher or false dS minimum, but the action of the lower minimum is even larger, so exp(*I*_{T} − *I*_{inst}) can be interpreted as the probability of transition between the lower and upper state. The thermal nature of dS space leads us to expect such upward fluctuations. The ratio of forward to backward rates is given by exp(*I*_{T} − *I*_{F}), and this is rather miraculous, because the absolute value of the Euclidean action for dS space is precisely the Bekenstein-Gibbons-Hawking entropy of the state. This entropy, like that of black holes, was initially rather mysterious, but black hole entropy counting in string theory has given us confidence in the idea that it really does come from a counting of quantum states. In the black hole context we already knew from the thermodynamics of Hawking radiation that black hole entropy was necessary in order to obey the laws of thermodynamics. The CDL instanton formula is the analogous statement for dS space. In the dS case, there is no process which continuously varies the c.c., but the CDL process tells us how to change it in discrete jumps. The formula for the ratio of forward and backward rates is simply the principle of detailed balance for a system at infinite temperature. There is a matrix element for transitions between the states represented by the two dS minima of the potential.
Fermi’s golden rule tells us that the ratio of forward to backward transitions is just the ratio of the number of available quantum microstates corresponding to each macrostate. The fact that entropies, rather than free energies, enter the rate equations, tells us that the system is at infinite temperature, and that the total number of quantum states is finite. This bolsters two other arguments that dS space has a finite number of states.
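In symbols, this detailed-balance reading can be summarized as follows (a restatement of the argument above, using the convention that the actions appearing in the rate formula stand for their absolute values, which equal the horizon entropies):

```latex
\frac{\Gamma_{F \to T}}{\Gamma_{T \to F}} \;=\; \frac{N_T}{N_F} \;=\; e^{\,S_T - S_F},
\qquad S_i \;=\; \frac{A_i}{4} \;=\; \left| I_i \right| ,
```

where *N*_{i} counts the microstates of each dS minimum and *A*_{i} is its horizon area in Planck units. No Boltzmann factor e^{−βE} appears anywhere, which is the precise sense in which the system is at infinite temperature.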

The first is simply that the Covariant Entropy Bound tells us that no observer in dS space can ever probe more than a finite number of states. The Observer Complementarity Principle tells us that different observers in dS space simply use different non-commuting Hamiltonians. Each explores the same finite space of states in a different manner. Taken together these two principles imply that the quantum theory of all possible observers in dS space is finite dimensional. A less abstract argument for the same point comes from examining “scattering” in dS space. Consider incoming small perturbations of the dS solution of any Lagrangian. In global coordinates this solution looks like a sphere which contracts down from infinite radius to some finite radius and then re-expands to infinite radius. In the remote past or future we can look at small amplitude wave packets. However, as we approach the neck of dS space, the wave packets are pushed together. If we put too much information into the space in the remote past, then the packets will collide and form a black hole whose horizon is larger than the neck. The actual solution is singular and does not resemble dS space in the future. Thus, it is extremely plausible, given the Bekenstein-Hawking entropy formula for black holes, that the quantum theory of a space-time which is dS in both the remote past and remote future has a finite dimensional Hilbert space.

Readers used to the statement that the inverse dS temperature is the circumference of the sphere may be puzzled by the statement that the temperature is infinite. This is, quite literally, a question of point of view. One can look at the Hamiltonian corresponding to an observer following a trajectory of fixed distance from the origin of the static coordinates, which cover a single horizon volume of dS space. The trajectory at the origin is a geodesic, but the others are all accelerated and see an Unruh temperature different from the conventional dS temperature. As the radius approaches the horizon the temperature goes to infinity. The Hamiltonian *P*_{0} describing the geodesic observer approximately decouples the particle degrees of freedom of the system from those on the horizon. The usual dS temperature is a reflection of the degeneracy of *P*_{0} eigenstates in the dS vacuum ensemble. This will all be discussed in some detail in a forthcoming paper on the Unruh effect in Holographic Space-time.

This interpretation of the CDL instanton can be generalized to a large class of potentials which have both positive and negative energy densities at their minima. Given such a potential, one can add a negative constant so that the lowest positive c.c. minimum is brought down to zero. We can then ask if there is a positive energy theorem for the resulting Minkowski space solution. There obviously is if all the other minima are still positive. The results of the CDL paper show us that this persists for SOME potentials with negative minima. We say that potentials with a positive energy theorem, when the lowest positive c.c. minimum is translated to zero energy, are Above the Great Divide. It is easy to see that one can move a potential from below to above the Great Divide by tuning a single parameter. Let *f* be any smooth non-negative function on the space of fields, which vanishes exponentially rapidly when one moves away from those minima of the potential *V* which have negative energy density. Then for sufficiently large ε, *V* + ε*f* is above the Great Divide. It is easy to show, for potentials above the Great Divide, that CDL transitions from the lowest dS minimum to the negative region of the potential (which, as shown by CDL, always lead to a Big Crunch solution in which the field is driven out of the negative region as one approaches the singularity) are suppressed by an entropy factor. That is to say: just like the upward transitions to high positive c.c., for these potentials the downward jumps to Big Crunches are low entropy transitions of a finite system, analogous to all of the air in a room gathering into one cubic centimeter. In fact, when the covariant entropy bound is applied to the crunching region of space-time, one finds that no observer in that region can observe an entropy comparable to that of the minimal c.c. dS space.
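The *V* + ε*f* construction is easy to visualize with a toy one-dimensional potential. The functional forms below are invented purely for illustration, and "above the divide" is checked only in the crude sense that no negative-energy minimum survives (the real criterion involves a positive energy theorem, which a grid search obviously cannot verify):

```python
import math

def V(x):
    # Toy "landscape": a tilted double well with a minimum of small positive
    # energy density near x = +1 and a minimum of negative energy density
    # near x = -1 (both invented for illustration).
    return (x * x - 1.0) ** 2 + 0.3 * x

def f(x):
    # Smooth, non-negative, and dying off exponentially fast away from the
    # negative-energy minimum, as in the construction described above.
    return math.exp(-((x + 1.0) ** 2) / 0.1)

def global_min(pot, lo=-3.0, hi=3.0, n=60001):
    # Crude grid search for the global minimum of the potential.
    return min(pot(lo + (hi - lo) * i / (n - 1)) for i in range(n))

print(global_min(V))  # negative: the unlifted toy potential is below the divide
for eps in (0.1, 0.3, 0.5):
    print(eps, global_min(lambda x, e=eps: V(x) + e * f(x)))
```

For small ε the lifted potential still has a negative minimum, while for large enough ε every minimum is non-negative: the single-parameter tuning described above.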

To summarize: for potentials above the Great Divide, all of the evidence from CDL instantons points to a quantum theory with a finite number of states, most of which resemble the empty dS vacuum with the lowest c.c. All CDL transitions away from this state have an entropic suppression, and the principle of detailed balance assures us that the inverse transition will occur much more rapidly. In the case of upward transitions the CDL formula confirms this, but reverse transitions from the Big Crunch to dS space are not amenable to purely semi-classical analysis. For these potentials, if we accept this evidence, the entire formalism of conventional EI is invalid. Indeed, in that formalism, the difference between a potential above and below the divide is merely quantitative, so potentials above the divide have the same issues with infinite numbers of bubbles as potentials below the divide. The standard discussion of EI uses a treatment appropriate for field theory instantons in a fixed space-time background to discuss CDL transitions in which the final state has a radically different space-time geometry than the initial state. For potentials above the Great Divide, EI gets the physics infinitely wrong.

I think that CDL transitions for potentials below the Great Divide are unlikely to have a sensible interpretation in quantum theory. They are like quantum Hamiltonians for systems whose energy is not bounded from below. This leaves the question of whether there is a sensible quantum interpretation of low energy Lagrangians which allow a meta-stable dS space to decay to a region of the potential with vanishing energy density. These transitions are central to the FRW/CFT program of the Stanford group. As I said above, there are numerous issues with the attempts to make a theory of these transitions, but I won’t discuss them here. I should note that for “phenomenological” reasons (see the discussion of Boltzmann Brains below) these authors often insist that dS solutions in their models be below the Great Divide for transitions to negative c.c. crunches, in addition to being unstable to decay into the vanishing energy density region.

Before going on to the second class of EI models, I want to mention the only response I have gotten from EI theorists to my complaint that the multi-instanton solutions they are positing simply don’t exist in the CDL formalism. The response was that the same issue exists for quantum field theory in a fixed dS background, where we “know” the EI discussion is correct. This is simply not correct. QFT on a large sphere has the symmetries of the sphere. Every point-like instanton configuration will have rotational collective coordinates, which approach the translational collective coordinates of flat Euclidean space in the large radius limit. If the size of the instanton is much smaller than the radius of the sphere, then there are approximate solutions which are just superpositions of solutions with a rotated center and large distances between the centers. That is to say, the DGA configurations are approximately correct. By contrast, the CDL instanton for decay of dS space has no exact collective coordinates. When the instanton size is much smaller than the dS radius, the single instanton solution looks like a small cap cut out of the dS sphere. But when we put in another instanton, the geometry changes in response to its matter content. Once one starts thinking about solutions with many instantons, one loses all control over the shape of the solution, or whether it exists at all. Above I gave (*R*_{dS}/*R*_{bubble})^{3} as an upper bound on the number of instantons, based on field theory in a fixed dS background. My guess is that the actual number is much smaller than this, and that inter-instanton forces may preclude the existence of any multi-instanton CDL solutions at all. At any rate, without a thorough investigation of this question, it seems to me that the standard discussion of EI is simply wild conjecture.
The alternative interpretation of CDL transitions above the great divide, in terms of a system with a finite number of states, suggests strongly that EI is a wrong conjecture for those potentials.

The second flavor of EI is called chaotic eternal inflation or *the self-reproducing inflationary universe*. In these models, the formalism for calculating small fluctuations around a slow roll inflation model is extended to the regime of large fluctuations. In particular, Starobinsky’s proposal that one view the fluctuations in each independent horizon volume as a stochastic force added to the classical slow roll equations leads, for a certain range of values of the parameters in the potential, to a regime in which the stochastic kicks up the potential are more important than the force coming from the slope of the potential. Any given point will eventually roll down and inflation will end, but there are “always” points which are still inflating. (The Starobinsky equations use a single global time for all points. One has to ask how this time is defined in a global picture of the space-time geometry, which leads to one aspect of the dreaded Measure Problem.) This formalism takes for granted that the global picture of the inflationary space-time, with independent degrees of freedom in an infinite number of horizon volumes in the flat slicing of dS space, is valid. It also ignores the ultimate fate of the post-inflationary regimes. Indeed, in the eternal inflation regime of parameters, it is guaranteed that fluctuations on large enough scales become of order one, and there is probability one that every post-inflationary regime will be crunched inside a black hole. It is argued that those crunches occur on a time scale much longer than the age of our universe, so we can ignore them. However, if one is looking for a mathematical definition of the theory, one cannot be so cavalier.

Advocates of this point of view will counter that the formalism has been validated by its application to the calculation of observable fluctuations in the Cosmic Microwave Background. I don’t think this argument is correct. The CMB fluctuations can be equally well accounted for by slow roll inflation models in the self reproducing regime, and by models without self reproduction. The CMB fluctuations are very small and extend over a range of scales that is at most 10^{5} in size (if one includes the fluctuations that form galaxies). The fact that a set of equations works over this limited regime says nothing about the validity of its use in a much more ambitious context. There have, for example, been suggestions that “string theory may never give rise to potentials in the self-reproducing regime”. I no longer think that that is the correct way to think about the problem, but it seems to me to be an indicator that this version of EI is on no firmer footing than that based on tunneling.

**The Psychology of EI**

I believe that there are two rather different psychological arguments that are driving the renewed interest in EI and the rejection of a more holographic point of view (or the attempt to shoehorn EI into a holographic theory via FRW/CFT). These are the success of the calculation of inflationary fluctuations, and that of Weinberg’s anthropic estimate of the c.c. Andrei Linde and I independently invented the first inflationary models which led to a mechanism for anthropic determination of the c.c. Subsequently, Weinberg estimated an upper bound on the c.c. from the requirement that galaxies form, assuming the size of primordial fluctuations and the dark matter density at the beginning of matter domination were fixed. Many of the proponents of the String Landscape and EI view it as the only framework which will give rise to a distribution of universes, with varying c.c., on which one can do anthropic selection. This is incorrect and the HST formalism I will describe below is an explicit counterexample. The string landscape version of anthropic selection seems to point to the likelihood that all of the parameters in low energy effective field theory are random variables constrained only by anthropic selection. Such a proposal has enormous phenomenological problems, which to my knowledge have only been addressed by recourse to our ignorance about the nature of the (so far entirely hypothetical) String Landscape. I think it is likely that only a very small number of parameters in low energy physics are random numbers fixed by selection effects.

The success of the inflationary predictions for CMB fluctuations points to the reality of independent degrees of freedom in different inflationary horizon volumes. However, as I emphasized above, there is nothing in the data that attests to more than about 25 e-foldings of inflation and a finite number of horizon volumes. It’s been evident since Bousso’s first papers on the covariant entropy bound that inflationary cosmology is not dS space. Indeed, it’s precisely the observability of the CMB fluctuations that shows us the difference. The maximal causal diamond of an observer in an inflationary universe is determined not by the inflationary c.c. but by the value of the c.c. that dominates the asymptotic future. Using the covariant entropy bound to estimate the number of allowed degrees of freedom, we find that about 85 e-folds of GUT scale inflation are compatible with the covariant entropy bound and the value of the c.c. suggested by cosmological data. Thus, the success of the CMB calculations gives us no reason to believe the hyperbolic claims of EI theorists.
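The 85 e-fold figure can be reproduced by back-of-envelope arithmetic. The inputs below (a GUT-scale Hubble rate of 10^{−6} in Planck units, an asymptotic dS entropy of 10^{122}) and the counting criterion are my own heuristic paraphrase of the estimate, not the detailed calculation:

```python
import math

# Heuristic counting: demand that the e^{3N} inflationary horizon volumes
# produced by N e-folds, each carrying entropy S_I ~ pi / H_I^2, not exceed
# the entropy S_Lambda of the asymptotic dS space: e^{3N} * S_I <= S_Lambda.
S_Lambda = 1e122        # dS entropy for the observed c.c. (illustrative input)
H_I = 1e-6              # GUT-scale inflationary Hubble rate, Planck units (assumed)
S_I = math.pi / H_I**2  # entropy of a single inflationary horizon

N_max = math.log(S_Lambda / S_I) / 3.0
print(round(N_max))
```

With these inputs the bound comes out at roughly 84 e-folds, in line with the "about 85" quoted above; the answer depends only logarithmically on the assumed inputs.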

**Holographic Space Time**

I want to outline the theory of Holographic Space Time (HST) and how it addresses the problems of dS space and inflation. HST is an infinite collection of quantum systems, each of which describes the entire universe as seen from the perspective of a different time-like trajectory in space-time. A segment of such a trajectory, with finite proper time, defines a causal diamond – the region in space-time that can be probed by a detector following that trajectory. The diamond is the intersection of the interior of the backward light cone emanating from the future end of the trajectory and that of the forward light cone of its past endpoint. Alternatively, we can think of the trajectory as being defined by a nested sequence of causal diamonds, each one larger than the last. The Holographic Principle tells us that the Hilbert space corresponding to all observations in the diamond is finite dimensional. When the dimension is large, it defines the largest area 2-surface on the boundary of the diamond (this surface is called the holographic screen of the diamond), via the asymptotic formula *D* → *e*^{A/4}, where the area is measured in Planck units (about 10^{−66} cm^{2}). Using this formula, we say that a time-like trajectory is equivalent to a sequence of nested Hilbert spaces, each containing the previous one as a tensor factor. The maximal dimension Hilbert space, for infinite proper time along the trajectory, has dimension *D*_{max}, which might be infinite. The time evolution operator must be time dependent, since causality tells us that for proper time *T* it must factor into an operator acting only on the Hilbert space of measurements that could have been done during that time, and one which acts on degrees of freedom that could not yet have been measured at that time, because they are at space-like separation. Mathematically, the latter space is the tensor complement of the Hilbert space accessible at time *T* in the Hilbert space at infinite proper time.
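The bookkeeping in this paragraph can be sketched in a few lines. The one-qubit-per-time-step growth rule below is purely an invented illustration, not part of HST; the point is only the dictionary *D* → *e*^{A/4} and the tensor factorization:

```python
import math

def screen_area(n_qubits):
    # D = 2^n together with D -> e^{A/4} gives A = 4 ln D = 4 n ln 2,
    # with the area measured in Planck units.
    return 4.0 * n_qubits * math.log(2.0)

# Toy trajectory: the Hilbert space accessible at proper time T is a tensor
# factor of the one at T + 1 (modeled here as one extra qubit per step, an
# assumption made only for illustration).
N_MAX = 100  # qubits at "infinite" proper time, so D_max = 2**N_MAX
for T in (1, 10, N_MAX):
    d_accessible = 2 ** T            # dimension measurable by time T
    d_complement = 2 ** (N_MAX - T)  # tensor complement: not yet measurable
    print(T, screen_area(T), d_accessible * d_complement == 2 ** N_MAX)
```

The factorization check at the end is the causality statement: the full Hilbert space is always the tensor product of what has been measured and what is still at space-like separation.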

Now consider another time-like trajectory, which doesn’t intersect the first. It consists of a sequence of Hilbert spaces and evolution operators, as above. HST also specifies, at each time, an overlap Hilbert space, which corresponds geometrically to the Hilbert space associated with the maximal area causal diamond in the intersection between the causal diamonds of the individual trajectories. Thus, for any pair of trajectories there is a sequence of overlap Hilbert spaces. The dynamics associated with each individual trajectory, plus a choice of initial state, determines a sequence of density matrices in this sequence of overlap spaces. The basic dynamical constraint of the theory is that the two different sequences of density matrices be unitarily equivalent to each other. Finally, we put a topology on the space of trajectories, which we can think of as defining the topology of a Cauchy surface in space-time. In the limit of large areas, each HST quantum system defines a Lorentzian metric. The space-time metric is not a fluctuating quantum variable, but is implicit in the dynamics of the individual quantum systems and their relations. We have found several solutions of the consistency conditions, and each corresponds to a simple space-time geometry, satisfying Einstein’s equations with a simple stress tensor.
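For finite-dimensional overlap spaces, unitary equivalence of the two density matrices at a given time just means they have identical spectra. A toy 2×2 check of that statement, with all numbers invented for illustration:

```python
import math
import cmath

def mat_mult(A, B):
    # 2x2 complex matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(U):
    # Conjugate transpose of a 2x2 matrix.
    return [[U[j][i].conjugate() for j in range(2)] for i in range(2)]

def spectrum(rho):
    # Eigenvalues of a 2x2 Hermitian matrix via its trace and determinant.
    tr = rho[0][0] + rho[1][1]
    det = rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return sorted([((tr + disc) / 2).real, ((tr - disc) / 2).real])

# Density matrix assigned to the overlap by the first trajectory's dynamics
rho1 = [[0.7, 0.0], [0.0, 0.3]]

# The second trajectory describes the same overlap in a rotated basis:
# rho2 = U rho1 U†, so the two descriptions are unitarily equivalent.
t = 0.4
U = [[complex(math.cos(t)), complex(-math.sin(t))],
     [complex(math.sin(t)), complex(math.cos(t))]]
rho2 = mat_mult(U, mat_mult(rho1, dagger(U)))

# Unitary equivalence <=> identical spectra
print(spectrum(rho1), spectrum(rho2))
```

In the real formalism this condition must hold for every pair of trajectories and at every time, which is what makes it such a strong constraint.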

The fluctuating quantum variables of HST are related to the holographic screen of the diamond they describe. They define a non-commutative approximation to the algebra of functions on the screen, which becomes commutative as the screen gets large. If we insist on Lorentz invariance for large spherical screens (the Lorentz group is the conformal group of a sphere, and we need some kind of conformal invariance to control the large sphere limit), then we can show that the degrees of freedom describe (in the limit) an infinite number of massless supersymmetric particles, plus a set of “horizon degrees of freedom”. The horizon degrees of freedom saturate the covariant entropy bound, but to get a sensible large time (which means large area causal diamond) limit, the particles must decouple from the horizon variables. This leads to a scattering matrix for particle states. The particle interactions are mediated by the much more numerous horizon degrees of freedom.

The HST formalism is also capable of describing situations in which particles do not decouple from the horizon, which occur in the presence of black holes and in the very early universe. Fischler and I are in the midst of working out a model for inflationary dynamics in the HST formalism, and we can already see how to get small (approximately) Gaussian fluctuations from a manifestly finite system with none of the paradoxes of EI. The dS invariance of the fluctuation spectrum, which gives the correct answer for the CMB, can be at best approximate. We do not yet understand how to parametrize the deviations from dS invariance and whether our models can reproduce the “red tilted, approximately scale invariant” spectrum, which is seen in the data.

The HST formalism has much to say about numerous other problems in string theory and particle physics. Here I will only outline its version of a “multiverse”. The original solution of the HST constraints has a coarse grained description as a homogeneous, isotropic, spatially flat cosmology, with an equation of state in which pressure equals energy density. This is what one would expect heuristically for a universe in which every horizon volume is filled with a maximal size black hole, and these black holes merge as the horizon expands, so that the horizon filling black hole condition is satisfied at every moment. We called this solution the D(ense) B(lack) H(ole) F(luid). We also discussed a solution to Einstein’s equations which was a black hole with de Sitter interior embedded in this homogeneous isotropic cosmology. In the paper referred to above, we have found an exact quantum model, satisfying all the consistency conditions of HST, which corresponds to that solution. There is a one parameter family of models corresponding to the choice of dS c.c. We can also find approximate solutions of the consistency conditions corresponding to two or more such black holes, separated by a large distance. Using the Einstein equations as a guide, we surmise that these multi black hole solutions will evolve in a way that depends on the initial positions and velocities of the black holes in the underlying cosmological space-time. The black holes will be stable, except for collisions, when we expect them to merge to form a black hole with larger area. So we can construct models in which there are many values of the c.c., depending on which black hole interior one resides in. Each mini dS universe will be stable, unless it collides with another. Such a model is ripe for anthropic selection arguments, but the parameters of low energy physics that get anthropically selected may be few.
In the paper referred to above, Fischler and I conjecture that only the inflationary and asymptotic values of the c.c. vary among these models, with all other parameters determined in terms of them. The basis of this conjecture is that the limiting theory for small asymptotic c.c. is a super-Poincare invariant theory of gravity, with no moduli, and a discrete R symmetry. There are no known theories of this type coming from string theory, and the conditions that define them in low energy supergravity are non-generic (*N*+1 equations for *N* unknowns). The fact that the theory becomes supersymmetric for vanishing c.c. puts a strong lower bound, and probably also an upper bound on the c.c. The scales of SUSY breaking and electroweak physics depend strongly on the c.c., and the QCD scale does also, but in a different way. So having stable atoms and stars requires a special value of the c.c. . The inflationary c.c. determines the size of primordial fluctuations in the universe, and for fixed asymptotic c.c., Weinberg’s galaxy formation bound constrains this to be roughly its observed value.

This version of the multiverse also resolves the Boltzmann Brain problem, which has been claimed to invalidate any theory of stable dS space. In fact the argument is that Boltzmann Brains rule out any theory in which the universe eventually comes to thermodynamic equilibrium. This is an extreme version of anthropic reasoning. Once one has bought into trying to explain a lot of the physical world by anthropic arguments, one tends to be forced to start trying to estimate the number of observers that can exist under various hypotheses, and saying that, since we could be any one of those observers, we must be a typical one. In a universe that comes to thermodynamic equilibrium, with finite entropy, the most typical observer is a brain with all of YOUR memories, which spontaneously pops from the vacuum by a thermal fluctuation. This happens an infinite number of times, and with much higher probability than a rerun of the history of the universe as it really happened. The BB quickly dies, and according to this way of thinking, we’re constantly doing the experiment that shows we’re not Boltzmann Brains, and so falsifying a theory which predicts that typical observers are BBs.

I think this argument is silly, and mistakes our theory of the universe for the universe itself. Given the dS temperature corresponding to the observed c.c., the probability of fluctuating even the most modest-sized BB is of order exp(−10^{66}). This number is so small that the time we would have to wait to see it happen is so long that it’s essentially the same number measured in Planck times as it is in units of the age of the universe. Thus, this theoretical event is not a part of real physics. We can modify our theory in an infinite number of ways (with a time dependent Hamiltonian) which will make identical predictions for everything we or anyone else will measure over the entire history of the universe (including the events which have not yet happened when our local group of galaxies collapses into a black hole, which then evaporates back to the dS vacuum), but change what happens over the time scales on which BBs are nucleated, and make the probability of a BB exactly zero. The HST multiverse has an explicit mechanism for making this happen. In order to make a model which explains the history of the universe up to some point in time, it is only necessary that our particular dS black hole interior last for that period of time. Depending on the initial conditions for black hole positions and velocities defining the HST model, we can make one in which our black hole suffers a collision after that point in time. The result of that collision will be catastrophic, and will result in a new equilibrium state with a substantially smaller value of the c.c. Given the connection between the c.c. and particle physics, it is likely that atoms are no longer meta-stable excitations of this system, and so there are no BBs after that point. The proponents of EI, fixated on CDL instantons as the only mechanism for making a dS space decay, instead argue that the low energy effective field theory MUST be below the Great Divide, in order to eliminate BBs.
There are more things in heaven and earth than are dreamed of in their philosophy.
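Tom's remark that the waiting time is "essentially the same number measured in Planck times as it is in units of the age of the universe" is just log-space arithmetic. A minimal sketch (the values 1e66 and 8e60 are round illustrative numbers, not taken from any calculation in the post):

```python
# Back-of-envelope check: if a Boltzmann Brain fluctuation has probability
# ~ exp(-1e66) per unit time, the expected waiting time is ~ exp(1e66) units.
# Work in natural-log space, since exp(1e66) overflows any float.
import math

log_wait_planck = 1e66         # ln(waiting time), measured in Planck times
age_universe_planck = 8.0e60   # age of the universe, ~8e60 Planck times (round number)

# Converting units divides the time by 8e60, i.e. subtracts ln(8e60) from the log:
shift = math.log(age_universe_planck)
log_wait_ages = log_wait_planck - shift

print(shift)                    # ~140.2: the whole unit conversion
print(shift / log_wait_planck)  # ~1.4e-64: utterly negligible relative change

# In fact the shift is below one ulp of 1e66 in double precision, so the two
# logarithms are literally equal as floats.
```

The point of the sketch is only that changing units shifts the *logarithm* by ~140, against a logarithm of 10^66, which is why the two ways of quoting the number are "essentially the same".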

**The Takeaway**

The brevity of a blog post prevents me from expanding on the manifold virtues of the HST approach to quantum gravity. No matter. You can read about them in arXiv:1109.2435 and references therein. Within the next year, I’ll also put out a long review article on HST. What you should take away from this post is the following message: *the very popular idea of Eternal Inflation is built on a foundation of sand. One version of it applies intuition about field theory instantons in a fixed space-time to transitions in which the shape of space-time changes dramatically. The actual theory of such transitions, adumbrated by Coleman and De Luccia in 1980, does not support that intuition. The analysis of CDL instantons for roughly half the potentials one can write on the space of scalar fields instead supports a picture of dS space as a quantum system with a finite dimensional Hilbert space, most of whose states resemble the dS vacuum. All CDL transitions are short lived excursions into a low entropy state of the system. Most other CDL transitions are likely to be fictitious. There is no real theory of quantum gravity in which these transitions occur (for example, almost all instabilities of Anti-de Sitter space are of this type). Transitions to a state with zero c.c. are still not well understood.*

*The other flavor of eternal inflation is based on extrapolating a theory of small fluctuations, without modification, to a regime where the fluctuations are large. It’s really rather hard to check its consistency since the evident disasters caused by the large fluctuations are always ignored by arguing that they will only occur much later in the history of the universe. This sort of phenomenological attitude prevents one from discussing the mathematical meaning of the formalism.*

*Both types of EI lead to infinities, and I have argued that there is nothing in either theory or observation which should lead us to believe that these infinities are part of a well defined model. I gave the example of HST as a formalism that can probably explain the data within the context of a manifestly finite model of the entire universe we can observe. HST also provides a new venue for applying anthropic reasoning to the determination of the c.c. and many ways to resolve the putative Boltzmann Brain problem of an asymptotically dS universe.*

Let me add a few quick reactions of my own to Tom’s post. Overall, I’m pretty sympathetic (without being fully committed) to the general claim that we should pay attention to what happens inside a fixed horizon volume (a causal diamond), rather than taking a God’s-eye view of the multiverse. And I do think that throws a wrench into almost all contemporary discussions of eternal inflation. I’m less enthusiastic about the holographic cosmology scenario — but then again, it’s always easier to find problems than to pose solutions.

The arguments that “de Sitter has a finite number of states” seem like good ones to me. Tom’s argument from a careful analysis of Coleman-de Luccia instantons is novel and interesting. But I still don’t know whether I should interpret this as “the quantum state of the universe lives in a finite-dimensional Hilbert space” or “a de Sitter observer has access to a finite-dimensional subspace of the larger Hilbert space of the universe.” That’s a crucial distinction, which I think is why we diverge later on. Tom’s model for HST features a time-dependent Hamiltonian operating on a time-dependent Hilbert space. That’s a radical departure from the conventional way we do quantum mechanics, and I’d rather not make that leap until it seems absolutely inevitable. In particular, I’m skeptical that time-dependent Hamiltonians are the right way to think about cosmology. A Hamiltonian is a generator of time evolution on a state space: given one point in the state space, it tells you the direction in which you move. It doesn’t really make sense to talk about a “time-dependent Hamiltonian” along a trajectory where you don’t ever pass through any specific state more than once. Any such evolution can be equally well characterized in terms of a single time-independent Hamiltonian (although the description might look complicated). What sets this “time” with respect to which the Hamiltonian is evolving?

Also, I’m generally more cautious about what can and cannot work, given our current state of knowledge. Tom says “potentials below the divide … are like quantum Hamiltonians for systems that do not have energy bounded below.” In laboratory physics, where life is empirically pretty stable, we have good reasons to reject Hamiltonians that are unbounded below. But I’m not at all clear that cosmology works the same way. It might … but we might just have to learn to live with a more general class of possibilities.

Finally a note about Boltzmann Brains. Tom is in the minority among people in this field, in not thinking the BB problem is a serious one for inflation. He points out that BB fluctuations are very rare, which is certainly true. He also argues that a tiny time-dependence in the Hamiltonian can remove them altogether, which is also true. But arbitrary time-dependent changes in the Hamiltonian can do lots of things! I’d like to see a predictive theory of how the Hamiltonian actually changes with time (taking into account the above caveats) before this would count as a solution for me.

Hopefully others will chime in. Thanks to Tom for a provocative and stimulating post.

Thank you for writing this; there is much here to digest. At the moment I want to comment on one issue that underlies much of what you have to say: the interpretation of the CDL instanton. Traditionally, the instanton has been seen as a solution describing global spacetime, and on first reading it sounds like you interpret it this way too. Brown and Weinberg have suggested a different interpretation, in which the instanton describes only the spacetime within the static patch (arXiv:0706.1573). In this interpretation, the possibility of a multiple-instanton description of global spacetime seems more plausible. I am curious whether you have given this any thought.

I wanted to correct one misconception in Sean’s criticism of HST. For any given observer, the HST formalism is just standard quantum mechanics in a single Hilbert space. This is the Hilbert space corresponding to the maximal area causal diamond which that observer can access. The time dependent Hamiltonian is, at any instant, a sum $H_{in}(t) + H_{out}(t)$ of two commuting pieces. The “growing Hilbert space” is just the Hilbert space on which $H_{in}(t)$ operates. The two commuting operators define a tensor factorization of the full Hilbert space, since we can label the states by the simultaneous eigenvalues of the two operators. The time dependence comes from the fact that the *in* factor of the Hilbert space grows in dimension as time increases, and this corresponds to the physical fact that the observer has come into causal contact with more degrees of freedom as time increases.
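The structure Tom describes can be illustrated with a finite-dimensional toy model (this is only an illustration of "a sum of two commuting pieces acting on tensor factors", not the actual HST construction):

```python
# Toy model: a Hamiltonian that is a sum of two commuting pieces, each acting
# on one factor of a tensor-product Hilbert space.
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

d_in, d_out = 3, 4
# Lift each piece to the full 12-dimensional space by tensoring with identity:
H_in_full = np.kron(random_hermitian(d_in), np.eye(d_out))   # acts only on the "in" factor
H_out_full = np.kron(np.eye(d_in), random_hermitian(d_out))  # acts only on the "out" factor

# The two pieces commute, so they can be diagonalized simultaneously and their
# joint eigenvalues label a product basis of the full space.
commutator = H_in_full @ H_out_full - H_out_full @ H_in_full
assert np.allclose(commutator, 0)

H = H_in_full + H_out_full   # the total Hamiltonian at one instant
```

The time dependence in HST then corresponds, in this toy picture, to `d_in` growing with time.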

The real novelty of HST is the fact that it requires multiple descriptions of what is (at any given time only partially) the same physics, with a mathematical requirement that ensures compatibility of the descriptions. Space-time *emerges* from putting together these consistent descriptions, but at the end of the day one can describe all experiments done by a given observer in terms of a single one of these standard but time dependent quantum systems.

It is NOT true that any time dependent Hamiltonian can be replaced by a time independent one. Any time independent Hamiltonian has a complete set of conserved charges, the projection operators on individual eigenstates. No time dependent operator has such a complete set, and most time dependent Hamiltonians have no conserved charges at all.
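Tom's point about conserved charges can be checked in a toy finite-dimensional example (a sketch, not anything specific to HST): for a time-independent Hamiltonian, the projector onto any eigenstate commutes with the Hamiltonian and is therefore unchanged by time evolution.

```python
# For a time-independent H, the projectors onto its eigenstates are a complete
# set of conserved charges: [H, P] = 0 and U(t) P U(t)^dagger = P.
import numpy as np

rng = np.random.default_rng(1)
m = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H = (m + m.conj().T) / 2                   # a random time-independent Hamiltonian

E, V = np.linalg.eigh(H)
P0 = np.outer(V[:, 0], V[:, 0].conj())     # projector onto one eigenstate

assert np.allclose(H @ P0 - P0 @ H, 0)     # [H, P0] = 0

t = 2.7
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T   # U(t) = exp(-iHt)
assert np.allclose(U @ P0 @ U.conj().T, P0)         # P0 is conserved
```

With an explicitly time-dependent H(t), the instantaneous eigenprojectors at different times generically fail to commute with each other, so no such complete set survives, which is the asymmetry Tom is pointing to.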

The answer to Sean’s question about “what sets the time” is that HST works in what is called, in the jargon, “a fixed physical gauge”. That is, in the limit in which the space-time geometry makes sense, HST is working with some particular set of non-intersecting complete time-like curves in the space-time. The “general coordinate invariance” of the formalism appears in the constraint that the complete descriptions of all physics in terms of the observations done by any one of a complete set of time-like “observers” are equivalent.

There is a further question one might ask: in classical GR one can think of two different complete sets of time-like observers, and the formulas of classical GR are “covariant under changes from one such set to another”. I think people who try to think naively about incorporating this idea into QM don’t pay enough attention to the fact that, even if I try to implement this idea by thinking about QFT in a fixed space-time background, I find that the Hamiltonian appropriate to one “coordinate system” (which is generally time dependent, since the coordinate time derivative is not a Killing vector) doesn’t commute, at any time, with the Hamiltonian for another. So what precisely does it mean for the physics described by these two classes of observers to be equivalent? Thinking about this in a too naive way leads to the “paradoxes” associated with Hawking radiation. HST follows the idea of black hole complementarity to its logical conclusion and builds in this INEQUIVALENCE of the local physics seen by observers using non-commuting Hamiltonians. Equivalence is only required for shared information, which both observers can access.

The answer to Mike’s question goes along the same lines. Of course I agree with the Brown Weinberg interpretation of CDL, because for me anything outside a horizon is equivalent to something inside a horizon (the principle of observer complementarity, where I use language invented by Erik Verlinde to describe a concept that Willy Fischler and I introduced in our paper on Cosmological Observables for M-theory Space-Times).

But if you follow this logic, the Hilbert space corresponding to stable dS space is finite dimensional. I know this is radical. I didn’t believe it when Susskind, Thorlacius, and Uglum introduced the idea. But it is actually a very fruitful idea, not only because it resolves the “paradoxes” of Hawking radiation, BUT BECAUSE IT ALLOWS ONE TO CONSTRUCT THE NOTION OF SPACE-TIME FROM PURELY QUANTUM CONCEPTS. Space-like separation is EQUIVALENT to commutativity, and the space-time conformal factor is reconstructed from the dimensions of Hilbert spaces.

I’m sorry if I sound like an AA mentor, a reformed addict, speaking to people who are still addicted. Effective field theory is seductively addictive. It allows us all to stop thinking. And it has helped us to forget the hard lesson taught by QM: the new concepts will look radically different from the old ones, even though they “reduce to them in appropriate circumstances”. EFT is different from addiction, in that, within its proper range of validity, it will, like Newton’s laws, give all the right answers with all the precision we need. But it’s conceptually wrong, and we have to clear it out of our heads in order to understand its limitations. THE LIMITATIONS ARE NOT SELF EVIDENT IN THE EFT APPROXIMATION AND ARE NOT PARAMETERIZED IN TERMS OF HIGHER ORDER CORRECTIONS TO AN EFFECTIVE ACTION, ANY MORE THAN QUANTUM MECHANICS IS INCORPORATED IN CORRECTIONS TO AN EFFECTIVE ACTION FOR CLASSICAL PATHS.

I concede on the difference between time-dependent and -independent Hamiltonians; that was just a mistake. I was thinking classically, where I still believe my statement; but in QM you can use the metric on Hilbert space to decide whether a Hamiltonian is truly fixed (which I think is equivalent to Tom’s statement about the conserved charges).

I’m not sure whether we are agreeing or disagreeing about the Hilbert space issue. You say “For any given observer, the HST formalism is just standard quantum mechanics in a single Hilbert space.” But that initial caveat is not part of standard quantum mechanics. It might be necessary in quantum gravity, but I think it’s still a dramatic leap. But I’m understanding more and more every time we talk about it.

I should probably have emphasized my real dissatisfaction with the model, which is that I don’t see how we are going to explain (as opposed to positing) the dramatic time-asymmetry of our observable piece of universe. You and Willy have put a lot of effort into finding a solution to the HST constraints that more or less matches the early universe we actually see. But what I don’t see is a route to explaining why it’s that solution rather than some other one. Just as one tiny example, even if you can avoid BB’s by tweaking the Hamiltonian, I don’t see a fundamental reason why we couldn’t be BBs in this scenario. (I.e., even if they’re not necessary, I don’t see that they’re impossible. Could there be fluctuations in a long-lived dS vacuum that look like us?) In a more conventional approach we have a once-and-for-all time-independent Hamiltonian, and can ask about solutions and semiclassical histories and try to figure out why we’re in one universe rather than some other one. In HST it seems that we’re just trying to find something that fits, but not explain why it’s this particular thing. It might be that I’m being unfair and just asking too much of a young and challenging formalism. Do you think there will eventually be some way to tackle this question?

Sean,

Even in classical gravity, time independent Hamiltonians are special things, which only exist rigorously for space-times with time-like asymptotic Killing vectors. If you look at the Wheeler DeWitt approach to quantum gravity (which I don’t believe is fundamentally correct, but I do think gives some correct results in the leading order semi-classical approximation) you find that the quantum mechanics one defines by expanding around a classical solution without such a Killing vector always has a time dependent Hamiltonian.

It’s only our mistaken belief that all solutions of a single low energy gravitational action are part of the same quantum theory that even leads to the hope of some sort of universal time independent Hamiltonian. This belief is shown to be wrong by all of the string theory work on AdS/CFT, Matrix Theory, etc.

I agree that HST is a radical departure. It may or may not be right. For the purposes of my present post that doesn’t matter. It is definitely a quantum theory. And it definitely has a limit where its space of states becomes a field theory Fock space. So it’s the kind of theory that should have QFT in curved space-time as an approximate description. There are also examples of HST which behave in many ways as one would expect an asymptotic dS universe to behave, but nonetheless have a finite dimensional Hilbert space.

I have to say though that my main point was that CDL just doesn’t justify all of the “approximations” that one uses to defend EI. I think EI partisans should be disturbed by that, even if HST didn’t exist at all.

I would just put this little wrench into the discussion.

I do not like BB’s either, but we have to consider that everything we “observe” must be an artifact of a computable function derived by our brain. If we think about it in these terms, one can imagine oneself as a conscious entity falling into a black hole. We are born as we cross the horizon and die when we reach the singularity. We can even understand that, under extreme acceleration, one would actually see the vacuum as a soup of particles.

The question then becomes one of why we all look the way we do. It is not hard to state that our particular outward physical appearance is completely random, that there is no particular reason that we all generally look the same, except by sheer cosmic coincidence. This question is the same one that is being asked when one thinks of initial conditions. It is always a question about privilege.

Now I am not a real fan of anthropic reasoning either, so it’s nice to think of things in terms of causal connections… that the reason I see everyone looking the same is due to a series of causal events that occur externally to my very particular state of existence. That we all look the same is because we all share a common causal connection.

If one thinks of this in terms of ancestry, it is well known that we all share some common relative who existed only a few hundred years ago (this is a different concept from evolutionary ancestry); the concept works the same, though, when we speak of horizons. There exists some common key that connects all observers’ histories in some logical way. This is what we think of as our observable horizon.

This gets back to the situation of computation. Our brain is somehow capable of computing an extremely complex solution that connects all observables in a consistent way. That should indicate that there is a fairly interesting algorithm that our brain has programmed that must be natural in some sense…at least to us.

Regardless of one’s background, this set of ideas, no doubt at the cutting edge of physics, is simply too esoteric & voluminous to stimulate any sort of meaningful discussion amongst most CV readers, & as such is more apropos to a conference than to a pop-sci magazine blog like CV.

Jimbo, quit whining. I think it’s cool to see the real hardcore stuff discussed out here in the open where us plebs can see it. (And, sadly, despite my physics background I’m pretty much a pleb in this discussion too.) I don’t understand everything they’re discussing either, but it’s nice to get a taste of how these issues are argued by the people who do understand them. It reminds me that at some point when I have time I need to study up to the level where I can grok all of this craziness myself.

I don’t think it hurts the blog to have the occasional post that’s not 100% comprehensible and appealing to everyone, as long as there’s still plenty of content which *is* at a more accessible level. And, as has so often been observed before, if you can’t tolerate a medium which doesn’t perfectly cater to your interests at every single instant, you’re welcome to go somewhere else instead.

Anyway, um, I probably shouldn’t be feeding the trolls here, should I? *sigh* I won’t be offended if this gets deleted.

Tom– I want to reiterate my agreement with your primary point, that the picture of a globally fluctuating semiclassical spacetime invoked by eternal inflation is at best highly suspicious, and at worst dangerously wrong, given what we know about holography and complementarity. That’s the more important issue here. It’s very hard to move beyond that, and HST is a brave attempt in that direction, so it’s worth thinking through carefully.

I also completely agree about what the Wheeler-de Witt equation seems to bequeath us — and that the WdW equation probably isn’t the right approach to quantum gravity. But I take a different lesson from AdS/CFT, matrix theory, etc. Just to focus on AdS/CFT for a second, I can see why someone would say “the lesson is that quantum gravity with different asymptotic boundary conditions corresponds to completely different Hamiltonians — therefore, QG with changing boundary conditions corresponds to an effectively time-dependent Hamiltonian.” But I think (or maybe merely “suspect”) that this is backwards. I would suggest the lesson is that there is a completely consistent theory of QG (with certain boundary conditions) that is *precisely* a conventional quantum theory with a time-independent Hamiltonian. It doesn’t seem to be the right theory of the universe, since we don’t seem to live in an AdS vacuum. (Although one could speculate that we don’t really know the boundary conditions.) But I would think that the most conservative hope we might have would be to look for a conventional quantum theory with a time-independent Hamiltonian which was not strongly coupled N=4 super-Yang-Mills, but some other theory with a gravity dual that actually resembles the real world.

Emphasizing repeatedly, of course, that the sum total of human knowledge on these matters is pretty small, and my own knowledge is a tiny fraction of that, so I could be completely off base.

Anne– Thanks. After years of exposure, I’m still baffled when people feel a need to comment that they are not interested in a post, rather than just getting on with their lives.

Just to clarify: BBs = Boltzmann Brains, in the way I used them.

I agree 100% with Anne, nobody is forced to read blogs or articles in a blog he does not like (had to learn this for myself too) ;-)…

And it is great that Tom Banks is giving answers and further explanations here 🙂

Sean,

I guess I don’t understand what you’re saying. The conventional approach to cosmology, in terms of which all real data is analyzed, is a QFT with a time dependent Hamiltonian.

GR associates time independent Hamiltonians with special space-times with asymptotic symmetries. All our data about the universe suggest that that is not the case for the space-time we live in.

I apologize to all of the “plebs” (Hanna’s word, not mine, I prefer to think of you all as interested non-professionals, who are the real target of this website), for being too technical. I’ve just been to two professional conferences where some of the smartest people I know said things that seemed to me to be based more on fantasy than evidence. I’ve been writing about this in technical journals for a decade, and I still find people coming up to me at conferences (and I mean really good scientists in fields slightly different than my own) with no idea of what I’ve been saying, and who even identify me as a partisan of just those ideas which I think are wrong. So I figured I’d put myself on a public record and see what happened.

Prof Banks wrote: “There is no real theory of quantum gravity in which these transitions occur (for example, almost all instabilities of Anti-de Sitter space are of this type). ”

Would you care to clarify/expand? Thanks!

I was wondering if you could explain a little bit about time-independent and time-dependent Hamiltonians, and how the differences are crucial to choosing the right QFT to work with?

What’s special about a time-independent Hamiltonian in the context of GR, and how does the current cosmological data contradict the idea that the universe is based on a time-independent Hamiltonian?

Prof Banks,

I would like to understand this better

“For any given observer, the HST formalism is just standard quantum mechanics in a single Hilbert space … Space-time {it emerges} from putting together these consistent descriptions.”

Do you refer to real observers or ‘all possible observers’ ( so that e.g. to every possible worldline there would be a different Hilbert space).

I assume it is the latter, but then how many observers are a priori possible (without spacetime having ’emerged’ yet) ?

I’ll second Anne C. Hanna’s post. This one goes way over my head. But so what?

There are way too many safety locks on information. I guess we don’t want to accidentally make someone feel a little ignorant … or feel that there might be understanding out there that one has to work for.

Tom– Maybe a glimmer of understanding is dawning on me (or maybe just a different kind of confusion). You say “The conventional approach to cosmology, in terms of which all real data is analyzed, is a QFT with a time dependent Hamiltonian.” I can think of two senses in which that’s true. What most working cosmologists who analyze data do is to look at perturbations of a time-dependent cosmological background; the Hamiltonian for those perturbations is certainly time-dependent. Alternatively, we could imagine more honest cosmologists who start from the Wheeler-de Witt equation (Hamiltonian is time-independent, but annihilates physical states) and derive an effective description that applies in a certain regime, where the effective Hamiltonian is time-dependent.

However, in both cases we think there is an underlying more complete description where nothing is fundamentally time-dependent — the action or Hamiltonian for GR plus a set of quantum fields. The time-dependence isn’t “fundamental,” it just arises within a certain regime, while the deeper description has no explicit dependence on time. Is *that* the kind of setup you are imagining in HST? If so, I’m going to be much more sympathetic. I always got the impression that the time-dependence of the dynamics was a crucial part of the story, and there wasn’t supposed to be any deeper level.

To me, the machinery of conventional QM — fixed Hilbert space, Hamiltonian, algebra of observables, Schroedinger’s equation — seems much more robust and likely to be fundamental than anything we know about gravity or curved spacetime. AdS/CFT is our best-understood example of quantum gravity, and from the QM point of view it’s utterly conventional — just a fixed Hamiltonian (N=4 SYM or whatever) operating on a fixed Hilbert space. And in a certain regime, it looks like quantum gravity. True, the gravity description is related nonlocally to the “original” variables, and also true that it’s not a gravity theory that describes the real world. But still, the QM is conventional. I am led to believe/hope that there is some other conventional QM theory that does describe the real world, even if the relationship might be even more nontrivial. It would have to have the property that, in a certain regime, it looked like fluctuations around a time-dependent RW background.

That might be impossible, and I appreciate the role of horizons in the real world. But if you think of HST as an effective description that is the right way to think about an underlying conventional QM theory without explicit time dependence, I’m more willing to think that we don’t disagree at all.

As to the technical/nontechnical thing, I think it’s great to have different posts at different levels. This reaches different people and has a different impact, but it’s an important part of the communication mosaic. Anyway, I’m learning a lot, and isn’t that what really matters?

Sean,

AdS space has a global timelike Killing vector (actually many). That’s why AdS/CFT has a time independent Hamiltonian. Some people who’ve tried to do cosmology in AdS/CFT have done it by making the Hamiltonian time dependent. I don’t think this corresponds to OUR cosmology, but those people have argued it’s some kind of cosmology. I’m going to respond to your remark about the WD eqn by ordinary email. I think the answer is too technical for CV (besides, I want to say nasty things to you I wouldn’t say in public).

For the non-experts: having or not having a time-independent Hamiltonian is equivalent to the question of whether energy is conserved. In cosmology, the total energy of the universe is not conserved. In the space-times Sean was talking about, there is a conserved energy.
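The equivalence Tom states for non-experts can be seen numerically in a toy two-level system (an illustration only, nothing cosmological): with a constant Hamiltonian the energy expectation value is conserved under Schroedinger evolution, while with an explicitly time-dependent drive the state is pushed around.

```python
# Two-level toy model: constant H conserves <H>; a time-dependent H(t) does not.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(H_of_t, psi0, dt=1e-3, steps=2000):
    """Step the Schroedinger equation, treating H as constant on each step."""
    psi = psi0.copy()
    for n in range(steps):
        E, V = np.linalg.eigh(H_of_t(n * dt))
        psi = V @ (np.exp(-1j * E * dt) * (V.conj().T @ psi))  # exp(-iH dt) psi
    return psi

psi0 = np.array([1, 0], dtype=complex)

# Constant Hamiltonian: <H> at the end equals <H> at the start.
Hc = sz + 0.5 * sx
psi_c = evolve(lambda t: Hc, psi0)
print((psi0.conj() @ Hc @ psi0).real, (psi_c.conj() @ Hc @ psi_c).real)

# Time-dependent Hamiltonian: the drive moves the state off its initial ray,
# and the instantaneous energy is not a conserved quantity.
H_t = lambda t: sz + np.sin(3 * t) * sx
psi_d = evolve(H_t, psi0)
print(abs(psi0.conj() @ psi_d))   # noticeably less than 1
```

This is of course only the textbook quantum-mechanical statement; the cosmological version involves the absence of a time-like Killing vector, as discussed above.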

I want to answer Wolfgang’s question, which is a good one. In HST we have a discrete label for an infinite number of different quantum systems. The space of these labels has a topology, specified by saying which labels are nearest neighbors of which other ones.

So far we’ve always thought of this space of labels as a cubic lattice in 3-dimensional space (we can do other dimensions easily, but mostly stick to the real world; the compact dimensions have a completely different description).

Each label specifies a different world line, and has its own time dependent quantum mechanics, which describes everything that will ever be seen of the universe by a detector traveling along that world line. Then there are overlap conditions that tell you, at each time, and for each pair of observers, how much quantum information they share, and ensure that the descriptions of that shared information are consistent. This is all stated in the language of quantum mechanics. A posteriori, one can then see that all these relations define a space-time, using the two ideas that the logarithm of the dimension of a Hilbert space is one quarter the area of the holographic screen of a causal diamond, and that operators in space-like separated places commute with each other.

Each consistent model of this type defines a different space-time and all of the observers in it.
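The first of the two ideas above (log of the Hilbert space dimension equals a quarter of the screen area in Planck units) can be turned into rough numbers. In this sketch the de Sitter horizon radius, ~1.6×10^26 m for the observed c.c., is an assumed round value, so only the exponent is meaningful:

```python
# Order-of-magnitude estimate: log(dim H) = A / 4 in Planck units for the
# de Sitter horizon of the observed cosmological constant.
import math

l_planck = 1.616e-35   # Planck length in metres
r_ds = 1.6e26          # de Sitter horizon radius in metres (assumed round value)

area = 4 * math.pi * r_ds**2          # horizon area in m^2
area_planck = area / l_planck**2      # ... in Planck units
entropy_nats = area_planck / 4        # S = A/4, i.e. log(dim H)
entropy_bits = entropy_nats / math.log(2)

print(f"log10(dim H) ~ {entropy_nats / math.log(10):.3g}")  # ~1.3e122
```

So "the dimension of the Hilbert space" is an exponential of ~10^122, consistent with the order of magnitude of the holographic information bound Tom quotes later in the thread.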

Oh, I forgot Flagitious Nebulon. Sorry. The CDL formalism I referred to is very general. It describes quantum tunneling where the initial state is either de Sitter space, the most symmetric space-time with positive cosmological constant, Minkowski space, the most symmetric space-time with zero c.c. or Anti-de Sitter space (no, dS and AdS do not annihilate when they meet 🙂 ), the most symmetric s.t. with negative c.c. As Sean said, our most sophisticated understanding of quantum gravity comes in the AdS case, and in that case you can show that almost all examples of CDL tunneling simply do not make sense, and those that do have an interpretation VERY different from the one used in the EI literature.

I should emphasize that the CDL formalism is not a theory of quantum gravity, even though it refers to quantum tunneling in which the geometry of space-time changes.

It is a guess, based on classical approximations of what such a theory would say. I think it’s remarkable that it actually gets a lot of things right, but it also gets a lot of things wrong.

As long as you’re saying nasty things about me, and not about Wheeler and de Witt. 🙂

Tom,

My naive understanding of the holographic principle leads me to the belief that the area of the screen in Planck units is the upper limit on the total information content of the volume within. If this understanding is even partially correct, can you comment on the possible relationship between the current value of the c.c. in our own observable universe and the fact that we seem to live in a universe where complex information processing is in fact occurring? If I am way off base here or this makes no sense at all, that’s fine (I will not be offended), but if you see a possible connection I’d be interested in your comments.

e.

Well, if Sean happens to have 15 hours of spare time, a translation to layspeak would be nice.

In any case, I’ll just nicely state my disagreement with the post above. Sean gave appropriate warning at the top of the post. It’s like when the MythBusters tell you not to try something at home. If you do and it goes badly for you, you have no one to blame but yourself.

Thank you very much for carrying this discussion out in the open – it is really wonderful!

I am strangely reminded of the usual debates between GR and QFT people about general approaches to quantum gravity. Strangely because the cosmologist is taking the field theory point of view and the field theorist the GR point of view :-).

For me it always seems absurd to have a global fundamental description of quantum gravity, since the essence of GR is local coordinate covariance after all. The patching of causal diamonds with consistency conditions sounds like one attempt that has really learned all the lessons from both GR and QM, and I am a bit ashamed to hear about it from a blog first.

So thank you very much for this great post (and for some interesting reading that lies ahead now).

Eliot,

The holographic bound on information, assuming we have a positive c.c. as the explanation for the accelerated expansion of the universe, is about 10^{124}. By contrast, the amount of information in the atomic structure of a human being is of order 10^{2X} (twenty-something in the exponent, where the X represents my ignorance about atomic physics and human beings). Neglecting the difference between 2 and the base of the natural logarithm, this says that if we measure the information content of the universe in bits, then it's a one with 124 zeroes after it. If we measure it in units of the information content of human beings, it's a 1 with close to a hundred zeroes after it. The number of bits that can actually be processed by our brain is much smaller than the total information content in all the states of the atoms in our bodies (most of which are states in which we're dead). So there's no problem having complex information processing in a universe obeying the information bound with the observed value of the c.c. In fact, the total information in all the matter and radiation in the universe (excluding black holes) is of order ten to the eighty-something.
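As a rough cross-check on the quoted bound, one can estimate the de Sitter entropy S = A/(4 l_p^2) directly from the measured cosmological constant (a back-of-the-envelope sketch; the constants are approximate, and depending on conventions the result lands near 10^{122-123}, the same ballpark as the figure above):

```python
import math

# Back-of-the-envelope de Sitter entropy from the measured c.c.
# (values approximate; result should land near 10^{122-123}).
LAMBDA = 1.1e-52           # cosmological constant, in m^-2 (approx.)
PLANCK_LENGTH = 1.616e-35  # in metres

horizon_radius = math.sqrt(3.0 / LAMBDA)            # de Sitter horizon radius, m
horizon_area = 4.0 * math.pi * horizon_radius ** 2  # horizon area, m^2
entropy = horizon_area / (4.0 * PLANCK_LENGTH ** 2) # S = A / (4 l_p^2)

print(f"log10(S) = {math.log10(entropy):.1f}")  # roughly 122.5
```

Comparing this with the ten-to-the-twenty-something bits in a human being reproduces the "1 with close to a hundred zeroes" counting in human units.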

Tom,

Thanks so much for taking the time to respond. Just to clarify for my own understanding, based on your numbers then, there is an enormous gap (on the order of 10^20 – 10^30) between the information generated by all of humanity and the available bits on the “screen” given our current best calculated values of the size/acceleration of our observable universe. Stated another way: We (humanity) have no quantifiable impact on the c.c.

e.