The Eternally Existing, Self-Reproducing, Frequently Puzzling Inflationary Universe

My inaugural column for Discover discussed the lightning-rod topic of the inflationary multiverse. But there’s only so much you can cover in 1500 words, and there are a number of foundational issues regarding inflation that are keeping cosmologists up at night these days. We have a guest post or two coming up that will highlight some of these issues, so I thought it would be useful to lay a little groundwork. (Post title paraphrased from Andrei Linde.)

This summer I helped organize a conference at the Perimeter Institute on Challenges for Early Universe Cosmology. The talks are online here — have a look, there are a number of really good ones, by the established giants of the field as well as by hungry young up-and-comers. There was also one by me, which starts out okay but gets a little rushed at the end.

What kinds of challenges for early universe cosmology are we talking about? Paul Steinhardt pointed out an interesting sociological fact: twenty years ago, you had a coterie of theoretical early-universe cosmologists who had come from a particle/field-theory background, almost all of whom thought that the inflationary universe scenario was the right answer to our problems. (For an intro to inflation, see this paper by Alan Guth, or lecture 5 here.) Meanwhile, you had a bunch of working observational astrophysicists, who didn’t see any evidence for a flat universe (as inflation predicts) and weren’t sure there were any other observational predictions, and were consequently extremely skeptical. Nowadays, on the other hand, cosmologists who work closely with data (collecting it or analyzing it) tend to take for granted that inflation is right, and talk about constraining its parameters to ever-higher precision. Among the more abstract theorists, however, doubt has begun to creep in. Inflation, for all its virtues, has some skeletons in the closet. Either we have to exterminate the skeletons, or get a new closet.

Inflation is a simple idea: imagine that the universe begins in a tiny patch of space dominated by the potential energy of some scalar field, a kind of super-dense dark energy. This causes that patch to expand at a terrifically accelerated rate, smoothing out the density and diluting away any unwanted relics. Eventually the scalar field decays into ordinary matter and radiation, reheating the universe into a conventional Big Bang state, after which things proceed as normal.
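
For those who like to see the equation behind the words, here is the standard statement; everything in it is textbook FRW cosmology, not tied to any particular inflationary model.

```latex
% Friedmann acceleration equation: the expansion accelerates when the
% pressure is sufficiently negative.
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right)
\quad\Longrightarrow\quad \ddot a > 0 \ \text{ requires } \ p < -\tfrac{\rho}{3}.

% For a homogeneous scalar field \phi with potential V(\phi):
\rho = \tfrac{1}{2}\dot\phi^{2} + V(\phi), \qquad
p    = \tfrac{1}{2}\dot\phi^{2} - V(\phi).

% When the potential term dominates (\dot\phi^{2} \ll V), p \simeq -\rho and
% the patch expands quasi-exponentially, a(t) \propto e^{Ht}, until the field
% rolls down and decays into ordinary matter and radiation (reheating).
```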

Note that the entire point of inflation is to make the initial conditions of our observable universe seem more “natural.” Inflation is a process, not a law of nature. If you don’t care about naturalness, and are willing to say “things just happened that way,” there is absolutely no reason to ever think about inflation. So the success or failure of inflation as a scenario depends on how natural it really is.

This raises a problem, as Roger Penrose has been arguing for years, with people like me occasionally backing him up. Although inflation does seem to create a universe like ours, it needs to start in a very particular kind of state. If the laws of physics are “unitary” (reversible, preserving information over time), then the number of states that would begin to inflate is actually much smaller than the number of states that just look like the hot Big Bang in the first place. So inflation seems to replace a fine-tuning of initial conditions with an even greater fine-tuning.

One possible response to this is to admit that inflation by itself is not the final answer, and we need a theory of why inflation started. Here, it is crucial to note that in conventional non-inflationary cosmology, our current observable universe was about a centimeter across at the Planck time. That’s a huge size by particle physics standards. In inflation, by contrast, the whole universe could have fit into a Planck volume, 10⁻³³ centimeters across, much tinier indeed. So for some people (like me), the benefit of inflation isn’t that it’s more “natural,” it’s that it presents an easier target for a true theory of initial conditions, even if we don’t have such a theory yet.
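
A back-of-the-envelope way to see where numbers like that come from (a rough scaling argument that ignores changes in the number of particle species): in the standard hot Big Bang, a comoving length just grows inversely with the temperature, so you can run our observable patch backward by pure scaling.

```latex
% Comoving lengths scale as a(t) \propto 1/T, so the size of today's
% observable universe extrapolated back to the Planck temperature is roughly
L(t_{\rm Pl}) \;\sim\; L_{\rm today}\,\frac{T_{\rm today}}{T_{\rm Pl}} ,
% which, whatever the precise prefactors, comes out dozens of orders of
% magnitude larger than the Planck length
% \ell_{\rm Pl} \approx 1.6 \times 10^{-33}\ {\rm cm}
% -- "huge by particle physics standards."
```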

But there’s another possible response, which is to appeal to eternal inflation. The point here is that most — “essentially all” — models of inflation lead to the prediction that inflation never completely ends. The vicissitudes of quantum fluctuations imply that even inflation doesn’t smooth out everything perfectly. As a result, inflation will end in some places, but in other places it keeps going. Where it keeps going, space expands at a fantastic rate. In some parts of that region, inflation eventually ends, but in others it keeps going. And that process continues forever, with some part of the universe perpetually undergoing inflation. That’s how the multiverse gets off the ground — we’re left with a chaotic jumble consisting of numerous “pocket universes” separated by regions of inflating spacetime.
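
If you want a cartoon of the bookkeeping behind that claim, here is a toy sketch (the numbers are invented for illustration, not taken from any actual inflationary model): pretend that each inflating patch turns into roughly e³ ≈ 20 daughter patches per e-fold, and that each daughter independently exits inflation with some fixed probability.

```python
# Toy bookkeeping for eternal inflation (expected values only; VOLUME_FACTOR
# and P_END are illustrative assumptions, not physics).
VOLUME_FACTOR = 20.0   # daughter patches per inflating patch per e-fold (~ e^3)
P_END = 0.1            # chance a given daughter exits inflation and reheats

inflating, pockets = 1.0, 0.0
for efold in range(1, 11):
    daughters = inflating * VOLUME_FACTOR
    pockets += daughters * P_END           # regions that end inflation: new "pocket universes"
    inflating = daughters * (1.0 - P_END)  # regions that keep inflating
    print(f"e-fold {efold:2d}: inflating ~ {inflating:.3g}, pockets so far ~ {pockets:.3g}")

# The inflating volume grows like (VOLUME_FACTOR * (1 - P_END)) ** t, so it
# never dies out as long as that factor exceeds one, even though inflation
# is constantly ending somewhere and spinning off pocket universes.
```

The point of the toy model is just the inequality at the end: as long as each inflating region spawns, on average, more than one still-inflating region, the process never terminates globally.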

It’s therefore possible to respond to the “inflation requires even more finely-tuned initial conditions than the ordinary Big Bang” critique by saying “sure, but once it starts, it creates an infinite number of smooth ‘universes,’ so as long as it starts at least once we win.” A small number (the probability of inflation starting somewhere) times infinity (the number of universes you make each time it starts) is still infinity.

But if eternal inflation offers solutions, it also presents problems, which might be worse than the original disease. These problems are at the heart of the worries that Steinhardt mentioned. Let me just mention three of them.

The one I fret about the most is the “unitarity” or “Liouville” problem. This is essentially Penrose’s original critique, updated to eternal inflation. Liouville’s Theorem in classical mechanics states that if you take a certain number of states and evolve them forward in time, you will end up with precisely the same number of states you started with; states aren’t created or destroyed. So imagine that there is some number of states which qualify as “initial conditions for inflation.” Then eternal inflation says we can evolve them forward and get a collection of universes that grows with time. The problem is that, as this collection grows, there is an increasing number of states that look identical to them, but which didn’t begin with a single tiny inflating patch at all. (Just like an ice cube in a glass of water will evolve to a glass of cooler water, but most glasses of cool water didn’t start with an ice cube in them.) So while it might be true that you can generate an infinite number of universes, at the same time the fraction of such states that actually began in a single inflating patch goes to zero just as quickly. It is far from clear that this picture actually increases the probability that a universe like ours started from inflation.
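
(For reference, the theorem being invoked: Hamiltonian evolution conserves the volume of any region of phase space, so states are neither created nor destroyed; the quantum-mechanical counterpart is just unitarity.)

```latex
% Liouville's theorem: the phase-space density \rho(q, p, t) is constant
% along trajectories generated by the Hamiltonian H,
\frac{d\rho}{dt} \;=\; \frac{\partial\rho}{\partial t} + \{\rho, H\} \;=\; 0 ,
% equivalently, the phase-space volume occupied by any set of states is
% preserved under time evolution.
```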

There is an obvious way out of this challenge, which is to say that all of these “numbers of states” are simply infinite, and this purported calculation just divides infinity by infinity and gets nonsense. And that’s very plausibly true! But if you reject the argument that universes beginning with inflation are an infinitesimally small fraction of all the universes, you are not allowed to accept the argument that there’s some small probability inflation starts and once it does it makes an infinite number of universes. All you can really do is say “we can’t calculate anything.” Which is fine, but we are left without a firm reason for believing that inflation actually solves the naturalness problems it was intended to solve.

A second problem, much more celebrated in the recent cosmological literature and closely related to the first, is known as the measure problem. (Not to be confused with the “measurement problem” in quantum mechanics, which is completely different.) The measure problem isn’t about the probability that inflation starts; it assumes that it does, and tries to calculate probabilities within the infinite ensemble of universes that eternal inflation creates. The problem is that we would like to calculate probabilities by simply counting the fraction of things that have a certain property — but here we aren’t sure what the “things” are that we should be counting, and even worse we don’t know how to calculate the fraction. Say there are an infinite number of universes in which George W. Bush became President in 2000, and also an infinite number in which Al Gore became President in 2000. To calculate the fraction N(Bush)/N(Gore), we need to have a measure — a way of taming those infinities. Usually this is done by “regularization.” We start with a small piece of universe where all the numbers are finite, calculate the fraction, and then let our piece get bigger, and calculate the limit that our fraction approaches. The problem is that the answer seems to depend very sensitively on how we do that procedure, and we don’t really have any justification at all for preferring one procedure over another. Therefore, in the context of eternal inflation, it’s very hard to predict anything at all.
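
To see how sensitive the answer can be to the regularization, here is a toy illustration (emphatically not one of the measures actually proposed in the cosmological literature, just arithmetic): take one countably infinite collection of universes, label the even-numbered ones “Bush” and the odd-numbered ones “Gore,” and compute the Bush fraction under two different cutoff schemes.

```python
from fractions import Fraction

def natural_order():
    """Enumerate the positive integers 1, 2, 3, ...: one cutoff scheme."""
    n = 1
    while True:
        yield n
        n += 1

def skewed_order():
    """Enumerate the SAME positive integers, but list two even numbers for
    every odd one: 2, 4, 1, 6, 8, 3, ...: a different cutoff scheme."""
    next_even, next_odd = 2, 1
    while True:
        yield next_even
        next_even += 2
        yield next_even
        next_even += 2
        yield next_odd
        next_odd += 2

def bush_fraction(enumeration, cutoff):
    """Fraction of 'Bush' universes (toy labelling rule: even numbers) among
    the first `cutoff` universes of the given enumeration."""
    first = [next(enumeration) for _ in range(cutoff)]
    bush = sum(1 for label in first if label % 2 == 0)
    return Fraction(bush, cutoff)

for cutoff in (10, 1_000, 100_000):
    print(cutoff,
          float(bush_fraction(natural_order(), cutoff)),  # tends to 1/2
          float(bush_fraction(skewed_order(), cutoff)))   # tends to 2/3
```

Both enumerations run through exactly the same infinite set of universes; only the order in which the cutoff sweeps them up differs, and the two limits disagree (1/2 versus 2/3). The measure problem is that, for eternal inflation, nobody has a compelling argument for which regularization nature actually uses.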

This quick summary is somewhat unfair, as a number of smart people have tried very hard to propose well-defined measures and use them to calculate within eternal inflation. It may be that one of these measures is simply correct, and there’s actually no problem. Or it may be that the measure problem is a hint that eternal inflation just isn’t on the right track.

The final problem is what we might call the holography/complementarity problem. As I explained a while ago, thinking about black hole entropy has led physicists to propose something called “horizon complementarity” — the idea that one observer can’t sensibly talk about things that are happening outside their horizon. When applied to cosmology, this means we should think locally: talk about one or another pocket universe, but not all of them at the same time. In a very real sense, the implication of complementarity is that things outside our horizon aren’t actually real — all that exists, from our point of view, are degrees of freedom inside the horizon, and on the horizon itself.

If something like that is remotely true, the conventional story of eternal inflation is dramatically off track. There isn’t really an infinite ensemble of pocket universes — or at least, not from the point of view of any single observer, which is all that matters. This helps with the measure problem, obviously, since we don’t have to take fractions over infinitely big ensembles. But one would be right to worry that it brings us back to where we started, wondering why inflation really helps us solve naturalness problems at all.

Personally I suspect (i.e. would happily bet at even money, if there were some way to actually settle the bet) that inflation will turn out to be “right,” in the sense that it will be an ingredient in the final story. But these concerns should help drive home how far away we are from actually telling that story in a complete and compelling way. That should hardly come as a surprise, given the remoteness from our view of the events we’re trying to describe. But the combination of logical consistency and known physics is extremely powerful, and I think there’s a good chance that we’re making legitimate progress toward understanding the origin of the universe.

59 thoughts on “The Eternally Existing, Self-Reproducing, Frequently Puzzling Inflationary Universe”

  1. Moshe, I understand this point. My question is then specific: what is the history of the early universe if these patterns are treated as coincidences? If the answer is specific enough, it can be compared with a model having the basic features of inflation, like any two scientific hypotheses. In principle, inflation could still come out the winner, without invoking “naturalness” as one of the judgment criteria. So does it or not? I was under the impression that inflationary models still come out ahead. But then Sean’s statement is puzzling. That’s what I want to clarify.

  2. Igor, I am not sure I understand. We have an initial value problem, so today’s observations are determined once you specify an initial state at some time in the distant past. If you specify the time to be the beginning of the big bang evolution, with the correct but very contrived initial state (nearly homogeneous with just the right kind of fluctuations) then you get no conflict with observation. By construction, the same applies to inflation, because it reproduces that initial state and all subsequent evolution. The only point of inflation is to make that initial state the outcome of prior evolution. By construction all current observations will then be identical, but the initial state will be more natural and less contrived. As I understand Sean’s statement, quantifying this intuitive notion of naturalness is tricky, and it is not always clear inflation indeed comes out ahead. I hope I am not mangling things…

    And, for the record, in my mind the notion of “naturalness” is one instance of “algorithmic compression”, which is the whole point of seeking a scientific explanation. Without invoking such criteria, by definition (for example) the Particle Data Group review book would always be the best “theory” of particle physics, and you’d never need to learn about gauge theories and spontaneous symmetry breaking and all that stuff.

  3. Basically, having too many particles is just as bad as having too few particles. From my reading it appears that supersymmetry requires going to a non-alternative algebra. That in itself might be a signal that one has too many particles, but I have not seen an argument one way or another about whether this is a criterion to determine what can exist. One might consider that if Nature is non-alternative then Ordinary Arithmetic would likely require another rule beyond being commutative and associative. Such notions do have consequences because of the intimate connection of algebra and ordinary arithmetic. One might expect that mathematicians would have detected a need for such a rule if it is pervasive in Nature, as it would have to be if supersymmetry prevails. It is a shame that we do not have an Algebraic Theory of the Structure of Matter so we could just look up whether Inflatons are there or not.
    Moshe (27), I would argue that Naturalness depends a whole lot on whether it makes algebraic sense … algorithms depending rather heavily on algebra – see especially Freeman Dyson’s paper on the S-matrix in Quantum Electrodynamics where he points out “Therefore the absence of ambiguity in the rules of calculation of U(infinity) is achieved by introducing into the theory what is really a new physical hypothesis, namely that the electron-positron field always acts as a unit and not as a combination of two separate fields.” It looks to me like Dirac Algebra is what binds the spin reversal and time reversal into a nice field. The question is why there is anything for Dirac algebra to bind together in the first place! Since QED is the best tested theory one would certainly ask whether non-alternativity might possibly mess up this nice arrangement.

  4. Igor– I think Moshe has it right. Without inflation, the history of the early universe is just the conventional hot Big Bang, all the way back to the singularity. With inflation, it’s the same thing, except there is a period of inflation and reheating. The comparison between these two theories is not in terms of fitting the data; both fit the data perfectly well. It’s not even in terms of simplicity or algorithmic compression; saying “a hot, smooth, flat universe with scale-free adiabatic perturbations” isn’t obviously more complicated than saying “a state dominated by the potential energy of a scalar with a very smooth potential.” The only possible criterion is in terms of naturalness — the original motivation for inflation was to say that states like this happen all the time. But that’s just not true; it relies on an overly simplistic way of counting.

    Russ– We always look back in time when we look out in space (even here on Earth, forget about cosmology), just because the speed of light is finite. You might be thinking of the fact that inflation provides a mechanism for creating perturbations that stretch farther than the naive light-travel-distance at the moment we see them. (Sometimes misnamed “super-horizon modes.”) It does that, of course, by changing how far light can travel, in turn by changing the expansion history of the universe.

  5. Sean, my comment about algorithmic compression is that since the initial state for the hot big bang has small entropy, it suggests that it can be “compressed”. That is, a language can be found in which it is a generic state; whether or not inflation does that is a different story. This seems to me what is meant by “naturalness” in this context. I don’t see how this is any different from trying to express regularities such as the Hubble law in terms of prior evolution such as the hot big bang.

  6. Moshe, sorry, I don’t get that. Why does “low entropy” mean “compressible”? The hot BB state is “simple,” which is really just a synonym for “low algorithmic complexity,” but so is the high-entropy de Sitter vacuum. There are many non-simple states that would still have low entropy. So I’m misunderstanding something there. And I’m not sure what “a language can be found in which it is a generic state” is supposed to mean. Neither the hot BB nor inflation is “generic” in the sense of “corresponding to a large volume in phase space,” although I suppose they might both be generic if you mean “corresponding to a large volume in phase space subject to some simple macroscopic constraints.” But I’m not sure if that is what you mean.

  7. This is not something I thought about very deeply, so I might just be mixing up two or more notions of entropy. As far as I remember the Shannon entropy of a message tells you how much it can be compressed. In the maximally compressed language, the same message can be conveyed using an order S string of equally likely 0 and 1 symbols. In that compressed language the message is generic in the sense of having maximal entropy. So, in that spirit, low Shannon entropy is what I would then try to use to quantify what is meant by “simple”. Expressing the same state as being generic (in a smaller phase space) is what I would call “compression”. Sorry if this is completely off the mark.

    It may just be that this notion of complexity is not the same as physical entropy, but it seems to me at least related whenever we understand the entropy as resulting from coarse graining in a well understood phase space. Horizon entropy is different, in that we don’t have a complete specification of the state in semiclassical gravity. I’d expect that once we have a complete description of the microstates corresponding to semiclassical de Sitter space, it will no longer be “simple” in either the colloquial or any more precise sense, precisely because it is a high entropy state.

    Anyhow, sorry to divert the discussion to my own confusions. I think we agree on the main points.
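
To put a number on the compression statement in the comment above: the Shannon entropy of a message’s symbol distribution, in bits per symbol, bounds how short a lossless encoding of it can be on average. A minimal sketch in Python (the example strings, and the use of zlib as a stand-in compressor, are purely illustrative):

```python
import math
import zlib
from collections import Counter

def empirical_entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy of the single-symbol frequency distribution, in bits."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A low-entropy (very repetitive) message versus a higher-entropy one.
low = "aaaaabaaaaabaaaaab" * 200
high = "the quick brown fox jumps over the lazy dog " * 80

for name, msg in [("low-entropy", low), ("high-entropy", high)]:
    h = empirical_entropy_bits_per_symbol(msg)
    ideal_bits = h * len(msg)                          # Shannon bound for this symbol model
    zlib_bits = 8 * len(zlib.compress(msg.encode()))   # what a real compressor achieves
    print(f"{name}: H = {h:.2f} bits/symbol, "
          f"ideal ~ {ideal_bits:.0f} bits, zlib ~ {zlib_bits} bits "
          f"(raw: {8 * len(msg)} bits)")
```

A low-entropy (highly regular) message compresses down to a short string of effectively random bits, which is the sense in which it becomes “generic” in the compressed language.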

  8. If quantum gravity can be formulated totally within quantum field theory, then is inflation more-or-less mandatory? Otherwise, how can the magnetic monopole problem be resolved?

  9. Moshe and Sean, thanks a lot, I think I understand the basic point now. It is unfortunate that the existing data does not really allow us to distinguish between models with and without an early phase of accelerated expansion. I do hope that will eventually become possible with more data, since the two alternatives are really physically different and should hence differ significantly in at least some observations.

    BTW, when I was referring to not appealing to “naturalness” I did not mean to throw out the whole notion of identifying patterns and regularities in natural phenomena. I was referring to the more narrow idea of “naturalness” as usually applied to the values of fundamental constants and initial conditions. This latter notion, IMHO, is based on much more tenuous reasoning.

    For what it’s worth, here’s an example of an argument that I think supports inflation without appealing to naturalness. The scalar field + slow roll hypothesis explains the CMB homogeneity (via accelerated expansion). A second, somewhat independent, line of evidence that supports the same hypothesis is the spectrum of CMB inhomogeneities (via quantum fluctuations). Perhaps another supporting line of evidence is the statistics of mass distribution in large scale structure. It is the hallmark of a good hypothesis to be supported by multiple independent lines of evidence. Of course, this argument may not be valid if I am misrepresenting the logic behind the modeling, or the independence of these lines of evidence. I hope someone will correct me if that’s the case.

  10. 33 David:

    1) Quantum field theory does not require that magnetic monopoles exist. They are essentially unavoidable in GUTs, but our passion for unification is not necessitated by QFT.

    2) QFT probably cannot describe non-perturbative quantum gravity. Gravity is a theory of space and time, of classical manifolds. You want a quantum gravity theory which somehow manages to generate these things. Or at least, I suppose that is what I want? That might not count for much. Asymptotic Safety scenarios inspire some optimism that it might actually work, and I certainly wouldn’t make a wager against Weinberg on questions of QFT.

  11. Igor, perhaps there is a point in one more iteration, so briefly:

    The two scenarios, hot big bang all the way, or hot big bang preceded by inflation, have exactly the same state at one Cauchy surface (by definition), and exactly the same subsequent evolution (after reheating). Therefore they will give exactly the same answer for every cosmological observation ever performed, past or future; more data is not the issue.

    Same is true for a third scenario in which the universe started 6000 years ago in one particular state on that particular Cauchy surface. If you choose that state correctly, which involves tuning an incredible amount of data to be “just so”, you will also reproduce all cosmological observations, past and future. If you think that third scenario is obviously inferior to the standard hot big bang, try to find reasoning that does not rely on any notion of naturalness.

    (I also now realize that I muddied the water by talking above about algorithmic compression, whereas I simply meant compression, not really in the Kolmogorov sense. Apologies, mainly to Sean.)

  12. Is there some simplest possible form of inflation that can be assumed to be the same for each universe in the string landscape? Are there severe problems arising from string landscape + inflation landscape?

  13. I think you said something very interesting about the Planck scales; it would be “natural” if the universe at the Planck time scale was also of the Planck length. I think it would be more natural too, but I also think that physicists should wonder about why they think that way in the first place.

    In fact, if the universe were not so symmetrically organized at the very early stages, i.e., a much larger size relative to its age, it would enable you to ask and answer questions that would otherwise be impossible at that moment. In the symmetrical arrangement, asking any question about space at the Planck time gets the answer, “It’s the Planck scale, stupid.” But in the non-inflationary picture, the first instant at which it becomes meaningful to ask a question necessarily has a multiplicity of places in which to ask it, and hence a multiplicity of potential answers.

    So for the sake of argument, if you hypothesize that at the beginning of time, there was already space, there might be some logical or measurable consequence to it, and hence a way to verify inflation.

  14. Moshe, perhaps this iteration brings to light an actual disagreement. I have to disagree with what you claim in principle, though you may actually be right in practice. I prefer to clearly separate these two issues.

    In principle, a physical model with a fundamental scalar field is different from a model without one. If we had access to arbitrary observations, we could distinguish between them even today, without relying on cosmological observations. For instance, this field would contribute extra fluctuations to the vacuum, and that’s just off the top of my head. I’m sure that something similar can be said about distinguishing a spacetime metric with an early phase of accelerated expansion, and one without. In practice, the data needed to make the distinction may not be available to us, now or in the foreseeable future. But I’d like to clearly separate the availability of such data from their possibility.

  15. Igor– I don’t think that’s true. We might be able to someday create a particle that could have been the inflaton (although it wouldn’t be easy). But you can’t, even in principle, say with certainty whether inflation happened, unless you literally knew the exact quantum state of the entire universe and all of the fields up to the inflationary energy scale.

    The reason is that inflation is an event, not a dynamical law. Given any current configuration of the universe, vacuum fluctuations or whatever, we can simply use the laws of physics to evolve that backward in time, and we’re bound to end up somewhere. That somewhere could be taken as an initial condition.

  16. Pingback: Guest Post: Tom Banks Contra Eternal Inflation | Cosmic Variance | Discover Magazine

  17. Igor, just augment the big bang with an inflaton, a scalar field with a suitable potential, but imagine it does not actually go through a slow roll, it just starts at the stable minimum. The larger point remains: for any finite number of measurements, there are always infinitely many model fits, and you always decide between them based on something like naturalness.

  18. There has to be some better explanation than inflation. And this whole infinite universe thing is questionable. The infinities become impossible. When a universe is created, however does it happen if the speed of light is law? As I’m no physicist you’re probably right. Just seems rather infinite. Before light existed, was the speed of light law?

  19. Defending the Country

    “Why does “low entropy” mean “compressible”? The hot BB state is “simple,” which is really just a synonym for “low algorithmic complexity,” but so is the high-entropy de Sitter vacuum.”

    There is confusion here between macroscopic and microscopic variables. If I look at an object in a macroscopic sense, then the object looks more complex at the midpoint between equilibrium states because there is a lot of surface variation that is permitted when it is out of equilibrium. So a lot of additional variables are required to explain its macroscopic appearance when it is furthest away from equilibrium.

    When one considers the object microscopically, what changes is the domain of the microscopic variables. The range of those variables can now have more values as one moves from higher to lower entropy states. It becomes more and more difficult to find ways to describe the actual state without using the entire domain, and the values become more and more random. Once one gets to the highest entropy state, one needs to know the value for every single variable in order to replicate the state.

    If one wants to think about it graphically, one can think about it the same way one thinks about probability distribution functions (pdf) and cumulative distribution functions (cdf). As one builds a cdf from a pdf by summing as one moves from left to right, one can think of an analogous function of the rate of entropy growth. One would expect that the rate of entropy growth reaches its maximum when the system is furthest away from equilibrium. This is what Sean seems to think is analogous to complexity.

  20. Thank you Sean. It will take me months to digest this and your subsequent guest post and I will never completely understand this, but I will still somehow be enriched. Please keep it up.

  21. This sort of thing makes me feel gypped by the information I, as a layperson, am indoctrinated with. Why is there not more alternative representation? I’d like to be less surprised at the level of doubt. It’s like if someone were to tell me that actually only 51 percent of climatologists believe in global warming. And I’m a believer. Meaning, I freaking take in what I’m told and I try to keep skepticism, but you know, I’m told what I’m told, and unless I want to become an astrophysicist then it seems I’m told a lot of doubtful information. That bothers me, and seems unethical. And the last thing I am willing to accept from a magazine is “we’re just journalists”. Where are your standards?

    Disappointed. Why the fkcu should I believe what I read here if this is basically a giant admission of guilt about the fact that we’ve been fed an overly simplistic and optimistic view. Ugh.

  22. Sean, Moshe, thanks again for your replies. But now I think we’re starting to get into semantics. Yes, I agree with both of you that (to paraphrase) we are not Laplace’s demon and never will be. But to claim that two physical models cannot ever be distinguished by our observations requires a stronger argument than the brief one of the previous sentence. If such an argument can be made, then I just don’t see how these models could be distinguished by the scientific method. I am though still optimistic that the hot Big Bang with and without inflation will be distinguishable with future observations. Though that may change as my understanding of the situation improves.

  23. I think inflation is a natural consequence of a very uniform (high entropy) late stage universe, which in effect then rescales or “zooms in” precisely until slight asymmetries become evident in some sense and act like sand in the works to bring the expansion grinding to a halt.

    In other words, the asymmetries are actually what curbs the inflation. The latter represents a vast number of possibilities, which the slightest asymmetries suddenly reduce by many orders of magnitude.

    Conversely, on that basis, a reduction of possibilities should correspond to a convergent tendency. In extreme cases this is manifested in a black hole, which obviously reduces future possibilities to the maximum extent for any energy that crosses the event horizon.

    This is in keeping with the notion I have sketched here in the past of energy quanta in a constant and rapid cycle of dissipation and convergence (both superluminal, like inflationary space itself, but somehow of course without violating GR).

    This picture also accounts nicely for holography, since any energy interaction in a region is faithfully mirrored on any closed boundary around it by cycles of energy quanta crossing and recrossing that boundary – albeit possibly, presumably, with some contribution or “cross-talk” from energy outside.

    Finally, it might be asked why an increasing entropy, and thus a tendency toward uniformity, doesn’t cause a constant expansion. Well, there is “dark energy”, which is precisely that. So presumably the difference between a Hoyle-style steady state (constantly rescaling) universe versus the (supposed) punctuated eternal inflation which actually occurs is simply a measure of how much the slightest inhomogeneities put a spanner in the works and slow the expansion to a crawl.

    But once entropy peaks, in the distant future, perhaps 10^100 years from now, inflation will get up to full steam again for a while, like a considerate driver who slows right down in towns, but floors it to rush as fast as possible through the featureless highways between.

    God, this post has become much longer than I intended – Hopefully it doesn’t sound too repetitive, or kooky 😉

  24. Sean, In answer to my question about whether inflation is needed to look back in time when we look out in space, you said that we always look back in time. I guess it was foolish of me to ignore that. But I still have a problem.

    If the Universe started with a big-bang event, doesn’t that imply that it all happened at one place? How then is it possible to look out in space to where we had been (or at least toward where we had been) and see light that was created when we were there? How can we outrun light in a way that we can turn around and watch it catching up to us?

    To take what’s probably a naively wrong simplification, suppose the universe were 2-dimensional. If it’s an expanding circle, perhaps we can look at other points on the circle and see them as they were at a time closer to the big bang. But isn’t there a maximum amount by which we can look back, i.e., when looking at the point on the direct opposite side of the circle?

    If we fix our gaze at that point, we see its history as it moves forward in time. What then is the earliest moment for that point that we could ever have seen? Wouldn’t it be the moment of the big bang when the circle was a point? From that moment on we would be receiving light from that point that left later and later after the big bang. So I still don’t understand how to think about looking arbitrarily far back in time. If we were going to look back and see the big bang itself, wouldn’t that require that we outrun light? If that’s true, mustn’t there be a limit to how far we can look back?
