Time

The Biggest Ideas in the Universe | 20. Entropy and Information

You knew this one was coming, right? Why the past is different from the future, and why we seem to flow through time. Also a bit about how different groups of scientists use the idea of “information” in very different ways.

[Video: The Biggest Ideas in the Universe | 20. Entropy and Information]

And here is the associated Q&A video:

[Video: The Biggest Ideas in the Universe | Q&A 20 - Entropy and Information]

The Biggest Ideas in the Universe | 6. Spacetime

This week’s edition of The Biggest Ideas in the Universe completes a little trilogy, following as it does 4. Space and 5. Time. The theory of special relativity brings these two big ideas into one unified notion of four-dimensional spacetime. Learn why I don’t like talking about length contraction and time dilation!

[Video: The Biggest Ideas in the Universe | 6. Spacetime]

And here’s the Q&A video:

[Video: The Biggest Ideas in the Universe | Q&A 6 - Spacetime]

The Biggest Ideas in the Universe | 5. Time

For this installment of The Biggest Ideas in the Universe, we turn to one of my favorite topics, Time. (It’s a natural followup to Space, which we looked at last week.) There is so much to say about time that we have to judiciously choose just a few aspects: are the past and future as real as the present, how do we measure time, and why does it have an arrow? But I suspect we’ll be diving more deeply into the mysteries of time as the series progresses.

[Video: The Biggest Ideas in the Universe | 5. Time]

And here is the associated Q&A video, where I sneak in a discussion of Newcomb’s Paradox:

[Video: The Biggest Ideas in the Universe | Q&A 5 - Time]

What Happened at the Big Bang?

I had the pleasure earlier this month of giving a plenary lecture at a meeting of the American Astronomical Society. Unfortunately, as far as I know they don’t record the lectures on video. So here, at least, are the slides I showed during my talk. I’ve been a little hesitant to put them up, since some subtleties are lost if you only have the slides and not the words that went with them, but perhaps it’s better than nothing.

My assigned topic was “What We Don’t Know About the Beginning of the Universe,” and I focused on the question of whether there could have been space and time even before the Big Bang. Short answer: sure there could have been, but we don’t actually know.

So I did two things to fill my time. First, I talked about different ways the universe could have existed before the Big Bang, classifying models into four possibilities (see Slide 7):

  1. Bouncing (the universe collapses to a Big Crunch, then re-expands with a Big Bang)
  2. Cyclic (a series of bounces and crunches, extending forever)
  3. Hibernating (a universe that sits quiescently for a long time, before the Bang begins)
  4. Reproducing (a background empty universe that spits off babies, each of which begins with a Bang)

I don’t claim this is a logically exhaustive set of possibilities, but most semi-popular models I know fit into one of the above categories. Given my own way of thinking about the problem, I emphasized that any decent cosmological model should try to explain why the early universe had a low entropy, and suggested that the Reproducing models did the best job.

My other goal was to talk about how thinking quantum-mechanically affects the problem. There are two questions to ask: is time fundamental or emergent, and is Hilbert space finite- or infinite-dimensional? If time is fundamental, the universe lasts forever; it doesn’t have a beginning. But if time is emergent, there may very well be a first moment. In that case, if Hilbert space is finite-dimensional a first moment is necessary (a finite-dimensional Hilbert space contains only a finite number of mutually distinguishable states, so only a finite number of moments of time can possibly emerge), while if it’s infinite-dimensional the question remains open.

Despite all that we don’t know, I remain optimistic that we are actually making progress here. I’m pretty hopeful that within my lifetime we’ll have settled on a leading theory for what happened at the very beginning of the universe.

Entropy and Complexity, Cause and Effect, Life and Time

Finally back from Scotland, where I gave a series of five talks for the Gifford Lectures in Glasgow. The final four, at least, were recorded, and should go up on the web at some point, but I’m not sure when.

Meanwhile, I had a very fun collaboration with Henry Reich, the wizard behind the Minute Physics videos. Henry and I have known each other for a while, and I previously joined forces with him to talk about dark energy and the arrow of time.

This time, we made a series of five videos (sponsored by Google and Audible.com) based on sections of The Big Picture. In particular, we focused on the thread connecting the arrow of time and entropy to such everyday notions as cause and effect and the appearance of complex structures, ending with the origin of life and how low-entropy energy from the Sun powers the biosphere here on Earth. Henry and I wrote the scripts together, based on the book; I read the narration, and of course he did the art.

Enjoy!

  1. Why Doesn’t Time Flow Backwards? (Big Picture Ep. 1/5)
  2. Do Cause and Effect Really Exist? (Big Picture Ep. 2/5)
  3. Where Does Complexity Come From? (Big Picture Ep. 3/5)
  4. How Entropy Powers the Earth (Big Picture Ep. 4/5)
  5. What Is the Purpose of Life? (Big Picture Ep. 5/5)

Entropic Time

A temporary break from book-related blogging to bring you this delightful video from A Capella Science, in which Tim Blais sings about entropy while apparently violating one of my favorite laws of physics. I don’t even want to think about how much work this was to put together.

[Video: Entropic Time (Backwards Billy Joel Parody) | A Capella Science]

Tim was gracious enough to tip his hat to a lecture of mine as partial inspiration for the video. And now that I think about it, entropy and the arrow of time play crucial roles in The Big Picture. So this is a book-related blog post after all! Had you fooled.

The Bayesian Second Law of Thermodynamics

Entropy increases. Closed systems become increasingly disordered over time. So says the Second Law of Thermodynamics, one of my favorite notions in all of physics.

At least, entropy usually increases. If we define entropy by first defining “macrostates” — collections of individual states of the system that are macroscopically indistinguishable from each other — and then taking the logarithm of the number of microstates in each macrostate, as portrayed in this blog’s header image, then we don’t expect entropy to always increase. According to Boltzmann, the increase of entropy is just really, really probable, since higher-entropy macrostates are much, much bigger than lower-entropy ones. But if we wait long enough — really long, much longer than the age of the universe — a macroscopic system will spontaneously fluctuate into a lower-entropy state. Cream and coffee will unmix, eggs will unbreak, maybe whole universes will come into being. But because the timescales are so long, this is just a matter of intellectual curiosity, not experimental science.
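In equations: if W is the number of microstates making up a given macrostate, the Boltzmann entropy of that macrostate is

S = k \log W,

where k is Boltzmann’s constant. (This is the formula famously engraved on Boltzmann’s tombstone, mentioned again below.)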

That’s what I was taught, anyway. But since I left grad school, physicists (and chemists, and biologists) have become increasingly interested in ultra-tiny systems, with only a few moving parts. Nanomachines, or the molecular components inside living cells. In systems like that, the occasional downward fluctuation in entropy is not only possible, it’s going to happen relatively frequently — with crucial consequences for how the real world works.

Accordingly, the last fifteen years or so have seen something of a revolution in non-equilibrium statistical mechanics — the study of statistical systems far from their happy resting states. Two of the most important results are the Crooks Fluctuation Theorem (by Gavin Crooks), which relates the probability of a process forward in time to the probability of its time-reverse, and the Jarzynski Equality (by Christopher Jarzynski), which relates the change in free energy between two states to the average amount of work done on a journey between them. (Professional statistical mechanicians are so used to dealing with inequalities that when they finally do have an honest equation, they call it an “equality.”) There is a sense in which these relations underlie the good old Second Law; the Jarzynski Equality can be derived from the Crooks Fluctuation Theorem, and the Second Law can be derived from the Jarzynski Equality. (Though the three relations were discovered in reverse chronological order from how they are used to derive each other.)
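For reference, here are the standard statements of the two results (quoted here for convenience; the post itself doesn’t spell them out). For a process in which work W is done on a system at temperature T, with ΔF the free-energy difference between the final and initial states:

P_F(+W) / P_R(-W) = e^{(W - \Delta F)/k_B T} (Crooks)

\langle e^{-W/k_B T} \rangle = e^{-\Delta F/k_B T} (Jarzynski)

Integrating the Crooks relation over W yields the Jarzynski Equality, and applying Jensen’s inequality \langle e^{-x} \rangle \geq e^{-\langle x \rangle} to the Jarzynski Equality gives \langle W \rangle \geq \Delta F, one way of stating the Second Law.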

Still, there is a mystery lurking in how we think about entropy and the Second Law — a puzzle that, like many such puzzles, I never really thought about until we came up with a solution. Boltzmann’s definition of entropy (logarithm of number of microstates in a macrostate) is very conceptually clear, and good enough to be engraved on his tombstone. But it’s not the only definition of entropy, and it’s not even the one that people use most often.

Rather than referring to macrostates, we can think of entropy as characterizing something more subjective: our knowledge of the state of the system. That is, we might not know the exact position x and momentum p of every atom that makes up a fluid, but we might have some probability distribution ρ(x,p) that tells us the likelihood the system is in any particular state (to the best of our knowledge). Then the entropy associated with that distribution is given by a different, though equally famous, formula:

S = - \int \rho \log \rho.

That is, we take the probability distribution ρ, multiply it by its own logarithm, and integrate the result over all the possible states of the system, to get (minus) the entropy. A formula like this was introduced by Boltzmann himself, but these days is often associated with Josiah Willard Gibbs, unless you are into information theory, where it’s credited to Claude Shannon. Don’t worry if the symbols are totally opaque; the point is that low entropy means we know a lot about the specific state a system is in, and high entropy means we don’t know much at all.
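To make the formula concrete, here is a minimal numerical sketch (my own illustration, not from the post), using a discrete distribution so that the integral becomes a sum:

```python
import numpy as np

def gibbs_entropy(p):
    """Gibbs/Shannon entropy S = -sum(p log p), in nats.

    Zero-probability states contribute nothing, since p log p -> 0 as p -> 0.
    """
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

# Low entropy: we are nearly certain which of four states the system is in.
print(gibbs_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.17 nats

# High entropy: complete ignorance over the same four states.
print(gibbs_entropy([0.25, 0.25, 0.25, 0.25]))  # log(4) ~ 1.39 nats
```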

In appropriate circumstances, the Boltzmann and Gibbs formulations of entropy and the Second Law are closely related to each other. But there’s a crucial difference: in a perfectly isolated system, the Boltzmann entropy tends to increase, but the Gibbs entropy stays exactly constant. (That’s a consequence of Liouville’s theorem: Hamiltonian evolution just shuffles the distribution ρ around phase space without changing the set of values it takes.) In an open system — allowed to interact with the environment — the Gibbs entropy will go up, but it will only go up. It will never fluctuate down. (Entropy can decrease through heat loss, if you put your system in a refrigerator or something, but you know what I mean.) The Gibbs entropy is about our knowledge of the system, and as the system is randomly buffeted by its environment we know less and less about its specific state. So what, from the Gibbs point of view, can we possibly mean by “entropy rarely, but occasionally, will fluctuate downward”?

I won’t hold you in suspense. Since the Gibbs/Shannon entropy is a feature of our knowledge of the system, the way it can fluctuate downward is for us to look at the system and notice that it is in a relatively unlikely state — thereby gaining knowledge.
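Here is a toy version of that whole story (again my own sketch, with made-up numbers): random buffeting by the environment drives the Gibbs entropy up, and Bayesian conditioning on a measurement outcome brings it back down.

```python
import numpy as np

def entropy(p):
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

# "Buffeting" by the environment: a lazy random walk on a ring of N sites.
# The transition matrix is doubly stochastic, so this entropy never decreases.
N = 20
T = np.zeros((N, N))
for i in range(N):
    T[i, i] = 0.5
    T[i, (i + 1) % N] = 0.25
    T[i, (i - 1) % N] = 0.25

p = np.zeros(N)
p[0] = 1.0                     # initially we know the exact state: S = 0
for _ in range(30):
    p = T @ p                  # each step, our knowledge degrades
print("after buffeting:  ", entropy(p))   # approaches log(20) ~ 3.0

# Now "look at the system": a coarse measurement reports which half of the
# ring the particle occupies. Conditioning on the outcome sharpens p.
left = np.arange(N) < N // 2
p_m = np.where(left, p, 0.0)
p_m /= p_m.sum()               # Bayesian update, given the outcome "left"
print("after measurement:", entropy(p_m))  # the entropy has gone down
```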

But this operation of “looking at the system” doesn’t have a ready implementation in how we usually formulate statistical mechanics. Until now! My collaborators Tony Bartolotta, Stefan Leichenauer, Jason Pollack, and I have written a paper formulating statistical mechanics with explicit knowledge updating via measurement outcomes. (Some extra figures, animations, and codes are available at this web page.)

The Bayesian Second Law of Thermodynamics
Anthony Bartolotta, Sean M. Carroll, Stefan Leichenauer, and Jason Pollack

We derive a generalization of the Second Law of Thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter’s knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically-evolving system degrades over time. The Bayesian Second Law can be written as ΔH(ρ_m, ρ) + ⟨Q⟩_{F|m} ≥ 0, where ΔH(ρ_m, ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_m, and ⟨Q⟩_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the Second Law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of the Jarzynski equality. We demonstrate the formalism using simple analytical and numerical examples.

The crucial word “Bayesian” here refers to Bayes’s Theorem, a central result in probability theory. …
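In the notation of the abstract (standard definitions, added here for reference), Bayes’s Theorem updates the phase-space distribution in light of a measurement outcome m. Writing P(m|x,p) for the probability of getting outcome m if the system is at the point (x,p), the update rule is

\rho_m(x,p) = P(m|x,p) \, \rho(x,p) / P(m), \qquad P(m) = \int P(m|x,p) \, \rho(x,p).

And the cross entropy appearing in the theorem is the standard one, H(\rho_m, \rho) = -\int \rho_m \log \rho, which reduces to the Gibbs entropy above when \rho_m = \rho.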

The Reality of Time

The idea that time isn’t “real” is an ancient one — if we’re allowed to refer to things as “ancient” under the supposition that time isn’t real. You will recall the humorous debate we had at our Setting Time Aright conference a few years ago, in which Julian Barbour (the world’s most famous living exponent of the view that time isn’t real) and Tim Maudlin (who believes strongly that time is real, and central) were game enough to argue each other’s position, rather than their own. Confusingly, they were both quite convincing.

The subject has come up once again with two new books by Lee Smolin: Time Reborn, all by himself, and The Singular Universe and the Reality of Time, with philosopher Roberto Mangabeira Unger. This new attention prompted me to write a short essay for Smithsonian magazine, laying out the different possibilities.

Personally I think that the whole issue is being framed in a slightly misleading way. (Indeed, this mistaken framing caused me to believe at first that Lee and I were in agreement, until his book actually came out.) The stance of Maudlin and Smolin and others isn’t merely that time is “real,” in the sense that it exists and plays a useful role in how we talk about the world. They want to say something more: that the passage of time is real. That is, that time is more than simply a label on different moments in the history of the universe, all of which are independently pretty much equal. They want to attribute “reality” to the idea of the universe coming into being, moment by moment.

[Figure: the three metaphysics of time: presentism, possibilism, and eternalism]

Such a picture — corresponding roughly to the “possibilism” option in the picture above, although I won’t vouch that any of these people would describe their own views that way — is to be contrasted with the “eternalist” picture of the universe that has been growing in popularity ever since Laplace introduced his Demon. This is the view, in the eyes of many, that is straightforwardly suggested by our best understanding of the laws of physics, which don’t seem to play favorites among different moments of time.

According to eternalism, the apparent “flow” of time from past to future is indeed an illusion, even if the time coordinate in our equations is perfectly real. There is an apparent asymmetry between the past and future (many such asymmetries, really), but that can be traced to the simple fact that the entropy of the universe was very low near the Big Bang — the Past Hypothesis. That’s an empirical feature of the configuration of stuff in the universe, not a defining property of the nature of time itself.

Personally, I find the eternalist block-universe view to be perfectly acceptable, so I think that these folks are working hard to tackle a problem that has already been solved. There are more than enough problems that haven’t been solved to occupy my life for the rest of its natural span of time (as it were), so I’m going to concentrate on those. But who knows? If someone could follow this trail and be led to a truly revolutionary and successful picture of how the universe works, that would be pretty awesome.

Discovering Tesseracts

I still haven’t seen Interstellar, but here’s a great interview with Kip Thorne about the movie-making process and what he thinks of the final product. (For a very different view, see Phil Plait [update: now partly recanted].)

[Image: a two-dimensional projection of a tesseract]

One of the things Kip talks about is that the film refers to the concept of a tesseract, which he thought was fun. A tesseract is a four-dimensional version of a cube; you can’t draw it faithfully in two dimensions, but with a little imagination you can get the idea from the picture above. Kip mentions that he first heard of the concept of a tesseract in George Gamow’s classic book One, Two, Three… Infinity. Which made me feel momentarily proud, because I remember reading about it there, too — and only later did I find out that many (presumably less sophisticated) people heard of it in Madeleine L’Engle’s equally classic book, A Wrinkle in Time.

But then I caught myself, because (1) it’s stupid to think that reading about something for the first time in a science book rather than a science fantasy is anything to be proud of, and (2) in reality I suspect I first heard about it in Robert Heinlein’s (classic!) short story, “–And He Built a Crooked House.” Which is just as fantastical as L’Engle’s book.

So — where did you first hear the word “tesseract”? A great excuse for a poll! Feel free to elaborate in the comments.

Squelching Boltzmann Brains (And Maybe Eternal Inflation)

There’s no question that quantum fluctuations play a crucial role in modern cosmology, as the recent BICEP2 observations have reminded us. According to inflation, all of the structures we see in the universe, from galaxies up to superclusters and beyond, originated as tiny quantum fluctuations in the very early universe, as did the gravitational waves seen by BICEP2. But quantum fluctuations are a bit of a mixed blessing: in addition to providing an origin for density perturbations and gravitational waves (good!), they are also supposed to give rise to Boltzmann brains (bad) and eternal inflation (good or bad, depending on taste). Nobody would deny that it behooves cosmologists to understand quantum fluctuations as well as they can, especially since our theories involve mysterious aspects of physics operating at absurdly high energies.

Kim Boddy, Jason Pollack and I have been re-examining how quantum fluctuations work in cosmology, and in a new paper we’ve come to a surprising conclusion: cosmologists have been getting it wrong for decades now. In an expanding universe that has nothing in it but vacuum energy, there simply aren’t any quantum fluctuations at all. Our approach shows that the conventional understanding of inflationary perturbations gets the right answer, although the perturbations aren’t due to “fluctuations”; they’re due to an effective measurement of the quantum state of the inflaton field when the universe reheats at the end of inflation. In contrast, less empirically-grounded ideas such as Boltzmann brains and eternal inflation both rely crucially on treating fluctuations as true dynamical events, occurring in real time — and we say that’s just wrong.

All very dramatically at odds with the conventional wisdom, if we’re right. Which means, of course, that there’s always a chance we’re wrong (although we don’t think it’s a big chance). This paper is pretty conceptual, which a skeptic might take as a euphemism for “hand-waving”; we’re planning on digging into some of the mathematical details in future work, but for the time being our paper should be mostly understandable to anyone who knows undergraduate quantum mechanics. Here’s the abstract:

De Sitter Space Without Quantum Fluctuations
Kimberly K. Boddy, Sean M. Carroll, and Jason Pollack

We argue that, under certain plausible assumptions, de Sitter space settles into a quiescent vacuum in which there are no quantum fluctuations. Quantum fluctuations require time-dependent histories of out-of-equilibrium recording devices, which are absent in stationary states. For a massive scalar field in a fixed de Sitter background, the cosmic no-hair theorem implies that the state of the patch approaches the vacuum, where there are no fluctuations. We argue that an analogous conclusion holds whenever a patch of de Sitter is embedded in a larger theory with an infinite-dimensional Hilbert space, including semiclassical quantum gravity with false vacua or complementarity in theories with at least one Minkowski vacuum. This reasoning provides an escape from the Boltzmann brain problem in such theories. It also implies that vacuum states do not uptunnel to higher-energy vacua and that perturbations do not decohere while slow-roll inflation occurs, suggesting that eternal inflation is much less common than often supposed. On the other hand, if a de Sitter patch is a closed system with a finite-dimensional Hilbert space, there will be Poincaré recurrences and Boltzmann fluctuations into lower-entropy states. Our analysis does not alter the conventional understanding of the origin of density fluctuations from primordial inflation, since reheating naturally generates a high-entropy environment and leads to decoherence.

The basic idea is simple: what we call “quantum fluctuations” aren’t true, dynamical events that occur in isolated quantum systems. Rather, they are a poetic way of describing the fact that when we observe such systems, the outcomes are randomly distributed rather than deterministically predictable. …
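A standard toy example (mine, not from the paper) makes the distinction sharp: the ground state |0⟩ of a harmonic oscillator is perfectly stationary, and yet it has a nonzero position variance,

\langle 0 | x | 0 \rangle = 0, \qquad \langle 0 | x^2 | 0 \rangle = \hbar / (2 m \omega).

Nothing about |0⟩ changes with time; the “fluctuation” in x describes the spread of outcomes we would obtain if we measured the position, not events unfolding in time.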
