# What Happened at the Big Bang?

I had the pleasure earlier this month of giving a plenary lecture at a meeting of the American Astronomical Society. Unfortunately, as far as I know they don’t record the lectures on video. So here, at least, are the slides I showed during my talk. I’ve been a little hesitant to put them up, since some subtleties are lost if you only have the slides and not the words that went with them, but perhaps it’s better than nothing.

My assigned topic was “What We Don’t Know About the Beginning of the Universe,” and I focused on the question of whether there could have been space and time even before the Big Bang. Short answer: sure there could have been, but we don’t actually know.

So I did two things to fill my time. First, I talked about different ways the universe could have existed before the Big Bang, classifying models into four possibilities (see Slide 7):

1. Bouncing (the universe collapses to a Big Crunch, then re-expands with a Big Bang)
2. Cyclic (a series of bounces and crunches, extending forever)
3. Hibernating (a universe that sits quiescently for a long time, before the Bang begins)
4. Reproducing (a background empty universe that spits off babies, each of which begins with a Bang)

I don’t claim this is a logically exhaustive set of possibilities, but most semi-popular models I know fit into one of the above categories. Given my own way of thinking about the problem, I emphasized that any decent cosmological model should try to explain why the early universe had a low entropy, and suggested that the Reproducing models did the best job.

My other goal was to talk about how thinking quantum-mechanically affects the problem. There are two questions to ask: is time emergent or fundamental, and is Hilbert space finite- or infinite-dimensional? If time is fundamental, the universe lasts forever; it doesn’t have a beginning. But if time is emergent, there may very well be a first moment. If Hilbert space is finite-dimensional, a first moment is necessary (there are only a finite number of moments of time that can possibly emerge), while if it’s infinite-dimensional the problem is open.

Despite all that we don’t know, I remain optimistic that we are actually making progress here. I’m pretty hopeful that within my lifetime we’ll have settled on a leading theory for what happened at the very beginning of the universe.

# Entropy and Complexity, Cause and Effect, Life and Time

Finally back from Scotland, where I gave a series of five talks for the Gifford Lectures in Glasgow. The final four, at least, were recorded, and should go up on the web at some point, but I’m not sure when.

Meanwhile, I had a very fun collaboration with Henry Reich, the wizard behind the Minute Physics videos. Henry and I have known each other for a while, and I previously joined forces with him to talk about dark energy and the arrow of time.

This time, we made a series of five videos (sponsored by Google and Audible.com) based on sections of The Big Picture. In particular, we focused on the thread connecting the arrow of time and entropy to such everyday notions as cause and effect and the appearance of complex structures, ending with the origin of life and how low-entropy energy from the Sun powers the biosphere here on Earth. Henry and I wrote the scripts together, based on the book; I read the narration, and of course he did the art.

Enjoy!

1. Why Doesn’t Time Flow Backwards?
2. Do Cause and Effect Really Exist?
3. Where Does Complexity Come From?
4. How Entropy Powers the Earth
5. What Is the Purpose of Life?

# Entropic Time

A temporary break from book-related blogging to bring you this delightful video from A Capella Science, in which Tim Blais sings about entropy while apparently violating one of my favorite laws of physics. I don’t even want to think about how much work this was to put together.

Tim was gracious enough to tip his hat to a lecture of mine as partial inspiration for the video. And now that I think about it, entropy and the arrow of time play crucial roles in The Big Picture. So this is a book-related blog post after all! Had you fooled.

# The Bayesian Second Law of Thermodynamics

Entropy increases. Closed systems become increasingly disordered over time. So says the Second Law of Thermodynamics, one of my favorite notions in all of physics.

At least, entropy usually increases. If we define entropy by first defining “macrostates” — collections of individual states of the system that are macroscopically indistinguishable from each other — and then taking the logarithm of the number of microstates per macrostate, as portrayed in this blog’s header image, then we don’t expect entropy to always increase. According to Boltzmann, the increase of entropy is just really, really probable, since higher-entropy macrostates are much, much bigger than lower-entropy ones. But if we wait long enough — really long, much longer than the age of the universe — a macroscopic system will spontaneously fluctuate into a lower-entropy state. Cream and coffee will unmix, eggs will unbreak, maybe whole universes will come into being. But because the timescales are so long, this is just a matter of intellectual curiosity, not experimental science.

That’s what I was taught, anyway. But since I left grad school, physicists (and chemists, and biologists) have become increasingly interested in ultra-tiny systems, with only a few moving parts. Nanomachines, or the molecular components inside living cells. In systems like that, the occasional downward fluctuation in entropy is not only possible, it’s going to happen relatively frequently — with crucial consequences for how the real world works.

Accordingly, the last fifteen years or so have seen something of a revolution in non-equilibrium statistical mechanics — the study of statistical systems far from their happy resting states. Two of the most important results are the Crooks Fluctuation Theorem (by Gavin Crooks), which relates the probability of a process forward in time to the probability of its time-reverse, and the Jarzynski Equality (by Christopher Jarzynski), which relates the change in free energy between two states to the average amount of work done on a journey between them. (Professional statistical mechanicians are so used to dealing with inequalities that when they finally do have an honest equation, they call it an “equality.”) There is a sense in which these relations underlie the good old Second Law; the Jarzynski Equality can be derived from the Crooks Fluctuation Theorem, and the Second Law can be derived from the Jarzynski Equality. (Though the three relations were discovered in reverse chronological order from how they are used to derive each other.)
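For reference, the standard forms of these two relations (my notation, not quoted from the post) are

$\frac{P_F(+W)}{P_R(-W)} = e^{\beta(W - \Delta F)}, \qquad \left\langle e^{-\beta W}\right\rangle_F = e^{-\beta \Delta F},$

where $\beta$ is the inverse temperature, $W$ is the work done on the system, and $\Delta F$ is the free-energy difference between the two endpoint states. Integrating the Crooks relation over all values of $W$ yields the Jarzynski Equality, and applying Jensen’s inequality, $\langle e^{-\beta W}\rangle \geq e^{-\beta \langle W\rangle}$, then recovers the Second Law in the form $\langle W \rangle \geq \Delta F$.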

Still, there is a mystery lurking in how we think about entropy and the Second Law — a puzzle that, like many such puzzles, I never really thought about until we came up with a solution. Boltzmann’s definition of entropy (logarithm of number of microstates in a macrostate) is very conceptually clear, and good enough to be engraved on his tombstone. But it’s not the only definition of entropy, and it’s not even the one that people use most often.

Rather than referring to macrostates, we can think of entropy as characterizing something more subjective: our knowledge of the state of the system. That is, we might not know the exact position x and momentum p of every atom that makes up a fluid, but we might have some probability distribution ρ(x,p) that tells us the likelihood the system is in any particular state (to the best of our knowledge). Then the entropy associated with that distribution is given by a different, though equally famous, formula:

$S = - \int \rho \log \rho.$

That is, we take the probability distribution ρ, multiply it by its own logarithm, and integrate the result over all the possible states of the system, to get (minus) the entropy. A formula like this was introduced by Boltzmann himself, but these days is often associated with Josiah Willard Gibbs, unless you are into information theory, where it’s credited to Claude Shannon. Don’t worry if the symbols are totally opaque; the point is that low entropy means we know a lot about the specific state a system is in, and high entropy means we don’t know much at all.
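The discrete version of this formula, $S = -\sum_i p_i \log p_i$, makes the “entropy measures ignorance” point concrete. A minimal sketch (my own toy numbers):

```python
# Gibbs/Shannon entropy S = -sum(p * log p) for a discrete distribution:
# the more sharply our knowledge pins down the state, the lower the entropy.
from math import log

def shannon_entropy(p):
    """Entropy (in nats) of a discrete probability distribution."""
    return sum(-pi * log(pi) for pi in p if pi > 0)

certain = [1.0, 0.0, 0.0, 0.0]       # we know the exact state
uniform = [0.25, 0.25, 0.25, 0.25]   # maximal ignorance over 4 states

print(shannon_entropy(certain))  # 0.0
print(shannon_entropy(uniform))  # log(4) = 1.386...
```

Complete knowledge gives zero entropy; a uniform distribution over N states gives the maximum possible value, log N.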

In appropriate circumstances, the Boltzmann and Gibbs formulations of entropy and the Second Law are closely related to each other. But there’s a crucial difference: in a perfectly isolated system, the Boltzmann entropy tends to increase, but the Gibbs entropy stays exactly constant. In an open system — allowed to interact with the environment — the Gibbs entropy will go up, but it will only go up. It will never fluctuate down. (Entropy can decrease through heat loss, if you put your system in a refrigerator or something, but you know what I mean.) The Gibbs entropy is about our knowledge of the system, and as the system is randomly buffeted by its environment we know less and less about its specific state. So what, from the Gibbs point of view, can we possibly mean by “entropy rarely, but occasionally, will fluctuate downward”?

I won’t hold you in suspense. Since the Gibbs/Shannon entropy is a feature of our knowledge of the system, the way it can fluctuate downward is for us to look at the system and notice that it is in a relatively unlikely state — thereby gaining knowledge.
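This mechanism — measurement as Bayesian updating that concentrates the distribution — can be illustrated in a few lines. (A toy example with made-up numbers, not the paper’s actual model.)

```python
# How a measurement can lower the Gibbs/Shannon entropy: updating the
# distribution via Bayes' theorem on an observed outcome sharpens it.
from math import log

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return sum(-pi * log(pi) for pi in p if pi > 0)

prior = [0.25, 0.25, 0.25, 0.25]        # four equally likely states
likelihood = [0.9, 0.05, 0.03, 0.02]    # P(observed outcome | state)

# Bayes' theorem: posterior ∝ prior × likelihood
joint = [pr * lk for pr, lk in zip(prior, likelihood)]
norm = sum(joint)
posterior = [j / norm for j in joint]

print(entropy(prior))      # log(4) = 1.386...
print(entropy(posterior))  # noticeably smaller: we gained knowledge
```

The measurement carries the entropy downward, exactly the “fluctuation” the Gibbs picture otherwise forbids for an unmonitored system.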

But this operation of “looking at the system” doesn’t have a ready implementation in how we usually formulate statistical mechanics. Until now! My collaborators Tony Bartolotta, Stefan Leichenauer, Jason Pollack, and I have written a paper formulating statistical mechanics with explicit knowledge updating via measurement outcomes. (Some extra figures, animations, and codes are available at this web page.)

The Bayesian Second Law of Thermodynamics
Anthony Bartolotta, Sean M. Carroll, Stefan Leichenauer, and Jason Pollack

We derive a generalization of the Second Law of Thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter’s knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically-evolving system degrades over time. The Bayesian Second Law can be written as $\Delta H(\rho_m, \rho) + \langle \mathcal{Q} \rangle_{F|m} \geq 0$, where $\Delta H(\rho_m, \rho)$ is the change in the cross entropy between the original phase-space probability distribution $\rho$ and the measurement-updated distribution $\rho_m$, and $\langle \mathcal{Q} \rangle_{F|m}$ is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the Second Law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of the Jarzynski equality. We demonstrate the formalism using simple analytical and numerical examples.

The crucial word “Bayesian” here refers to Bayes’s Theorem, a central result in probability theory.

# The Reality of Time

The idea that time isn’t “real” is an ancient one — if we’re allowed to refer to things as “ancient” under the supposition that time isn’t real. You will recall the humorous debate we had at our Setting Time Aright conference a few years ago, in which Julian Barbour (the world’s most famous living exponent of the view that time isn’t real) and Tim Maudlin (who believes strongly that time is real, and central) were game enough to argue each other’s position, rather than their own. Confusingly, they were both quite convincing.

The subject has come up once again with two new books by Lee Smolin: Time Reborn, all by himself, and The Singular Universe and the Reality of Time, with philosopher Roberto Mangabeira Unger. This new attention prompted me to write a short essay for Smithsonian magazine, laying out the different possibilities.

Personally I think that the whole issue is being framed in a slightly misleading way. (Indeed, this mistaken framing caused me to believe at first that Lee and I were in agreement, until his book actually came out.) The stance of Maudlin and Smolin and others isn’t merely that time is “real,” in the sense that it exists and plays a useful role in how we talk about the world. They want to say something more: that the passage of time is real. That is, that time is more than simply a label on different moments in the history of the universe, all of which are independently pretty much equal. They want to attribute “reality” to the idea of the universe coming into being, moment by moment.

Such a picture — corresponding roughly to the “possibilism” option in the picture above, although I won’t vouch that any of these people would describe their own views that way — is to be contrasted with the “eternalist” picture of the universe that has been growing in popularity ever since Laplace introduced his Demon. This is the view, in the eyes of many, that is straightforwardly suggested by our best understanding of the laws of physics, which don’t seem to play favorites among different moments of time.

According to eternalism, the apparent “flow” of time from past to future is indeed an illusion, even if the time coordinate in our equations is perfectly real. There is an apparent asymmetry between the past and future (many such asymmetries, really), but that can be traced to the simple fact that the entropy of the universe was very low near the Big Bang — the Past Hypothesis. That’s an empirical feature of the configuration of stuff in the universe, not a defining property of the nature of time itself.

Personally, I find the eternalist block-universe view to be perfectly acceptable, so I think that these folks are working hard to tackle a problem that has already been solved. There are more than enough problems that haven’t been solved to occupy my life for the rest of its natural span of time (as it were), so I’m going to concentrate on those. But who knows? If someone could follow this trail and be led to a truly revolutionary and successful picture of how the universe works, that would be pretty awesome.