You Should Love (or at least respect) the Schrödinger Equation

Over at the twitter dot com website, there has been a briefly-trending topic #fav7films, discussing your favorite seven films. Part of the purpose of being on twitter is to one-up the competition, so I instead listed my #fav7equations. Slightly cleaned up, the equations I chose as my seven favorites are:

  1. {\bf F} = m{\bf a}
  2. \partial L/\partial {\bf x} = \partial_t ({\partial L}/{\partial {\dot {\bf x}}})
  3. {\mathrm d}*F = J
  4. S = k \log W
  5. ds^2 = -{\mathrm d}t^2 + {\mathrm d}{\bf x}^2
  6. G_{ab} = 8\pi G T_{ab}
  7. \hat{H}|\psi\rangle = i\partial_t |\psi\rangle

In order: Newton’s Second Law of motion, the Euler-Lagrange equation, Maxwell’s equations in terms of differential forms, Boltzmann’s definition of entropy, the metric for Minkowski spacetime (special relativity), Einstein’s equation for spacetime curvature (general relativity), and the Schrödinger equation of quantum mechanics. Feel free to Google them for more info, even if equations aren’t your thing. They represent a series of extraordinary insights in the development of physics, from the 1600’s to the present day.

Of course people chimed in with their own favorites, which is all in the spirit of the thing. But one misconception came up that is probably worth correcting: people don’t appreciate how important and all-encompassing the Schrödinger equation is.

I blame society. Or, more accurately, I blame how we teach quantum mechanics. Not that the standard treatment of the Schrödinger equation is fundamentally wrong (as other aspects of how we teach QM are), but that it’s incomplete. And sometimes people get brief introductions to things like the Dirac equation or the Klein-Gordon equation, and come away with the impression that they are somehow relativistic replacements for the Schrödinger equation, which they certainly are not. Dirac et al. may have originally wondered whether they were, but these days we certainly know better.

As I remarked in my post about emergent space, we human beings tend to do quantum mechanics by starting with some classical model, and then “quantizing” it. Nature doesn’t work that way, but we’re not as smart as Nature is. By a “classical model” we mean something that obeys the basic Newtonian paradigm: there is some kind of generalized “position” variable, and also a corresponding “momentum” variable (how fast the position variable is changing), which together obey some deterministic equations of motion that can be solved once we are given initial data. Those equations can be derived from a function called the Hamiltonian, which is basically the energy of the system as a function of positions and momenta; the results are Hamilton’s equations, which are essentially a slick version of Newton’s original {\bf F} = m{\bf a}.

There are various ways of taking such a setup and “quantizing” it, but one way is to take the position variable and consider all possible (normalized, complex-valued) functions of that variable. So instead of, for example, a single position coordinate x and its momentum p, quantum mechanics deals with wave functions ψ(x). That’s the thing that you square to get the probability of observing the system to be at the position x. (We can also transform the wave function to “momentum space,” and calculate the probabilities of observing the system to be at momentum p.) Just as positions and momenta obey Hamilton’s equations, the wave function obeys the Schrödinger equation,

\hat{H}|\psi\rangle = i\partial_t |\psi\rangle.

Indeed, the \hat{H} that appears in the Schrödinger equation is just the quantum version of the Hamiltonian.
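If you'd like to see this story with actual numbers, here is a quick numerical sketch (all parameter values are made up for illustration): a Gaussian wave function ψ(x), squared to get position probabilities, then Fourier-transformed to get the momentum-space probabilities mentioned above.

```python
import numpy as np

# A Gaussian wave packet psi(x); |psi|^2 gives position probabilities,
# and the Fourier transform gives momentum-space amplitudes. The grid
# size, center, momentum, and width are arbitrary choices for the sketch.
x = np.linspace(-10, 10, 2048)
dx = x[1] - x[0]
x0, p0, sigma = 0.0, 2.0, 1.0           # center, mean momentum, width

psi = np.exp(-(x - x0)**2 / (4 * sigma**2)) * np.exp(1j * p0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

prob_x = np.abs(psi)**2                  # P(x): square of the wave function

# Transform to momentum space; |phi(p)|^2 gives momentum probabilities.
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
p = np.fft.fftshift(np.fft.fftfreq(len(x), d=dx)) * 2 * np.pi
dp = p[1] - p[0]
prob_p = np.abs(phi)**2

print(np.sum(prob_x) * dx)   # ~1: total position probability
print(np.sum(prob_p) * dp)   # ~1: total momentum probability
```

The same state yields both sets of probabilities; position space and momentum space are just two ways of describing one wave function.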

The problem is that, when we are first taught about the Schrödinger equation, it is usually in the context of a specific, very simple model: a single non-relativistic particle of mass m moving in a potential (in units where ħ = 1). In other words, we choose a particular kind of wave function, and a particular Hamiltonian. The corresponding version of the Schrödinger equation is

\displaystyle{\left[-\frac{1}{2m}\frac{\partial^2}{\partial x^2} + V(x)\right]|\psi\rangle = i\partial_t |\psi\rangle}.

If you don’t dig much deeper into the essence of quantum mechanics, you could come away with the impression that this is “the” Schrödinger equation, rather than just “the non-relativistic Schrödinger equation for a single particle.” Which would be a shame.
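For the curious, that non-relativistic equation is easy to evolve on a computer. Here's a minimal split-step Fourier integrator (a standard numerical trick, not anything special to this post) in units with ħ = m = 1; the grid, time step, and harmonic potential are arbitrary choices for the sketch.

```python
import numpy as np

# Split-step Fourier integrator for i d(psi)/dt = [-(1/2) d^2/dx^2 + V] psi,
# in units with hbar = m = 1. Alternate half-steps in the potential with
# full steps in the kinetic term (applied in Fourier space).
x = np.linspace(-10, 10, 1024)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
V = 0.5 * x**2                           # harmonic potential (arbitrary)
dt = 0.005

psi = np.exp(-(x - 2.0)**2).astype(complex)  # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

half_V = np.exp(-0.5j * V * dt)          # half-step in the potential
kinetic = np.exp(-0.5j * k**2 * dt)      # full step in the kinetic term

for _ in range(1000):
    psi = half_V * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_V * psi

print(np.sum(np.abs(psi)**2) * dx)       # ~1: unitary evolution preserves the norm
```

The conserved norm is the numerical echo of the fact that Schrödinger evolution is unitary: total probability always adds up to one.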

What happens if we go beyond the world of non-relativistic quantum mechanics? Is the poor little Schrödinger equation still up to the task? Sure! All you need is the right set of wave functions and the right Hamiltonian. Every quantum system obeys a version of the Schrödinger equation; it’s completely general. In particular, there’s no problem talking about relativistic systems or field theories — just don’t use the non-relativistic version of the equation, obviously.

What about the Klein-Gordon and Dirac equations? These were, indeed, originally developed as “relativistic versions of the non-relativistic Schrödinger equation,” but that’s not what they ended up being useful for. (The story is told that Schrödinger himself invented the Klein-Gordon equation even before his non-relativistic version, but discarded it because it didn’t do the job for describing the hydrogen atom. As my old professor Sidney Coleman put it, “Schrödinger was no dummy. He knew about relativity.”)

The Klein-Gordon and Dirac equations are actually not quantum at all — they are classical field equations, just like Maxwell’s equations are for electromagnetism and Einstein’s equation is for the metric tensor of gravity. They aren’t usually taught that way, in part because (unlike E&M and gravity) there aren’t any macroscopic classical fields in Nature that obey those equations. The KG equation governs relativistic scalar fields like the Higgs boson, while the Dirac equation governs spinor fields (spin-1/2 fermions) like the electron and neutrinos and quarks. In Nature, spinor fields are a little subtle, because they are anticommuting Grassmann variables rather than ordinary functions. But make no mistake; the Dirac equation fits perfectly comfortably into the standard Newtonian physical paradigm.

For fields like this, the role of “position” that for a single particle was played by the variable x is now played by an entire configuration of the field throughout space. For a scalar Klein-Gordon field, for example, that might be the values of the field φ(x) at every spatial location x. But otherwise the same story goes through as before. We construct a wave function by attaching a complex number to every possible value of the position variable; to emphasize that it’s a function of functions, we sometimes call it a “wave functional” and write it as a capital letter,

\Psi[\phi(x)].

The absolute-value-squared of this wave functional tells you the probability that you will observe the field to have the value φ(x) at each point x in space. The functional obeys — you guessed it — a version of the Schrödinger equation, with the Hamiltonian being that of a relativistic scalar field. There are likewise versions of the Schrödinger equation for the electromagnetic field, for Dirac fields, for the whole Core Theory, and what have you.
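To make the "wave functional" idea concrete, here is a deliberately tiny toy (not a real field theory calculation): discretize space down to three lattice sites, let the field take a handful of values at each site, and assign every configuration a complex amplitude. The Gaussian form of the amplitude is just a stand-in.

```python
import itertools
import numpy as np

# Toy wave functional Psi[phi]: phi lives on 3 lattice sites, each taking
# one of 9 values. |Psi|^2, normalized over all configurations, is the
# probability of observing each field configuration.
values = np.linspace(-2, 2, 9)           # allowed field values per site
configs = list(itertools.product(values, repeat=3))

def amplitude(phi):
    phi = np.asarray(phi)
    grad = np.diff(phi)                  # crude lattice gradient
    return np.exp(-0.5 * np.sum(phi**2) - 0.25 * np.sum(grad**2))

weights = np.array([abs(amplitude(c))**2 for c in configs])
probs = weights / weights.sum()          # normalize over all configurations

print(len(configs))                      # 9^3 = 729 field configurations
print(probs.sum())                       # 1.0: probabilities sum to one
most_likely = configs[int(np.argmax(probs))]
print(most_likely)                       # the all-zero ("vacuum") configuration
```

In a real quantum field theory the configuration space is continuous and infinite-dimensional, but the logic is exactly this: one amplitude per field configuration.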

So the Schrödinger equation is not simply a relic of the early days of quantum mechanics, when we didn’t know how to deal with much more than non-relativistic particles orbiting atomic nuclei. It is the foundational equation of quantum dynamics, and applies to every quantum system there is. (There are other ways of formulating quantum mechanics, of course, like the Heisenberg picture and the path-integral approach, but they’re all basically equivalent.) You tell me what the quantum state of your system is, and what is its Hamiltonian, and I will plug into the Schrödinger equation to see how that state will evolve with time. And as far as we know, quantum mechanics is how the universe works. Which makes the Schrödinger equation arguably the most important equation in all of physics.

While we’re at it, people complained that the cosmological constant Λ didn’t appear in Einstein’s equation (6). Of course it does — it’s part of the energy-momentum tensor on the right-hand side. Again, Einstein didn’t necessarily think of it that way, but these days we know better. The whole thing that is great about physics is that we keep learning things; we don’t need to remain stuck with the first ideas that were handed down by the great minds of the past.


The Bayesian Second Law of Thermodynamics

Entropy increases. Closed systems become increasingly disordered over time. So says the Second Law of Thermodynamics, one of my favorite notions in all of physics.

At least, entropy usually increases. If we define entropy by first defining “macrostates” — collections of individual states of the system that are macroscopically indistinguishable from each other — and then taking the logarithm of the number of microstates per macrostate, as portrayed in this blog’s header image, then we don’t expect entropy to always increase. According to Boltzmann, the increase of entropy is just really, really probable, since higher-entropy macrostates are much, much bigger than lower-entropy ones. But if we wait long enough — really long, much longer than the age of the universe — a macroscopic system will spontaneously fluctuate into a lower-entropy state. Cream and coffee will unmix, eggs will unbreak, maybe whole universes will come into being. But because the timescales are so long, this is just a matter of intellectual curiosity, not experimental science.
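Counting microstates makes the "really, really probable" quantitative. A toy version (coins standing in for molecules, with Boltzmann's constant set to 1): take N coin flips as microstates, and the number of heads as the macrostate.

```python
from math import comb, log

# Boltzmann's S = k log W for N coins: the macrostate is the number of
# heads, each particular flip sequence is a microstate, and k = 1.
# Low-entropy macrostates are overwhelmingly outnumbered by high-entropy ones.
N = 100
W_ordered = comb(N, N)        # all heads: exactly 1 microstate
W_mixed = comb(N, N // 2)     # half heads: ~1e29 microstates

S_ordered = log(W_ordered)
S_mixed = log(W_mixed)

print(S_ordered)              # 0.0
print(round(S_mixed, 1))      # ~66.8
# Probability that random dynamics lands in the ordered macrostate:
print(W_ordered / 2**N)       # ~8e-31: possible, just absurdly unlikely
```

With a mere hundred coins the ordered state is already suppressed by thirty orders of magnitude; with Avogadro's number of molecules, the exponents become genuinely cosmological.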

That’s what I was taught, anyway. But since I left grad school, physicists (and chemists, and biologists) have become increasingly interested in ultra-tiny systems, with only a few moving parts. Nanomachines, or the molecular components inside living cells. In systems like that, the occasional downward fluctuation in entropy is not only possible, it’s going to happen relatively frequently — with crucial consequences for how the real world works.

Accordingly, the last fifteen years or so has seen something of a revolution in non-equilibrium statistical mechanics — the study of statistical systems far from their happy resting states. Two of the most important results are the Crooks Fluctuation Theorem (by Gavin Crooks), which relates the probability of a process forward in time to the probability of its time-reverse, and the Jarzynski Equality (by Christopher Jarzynski), which relates the change in free energy between two states to the average amount of work done on a journey between them. (Professional statistical mechanics are so used to dealing with inequalities that when they finally do have an honest equation, they call it an “equality.”) There is a sense in which these relations underlie the good old Second Law; the Jarzynski equality can be derived from the Crooks Fluctuation Theorem, and the Second Law can be derived from the Jarzynski Equality. (Though the three relations were discovered in reverse chronological order from how they are used to derive each other.)
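The Jarzynski Equality, ⟨e^(−βW)⟩ = e^(−βΔF), can be checked directly in the simplest setting I know of (a toy example of my choosing, not from the original papers): a two-level system whose energy levels are instantaneously quenched, so the work along each trajectory is just the level shift. All numbers below are arbitrary.

```python
import numpy as np

# Verify <exp(-beta*W)> = exp(-beta*DeltaF) for an instantaneous quench
# of a two-level system, sampling initial states from thermal equilibrium.
rng = np.random.default_rng(0)
beta = 1.3
E_old = np.array([0.0, 1.0])             # initial energy levels
E_new = np.array([0.5, 2.0])             # levels after the quench

p_eq = np.exp(-beta * E_old)
p_eq /= p_eq.sum()                       # initial thermal distribution

# Sample initial microstates; work per sample is W = E_new - E_old.
states = rng.choice(2, size=200_000, p=p_eq)
W = E_new[states] - E_old[states]
lhs = np.mean(np.exp(-beta * W))

# Free energy difference from the partition functions: F = -(1/beta) ln Z.
DeltaF = (-np.log(np.exp(-beta * E_new).sum()) +
          np.log(np.exp(-beta * E_old).sum())) / beta
rhs = np.exp(-beta * DeltaF)

print(lhs, rhs)                          # agree to within sampling error
```

Note the equality holds as an average over many realizations even though individual trajectories can have W less than ΔF — those are exactly the rare downward fluctuations the theorems quantify.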

Still, there is a mystery lurking in how we think about entropy and the Second Law — a puzzle that, like many such puzzles, I never really thought about until we came up with a solution. Boltzmann’s definition of entropy (logarithm of number of microstates in a macrostate) is very conceptually clear, and good enough to be engraved on his tombstone. But it’s not the only definition of entropy, and it’s not even the one that people use most often.

Rather than referring to macrostates, we can think of entropy as characterizing something more subjective: our knowledge of the state of the system. That is, we might not know the exact position x and momentum p of every atom that makes up a fluid, but we might have some probability distribution ρ(x,p) that tells us the likelihood the system is in any particular state (to the best of our knowledge). Then the entropy associated with that distribution is given by a different, though equally famous, formula:

S = - \int \rho \log \rho.

That is, we take the probability distribution ρ, multiply it by its own logarithm, and integrate the result over all the possible states of the system, to get (minus) the entropy. A formula like this was introduced by Boltzmann himself, but these days is often associated with Josiah Willard Gibbs, unless you are into information theory, where it’s credited to Claude Shannon. Don’t worry if the symbols are totally opaque; the point is that low entropy means we know a lot about the specific state a system is in, and high entropy means we don’t know much at all.
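A discrete sum standing in for the integral makes the "knowledge" interpretation vivid. Here are two toy distributions over ten states, one sharply peaked (we know a lot) and one uniform (we know nothing):

```python
import numpy as np

# Gibbs/Shannon entropy S = -sum(rho * log(rho)) over discrete states.
def entropy(rho):
    rho = np.asarray(rho, dtype=float)
    rho = rho / rho.sum()
    nz = rho[rho > 0]                    # convention: 0 * log 0 = 0
    return -np.sum(nz * np.log(nz))

peaked = [0.91] + [0.01] * 9             # almost certainly in state 0
uniform = [0.1] * 10                     # complete ignorance

print(entropy(peaked))                   # ~0.5: low entropy, much knowledge
print(entropy(uniform))                  # log(10) ~ 2.303: maximal entropy
```

The uniform distribution maximizes the entropy, as it should: it is the state of least knowledge about the system.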

In appropriate circumstances, the Boltzmann and Gibbs formulations of entropy and the Second Law are closely related to each other. But there’s a crucial difference: in a perfectly isolated system, the Boltzmann entropy tends to increase, but the Gibbs entropy stays exactly constant. In an open system — allowed to interact with the environment — the Gibbs entropy will go up, but it will only go up. It will never fluctuate down. (Entropy can decrease through heat loss, if you put your system in a refrigerator or something, but you know what I mean.) The Gibbs entropy is about our knowledge of the system, and as the system is randomly buffeted by its environment we know less and less about its specific state. So what, from the Gibbs point of view, can we possibly mean by “entropy rarely, but occasionally, will fluctuate downward”?

I won’t hold you in suspense. Since the Gibbs/Shannon entropy is a feature of our knowledge of the system, the way it can fluctuate downward is for us to look at the system and notice that it is in a relatively unlikely state — thereby gaining knowledge.

But this operation of “looking at the system” doesn’t have a ready implementation in how we usually formulate statistical mechanics. Until now! My collaborators Tony Bartolotta, Stefan Leichenauer, Jason Pollack, and I have written a paper formulating statistical mechanics with explicit knowledge updating via measurement outcomes. (Some extra figures, animations, and codes are available at this web page.)

The Bayesian Second Law of Thermodynamics
Anthony Bartolotta, Sean M. Carroll, Stefan Leichenauer, and Jason Pollack

We derive a generalization of the Second Law of Thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter’s knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically-evolving system degrades over time. The Bayesian Second Law can be written as ΔH(ρ_m, ρ) + ⟨Q⟩_{F|m} ≥ 0, where ΔH(ρ_m, ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_m, and ⟨Q⟩_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the Second Law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of the Jarzynski equality. We demonstrate the formalism using simple analytical and numerical examples.

The crucial word “Bayesian” here refers to Bayes’s Theorem, a central result in probability theory. …
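For readers rusty on Bayes's Theorem, P(theory | data) = P(data | theory) P(theory) / P(data), here is the update in a toy two-hypothesis setting with made-up numbers: is a coin fair or biased toward heads, given that we just saw three heads in a row?

```python
# Bayesian update: posterior = likelihood * prior / evidence.
prior = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.9}   # probability of heads per flip

likelihood = {h: p_heads[h] ** 3 for h in prior}          # three heads in a row
evidence = sum(likelihood[h] * prior[h] for h in prior)   # P(data)
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}

print(posterior["fair"])    # ~0.146: the fair hypothesis loses ground
print(posterior["biased"])  # ~0.854
```

Gaining data shifts probability toward the hypotheses that predicted it; in the paper, the same move updates the phase-space distribution ρ after a measurement.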


A Personal Narrative

I was very pleased to learn that I’m among this year’s recipients of a Guggenheim Fellowship. The Fellowships are mid-career awards, meant “to further the development of scholars and artists by assisting them to engage in research in any field of knowledge and creation in any of the arts, under the freest possible conditions and irrespective of race, color, or creed.” This year 173 Fellowships were awarded, chosen from 3,100 applications. About half of the winners are in the creative arts, and the majority of those remaining are in the humanities and social sciences, leaving eighteen slots for natural scientists. Only two physicists were chosen, so it’s up to Philip Phillips and me to uphold the honor of our discipline.

The Guggenheim application includes a “Career Narrative” as well as a separate research proposal. I don’t like to share my research proposals around, mostly because I’m a theoretical physicist and what I actually end up doing rarely bears much resemblance to what I had previously planned to do. But I thought I could post my career narrative, if only on the chance that it might be useful to future fellowship applicants (or young students embarking on their own research careers). Be warned that it’s more personal than most things I write on the blog here, not to mention that it’s beastly long. Also, keep in mind that the purpose of the document was to convince people to give me money — as such, it falls pretty heavily on the side of grandiosity and self-justification. Be assured that in real life I remain meek and humble.


Guest Post: Don Page on God and Cosmology

Don Page is one of the world’s leading experts on theoretical gravitational physics and cosmology, as well as a previous guest-blogger around these parts. (There are more world experts in theoretical physics than there are people who have guest-blogged for me, so the latter category is arguably a greater honor.) He is also, somewhat unusually among cosmologists, an Evangelical Christian, and interested in the relationship between cosmology and religious belief.

Longtime readers may have noticed that I’m not very religious myself. But I’m always willing to engage with people with whom I disagree, if the conversation is substantive and proceeds in good faith. I may disagree with Don, but I’m always interested in what he has to say.

Recently Don watched the debate I had with William Lane Craig on “God and Cosmology.” I think these remarks from a devoted Christian who understands the cosmology very well will be of interest to people on either side of the debate.


Open letter to Sean Carroll and William Lane Craig:

I just ran across your debate at the 2014 Greer-Heard Forum, and greatly enjoyed listening to it. Since my own views are often a combination of one or the others of yours (though they also often differ from both of yours), I thought I would give some comments.

I tend to be skeptical of philosophical arguments for the existence of God, since I do not believe there are any that start with assumptions universally accepted. My own attempt at what I call the Optimal Argument for God (one, two, three, four), certainly makes assumptions that only a small fraction of people, and perhaps even only a small fraction of theists, believe in, such as my assumption that the world is the best possible. You know that well, Sean, from my provocative seminar at Caltech in November on “Cosmological Ontology and Epistemology” that included this argument at the end.

I mainly think philosophical arguments might be useful for motivating someone to think about theism in a new way and perhaps raise the prior probability someone might assign to theism. I do think that if one assigns theism not too low a prior probability, the historical evidence for the life, teachings, death, and resurrection of Jesus can lead to a posterior probability for theism (and for Jesus being the Son of God) being quite high. But if one thinks a priori that theism is extremely improbable, then the historical evidence for the Resurrection would be discounted and not lead to a high posterior probability for theism.

I tend to favor a Bayesian approach in which one assigns prior probabilities based on simplicity and then weights these by the likelihoods (the probabilities that different theories assign to our observations) to get, when the product is normalized by dividing by the sum of the products for all theories, the posterior probabilities for the theories. Of course, this is an idealized approach, since we don’t yet have _any_ plausible complete theory for the universe to calculate the conditional probability, given the theory, of any realistic observation.

For me, when I consider evidence from cosmology and physics, I find it remarkable that it seems consistent with all we know that the ultimate theory might be extremely simple and yet lead to sentient experiences such as ours. A Bayesian analysis with Occam’s razor to assign simpler theories higher prior probabilities would favor simpler theories, but the observations we do make preclude the simplest possible theories (such as the theory that nothing concrete exists, or the theory that all logically possible sentient experiences occur with equal probability, which would presumably make ours have zero probability in this theory if there are indeed an infinite number of logically possible sentient experiences). So it seems mysterious why the best theory of the universe (which we don’t have yet) may be extremely simple but yet not maximally simple. I don’t see that naturalism would explain this, though it could well accept it as a brute fact.

One might think that adding the hypothesis that the world (all that exists) includes God would make the theory for the entire world more complex, but it is not obvious that is the case, since it might be that God is even simpler than the universe, so that one would get a simpler explanation starting with God than starting with just the universe. But I agree with your point, Sean, that theism is not very well defined, since for a complete theory of a world that includes God, one would need to specify the nature of God.

For example, I have postulated that God loves mathematical elegance, as well as loving to create sentient beings, so something like this might explain both why the laws of physics, and the quantum state of the universe, and the rules for getting from those to the probabilities of observations, seem much simpler than they might have been, and why there are sentient experiences with a rather high degree of order. However, I admit there is a lot of logically possible variation on what God’s nature could be, so that it seems to me that at least we humans have to take that nature as a brute fact, analogous to the way naturalists would have to take the laws of physics and other aspects of the natural universe as brute facts. I don’t think either theism or naturalism solves this problem, so it seems to me rather a matter of faith which makes more progress toward solving it. That is, theism per se cannot deduce from purely a priori reasoning the full nature of God (e.g., when would He prefer to maintain elegant laws of physics, and when would He prefer to cure someone from cancer in a truly miraculous way that changes the laws of physics), and naturalism per se cannot deduce from purely a priori reasoning the full nature of the universe (e.g., what are the dynamical laws of physics, what are the boundary conditions, what are the rules for getting probabilities, etc.).

In view of these beliefs of mine, I am not convinced that most philosophical arguments for the existence of God are very persuasive. In particular, I am highly skeptical of the Kalam Cosmological Argument, which I shall quote here from one of your slides, Bill:

  1. If the universe began to exist, then there is a transcendent cause
    which brought the universe into existence.
  2. The universe began to exist.
  3. Therefore, there is a transcendent cause which brought the
    universe into existence.

I do not believe that the first premise is metaphysically necessary, and I am also not at all sure that our universe had a beginning. …


What Happens Inside the Quantum Wave Function?

Many things can “happen” inside a quantum wave function, of course, including everything that actually does happen — formation of galaxies, origin of life, Lady Gaga concerts, you name it. But given a certain quantum wave function, what is actually happening inside it?

A surprisingly hard problem! Basically because, unlike in classical mechanics, in quantum mechanics the wave function describes superpositions of different possible measurement outcomes. And you can easily cook up situations where a single wave function can be written in many different ways as superpositions of different things. Indeed, it’s inevitable; a humble quantum spin can be written as a superposition of “spinning clockwise” or “spinning counterclockwise” with respect to the z-axis, but it can equally well be written as a superposition of similar behavior with respect to the x-axis, or indeed any axis at all. Which one is “really happening”?

Answer: none of them is “really happening” as opposed to any of the others. The possible measurement outcomes (in this case, spinning clockwise or counterclockwise with respect to some chosen axis) only become “real” when you actually measure the thing. Put more objectively: when the quantum system interacts with a large number of degrees of freedom, becomes entangled with them, and decoherence occurs. But the perfectly general and rigorous picture of all that process is still not completely developed.
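The spin example is two lines of linear algebra. Here is one and the same state written in the z-basis and in the x-basis (standard qubit conventions, nothing exotic):

```python
import numpy as np

# One spin state, two decompositions: |up_x> is an equal superposition
# of the z-axis states, and a definite state with respect to the x-axis.
# Neither way of writing it is more "real" than the other.
up_z = np.array([1, 0], dtype=complex)
down_z = np.array([0, 1], dtype=complex)
up_x = (up_z + down_z) / np.sqrt(2)
down_x = (up_z - down_z) / np.sqrt(2)

state = up_x                             # the same physical state

# Measurement probabilities depend on which axis you measure along:
print(abs(np.vdot(up_z, state))**2)      # 0.5: totally uncertain along z
print(abs(np.vdot(up_x, state))**2)      # 1.0: definite along x
```

Same vector, different bases; the "possible outcomes" only get picked out when a measurement (or decoherence) selects an axis.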

So to get some intuition, let’s start with the simplest possible version of the problem: what happens inside a wave function (describing “system” but also “measurement device” and really, the whole universe) that is completely stationary? I.e., what dynamical processes are occurring while the wave function isn’t changing at all?

Your first guess here — nothing at all “happens” inside a wave function that doesn’t evolve with time — is completely correct. That’s what I explain in the video above, of a talk I gave at the Philosophy of Cosmology workshop in Tenerife. The talk is based on my recent paper with Kim Boddy and Jason Pollack.

Surprisingly, this claim — “nothing is happening if the quantum state isn’t changing with time” — manages to be controversial! People have this idea that a time-independent quantum state has a rich inner life, with civilizations rising and falling within it, even though the state is literally exactly the same at every moment in time. I’m not precisely sure why. It would be more understandable if that belief got you something good, like an answer to some pressing cosmological problem. But it’s the opposite — believing that all sorts of things are happening inside a time-independent state creates cosmological problems, in particular the Boltzmann Brain problem, where conscious observers keep popping into existence in empty space. So we’re in the funny situation where believing the correct thing — that nothing is happening when the quantum state isn’t changing — solves a problem, and yet some people prefer to believe the incorrect thing, even though that creates problems for them.
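The "nothing happens" claim is easy to see in formulas: an energy eigenstate evolves only by a global phase e^(−iEt), so every measurement probability is constant in time. A two-state sketch (the Hamiltonian is an arbitrary Hermitian matrix I made up):

```python
import numpy as np

# For an energy eigenstate, time evolution is the global phase
# exp(-i*E*t), so every probability |<a|psi(t)>|^2 is constant:
# nothing observable is "happening."
H = np.array([[1.0, 0.3], [0.3, 2.0]])   # arbitrary Hermitian Hamiltonian
E, vecs = np.linalg.eigh(H)
psi0 = vecs[:, 0].astype(complex)        # lowest-energy eigenstate

basis = np.array([1.0, 0.0])             # some fixed measurement outcome
probs = []
for t in [0.0, 1.0, 10.0, 100.0]:
    psi_t = np.exp(-1j * E[0] * t) * psi0    # exact time evolution
    probs.append(abs(np.vdot(basis, psi_t))**2)

print(probs)    # four identical numbers: the state's "inner life" is static
```

Any superposition of different energies, by contrast, would have genuinely time-dependent probabilities; that is what it means for something to happen.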

Quantum mechanics is a funny thing.


Ten Questions for the Philosophy of Cosmology

Last week I spent an enjoyable few days in Tenerife, one of the Canary Islands, for a conference on the Philosophy of Cosmology. The slides for all the talks are now online; videos aren’t up yet, but I understand they are forthcoming.

Stephen Hawking did not actually attend our meeting -- he was at the hotel for a different event. But he stopped by for an informal session on the arrow of time. Photo by Vishnya Maudlin.

It was a thought-provoking meeting, but one of my thoughts was: “We don’t really have a well-defined field called Philosophy of Cosmology.” At least, not yet. Talks were given by philosophers and by cosmologists; the philosophers generally gave good talks on the philosophy of physics, while some of the cosmologists gave solid-but-standard talks on cosmology. Some of the other cosmologists tried their hand at philosophy, and I thought those were generally less successful. Which is to be expected — it’s a sign that we need to do more work to set the foundations for this new subdiscipline.

A big part of defining an area of study is deciding on a set of questions that we all agree are worth thinking about. As a tiny step in that direction, here is my attempt to highlight ten questions — and various sub-questions — that naturally fall under the rubric of Philosophy of Cosmology. They fall under other rubrics as well, of course, and overlap significantly with each other. So there’s a certain amount of arbitrariness here — suggestions for improvements are welcome.

Here we go:

  1. In what sense, if any, is the universe fine-tuned? When can we say that physical parameters (cosmological constant, scale of electroweak symmetry breaking) or initial conditions are “unnatural”? What sets the appropriate measure with respect to which we judge naturalness of physical and cosmological parameters? Is there an explanation for cosmological coincidences such as the approximate equality between the density of matter and vacuum energy? Does inflation solve these problems, or exacerbate them? What conclusions should we draw from the existence of fine-tuning?
  2. How is the arrow of time related to the special state of the early universe? What is the best way to formulate the past hypothesis (the early universe was in a low entropy state) and the statistical postulate (uniform distribution within macrostates)? Can the early state be explained as a generic feature of dynamical processes, or is it associated with a specific quantum state of the universe, or should it be understood as a separate law of nature? In what way, if any, does the special early state help explain the temporal asymmetries of memory, causality, and quantum measurement?
  3. What is the proper role of the anthropic principle? Can anthropic reasoning be used to make reliable predictions? How do we define the appropriate reference class of observers? Given such a class, is there any reason to think of ourselves as “typical” within it? Does the prediction of freak observers (Boltzmann Brains) count as evidence against a cosmological scenario?
  4. What part should unobservable realms play in cosmological models? Does cosmic evolution naturally generate pocket universes, baby universes, or many branches of the wave function? Are other “universes” part of science if they can never be observed? How do we evaluate such models, and does the traditional process of scientific theory choice need to be adapted to account for non-falsifiable predictions? How confident can we ever be in early-universe scenarios such as inflation?
  5. What is the quantum state of the universe, and how does it evolve? Is there a unique prescription for calculating the wave function of the universe? Under what conditions are different parts of the quantum state “real,” in the sense that observers within them should be counted? What aspects of cosmology depend on competing formulations of quantum mechanics (Everett, dynamical collapse, hidden variables, etc.)? Do quantum fluctuations happen in equilibrium? What role does decoherence play in cosmic evolution? How do quantum and classical probabilities arise in cosmological predictions? What defines classical histories within the quantum state?
  6. Are space and time emergent or fundamental? Is quantum gravity a theory of quantized spacetime, or is spacetime only an approximation valid in a certain regime? What are the fundamental degrees of freedom? Is there a well-defined Hilbert space for the universe, and what is its dimensionality? Is time evolution fundamental, or does time emerge from correlations within a static state?
  7. What is the role of infinity in cosmology? Can the universe be infinitely big? Are the fundamental laws ultimately discrete? Can there be an essential difference between “infinite” and “really big”? Can the arrow of time be explained if the universe has an infinite amount of room in which to evolve? Are there preferred ways to compare infinitely big subsets of an infinite space of states?
  8. Can the universe have a beginning, or can it be eternal? Does a universe with a first moment require a cause or deeper explanation? Are there reasons why there is something rather than nothing? Can the universe be cyclic, with a consistent arrow of time? Could it be eternal and statistically symmetric around some moment of lowest entropy?
  9. How do physical laws and causality apply to the universe as a whole? Can laws be said to change or evolve? Does the universe as a whole maximize some interesting quantity such as simplicity, goodness, interestingness, or fecundity? Should laws be understood as governing/generative entities, or are they just a convenient way to compactly represent a large number of facts? Is the universe complete in itself, or does it require external factors to sustain it? Do the laws of physics require ultimate explanations, or can they simply be?
  10. How do complex structures and order come into existence and evolve? Is complexity a transient phenomenon that depends on entropy generation? Are there general principles governing physical, biological, and psychological complexity? Is the appearance of life likely or inevitable? Does consciousness play a central role in accounting for the universe?

Chances are very small that anyone else interested in the field, forced at gunpoint to pick the ten biggest questions, would choose exactly these ten. Such are the wild and wooly early days of any field, when the frontier is unexplored and the conventional wisdom has yet to be settled. Feel free to make suggestions.



Quantum Mechanics Smackdown

Greetings from the Big Apple, where the World Science Festival got off to a swinging start with the announcement of the Kavli Prize winners. The local favorite will of course be the Astrophysics prize, which was awarded to Alan Guth, Andrei Linde, and Alexei Starobinsky for pioneering the theory of cosmic inflation. But we should also congratulate Nanoscience winners Thomas Ebbesen, Stefan Hell, and Sir John B. Pendry, as well as Neuroscience winners Brenda Milner, John O’Keefe, and Marcus E. Raichle.

I’m participating in several WSF events, and one of them tonight will be live-streamed in this very internet. The title is Measure for Measure: Quantum Physics and Reality, and we kick off at 8pm Eastern, 5pm Pacific.

[Update: I had previously embedded the video here, but that seems to be broken. It’s still available on the WSF website.]

The other participants are David Albert, Sheldon Goldstein, and Rüdiger Schack, with the conversation moderated by Brian Greene. The group is not merely a randomly-selected collection of people who know and love quantum mechanics; each participant was carefully chosen to defend a certain favorite version of this most mysterious of physical theories.

  • David Albert will propound the idea of dynamical collapse theories, such as the Ghirardi-Rimini-Weber (GRW) model. They posit that QM is truly stochastic, with wave functions really “collapsing” at unpredictable times, with a tiny rate that is negligible for individual particles but becomes rapid for macroscopic objects.
  • Shelly Goldstein will support some version of hidden-variable theories such as Bohmian mechanics. It’s sometimes thought that hidden variables have been ruled out by experimental tests of Bell’s inequalities, but that’s not right; only local hidden variables have been excluded. Non-local hidden variables are still very viable!
  • Rüdiger Schack will be telling us about a relatively new approach called Quantum Bayesianism, or QBism for short. (Don’t love the approach, but the nickname is awesome.) The idea here is that QM is really a theory about our ignorance of the world, similar to what Tom Banks defended here way back when.
  • My job, of course, will be to defend the honor of the Everett (many-worlds) formulation. I’ve done a lot less serious research on this issue than the other folks, but I will make up for that disadvantage by supporting the theory that is actually true. And coincidentally, by the time we’ve started debating I should have my first official paper on the foundations of QM appear on the arxiv: new work on deriving the Born Rule in Everett with Chip Sebens.
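The scaling behind the first bullet is easy to make concrete. Here is a quick back-of-the-envelope calculation using the conventional GRW localization rate of about 10⁻¹⁶ collapses per second per particle (the macroscopic particle count below is illustrative, not from the post):

```python
# Back-of-the-envelope GRW collapse rates.
# lambda_grw is the conventional GRW localization rate per particle;
# the dust-grain particle count is illustrative.
lambda_grw = 1e-16          # collapses per second, per particle

# A single isolated particle: mean time between collapses.
t_single = 1 / lambda_grw   # seconds
years = t_single / 3.15e7   # ~3e8 years -- utterly negligible

# A dust grain with ~1e18 constituent particles: collapses become rapid.
n_grain = 1e18
rate_grain = n_grain * lambda_grw   # ~100 collapses per second

print(f"single particle: one collapse per ~{years:.0e} years")
print(f"dust grain:      ~{rate_grain:.0f} collapses per second")
```

That factor of 10¹⁸ is what lets GRW leave microscopic superpositions untouched while rapidly localizing anything macroscopic.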

(For what it’s worth, I cannot resist quoting David Wallace in this context: when faced with the measurement problem in quantum mechanics, philosophers are eager to change the physics, while physicists are sure it’s just a matter of better philosophy.)

(Note also that both Steven Weinberg and Gerard ‘t Hooft have proposed new approaches to thinking about quantum mechanics. Neither of them was judged to be sufficiently distinguished to appear on our panel.)

It’s not accidental that I call these “formulations” rather than “interpretations” of quantum mechanics. I’d like to see people abandon the phrase “interpretation of quantum mechanics” entirely (though I often slip up and use it myself). The options listed above are not different interpretations of the same underlying structure — they are legitimately different physical theories, with potentially different experimental consequences (as our recent work on quantum fluctuations shows).

Relatedly, I discovered this morning that celebrated philosopher Hilary Putnam has joined the blogosphere, with the whimsically titled “Sardonic Comment.” His very first post shares an email conversation he had about the measurement problem in QM, including my co-panelists David and Shelly, and also Tim Maudlin and Roderich Tumulka (but not me). I therefore had the honor of leaving the very first comment on Hilary Putnam’s blog, encouraging him to bring more Everettians into the discussion!



A Leap in Energy

The discovery by BICEP2 of the signature of gravitational waves in the cosmic microwave background — if it holds up! — is not only good evidence for inflation in the very early universe, it’s a fairly precise indication that inflation occurred at a very high energy scale. I thought of a vivid way to emphasize just how high that energy is.

Particle physicists like to keep things simple by characterizing all physical quantities in terms of a single kind of unit — typically energy, and typically measured in electron volts. That’s part of the magic of natural units. We live in a world governed by relativity, so the speed of light c provides a natural unit of velocity. We also live in a world governed by quantum mechanics, so Planck’s constant ℏ provides a natural unit of action. And we live in a world governed by statistical mechanics, so Boltzmann’s constant k provides a natural conversion between energy and temperature. We therefore set these quantities equal to unity, ℏ = c = k = 1. Once that’s done, mass and temperature have the same units as energy. Time and distance have units of 1/energy. Energy density is energy per unit spatial volume, which works out to (energy)4. This kind of reasoning makes particle physicists happy, since they like to think of everything in terms of energy scales.

So, thinking about everything in terms of energy scales, what’s the energy of everyday life? It makes sense to choose room temperature, about 295 Kelvin. That works out to about 0.02 electron volts, which we can call the energy of everyday life:

E_{\rm everyday} = 2 \times 10^{-2}\, {\rm eV}.
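The conversion uses Boltzmann’s constant in natural-unit-friendly form, k ≈ 8.617×10⁻⁵ eV/K (a standard value, not quoted above); a quick check:

```python
# Convert room temperature to an energy scale via Boltzmann's constant.
k_B = 8.617e-5        # eV per kelvin (standard value, rounded)
T_room = 295          # kelvin

E_everyday = k_B * T_room   # ~0.025 eV, i.e. ~2e-2 eV as quoted above
print(f"E_everyday ~ {E_everyday:.3f} eV")
```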

One way of thinking about the progress of fundamental physics is to track the progress of our understanding to higher and higher energy scales. The highest energies we’ve ever probed in experiments here on Earth are those at the Large Hadron Collider. The last run of the LHC reached energies of 8 TeV, or 8×1012 eV. But it would be an exaggeration to say that we really understand those energies; when protons collide at the LHC, their energies are distributed among a number of particles in each event. That’s why the heaviest particles we’ve ever found are the Higgs boson and the top quark, both with masses a bit under 0.2 TeV. So let’s call that the highest energy we’ve understood through experiments here on Earth:

E_{\rm understanding} = 2 \times 10^{11}\, {\rm eV}.

Thus, the progress of science has extended our understanding a factor of 1013, thirteen orders of magnitude, above our everyday experience:

E_{\rm understanding}/E_{\rm everyday} = 10^{13}.

Not too shabby, for a species of jumped-up apes with only an intermittent dedication to the ideals of rationality and empiricism.

Now let’s turn to inflation. The great thing about detecting gravitational waves in the CMB is that, in contrast with the density perturbations we’ve known about for some time, the gravitational wave amplitude depends solely on the expansion rate during inflation, not on any details about the scalar-field potential. And the expansion rate is directly related to the energy density (energy to the fourth power) by general relativity itself. So measuring the amplitude, as BICEP2 did, tells us the inflationary energy scale directly. And the answer is:

E_{\rm inflation} = 2 \times 10^{25}\, {\rm eV}.

For comparison, the reduced Planck energy (where “reduced” means “including the factor of 8π where it should be”) is 2×1027 eV, a mere stone’s throw away.

So, you can do the math yourself. Inflation was going on at energy scales that exceed those we explore here on Earth by a factor of about

E_{\rm inflation}/E_{\rm understanding} = 10^{14}.

In other words, BICEP2 has extended our experimental reach, as measured by energy scale, by an amount (1014) slightly larger than the total previous progress of all of science (1013).
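The whole energy ladder, as order-of-magnitude arithmetic (just reproducing the numbers above):

```python
import math

# The three energy scales discussed above, in eV.
E_everyday      = 2e-2    # room temperature
E_understanding = 2e11    # heaviest particles probed at the LHC
E_inflation     = 2e25    # inferred from BICEP2 (if it holds up!)

# Orders of magnitude spanned by each leap.
science_so_far = math.log10(E_understanding / E_everyday)   # 13
bicep2_leap    = math.log10(E_inflation / E_understanding)  # 14

print(f"everyday -> LHC:   10^{science_so_far:.0f}")
print(f"LHC -> inflation:  10^{bicep2_leap:.0f}")
```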

We don’t, of course, understand everything between LHC energies and inflationary energies, not even close. But we (the royal “we”) have been able to make an enormous extrapolation, using scientific reasoning, and get the right answer. It’s a big deal.



Effective Field Theory and Large-Scale Structure

Been falling behind on my favorite thing to do on the blog: post summaries of my own research papers. Back in October I submitted a paper with two Caltech colleagues, postdoc Stefan Leichenauer and grad student Jason Pollack, on the intriguing intersection of effective field theory (EFT) and cosmological large-scale structure (LSS). Now’s a good time to bring it up, as there’s a great popular-level discussion of the idea by Natalie Wolchover in Quanta.

So what is the connection between EFT and LSS? An effective field theory, as loyal readers know, is a way to describe what happens at low energies (or, equivalently, long wavelengths) without having a complete picture of what’s going on at higher energies. In particle physics, we can calculate processes in the Standard Model perfectly well without having a complete picture of grand unification or quantum gravity. It’s not that higher energies are unimportant, it’s just that all of their effects on low-energy physics can be summed up in their contributions to just a handful of measurable parameters.

In cosmology, we consider the evolution of LSS from tiny perturbations at early times to the splendor of galaxies and clusters that we see today. It’s really a story of particles — photons, atoms, dark matter particles — more than a field theory (although of course there’s an even deeper description in which everything is a field theory, but that’s far removed from cosmology). So the right tool is the Boltzmann equation — not the entropy formula that appears on his tombstone, but the equation that tells us how a distribution of particles evolves in phase space. However, the number of particles in the universe is very large indeed, so it’s the most obvious thing in the world to make an approximation by “smoothing” the particle distribution into an effective fluid. That fluid has a density and a velocity, but also has parameters like an effective speed of sound and viscosity. As Leonardo Senatore, one of the pioneers of this approach, says in Quanta, the viscosity of the universe is approximately equal to that of chocolate syrup.
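A toy version of that smoothing step, in pure Python (a one-dimensional cartoon I made up for illustration, not the actual EFT-of-LSS procedure): deposit particle positions onto a grid, then convolve with a Gaussian window to turn a noisy particle distribution into a smooth effective fluid density.

```python
import math
import random

def smooth_density(positions, n_cells=64, box=1.0, sigma=0.05):
    """Bin particle positions onto a grid, then smooth with a periodic
    Gaussian window -- a cartoon of coarse-graining a particle
    distribution into an effective fluid density."""
    dx = box / n_cells
    # Step 1: raw (noisy) density from particle counts, normalized
    # so the density integrates to unity over the box.
    raw = [0.0] * n_cells
    for x in positions:
        raw[int(x / dx) % n_cells] += 1.0 / (len(positions) * dx)
    # Step 2: convolve with a normalized Gaussian kernel (periodic box).
    kernel = [math.exp(-0.5 * (min(i, n_cells - i) * dx / sigma) ** 2)
              for i in range(n_cells)]
    norm = sum(kernel)
    return [sum(raw[(i - j) % n_cells] * kernel[j] for j in range(n_cells)) / norm
            for i in range(n_cells)]

random.seed(1)
# Particles clustered around x = 0.5, mimicking a lump of structure.
parts = [random.gauss(0.5, 0.1) % 1.0 for _ in range(5000)]
rho = smooth_density(parts)
print(sum(r * (1.0 / 64) for r in rho))  # integrates to ~1.0
```

The real program keeps track of the smoothed velocity field as well, and of how the small-scale modes that were integrated out feed back into effective parameters like the sound speed and viscosity mentioned above.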

So the goal of the EFT of LSS program (which is still in its infancy, although there is an important prehistory) is to derive the correct theory of the effective cosmological fluid. That is, to determine how all of the complicated churning dynamics at the scales of galaxies and clusters feeds back onto what happens at larger distances where things are relatively smooth and well-behaved. It turns out that this is more than a fun thing for theorists to spend their time with; getting the EFT right lets us describe what happens even at some length scales that are formally “nonlinear,” and therefore would conventionally be thought of as inaccessible to anything but numerical simulations. I really think it’s the way forward for comparing theoretical predictions to the wave of precision data we are blessed with in cosmology.

Here is the abstract for the paper I wrote with Stefan and Jason:

A Consistent Effective Theory of Long-Wavelength Cosmological Perturbations
Sean M. Carroll, Stefan Leichenauer, Jason Pollack

Effective field theory provides a perturbative framework to study the evolution of cosmological large-scale structure. We investigate the underpinnings of this approach, and suggest new ways to compute correlation functions of cosmological observables. We find that, in contrast with quantum field theory, the appropriate effective theory of classical cosmological perturbations involves interactions that are nonlocal in time. We describe an alternative to the usual approach of smoothing the perturbations, based on a path-integral formulation of the renormalization group equations. This technique allows for improved handling of short-distance modes that are perturbatively generated by long-distance interactions.

As useful as the EFT of LSS approach is, our own contribution is mostly on the formalism side of things. (You will search in vain for any nice plots comparing predictions to data in our paper — but do check out the references.) We try to be especially careful in establishing the foundations of the approach, and along the way we show that it’s not really a “field” theory in the conventional sense, as there are interactions that are nonlocal in time (a result also found by Carrasco, Foreman, Green, and Senatore). This is a formal worry, but doesn’t necessarily mean that the theory is badly behaved; one just has to work a bit to understand the time-dependence of the effective coupling constants.

Here is a video from a physics colloquium I gave at NYU on our paper. A colloquium is intermediate in level between a public talk and a technical seminar, so there are some heavy equations at the end but the beginning is pretty motivational. Enjoy!

Colloquium October 24th, 2013 -- Effective Field Theory and Cosmological Large-Scale Structure

