
New Course: The Many Hidden Worlds of Quantum Mechanics

In past years I’ve done several courses for The Great Courses/Wondrium (formerly The Teaching Company): Dark Matter and Dark Energy, Mysteries of Modern Physics: Time, and The Higgs Boson and Beyond. Now I’m happy to announce a new one, The Many Hidden Worlds of Quantum Mechanics.

This is a series of 24 half-hour lectures, given by me with impressive video effects from the Wondrium folks.

The content will be somewhat familiar if you’ve read my book Something Deeply Hidden — the course follows a similar outline, with a few new additions and elaborations along the way. So it’s both a general introduction to quantum mechanics, and also an in-depth exploration of the Many Worlds approach in particular. It’s meant for absolutely everybody — essentially no equations this time! — but 24 lectures is plenty of time to go into depth.

Check out this trailer:

As I type this on Monday 27 November, I believe there is some kind of sale going on! So move quickly to get your quantum mechanics at unbelievably affordable prices.

Thanksgiving

This year we give thanks for a feature of nature that is frequently misunderstood: quanta. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, Riemannian Geometry, the speed of light, the Jarzynski equality, the moons of Jupiter, space, black hole entropy, electromagnetism, and Arrow’s Impossibility Theorem.)

Of course quantum mechanics is very important and somewhat misunderstood in its own right; I can recommend a good book if you’d like to learn more. But we’re not getting into the measurement problem or the reality problem just now. I want to highlight one particular feature of quantum mechanics that is sometimes misinterpreted: the fact that some things, like individual excitations of quantized fields (“particles”) or the energy levels of atoms, come in sets of discrete numbers, rather than taking values on a smooth continuum. These discrete chunks of something-or-other are the “quanta” being referred to in the title of a different book, scheduled to come out next spring.

The basic issue is that people hear the phrase “quantum mechanics,” or even take a course in it, and come away with the impression that reality is somehow pixelized — made up of smallest possible units — rather than being ultimately smooth and continuous. That’s not right! Quantum theory, as far as it is currently understood, is all about smoothness. The lumpiness of “quanta” is just apparent, although it’s a very important appearance.

What’s actually happening is a combination of (1) fundamentally smooth functions, (2) differential equations, (3) boundary conditions, and (4) what we care about.

This might sound confusing, so let’s fix ideas by looking at a ubiquitous example: the simple harmonic oscillator. That can be thought of as a particle moving in one dimension, x, with a potential energy that looks like a parabola: V(x) = \frac{1}{2}\omega^2x^2. In classical mechanics, there is a lowest-energy state where the particle just sits at the bottom of its potential, unmoving, so both its kinetic and potential energies are zero. We can give it any positive amount of energy we like, either by kicking it to impart motion or just picking it up and dropping it in the potential at some point other than the origin.

Quantum mechanically, that’s not quite true (although it’s truer than you might think). Now we have a set of discrete energy levels, starting from the ground state and going upward in equal increments. Quanta!

But we didn’t put the quanta in. They come out of the above four ingredients. First, the particle is described not by its position and momentum, but by its wave function, \psi(x,t). Nothing discrete about that; it’s a fundamentally smooth function. But second, that function isn’t arbitrary; it’s going to obey the Schrödinger equation, which is a special differential equation. The Schrödinger equation tells us how the wave function evolves with time, and we can solve it starting with any initial wave function \psi(x, 0) we like. Still nothing discrete there. But third, there is a requirement coming from the idea of boundary conditions: if the wave function grew (or even remained constant) as x\rightarrow \pm \infty, its potential energy would blow up, since the potential itself grows without bound out there. So any state of finite energy must have a wave function that falls to zero at infinity. (It actually has to diminish at infinity just to be a normalizable wave function at all, but for the moment let’s think about the energy.) When we bring in the fourth ingredient, “what we care about,” the answer is that we care about low-energy states of the oscillator. That’s because in real-world situations, there is dissipation. Whatever physical system is being modeled by the harmonic oscillator, in reality it will most likely have friction or be able to give off photons or something like that. So no matter where we start, left to its own devices the oscillator will diminish in energy. So we generally care about states with relatively low energy.

Since this is quantum mechanics after all, most states of the wave function won’t have a definite energy, in much the same way they will not have a definite position or momentum. (They have “an energy” — the expectation value of the Hamiltonian — but not a “definite” one, since you won’t necessarily observe that value.) But there are some special states, the energy eigenstates, associated with a specific, measurable amount of energy. It is those states that are discrete: they come in a set made of a lowest-energy “ground” state, plus a ladder of evenly-spaced states of ever-higher energy.
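We can watch that evenly spaced ladder fall out of a perfectly smooth equation. Here is a minimal numerical sketch (grid size and box length are my own illustrative choices, with \hbar = m = \omega = 1): discretize the oscillator Hamiltonian and diagonalize it.

```python
import numpy as np

# Discretize the harmonic oscillator on a grid (hbar = m = omega = 1);
# the grid parameters are hand-picked for illustration, nothing canonical.
n, box = 1200, 16.0
x = np.linspace(-box / 2, box / 2, n)
dx = x[1] - x[0]

# H = -1/2 d^2/dx^2 + 1/2 x^2, using a second-order finite-difference Laplacian.
H = (np.diag(1.0 / dx**2 + 0.5 * x**2)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), -1))

E = np.linalg.eigvalsh(H)[:5]
print(np.round(E, 3))   # ≈ [0.5, 1.5, 2.5, 3.5, 4.5]: a ground state plus equal increments
```

Nothing discrete was fed in by hand; the evenly spaced eigenvalues emerge from a smooth function obeying a smooth differential equation with boundary conditions.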

We can even see why that’s true, and why the states look the way they do, just by thinking about boundary conditions. Since each state has finite energy, the wave function has to be zero at the far left and also at the far right. The energy in the state comes from two sources: the potential, and the “gradient” energy from the wiggles in the wave function. The lowest-energy state will be a compromise between “staying as close to x=0 as possible” and “not changing too rapidly at any point.” That compromise looks like the bottom (red) curve in the figure: starts at zero on the left, gradually increases and then decreases as it continues on to the right. It is a feature of eigenstates that they are all “orthogonal” to each other — there is zero net overlap between them. (Technically, if you multiply them together and integrate over x, the answer is zero.) So the next eigenstate will first oscillate down, then up, then back to zero. Subsequent energy eigenstates will each oscillate just a bit more, so they contain the least possible energy while being orthogonal to all the lower-lying states. Those requirements mean that they will each pass through zero exactly one more time than the state just below them.

And that is where the “quantum” nature of quantum mechanics comes from. Not from fundamental discreteness or anything like that; just from the properties of the set of solutions to a perfectly smooth differential equation. It’s precisely the same as why you get a fundamental note from a violin string tied at both ends, as well as a series of discrete harmonics, even though the string itself is perfectly smooth.

One cool aspect of this is that it also explains why quantum fields look like particles. A field is essentially the opposite of a particle: the latter has a specific location, while the former is spread all throughout space. But quantum fields solve equations with boundary conditions, and we care about the solutions. It turns out (see above-advertised book for details!) that if you look carefully at just a single “mode” of a field — a plane-wave vibration with specified wavelength — its wave function behaves much like that of a simple harmonic oscillator. That is, there is a ground state, a first excited state, a second excited state, and so on. Through a bit of investigation, we can verify that these states look and act like a state with zero particles, one particle, two particles, and so on. That’s where particles come from.

We see particles in the world, not because it is fundamentally lumpy, but because it is fundamentally smooth, while obeying equations with certain boundary conditions. It’s always tempting to take what we see to be the underlying truth of nature, but quantum mechanics warns us not to give in.

Is reality fundamentally discrete? Nobody knows. Quantum mechanics is certainly not, even if you have quantum gravity. Nothing we know about gravity implies that “spacetime is discrete at the Planck scale.” (That may be true, but it is not implied by anything we currently know; indeed, it is counter-indicated by things like the holographic principle.) You can think of the Planck length as the scale at which the classical approximation to spacetime is likely to break down, but that’s a statement about our approximation schemes, not the fundamental nature of reality.

States in quantum theory are described by rays in Hilbert space, which is a vector space, and vector spaces are completely smooth. You can construct a candidate vector space by starting with some discrete things like bits, then considering linear combinations, as happens in quantum computing (qubits) or various discretized models of spacetime. The resulting Hilbert space is finite-dimensional, but is still itself very much smooth, not discrete. (Rough guide: “quantizing” a discrete system gets you a finite-dimensional Hilbert space, quantizing a smooth system gets you an infinite-dimensional Hilbert space.) True discreteness requires throwing out ordinary quantum mechanics and replacing it with something fundamentally discrete, hoping that conventional QM emerges in some limit. That’s the approach followed, for example, in models like the Wolfram Physics Project. I recently wrote a paper proposing a judicious compromise, where standard QM is modified in the mildest possible way, replacing evolution in a smooth Hilbert space with evolution on a discrete lattice defined on a torus. It raises some cosmological worries, but might otherwise be phenomenologically acceptable. I don’t yet know if it has any specific experimental consequences, but we’re thinking about that.
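The point that a finite-dimensional Hilbert space is still perfectly smooth takes only a couple of lines of linear algebra: start with two discrete basis states, and you immediately get a continuum of allowed superpositions.

```python
import numpy as np

# A qubit: two discrete basis states, but a continuous family of physical states.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

for theta in np.linspace(0.0, np.pi / 2, 9):
    psi = np.cos(theta) * ket0 + np.sin(theta) * ket1   # a valid state for every theta
    assert np.isclose(psi @ psi, 1.0)                   # each one properly normalized

print("every point along the continuum is a legitimate quantum state")
```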


Energy Conservation and Non-Conservation in Quantum Mechanics

Conservation of energy is a somewhat sacred principle in physics, though it can be tricky in certain circumstances, such as an expanding universe. Quantum mechanics is another context in which energy conservation is a subtle thing — so much so that it’s still worth writing papers about, which Jackie Lodman and I recently did. In this blog post I’d like to explain two things:

  • In the Many-Worlds formulation of quantum mechanics, the energy of the wave function of the universe is perfectly conserved. It doesn’t “require energy to make new universes,” so that is not a respectable objection to Many-Worlds.
  • In any formulation of quantum mechanics, energy doesn’t appear to be conserved as seen by actual observers performing quantum measurements. This is a not-very-hard-to-see aspect of quantum mechanics, which nevertheless hasn’t received a great deal of attention in the literature. It is a phenomenon that should be experimentally observable, although as far as I know it hasn’t yet been; we propose a simple experiment to do so.

The first point here is well-accepted and completely obvious to anyone who understands Many-Worlds. The second is much less well-known, and it’s what Jackie and I wrote about. I’m going to try to make this post accessible to folks who don’t know QM, but sometimes it’s hard to make sense without letting the math be the math.

First let’s think about energy in classical mechanics. You have a system characterized by some quantities like position, momentum, angular momentum, and so on, for each moving part within the system. Given some facts of the external environment (like the presence of gravitational or electric fields), the energy is simply a function of these quantities. You have for example kinetic energy, which depends on the momentum (or equivalently on the velocity), potential energy, which depends on the location of the object, and so on. The total energy is just the sum of all these contributions. If we don’t explicitly put any energy into the system or take any out, the energy should be conserved — i.e. the total energy remains constant over time.
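As a concrete check of the classical statement, here is a minimal sketch (unit mass and spring constant are my own illustrative choices) that integrates a harmonic oscillator forward in time and watches the total energy stay put:

```python
# Classical harmonic oscillator with m = 1 and V(x) = x^2/2, so E = p^2/2 + x^2/2.
x, p, dt = 1.0, 0.0, 1e-3
E0 = 0.5 * p**2 + 0.5 * x**2

for _ in range(10_000):      # leapfrog (symplectic) integration: kick, drift, kick
    p -= 0.5 * dt * x        # dp/dt = -dV/dx = -x
    x += dt * p
    p -= 0.5 * dt * x

E1 = 0.5 * p**2 + 0.5 * x**2
print(abs(E1 - E0))          # tiny: energy is conserved up to integration error
```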

There are two main things you need to know about quantum mechanics. First, the state of a quantum system is no longer specified by things like “position” or “momentum” or “spin.” Those classical notions are now thought of as possible measurement outcomes, not well-defined characteristics of the system. The quantum state — or wave function — is a superposition of various possible measurement outcomes, where “superposition” is a fancy term for “linear combination.”

Consider a spinning particle. By doing experiments to measure its spin along a certain axis, we discover that we only ever get two possible outcomes, which we might call “spin-up,” |\uparrow\rangle, and “spin-down,” |\downarrow\rangle. But before we’ve made the measurement, the system can be in some superposition of both possibilities. We would write |\Psi\rangle, the wave function of the spin, as

    \[ |\Psi\rangle = a|\uparrow\rangle + b|\downarrow\rangle, \]

where a and b are numerical coefficients, the “amplitudes” corresponding to spin-up and spin-down, respectively. (They will generally be complex numbers, but we don’t have to worry about that.)

The second thing you have to know about quantum mechanics is that measuring the system changes its wave function. When we have a spin in a superposition of this type, we can’t predict with certainty what outcome we will see. All we can predict is the probability, which is given by the amplitude squared. And once that measurement is made, the wave function “collapses” into a state that is purely what is observed. So we have

    \[ |\Psi\rangle_\mathrm{post-measurement} = \begin{cases} |\uparrow\rangle, & \mbox{with probability } |a|^2,\\ |\downarrow\rangle, & \mbox{with probability } |b|^2. \end{cases}\]

At least, that’s what we teach our students — Many-Worlds has a slightly more careful story to tell, as we’ll see.
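The collapse prescription is easy to simulate. With illustrative amplitudes a = 0.6 and b = 0.8 (so |a|^2 + |b|^2 = 1), repeated measurements reproduce the Born-rule frequencies:

```python
import numpy as np

a, b = 0.6, 0.8                         # amplitudes for up and down; |a|^2 + |b|^2 = 1
probs = [abs(a)**2, abs(b)**2]          # Born rule: 0.36 and 0.64

rng = np.random.default_rng(42)
outcomes = rng.choice(["up", "down"], size=100_000, p=probs)

frac_up = (outcomes == "up").mean()
print(frac_up)                          # ≈ 0.36, the amplitude squared
```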

We can now ask about energy, but the concept of energy in quantum mechanics is a bit different from what we are used to in classical mechanics. …


Thanksgiving

This year we give thanks for one of the very few clues we have to the quantum nature of spacetime: black hole entropy. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, Riemannian Geometry, the speed of light, the Jarzynski equality, the moons of Jupiter, and space.)

Black holes are regions of spacetime where, according to the rules of Einstein’s theory of general relativity, the curvature of spacetime is so dramatic that light itself cannot escape. Physical objects (those that move at or more slowly than the speed of light) can pass through the “event horizon” that defines the boundary of the black hole, but they never escape back to the outside world. Black holes are therefore black — even light cannot escape — thus the name. At least that would be the story according to classical physics, of which general relativity is a part. Adding quantum ideas to the game changes things in important ways. But we have to be a bit vague — “adding quantum ideas to the game” rather than “considering the true quantum description of the system” — because physicists don’t yet have a fully satisfactory theory that includes both quantum mechanics and gravity.

The story goes that in the early 1970’s, James Bardeen, Brandon Carter, and Stephen Hawking pointed out an analogy between the behavior of black holes and the laws of good old thermodynamics. For example, the Second Law of Thermodynamics (“Entropy never decreases in closed systems”) was analogous to Hawking’s “area theorem”: in a collection of black holes, the total area of their event horizons never decreases over time. Jacob Bekenstein, who at the time was a graduate student working under John Wheeler at Princeton, proposed to take this analogy more seriously than the original authors had in mind. He suggested that the area of a black hole’s event horizon really is its entropy, or at least proportional to it.

This annoyed Hawking, who set out to prove Bekenstein wrong. After all, if black holes have entropy then they should also have a temperature, and objects with nonzero temperatures give off blackbody radiation, but we all know that black holes are black. But he ended up actually proving Bekenstein right; black holes do have entropy, and temperature, and they even give off radiation. We now refer to the entropy of a black hole as the “Bekenstein-Hawking entropy.” (It is just a useful coincidence that the two gentlemen’s initials, “BH,” can also stand for “black hole.”)

Consider a black hole whose event horizon has area A. Then its Bekenstein-Hawking entropy is

    \[S_\mathrm{BH} = \frac{c^3}{4G\hbar}A,\]

where c is the speed of light, G is Newton’s constant of gravitation, and \hbar is Planck’s constant of quantum mechanics. A simple formula, but already intriguing, as it seems to combine relativity (c), gravity (G), and quantum mechanics (\hbar) into a single expression. That’s a clue that whatever is going on here, it has something to do with quantum gravity. And indeed, understanding black hole entropy and its implications has been a major focus among theoretical physicists for over four decades now, leading to the holographic principle, black-hole complementarity, the AdS/CFT correspondence, and the many investigations of the information-loss puzzle.
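Plugging numbers into the formula gives a sense of scale. A rough sketch for a solar-mass black hole (SI constants rounded; S comes out in units of Boltzmann’s constant):

```python
import math

# Physical constants (SI, rounded to four digits) and one solar mass.
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
M = 1.989e30       # solar mass, kg

r_s = 2 * G * M / c**2              # Schwarzschild radius, about 3 km
A = 4 * math.pi * r_s**2            # horizon area
S = c**3 * A / (4 * G * hbar)       # Bekenstein-Hawking entropy, in units of k_B
print(f"{S:.1e}")                   # ≈ 1e77, vastly more than the Sun's thermal entropy
```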

But there exists a prior puzzle: what is the black hole entropy, anyway? What physical quantity does it describe?

Entropy itself was invented as part of the development of thermodynamics in the mid-19th century, as a way to quantify the transformation of energy from a potentially useful form (like fuel, or a coiled spring) into useless heat, dissipated into the environment. It was what we might call a “phenomenological” notion, defined in terms of macroscopically observable quantities like heat and temperature, without any more fundamental basis in a microscopic theory. But more fundamental definitions came soon thereafter, once people like Maxwell and Boltzmann and Gibbs started to develop statistical mechanics, and showed that the laws of thermodynamics could be derived from more basic ideas of atoms and molecules.

Hawking’s derivation of black hole entropy was in the phenomenological vein. He showed that black holes give off radiation at a certain temperature, and then used the standard thermodynamic relations between entropy, energy, and temperature to derive his entropy formula. But this leaves us without any definite idea of what the entropy actually represents.

One of the reasons why entropy is thought of as a confusing concept is that there is more than one notion that goes under the same name. To dramatically over-simplify the situation, let’s consider three different ways of relating entropy to microscopic physics, named after three famous physicists:

  • Boltzmann entropy says that we take a system with many small parts, and divide all the possible states of that system into “macrostates,” so that two “microstates” are in the same macrostate if they are macroscopically indistinguishable to us. Then the entropy is just (the logarithm of) the number of microstates in whatever macrostate the system is in.
  • Gibbs entropy is a measure of our lack of knowledge. We imagine that we describe the system in terms of a probability distribution of what microscopic states it might be in. High entropy is when that distribution is very spread-out, and low entropy is when it is highly peaked around some particular state.
  • von Neumann entropy is a purely quantum-mechanical notion. Given some quantum system, the von Neumann entropy measures how much entanglement there is between that system and the rest of the world.

These seem like very different things, but there are formulas that relate them to each other in the appropriate circumstances. The common feature is that we imagine a system has a lot of microscopic “degrees of freedom” (jargon for “things that can happen”), which can be in one of a large number of states, but we are describing it in some kind of macroscopic coarse-grained way, rather than knowing what its exact state actually is. The Boltzmann and Gibbs entropies worry people because they seem to be subjective, requiring either some seemingly arbitrary carving of state space into macrostates, or an explicit reference to our personal state of knowledge. The von Neumann entropy is at least an objective fact about the system. You can relate it to the others by analogizing the wave function of a system to a classical microstate. Because of entanglement, a quantum subsystem generally cannot be described by a single wave function; the von Neumann entropy measures (roughly) how many different quantum states must be involved to account for its entanglement with the outside world.
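A minimal worked example of the von Neumann entropy: take one qubit of a maximally entangled Bell pair, trace out its partner, and compute S = -\mathrm{Tr}(\rho \log_2 \rho). The answer is exactly one bit.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), reshaped as a tensor psi[a, b].
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(bell, bell).reshape(2, 2, 2, 2)     # rho[a, b, a', b']

# Partial trace over the second qubit leaves the reduced density matrix rho_A.
rho_A = np.trace(rho, axis1=1, axis2=3)            # = diag(1/2, 1/2)

evals = np.linalg.eigvalsh(rho_A)
S = -sum(p * np.log2(p) for p in evals if p > 1e-12)
print(S)   # 1.0: one full bit of entanglement with the outside world
```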

So which, if any, of these is the black hole entropy? To be honest, we’re not sure. Most of us think the black hole entropy is a kind of von Neumann entropy, but the details aren’t settled.

One clue we have is that the black hole entropy is proportional to the area of the event horizon. For a while this was thought of as a big, surprising thing, since for something like a box of gas, the entropy is proportional to its total volume, not the area of its boundary. But people gradually caught on that there was never any reason to think of black holes like boxes of gas. In quantum field theory, regions of space have a nonzero von Neumann entropy even in empty space, because modes of quantum fields inside the region are entangled with those outside. The good news is that this entropy is (often, approximately) proportional to the area of the region, for the simple reason that field modes near one side of the boundary are highly entangled with modes just on the other side, and not very entangled with modes far away. So maybe the black hole entropy is just like the entanglement entropy of a region of empty space?

Would that it were so easy. Two things stand in the way. First, Bekenstein noticed another important feature of black holes: not only do they have entropy, but they have the most entropy that you can fit into a region of a fixed size (the Bekenstein bound). That’s very different from the entanglement entropy of a region of empty space in quantum field theory, where it is easy to imagine increasing the entropy by creating extra entanglement between degrees of freedom deep in the interior and those far away. So we’re back to being puzzled about why the black hole entropy is proportional to the area of the event horizon, if it’s the most entropy a region can have. That’s the kind of reasoning that leads to the holographic principle, which imagines that we can think of all the degrees of freedom inside the black hole as “really” living on the boundary, rather than being uniformly distributed inside. (There is a classical manifestation of this philosophy in the membrane paradigm for black hole astrophysics.)

The second obstacle to simply interpreting black hole entropy as entanglement entropy of quantum fields is the simple fact that it’s a finite number. While the quantum-field-theory entanglement entropy is proportional to the area of the boundary of a region, the constant of proportionality is infinity, because there are an infinite number of quantum field modes. So why isn’t the entropy of a black hole equal to infinity? Maybe we should think of the black hole entropy as measuring the amount of entanglement over and above that of the vacuum (called the Casini entropy). Maybe, but then if we remember Bekenstein’s argument that black holes have the most entropy we can attribute to a region, all that infinite amount of entropy that we are ignoring is literally inaccessible to us. It might as well not be there at all. It’s that kind of reasoning that leads some of us to bite the bullet and suggest that the number of quantum degrees of freedom in spacetime is actually a finite number, rather than the infinite number that would naively be implied by conventional non-gravitational quantum field theory.

So — mysteries remain! But it’s not as if we haven’t learned anything. The very fact that black holes have entropy of some kind implies that we can think of them as collections of microscopic degrees of freedom of some sort. (In string theory, in certain special circumstances, you can even identify what those degrees of freedom are.) That’s an enormous change from the way we would think about them in classical (non-quantum) general relativity. Black holes are supposed to be completely featureless (they “have no hair,” another idea of Bekenstein’s), with nothing going on inside them once they’ve formed and settled down. Quantum mechanics is telling us otherwise. We haven’t fully absorbed the implications, but this is surely a clue about the ultimate quantum nature of spacetime itself. Such clues are hard to come by, so for that we should be thankful.


The Biggest Ideas in the Universe | 24. Science

For the triumphant final video in the Biggest Ideas series, we look at a big idea indeed: Science. What is science, and why is it so great? And I also take the opportunity to dip a toe into the current state of fundamental physics — are predictions that unobservable universes exist really science? What if we never discover another particle? Is it worth building giant expensive experiments? Tune in to find out.

The Biggest Ideas in the Universe | 24. Science

Thanks to everyone who has watched along the way. It’s been quite a ride.


The Biggest Ideas in the Universe | 23. Criticality and Complexity

Spherical cows are important because they let us abstract away all the complications of the real world and think about underlying principles. But what about when the complications are the point? Then we enter the realm of complex systems — which, interestingly, has its own spherical cows. One such is the idea of a “critical” system, balanced at a point where there is interesting dynamics at all scales. We know a lot about such systems, without approaching anything like a complete understanding just yet.

The Biggest Ideas in the Universe | 23. Criticality and Complexity

And here is the associated Q&A video:

The Biggest Ideas in the Universe | Q&A 23 - Criticality and Complexity

The Biggest Ideas in the Universe | 22. Cosmology

Surely one of the biggest ideas in the universe has to be the universe itself, no? Or, as I claim, the very fact that the universe is comprehensible — as an abstract philosophical point, but also as the empirical observation that the universe we see is a pretty simple place, at least on the largest scales. We focus here mostly on the thermal history — how the constituents of the universe evolve as space expands and the temperature goes down.

The Biggest Ideas in the Universe | 22. Cosmology

And here is the associated Q&A video:

The Biggest Ideas in the Universe | Q&A 22 - Cosmology

The Biggest Ideas in the Universe | 21. Emergence

Little things can come together to make big things. And those big things can often be successfully described by an approximate theory that can be qualitatively different from the theory of the little things. We say that a macroscopic approximate theory has “emerged” from the microscopic one. But the concept of emergence is a bit more general than that, covering any case where some behavior of one theory is captured by another one even in the absence of complete information. An important and subtle example is (of course) how the classical world emerges from the quantum one.

The Biggest Ideas in the Universe | 21. Emergence

And here is the Q&A video. Sorry, I hadn’t realized that comments weren’t showing up here on the blog! I have a crack team rushing to get that fixed.

The Biggest Ideas in the Universe | Q&A 21 - Emergence

In the video I refer to a bunch of research papers; here they are:

Mad-Dog Everettianism: https://arxiv.org/abs/1801.08132

Locality from the Spectrum: https://arxiv.org/abs/1702.06142

Finite-Dimensional Hilbert Space: https://arxiv.org/abs/1704.00066

The Einstein Equation of State: https://arxiv.org/abs/gr-qc/9504004

Space from Hilbert Space: https://arxiv.org/abs/1606.08444

Einstein’s Equation from Entanglement: https://arxiv.org/abs/1712.02803

Quantum Mereology: https://arxiv.org/abs/2005.12938


The Biggest Ideas in the Universe | 20. Entropy and Information

You knew this one was coming, right? Why the past is different from the future, and why we seem to flow through time. Also a bit about how different groups of scientists use the idea of “information” in very different ways.

The Biggest Ideas in the Universe | 20. Entropy and Information

And here is the associated Q&A video:

The Biggest Ideas in the Universe | Q&A 20 - Entropy and Information

The Biggest Ideas in the Universe | 18. Atoms

Eighteenth-century chemists famously jumped the gun by using the ancient Greek word “atoms,” referring to the indivisibly small building-blocks of matter, to label the units of chemical elements. Nowadays we know that these atoms are not fundamental; they’re themselves made of smaller particles. But why is it that the particles and fields of the Standard Model come together to form these particular atoms? Let’s find out.

The Biggest Ideas in the Universe | 18. Atoms

And here is the Q&A video, featuring both a brief appearance from Ariel and a plot of honest experimental constraints.

The Biggest Ideas in the Universe | Q&A 18 - Atoms