Conservation of energy is a somewhat sacred principle in physics, though it can be tricky in certain circumstances, such as an expanding universe. Quantum mechanics is another context in which energy conservation is a subtle thing — so much so that it’s still worth writing papers about, which Jackie Lodman and I recently did. In this blog post I’d like to explain two things:
In the Many-Worlds formulation of quantum mechanics, the energy of the wave function of the universe is perfectly conserved. It doesn’t “require energy to make new universes,” so that is not a respectable objection to Many-Worlds.
In any formulation of quantum mechanics, energy doesn’t appear to be conserved as seen by actual observers performing quantum measurements. This is a not-very-hard-to-see aspect of quantum mechanics, which nevertheless hasn’t received a great deal of attention in the literature. It is a phenomenon that should be experimentally observable, although as far as I know it hasn’t yet been; we propose a simple experiment to do so.
The first point here is well-accepted and completely obvious to anyone who understands Many-Worlds. The second is much less well-known, and it’s what Jackie and I wrote about. I’m going to try to make this post accessible to folks who don’t know QM, but sometimes it’s hard to make sense without letting the math be the math.
First let’s think about energy in classical mechanics. You have a system characterized by some quantities like position, momentum, angular momentum, and so on, for each moving part within the system. Given some facts of the external environment (like the presence of gravitational or electric fields), the energy is simply a function of these quantities. You have for example kinetic energy, which depends on the momentum (or equivalently on the velocity), potential energy, which depends on the location of the object, and so on. The total energy is just the sum of all these contributions. If we don’t explicitly put any energy into the system or take any out, the energy should be conserved — i.e. the total energy remains constant over time.
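For a single particle moving in one dimension in a potential $V(x)$, for example, the bookkeeping is the standard textbook one (nothing specific to our paper): the energy is

$$E = \frac{p^2}{2m} + V(x), \qquad \frac{dE}{dt} = \frac{p}{m}\,\dot{p} + V'(x)\,\dot{x} = \dot{x}\left(\dot{p} + V'(x)\right) = 0,$$

where the last step uses Newton's law $\dot{p} = -V'(x)$ together with $\dot{x} = p/m$.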
There are two main things you need to know about quantum mechanics. First, the state of a quantum system is no longer specified by things like “position” or “momentum” or “spin.” Those classical notions are now thought of as possible measurement outcomes, not well-defined characteristics of the system. The quantum state — or wave function — is a superposition of various possible measurement outcomes, where “superposition” is a fancy term for “linear combination.”
Consider a spinning particle. By doing experiments to measure its spin along a certain axis, we discover that we only ever get two possible outcomes, which we might call "spin-up" or $|\!\uparrow\rangle$ and "spin-down" or $|\!\downarrow\rangle$. But before we've made the measurement, the system can be in some superposition of both possibilities. We would write $\Psi$, the wave function of the spin, as

$$\Psi = a\,|\!\uparrow\rangle + b\,|\!\downarrow\rangle,$$

where $a$ and $b$ are numerical coefficients, the "amplitudes" corresponding to spin-up and spin-down, respectively. (They will generally be complex numbers, but we don't have to worry about that.)
The second thing you have to know about quantum mechanics is that measuring the system changes its wave function. When we have a spin in a superposition of this type, we can't predict with certainty what outcome we will see. All we can predict is the probability, which is given by the amplitude squared. And once that measurement is made, the wave function "collapses" into a state that is purely what is observed. So we have

$$\Psi = a\,|\!\uparrow\rangle + b\,|\!\downarrow\rangle \;\longrightarrow\; \begin{cases} |\!\uparrow\rangle & \text{with probability } |a|^2, \\ |\!\downarrow\rangle & \text{with probability } |b|^2. \end{cases}$$
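As a toy illustration of this textbook collapse rule (my own sketch, not anything from our paper), here are a few lines of Python; the amplitudes $a$ and $b$ are made-up numbers chosen so that $|a|^2 + |b|^2 = 1$.

```python
import random

# Made-up amplitudes for a spin in superposition; |a|^2 + |b|^2 must equal 1.
a = 0.6 + 0.0j   # amplitude for spin-up
b = 0.8 + 0.0j   # amplitude for spin-down

def measure(a, b):
    """Simulate one textbook measurement: collapse to a definite state,
    chosen with probability equal to the amplitude squared (the Born rule)."""
    p_up = abs(a) ** 2              # probability of seeing spin-up (0.36 here)
    if random.random() < p_up:
        return "up"                 # wave function is now purely spin-up
    return "down"                   # wave function is now purely spin-down

# Measuring many identically prepared spins recovers the Born-rule
# frequencies: roughly 36% up and 64% down.
outcomes = [measure(a, b) for _ in range(10_000)]
print(outcomes.count("up") / len(outcomes))
```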
At least, that’s what we teach our students — Many-Worlds has a slightly more careful story to tell, as we’ll see.
We can now ask about energy, but the concept of energy in quantum mechanics is a bit different from what we are used to in classical mechanics. …
Black holes are regions of spacetime where, according to the rules of Einstein’s theory of general relativity, the curvature of spacetime is so dramatic that light itself cannot escape. Physical objects (those that move at or more slowly than the speed of light) can pass through the “event horizon” that defines the boundary of the black hole, but they never escape back to the outside world. Black holes are therefore black — even light cannot escape — thus the name. At least that would be the story according to classical physics, of which general relativity is a part. Adding quantum ideas to the game changes things in important ways. But we have to be a bit vague — “adding quantum ideas to the game” rather than “considering the true quantum description of the system” — because physicists don’t yet have a fully satisfactory theory that includes both quantum mechanics and gravity.
The story goes that in the early 1970’s, James Bardeen, Brandon Carter, and Stephen Hawking pointed out an analogy between the behavior of black holes and the laws of good old thermodynamics. For example, the Second Law of Thermodynamics (“Entropy never decreases in closed systems”) was analogous to Hawking’s “area theorem”: in a collection of black holes, the total area of their event horizons never decreases over time. Jacob Bekenstein, who at the time was a graduate student working under John Wheeler at Princeton, proposed to take this analogy more seriously than the original authors had in mind. He suggested that the area of a black hole’s event horizon really is its entropy, or at least proportional to it.
This annoyed Hawking, who set out to prove Bekenstein wrong. After all, if black holes have entropy then they should also have a temperature, and objects with nonzero temperatures give off blackbody radiation, but we all know that black holes are black. But he ended up actually proving Bekenstein right; black holes do have entropy, and temperature, and they even give off radiation. We now refer to the entropy of a black hole as the “Bekenstein-Hawking entropy.” (It is just a useful coincidence that the two gentlemen’s initials, “BH,” can also stand for “black hole.”)
Consider a black hole whose event horizon has area $A$. Then its Bekenstein-Hawking entropy is

$$S_{BH} = \frac{c^3 A}{4 G \hbar},$$

where $c$ is the speed of light, $G$ is Newton's constant of gravitation, and $\hbar$ is Planck's constant of quantum mechanics. A simple formula, but already intriguing, as it seems to combine relativity ($c$), gravity ($G$), and quantum mechanics ($\hbar$) into a single expression. That's a clue that whatever is going on here, it has something to do with quantum gravity. And indeed, understanding black hole entropy and its implications has been a major focus among theoretical physicists for over four decades now, including the holographic principle, black-hole complementarity, the AdS/CFT correspondence, and the many investigations of the information-loss puzzle.
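To get a sense of the numbers involved, here is a rough back-of-the-envelope estimate in Python (my own illustration, using standard SI values for the constants): the entropy of a single solar-mass black hole comes out to roughly $10^{77}$ in units of Boltzmann's constant.

```python
import math

# Standard physical constants in SI units
c = 2.998e8        # speed of light (m/s)
G = 6.674e-11      # Newton's constant (m^3 kg^-1 s^-2)
hbar = 1.055e-34   # reduced Planck constant (J s)
M_sun = 1.989e30   # mass of the Sun (kg)

# Schwarzschild radius and horizon area for a solar-mass black hole
r_s = 2 * G * M_sun / c**2      # about 2.95 km
A = 4 * math.pi * r_s**2        # horizon area, about 1.1e8 m^2

# Bekenstein-Hawking entropy S = c^3 A / (4 G hbar), in units of k_B
S = c**3 * A / (4 * G * hbar)
print(f"S_BH for one solar mass: {S:.2e} k_B")  # roughly 1e77
```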
But there exists a prior puzzle: what is the black hole entropy, anyway? What physical quantity does it describe?
Entropy itself was invented as part of the development of thermodynamics in the mid-19th century, as a way to quantify the transformation of energy from a potentially useful form (like fuel, or a coiled spring) into useless heat, dissipated into the environment. It was what we might call a "phenomenological" notion, defined in terms of macroscopically observable quantities like heat and temperature, without any more fundamental basis in a microscopic theory. But more fundamental definitions came soon thereafter, once people like Maxwell and Boltzmann and Gibbs started to develop statistical mechanics, and showed that the laws of thermodynamics could be derived from more basic ideas of atoms and molecules.
Hawking’s derivation of black hole entropy was in the phenomenological vein. He showed that black holes give off radiation at a certain temperature, and then used the standard thermodynamic relations between entropy, energy, and temperature to derive his entropy formula. But this leaves us without any definite idea of what the entropy actually represents.
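Schematically, and with Boltzmann's constant set to one as in the formula above, the logic is the standard thermodynamic one: Hawking computed the temperature of a black hole of mass $M$, and then $dS = dE/T$ with $E = Mc^2$ does the rest,

$$T_H = \frac{\hbar c^3}{8\pi G M}, \qquad dS = \frac{c^2\,dM}{T_H} = \frac{8\pi G M}{\hbar c}\,dM \quad\Longrightarrow\quad S = \frac{4\pi G M^2}{\hbar c} = \frac{c^3 A}{4 G \hbar},$$

using $A = 16\pi G^2 M^2/c^4$ for the horizon area of a Schwarzschild black hole.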
One of the reasons why entropy is thought of as a confusing concept is because there is more than one notion that goes under the same name. To dramatically over-simplify the situation, let’s consider three different ways of relating entropy to microscopic physics, named after three famous physicists:
Boltzmann entropy says that we take a system with many small parts, and divide all the possible states of that system into “macrostates,” so that two “microstates” are in the same macrostate if they are macroscopically indistinguishable to us. Then the entropy is just (the logarithm of) the number of microstates in whatever macrostate the system is in.
Gibbs entropy is a measure of our lack of knowledge. We imagine that we describe the system in terms of a probability distribution of what microscopic states it might be in. High entropy is when that distribution is very spread-out, and low entropy is when it is highly peaked around some particular state.
von Neumann entropy is a purely quantum-mechanical notion. Given some quantum system, the von Neumann entropy measures how much entanglement there is between that system and the rest of the world.
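For reference, the standard formulas, with Boltzmann's constant again set to one: Boltzmann counts the number of microstates $W$ in the system's macrostate, Gibbs averages over a probability distribution $p_i$ on microstates, and von Neumann uses the reduced density matrix $\rho$ of the subsystem,

$$S_{\rm Boltzmann} = \log W, \qquad S_{\rm Gibbs} = -\sum_i p_i \log p_i, \qquad S_{\rm von\ Neumann} = -{\rm Tr}\,\rho \log \rho.$$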
These seem like very different things, but there are formulas that relate them to each other in the appropriate circumstances. The common feature is that we imagine a system has a lot of microscopic "degrees of freedom" (jargon for "things that can happen"), which can be in one of a large number of states, but we are describing it in some kind of macroscopic coarse-grained way, rather than knowing what its exact state actually is. The Boltzmann and Gibbs entropies worry people because they seem to be subjective, requiring either some seemingly arbitrary carving of state space into macrostates, or an explicit reference to our personal state of knowledge. The von Neumann entropy is at least an objective fact about the system. You can relate it to the others by analogizing the wave function of a system to a classical microstate. Because of entanglement, a quantum subsystem generally cannot be described by a single wave function; the von Neumann entropy measures (roughly) how many different quantum states must be involved to account for its entanglement with the outside world.
So which, if any, of these is the black hole entropy? To be honest, we’re not sure. Most of us think the black hole entropy is a kind of von Neumann entropy, but the details aren’t settled.
One clue we have is that the black hole entropy is proportional to the area of the event horizon. For a while this was thought of as a big, surprising thing, since for something like a box of gas, the entropy is proportional to its total volume, not the area of its boundary. But people gradually caught on that there was never any reason to think of black holes like boxes of gas. In quantum field theory, regions of space have a nonzero von Neumann entropy even in empty space, because modes of quantum fields inside the region are entangled with those outside. The good news is that this entropy is (often, approximately) proportional to the area of the region, for the simple reason that field modes near one side of the boundary are highly entangled with modes just on the other side, and not very entangled with modes far away. So maybe the black hole entropy is just like the entanglement entropy of a region of empty space?
Would that it were so easy. Two things stand in the way. First, Bekenstein noticed another important feature of black holes: not only do they have entropy, but they have the most entropy that you can fit into a region of a fixed size (the Bekenstein bound). That’s very different from the entanglement entropy of a region of empty space in quantum field theory, where it is easy to imagine increasing the entropy by creating extra entanglement between degrees of freedom deep in the interior and those far away. So we’re back to being puzzled about why the black hole entropy is proportional to the area of the event horizon, if it’s the most entropy a region can have. That’s the kind of reasoning that leads to the holographic principle, which imagines that we can think of all the degrees of freedom inside the black hole as “really” living on the boundary, rather than being uniformly distributed inside. (There is a classical manifestation of this philosophy in the membrane paradigm for black hole astrophysics.)
The second obstacle to simply interpreting black hole entropy as entanglement entropy of quantum fields is the simple fact that it’s a finite number. While the quantum-field-theory entanglement entropy is proportional to the area of the boundary of a region, the constant of proportionality is infinity, because there are an infinite number of quantum field modes. So why isn’t the entropy of a black hole equal to infinity? Maybe we should think of the black hole entropy as measuring the amount of entanglement over and above that of the vacuum (called the Casini entropy). Maybe, but then if we remember Bekenstein’s argument that black holes have the most entropy we can attribute to a region, all that infinite amount of entropy that we are ignoring is literally inaccessible to us. It might as well not be there at all. It’s that kind of reasoning that leads some of us to bite the bullet and suggest that the number of quantum degrees of freedom in spacetime is actually a finite number, rather than the infinite number that would naively be implied by conventional non-gravitational quantum field theory.
So — mysteries remain! But it’s not as if we haven’t learned anything. The very fact that black holes have entropy of some kind implies that we can think of them as collections of microscopic degrees of freedom of some sort. (In string theory, in certain special circumstances, you can even identify what those degrees of freedom are.) That’s an enormous change from the way we would think about them in classical (non-quantum) general relativity. Black holes are supposed to be completely featureless (they “have no hair,” another idea of Bekenstein’s), with nothing going on inside them once they’ve formed and settled down. Quantum mechanics is telling us otherwise. We haven’t fully absorbed the implications, but this is surely a clue about the ultimate quantum nature of spacetime itself. Such clues are hard to come by, so for that we should be thankful.
People often suggest guests to appear on Mindscape — which I very much appreciate! Several of my best conversations were with people I had never heard of before they were suggested by someone. Suggestions can be made here (in the comments below), or on the subreddit, or on Twitter, or anywhere else.
My policy is not to comment on individual suggestions, but it might be useful for me to lay out what I look for in potential Mindscape guests. Hopefully this will help people make suggestions, and lead to the discovery of some gems I would have otherwise overlooked.
Obviously I’m looking for smart people with interesting ideas. Most episodes are idea-centered, rather than “let’s talk to this fascinating person,” although there are exceptions.
I’m more interested in people doing original idea-creation, rather than commentators/journalists/pundits (or fellow podcasters!). Again, there are always exceptions — nobody can complain when I talk to Carl Zimmer about inheritance — but that’s the tendency.
Hot-button topical/political issues are an interesting case. I’m not averse to them, but I want to focus on the eternal big-picture concerns at the bottom of them, rather than on momentary ephemera. Relatedly, I’m mostly interested in talking with intellectuals and analysts, not advocates or salespeople or working politicians.
I’m happy to talk with big names everyone has heard of, but am equally interested in lesser-known folks who have something really interesting to say.
Sometimes it should be clear that I’m already quite aware of the existence of a person, so suggesting them doesn’t add much value. Nobody needed to tell me to ask Roger Penrose or Dan Dennett on the show.
I like to keep things diverse along many different axes, most especially area of intellectual inquiry. Obviously there is more physics than on most people’s podcasts, but there will rarely if ever be two physics episodes in a row, or even two in the same month. Likewise, if I do one episode on a less-frequent topic, I’m unlikely to do another one on the same topic right away. (“That episode on the semiotics of opera was fine, but you need to invite the real expert on the semiotics of opera…”) More generally, podcast episodes should be of standalone interest, not responses to previous podcast episodes.
I am very happy to talk with people I disagree with, but only if I think there is something to be learned from their perspective. I want to engage with the best arguments against my positions, not just with any old arguments. Zero interest in debating or debunking on the podcast. If I invite someone on, I will challenge them where I think necessary, but my main goal is to let them put forward their case as clearly as possible.
Corollary: someone is not worth engaging with merely because they make claims that would be extremely important if they were true. There has to be some reason to believe, in the minds of some number of reasonable people, that they could actually be true. My goal is not to clean up all the bad ideas on the internet.
Obvious but often-overlooked consideration: the person should be good on podcasts! This is a tricky thing. Clearly they should be articulate and engaging in an audio-only format. But also there’s an art to giving answers that are long enough to be substantive, short enough to allow for give-and-take. Conversation is a skill. (Though Fyodor Urnov barely let me get a word in edgewise, and he was great and everyone loved him, so maybe I should take the hint.)
This is a long list, but the most useful guest suggestions include not just a person’s name, but some indication that they satisfy the above criteria. A brief mention of the ideas they have and evidence that they’d be a good guest is extremely helpful.
None of these rules is absolute! I’m always happy to deviate a little if I think there is a worthwhile special case.
Thanks again for listening, and for all the suggestions. I am continually amazed at the high quality of guests who have joined me, and at the wonderful support from the Mindscape audience.
For the triumphant final video in the Biggest Ideas series, we look at a big idea indeed: Science. What is science, and why is it so great? And I also take the opportunity to dip a toe into the current state of fundamental physics — are predictions that unobservable universes exist really science? What if we never discover another particle? Is it worth building giant expensive experiments? Tune in to find out.
The Biggest Ideas in the Universe | 24. Science
Thanks to everyone who has watched along the way. It’s been quite a ride.
Spherical cows are important because they let us abstract away all the complications of the real world and think about underlying principles. But what about when the complications are the point? Then we enter the realm of complex systems — which, interestingly, has its own spherical cows. One such is the idea of a “critical” system, balanced at a point where there is interesting dynamics at all scales. We know a lot about such systems, without approaching anything like a complete understanding just yet.
The Biggest Ideas in the Universe | 23. Criticality and Complexity
We’re well into the Biggest Ideas in the Universe series, and some people have been asking how I make the actual videos. I explained the basic process in the Q&A video for Force, Energy, and Action – embedded below – but it turns out that not everyone watches every single video from start to finish (weird, I know), and besides the details have changed a little bit. And for some reason a lot of people want to do pedagogy via computer these days.
The Biggest Ideas in the Universe | Q&A 3 - Force, Energy, and Action
Screen capturing/video editing software on the computer (e.g. Screenflow)
Whatever wires and dongles are required to hook all that stuff together.
Hmm, looking over that list it doesn’t seem as simple as I thought. And this is the quick-and-easy version! But you can adapt the level of commitment to your own needs.
The most important step here is to capture your writing, in real time, on the video. (You obviously don’t have to include an image of yourself at all, but it makes things a bit more human, and besides who can possibly talk without making gestures, right?) So you need some kind of tablet to write on. I like the iPad Pro quite a bit, but note that not all iPad models are compatible with a Pencil (or other stylus). And writing with your fingers just doesn’t cut it here.
You also need an app that does that. I am quite fond of both Notability and Notes Plus. (I'm sure that non-iOS ecosystems have their own apps, but there's no sense in which I'm familiar with the overall landscape; I can only tell you about what I use.) These two apps are pretty similar, with small differences at the edges. When I'm taking notes or marking up PDFs, I'm actually more likely to use Notes Plus, as its cutting/pasting is a bit simpler. And that's what I used for the very early Biggest Ideas videos. But I got numerous requests to write on a dark background rather than a light one, which is completely reasonable. Notability has that feature and as far as I know Notes Plus does not. And Notability is certainly more than good enough for the job.
Then you need to capture your writing, and your voice, and optionally yourself, onto video and edit it together. (Again, no guarantees that my methods are simplest or best, only that they are mine.) Happily there are programs that do everything you want at once: they will capture video from a camera, separately capture audio input, and also separately capture part or all of your computer screen, and/or directly from an external device. Then they will let you edit it all together how you like. Pretty sweet, to be honest.
I started out using Camtasia, which worked pretty well overall. But not perfectly, as I eventually discovered. It wasn’t completely free of crashes, which can be pretty devastating when you’re 45 minutes into an hour-long video. And capture from the iPad was pretty clunky; I had to show the iPad screen on my laptop screen, then capture that region into Camtasia. (The app is smart enough to capture either the whole screen, or any region on it.) By the way, did you know you can show your iPhone/iPad screen on your computer, at least with a Mac? Just plug the device into the computer, open up QuickTime, click “new movie recording,” and ask it to display from the mobile device. Convenient for other purposes.
But happily, with Screenflow, which I've subsequently switched to, that workaround isn't necessary; it will capture directly from your tablet (as long as it's connected to your computer). And in my (very limited) experience it seems a bit more robust and user-friendly.
Okay, so you fire up your computer, open Screenflow, plug in your tablet, point your webcam at yourself, and you’re ready to go. Screenflow will give you a window in which you can make sure it’s recording all the separate things you need (tablet screen, your video, your audio). Hit “Record,” and do your thing. When you’re done, hit “Stop recording.”
What you now have is a Screenflow document that has different tracks corresponding to everything you've just recorded. I'm not going to do a full tutorial about editing things together — there's a big internet out there, full of useful advice. But I will note that you will have to do some editing; it's not completely effortless. Fortunately it is pretty intuitive once you get the hang of the basic commands. Here is what your editing window in Screenflow will look like.
Main panel at the top left, and all of your tracks at the bottom — in this case (top to bottom) camera video, audio, iPad capture, and static background image. The panel on the right toggles between various purposes; in this case it’s showing all the different files that go into making those tracks. (The video is chopped up into multiple files for reasons having to do with my video camera.) Note that I use a green screen, and one of the nice things about Screenflow is that it will render the green transparent for you with a click of a button. (Camtasia does too, but I’ve found that it doesn’t do as well.)
Editing features are quite good. You can move and split tracks, resize windows, crop windows, add text, set the overall dimensions, etc. One weird thing is that some of the editing features require that you hit Control or Shift or whatever, and when exactly you’re supposed to do this is not always obvious. But it’s all explained online somewhere.
So that’s the basic setup, or at least enough that you can figure things out from there. You also have to upload to YouTube or to your class website or whatever you so choose, but that’s up to you.
Okay now onto some optional details, depending on how much you want to dive into this.
First, webcams are not the best quality, especially the ones built-in to your laptop. I thought about using my iPhone as a camera — the lenses etc. on recent ones are quite good — but surprisingly the technology for doing this is either hard to find or nonexistent. (Of course you can make videos using your phone, but using your phone as a camera to make and edit videos elsewhere seems to be much harder, at least for me.) You can upgrade to an external webcam; Logitech has some good models. But after some experimenting I found it was better just to get a real video camera. Canon has some decent options, but if you already have a camera lying around it should be fine; we’re not trying to be Stanley Kubrick here. (If you’re getting the impression that all this costs money … yeah. Sorry.)
If you go that route, you have to somehow get the video from the camera to your computer. You can get a gizmo like the Cam Link that will pipe directly from a video camera to your computer, so that basically you're using the camera as a web cam. I tried and found that it was … pretty bad? Really hurt the video quality, though it's completely possible I just wasn't setting things up properly. So instead I just record within the camera to an SD card, then transfer to the computer after the fact. For that you'll need an SD to USB adapter, or maybe you can find a camera that can do it over wifi (mine doesn't, sigh). It's a straightforward drag-and-drop to get the video into Screenflow, but my camera chops recordings up into 20-minute segments. That's fine; Screenflow sews them together seamlessly.
You might also think about getting a camera that can be controlled wirelessly, either via dedicated remote or by your phone, so that you don’t have to stand up and walk over to it every time you want to start and stop recording. (Your video will look slightly better if you place the camera away from you and zoom in a bit, rather than placing it nearby.) Sadly this is something I also neglected to do.
If you get a camera, it will record sound as well as video, but chances are that sound won’t be all that great (unless maybe you use a wireless lavalier mic? Haven’t tried that myself). Also your laptop mic isn’t very good, trust me. I have an ongoing podcast, so I am already over-equipped on that score. But if you’re relatively serious about audio quality, it would be worth investing in something like a Blue Yeti.
If you want to hear the difference between good and not-so-good microphones, listen to the Entropy video, then the associated Q&A. In the latter I forgot to turn on the real mic, and had to use another audio track. (To be honest I forget whether it was from the video camera or my laptop.) I did my best to process that track to make it sound reasonable, but the difference is obvious.
Of course if you do separately record video and audio, you’ll have to sync them together. Screenflow makes this pretty easy. When you import your video file, it will come with attached audio, but there’s an option to — wait for it — “detach audio.” You can then sync your other audio track (the track will display a waveform indicating volume, and just slide until they match up), and delete the original.
Finally, there’s making yourself look pretty. There is absolutely nothing wrong with just showing whatever office/home background you shoot in front of — people get it. But you can try to be a bit creative with a green screen, and it works much better than the glitchy Zoom backgrounds etc.
Bad news is, you’ll have to actually purchase a green screen, as well as something to hold it up. It’s a pretty basic thing, a piece of cloth or plastic. And, like it or not, if you go this route you’re also going to have to get lights to point at the green screen. If it’s not brightly lit, it’s much harder to remove it in the editor. The good news is, once you do all that, removing the green is a snap in Screenflow (which is much better at this than Camtasia, I found).
You’ll also want to light yourself, with at least one dedicated light. (Pros will insist on at least three — fill, key, and backlight — but we all have our limits.) Maybe this is not so important, but if you want a demonstration, my fondness for goofing up has once again provided for you — on the Entanglement Q&A video, I forgot to turn on the light. Difference in quality is there for you to judge.
My home office looks like this now. At least for the moment.
Oh right one final thing. If you’re making hour-long (or so) videos, the file sizes get quite big. The Screenflow project for one of my videos will be between 20 and 30 GB, and I export to an mp4 that is another 5 GB or so. It adds up if you make a lot of videos! So you might think about investing in an external hard drive. The other options are to save on a growing collection of SD cards, or just delete files once you’ve uploaded to YouTube or wherever. Neither of which was very palatable for me.
You can see my improvement at all these aspects over the series of videos. I upgraded my video camera, switched from light background to dark background on the writing screen, traded in Camtasia for Screenflow, and got better at lighting. I also moved the image of me from the left-hand side to the right-hand side of the screen, which I understand makes the captions easier to read.
I’ve had a lot of fun and learned a lot. And probably put more work into setting things up than most people will want to. But what’s most important is content! If you have something to say, it’s not that hard to share it.
Surely one of the biggest ideas in the universe has to be the universe itself, no? Or, as I claim, the very fact that the universe is comprehensible — as an abstract philosophical point, but also as the empirical observation that the universe we see is a pretty simple place, at least on the largest scales. We focus here mostly on the thermal history — how the constituents of the universe evolve as space expands and the temperature goes down.
Little things can come together to make big things. And those big things can often be successfully described by an approximate theory that can be qualitatively different from the theory of the little things. We say that a macroscopic approximate theory has “emerged” from the microscopic one. But the concept of emergence is a bit more general than that, covering any case where some behavior of one theory is captured by another one even in the absence of complete information. An important and subtle example is (of course) how the classical world emerges from the quantum one.
You knew this one was coming, right? Why the past is different from the future, and why we seem to flow through time. Also a bit about how different groups of scientists use the idea of “information” in very different ways.
The Biggest Ideas in the Universe | 20. Entropy and Information
Sometimes the universe is unpredictable. (Nobody needs to be reminded of that just now.) Is that unpredictability fundamental, or merely apparent? And how should we deal with it when it comes along?
The Biggest Ideas in the Universe | 19. Probability and Randomness