I was doing some end-of-the-year housecleaning on my computer, and stumbled across this poem — an unrhymed sonnet on symmetry breaking in the early universe. (Always aiming at the least common denominator, what can I say?)
I have no misconceptions about my poetic abilities, which is no doubt why it sat privately on my hard drive for so long. But it’s the holidays, so here you go.
The cosmic maelstrom boiled bright and fierce,
A thousand fields did gambol nearly free.
Momentum was exchanged so high and hot
That couplings did asymptote to nil.
Amidst the glue and bosons ‘lectroweak
There stood our pensive scalar doublet, Phi
Surveying a potential all about
Like Buridan’s ass, secured by symmetry.
A longing pulled these spineless complex fields,
To rest where energy was minimized.
But held by finite temperature effects,
The quarks and leptons bound symmetric state.
Yet nothing perfect lasts through cosmic time,
The universe expands, illusion breaks.
Now that The Big Picture is complete, I have more time for fun things like blogging, but I have a bunch of research to catch up on before I can return as normal. So in the meantime, here’s another teaser from the book: my list of “Further Reading” keyed to the different sections. You should have enough time to read all of these between now and publication day, May 10.
Part One, Cosmos:
Adams, F. and Laughlin, G. (1999). The Five Ages of the Universe: Inside the Physics of Eternity. Free Press.
Albert, D.Z. (2003). Time and Chance. Harvard University Press.
Carroll, S. (2010). From Eternity to Here: The Quest for the Ultimate Theory of Time. Dutton.
Feynman, R.P. (1967). The Character of Physical Law. M.I.T. Press.
Greene, B. (2004). The Fabric of the Cosmos: Space, Time, and the Texture of Reality. A.A. Knopf.
Guth, A. (1997). The Inflationary Universe: The Quest for a New Theory of Cosmic Origins. Addison-Wesley Pub.
Hawking, S.W. and Mlodinow, L. (2010). The Grand Design. Bantam.
Pearl, J. (2009). Causality: Models, Reasoning, and Inference. Cambridge University Press.
Penrose, R. (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. A.A. Knopf.
Weinberg, S. (2015). To Explain the World: The Discovery of Modern Science. HarperCollins.
Part Two, Understanding:
Ariely, D. (2008). Predictably Irrational: The Hidden Forces that Shape Our Decisions. HarperCollins.
Dennett, D.C. (2014). Intuition Pumps and Other Tools for Thinking. W.W. Norton.
Gillett, C. and Loewer, B., eds. (2001). Physicalism and Its Discontents. Cambridge University Press.
Kaplan, E. (2014). Does Santa Exist? A Philosophical Investigation. Dutton.
Rosenberg, A. (2011). The Atheist’s Guide to Reality: Enjoying Life Without Illusions. W.W. Norton.
Sagan, C. (1995). The Demon-Haunted World: Science as a Candle in the Dark. Random House.
Silver, N. (2012). The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t. Penguin Press.
Tavris, C. and Aronson, E. (2006). Mistakes Were Made (but not by me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts. Houghton Mifflin Harcourt.
Part Three, Essence:
Aaronson, S. (2013). Quantum Computing Since Democritus. Cambridge University Press.
Carroll, S. (2012). The Particle at the End of the Universe: How the Hunt for the Higgs Boson Leads Us to the Edge of a New World. Dutton.
Deutsch, D. (1997). The Fabric of Reality: The Science of Parallel Universes and Its Implications. Viking Adult.
Gefter, A. (2014). Trespassing on Einstein’s Lawn: A Father, a Daughter, the Meaning of Nothing, and the Beginning of Everything. Bantam.
Holt, J. (2012). Why Does the World Exist? An Existential Detective Story. Liveright Publishing.
Musser, G. (2015). Spooky Action at a Distance: The Phenomenon That Reimagines Space and Time–and What It Means for Black Holes, the Big Bang, and Theories of Everything. Scientific American / Farrar, Straus and Giroux.
Randall, L. (2011). Knocking on Heaven’s Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World. Ecco.
Wallace, D. (2014). The Emergent Multiverse: Quantum Theory According to the Everett Interpretation. Oxford University Press.
Wilczek, F. (2015). A Beautiful Question: Finding Nature’s Deep Design. Penguin Press.
Part Four, Complexity:
Bak, P. (1996). How Nature Works: The Science of Self-Organized Criticality. Copernicus.
Cohen, E. (2012). Cells to Civilizations: The Principles of Change that Shape Life. Princeton University Press.
Coyne, J. (2009). Why Evolution is True. Viking.
Dawkins, R. (1986). The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design. W.W. Norton.
Dennett, D.C. (1995). Darwin’s Dangerous Idea: Evolution and the Meanings of Life. Simon & Schuster.
Hidalgo, C. (2015). Why Information Grows: The Evolution of Order, from Atoms to Economies. Basic Books.
Hoffman, P. (2012). Life’s Ratchet: How Molecular Machines Extract Order from Chaos. Basic Books.
Krugman, P. (1996). The Self-Organizing Economy. Wiley-Blackwell.
Lane, N. (2015). The Vital Question: Energy, Evolution, and the Origins of Complex Life. W.W. Norton.
Mitchell, M. (2009). Complexity: A Guided Tour. Oxford University Press.
Pross, A. (2012). What Is Life? How Chemistry Becomes Biology. Oxford University Press.
Rutherford, A. (2013). Creation: How Science is Reinventing Life Itself. Current.
Shubin, N. (2008). Your Inner Fish: A Journey into the 3.5-Billion-Year History of the Human Body. Pantheon.
Part Five, Thinking:
Alter, T. and Howell, R.J. (2009). A Dialogue on Consciousness. Oxford University Press.
Chalmers, D.J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
Churchland, P.S. (2013). Touching a Nerve: The Self as Brain. W.W. Norton.
Damasio, A. (2010). Self Comes to Mind: Constructing the Conscious Brain. Pantheon.
Dennett, D.C. (1991). Consciousness Explained. Little Brown & Co.
Eagleman, D. (2011). Incognito: The Secret Lives of the Brain. Pantheon.
Flanagan, O. (2003). The Problem of the Soul: Two Visions of Mind and How to Reconcile Them. Basic Books.
Gazzaniga, M.S. (2011). Who’s In Charge? Free Will and the Science of the Brain. Ecco.
Hankins, P. (2015). The Shadow of Consciousness.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Tononi, G. (2012). Phi: A Voyage from the Brain to the Soul. Pantheon.
Part Six, Caring:
de Waal, F. (2013). The Bonobo and the Atheist: In Search of Humanism Among the Primates. W.W. Norton.
Epstein, G.M. (2009). Good Without God: What a Billion Nonreligious People Do Believe. William Morrow.
Flanagan, O. (2007). The Really Hard Problem: Meaning in a Material World. The MIT Press.
Gottschall, J. (2012). The Storytelling Animal: How Stories Make Us Human. Houghton Mifflin Harcourt.
Greene, J. (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Penguin Press.
Johnson, C. (2014). A Better Life: 100 Atheists Speak Out on Joy & Meaning in a World Without God. Cosmic Teapot.
Kitcher, P. (2011). The Ethical Project. Harvard University Press.
Lenman, J. and Shemmer, Y. (2012). Constructivism in Practical Philosophy. Oxford University Press.
May, T. (2015). A Significant Life: Human Meaning in a Silent Universe. University of Chicago Press.
Ruti, M. (2014). The Call of Character: Living a Life Worth Living. Columbia University Press.
Wilson, E.O. (2014). The Meaning of Human Existence. Liveright.
Greetings, surface-dwellers! I have finally emerged from the secret underground laboratory where I have been polishing the manuscript for The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. We pushed up the publication date to May 10, so you’ll get it in plenty of time for your summer beach reading. Evidence that it exists, all 145,000 glorious words:
As will happen in the writing process, the organization of the book has changed since I first mentioned it. Here is the final table of contents. As you might gather, I went with an organization of many short chapters. Hopefully that will help give the book the feeling of a light and enjoyable read.
THE BIG PICTURE: ON THE ORIGINS OF LIFE, MEANING, AND THE UNIVERSE ITSELF
* Part One: Cosmos
1. The Fundamental Nature of Reality
2. Poetic Naturalism
3. The World Moves By Itself
4. What Determines What Will Happen Next?
5. Reasons Why
6. Our Universe
7. Time’s Arrow
8. Memories and Causes
* Part Two: Understanding
9. Learning About the World
10. Updating Our Knowledge
11. Is It Okay to Doubt Everything?
12. Reality Emerges
13. What Exists, and What Is Illusion?
14. Planets of Belief
15. Accepting Uncertainty
16. What Can We Know About the Universe Without Looking at It?
17. Who Am I?
18. Abducting God
* Part Three: Essence
19. How Much We Know
20. The Quantum Realm
21. Interpreting Quantum Mechanics
22. The Core Theory
23. The Stuff of Which We Are Made
24. The Effective Theory of the Everyday World
25. Why Does the Universe Exist?
26. Body and Soul
27. Death Is the End
* Part Four: Complexity
28. The Universe in a Cup of Coffee
29. Light and Life
30. Funneling Energy
31. Spontaneous Organization
32. The Origin and Purpose of Life
33. Evolution’s Bootstraps
34. Searching Through the Landscape
35. Emergent Purpose
36. Are We the Point?
* Part Five: Thinking
37. Crawling Into Consciousness
38. The Babbling Brain
39. What Thinks?
40. The Hard Problem
41. Zombies and Stories
42. Are Photons Conscious?
43. What Acts on What?
44. Freedom to Choose
* Part Six: Caring
45. Three Billion Heartbeats
46. What Is and What Ought to Be
47. Rules and Consequences
48. Constructing Goodness
49. Listening to the World
50. Existential Therapy
Appendix: The Equation Underlying You and Me
A lot of ground gets covered. In Part One we set the stage, seeing how discoveries in science have revealed a universe that runs under unbreakable, impersonal laws of nature. In Part Two we think about how to conceptualize such a universe: how to learn about it (Bayesian inference, abduction) and how to talk about it (emergence and overlapping theoretical vocabularies). In Part Three we get down and dirty with quantum mechanics, the Core Theory, and effective field theories. In Part Four we start down the road of connecting to our macroscopic world, seeing how complexity and life can arise due to the arrow of time. In Part Five we think about the leading challenge to a physicalist worldview: the existence of consciousness. And in Part Six we recognize that the universe isn’t going to tell us how to behave, and acknowledge that the creation of meaning and purpose is ultimately our job.
Now back to being a scientist with me. I have drafts of four different papers on my computer that need to be kicked out and onto the arXiv!
Now, the thing everyone has been giving thanks for over the last few days is Albert Einstein’s general theory of relativity, which by some measures was introduced to the world exactly one hundred years ago yesterday. But we don’t want to be like everybody else, and besides we’re a day late. So it makes sense to honor the epochal advance in mathematics that directly enabled Einstein’s epochal advance in our understanding of spacetime.
Highly popularized accounts of the history of non-Euclidean geometry often give short shrift to Riemann, for reasons I don’t quite understand. You know the basic story: Euclid showed that geometry could be axiomatized on the basis of a few simple postulates, but one of them (the infamous Fifth Postulate) seemed just a bit less natural than the others. That’s the parallel postulate, which has been employed by generations of high-school geometry teachers to torture their students by challenging them to “prove” it. (Mine did, anyway.)
It can’t be proved, and indeed it’s not even necessarily true. In the ordinary flat geometry of a tabletop, initially parallel lines remain parallel forever, and Euclidean geometry is the name of the game. But we can imagine surfaces on which initially parallel lines diverge, such as a saddle, or ones on which they begin to come together, such as a sphere. In those contexts it is appropriate to replace the parallel postulate with something else, and we end up with non-Euclidean geometry.
Historically, this was first carried out by the Hungarian mathematician János Bolyai and the Russian mathematician Nikolai Lobachevsky, both of whom developed the hyperbolic (saddle-shaped) form of the alternative theory. Actually, while Bolyai and Lobachevsky were the first to publish, much of the theory had previously been worked out by the great Carl Friedrich Gauss, who was an incredibly influential mathematician but not very good about getting his results into print.
The new geometry developed by Bolyai and Lobachevsky described what we would now call “spaces of constant negative curvature.” Such a space is curved, but in precisely the same way at every point; there is no difference between what’s happening at one point in the space and what’s happening anywhere else, just as had been the case for Euclid’s tabletop geometry.
Real geometries, as it takes only a moment to visualize, can be a lot more complicated than that. Surfaces or solids can twist and turn in all sorts of ways. Gauss thought about how to deal with this problem, and came up with some techniques that could characterize a two-dimensional curved surface embedded in a three-dimensional Euclidean space. Which is pretty great, but falls far short of the full generality that mathematicians are known to crave.
Fortunately Gauss had a brilliant and accomplished apprentice: his student Bernhard Riemann. (Riemann was supposed to be studying theology, but he became entranced by one of Gauss’s lectures, and never looked back.) In 1853, Riemann was coming up for Habilitation, a German degree that is even higher than the Ph.D. He suggested a number of possible dissertation topics to his advisor Gauss, who (so the story goes) chose the one that Riemann thought was the most boring: the foundations of geometry. The next year, he presented his paper, “On the hypotheses which underlie geometry,” which laid out what we now call Riemannian geometry.
With this one paper on a subject he professed not to be all that interested in, Riemann (who also made incredible contributions to analysis and number theory) provided everything you need to understand the geometry of a space of an arbitrary number of dimensions, with an arbitrary amount of curvature at any point in the space. It was as if Bolyai and Lobachevsky had invented the abacus, Gauss came up with the pocket calculator, and Riemann had turned around and built a powerful supercomputer.
As with many great works of mathematics, a lot of new superstructure had to be built up along the way. A subtle but brilliant part of Riemann’s work is that he didn’t start with a larger space (like the three-dimensional almost-Euclidean world around us) and imagine smaller spaces embedded within it. Rather, he considered the intrinsic geometry of a space, or how it would look “from the inside,” whether or not there was any larger space at all.
Next, Riemann needed a tool to handle a simple but frustrating fact of life: “curvature” is not a single number, but a way of answering many different questions one could possibly ask about the geometry of a space. What you need, really, are tensors, which gather a set of numbers together in one elegant mathematical package. Tensor analysis as such didn’t really exist at the time, not being fully developed until 1890, but Riemann was able to use some bits and pieces of the theory that had been developed by Gauss.
Finally and most importantly, Riemann grasped that all the facts about the geometry of a space could be encoded in a simple quantity: the distance along any curve we might want to draw through the space. He showed how that distance could be written in terms of a special tensor, called the metric. You give me a segment along a curve inside the space you’re interested in, and the metric lets me calculate how long it is. This simple object, Riemann showed, could ultimately be used to answer any query you might have about the shape of a space — the length of curves, of course, but also the area of surfaces and volume of regions, the shortest-distance path between two fixed points, where you go if you keep marching “forward” in the space, the sum of the angles inside a triangle, and so on.
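To make the “metric measures lengths” idea concrete, here is a minimal numerical sketch using a standard textbook example (my choice, not from the post): on a unit sphere the metric is ds² = dθ² + sin²θ dφ², and the length of any parametrized curve is the integral of ds along it.

```python
import math

# Length of a curve on a unit 2-sphere, whose metric is
# ds^2 = dtheta^2 + sin(theta)^2 dphi^2  (a standard toy example).
def curve_length(theta_of_t, phi_of_t, t0, t1, steps=10_000):
    """Numerically integrate ds = sqrt(theta'^2 + sin(theta)^2 * phi'^2) dt."""
    dt = (t1 - t0) / steps
    total = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt                       # midpoint rule
        dth = (theta_of_t(t + 1e-7) - theta_of_t(t - 1e-7)) / 2e-7
        dph = (phi_of_t(t + 1e-7) - phi_of_t(t - 1e-7)) / 2e-7
        total += math.sqrt(dth**2 + math.sin(theta_of_t(t))**2 * dph**2) * dt
    return total

# The equator (theta = pi/2, phi running from 0 to 2*pi) should come out to 2*pi:
print(curve_length(lambda t: math.pi / 2, lambda t: t, 0.0, 2 * math.pi))
```

The same recipe, with the appropriate metric plugged in, works for any curve in any Riemannian space.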
Unfortunately, the geometric information implied by the metric is only revealed when you follow how the metric changes along a curve or on some surface. What Riemann wanted was a single tensor that would tell you everything you needed to know about the curvature at each point in its own right, without having to consider curves or surfaces. So he showed how that could be done, by taking appropriate derivatives of the metric, giving us what we now call the Riemann curvature tensor. Here is the formula for it:
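The image of the formula has not survived here; in modern index notation (a reconstruction, written in terms of the Christoffel symbols Γ, which are themselves built from derivatives of the metric), the Riemann curvature tensor is:

```latex
R^{\rho}{}_{\sigma\mu\nu} = \partial_{\mu}\Gamma^{\rho}{}_{\nu\sigma}
  - \partial_{\nu}\Gamma^{\rho}{}_{\mu\sigma}
  + \Gamma^{\rho}{}_{\mu\lambda}\Gamma^{\lambda}{}_{\nu\sigma}
  - \Gamma^{\rho}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\mu\sigma}
```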
This isn’t the place to explain the whole thing, but I can recommend some spiffy lecture notes, including a very short version, or the longer and sexier textbook. From this he deduced several interesting features about curvature. For example, the intrinsic curvature of a one-dimensional space (a line or curve) is always precisely zero. Its extrinsic curvature — how it is embedded in some larger space — can be complicated, but to a tiny one-dimensional being, all spaces have the same geometry. For two-dimensional spaces there is a single function that characterizes the curvature at each point; in three dimensions you need six numbers, in four you need twenty, and it goes up from there.
There were more developments in store for Riemannian geometry, of course, associated with names that are attached to various tensors and related symbols: Christoffel, Ricci, Levi-Civita, Cartan. But to a remarkable degree, when Albert Einstein needed the right mathematics to describe his new idea of dynamical spacetime, Riemann had bequeathed it to him in a plug-and-play form. Add the word “time” everywhere we’ve said “space,” introduce some annoying minus signs because time and space really aren’t precisely equivalent, and otherwise the geometry that Riemann invented is the same we use today to describe how the universe works.
Riemann died of tuberculosis before he reached the age of forty. He didn’t do bad for such a young guy; you know you’ve made it when you not only have a Wikipedia page for yourself, but a separate (long) Wikipedia page for the list of things named after you. We can all be thankful that Riemann’s genius allowed him to grasp the tricky geometry of curved spaces several decades before Einstein would put it to use in the most beautiful physical theory ever invented.
Breaking my radio silence here to get a little nitpick off my chest: the claim that during inflation, the universe “expanded faster than the speed of light.” It’s extraordinarily common, if utterly and hopelessly incorrect. (I just noticed it in this otherwise generally excellent post by Fraser Cain.) A Google search for “inflation superluminal expansion” reveals over 100,000 hits, although happily a few of the first ones are brave attempts to squelch the misconception. I can recommend this nice article by Tamara Davis and Charlie Lineweaver, which tries to address this and several other cosmological misconceptions.
This isn’t, by the way, one of those misconceptions that rattles around the popular-explanation sphere, while experts sit back silently and roll their eyes. Experts get this one wrong all the time. “Inflation was a period of superluminal expansion” is repeated, for example, in these texts by Ta-Pei Cheng, by Joel Primack, and by Lawrence Krauss, all of whom should certainly know better.
The great thing about the superluminal-expansion misconception is that it’s actually a mangle of several different problems, which sadly don’t cancel out to give you the right answer.
1. The expansion of the universe doesn’t have a “speed.” Really the discussion should begin and end right there. Comparing the expansion rate of the universe to the speed of light is like comparing the height of a building to your weight. You’re not doing good scientific explanation; you’ve had too much to drink and should just go home.

The expansion of the universe is quantified by the Hubble constant, which is typically quoted in crazy units of kilometers per second per megaparsec. That’s (distance divided by time) divided by distance, or simply 1/time. Speed, meanwhile, is measured in distance/time. Not the same units! Comparing the two concepts is crazy.
Admittedly, you can construct a quantity with units of velocity from the Hubble constant, using Hubble’s law, v = Hd (the apparent velocity of a galaxy is given by the Hubble constant times its distance). Individual galaxies are indeed associated with recession velocities. But different galaxies, manifestly, have different velocities. The idea of even talking about “the expansion velocity of the universe” is bizarre and never should have been entertained in the first place.
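The units point, and the fact that apparent recession velocities exceed c at large distances, can both be made concrete in a few lines (a sketch of my own; H0 = 70 (km/s)/Mpc is an assumed round value, not a precise measurement):

```python
C_KM_S = 299_792.458        # speed of light, km/s
H0 = 70.0                   # Hubble constant, (km/s)/Mpc (assumed round value)
KM_PER_MPC = 3.0857e19
SEC_PER_YR = 3.156e7

# In honest units the Hubble constant is an inverse time, not a speed:
H0_per_sec = H0 / KM_PER_MPC                    # ~2.3e-18 per second
hubble_time_yr = 1 / (H0_per_sec * SEC_PER_YR)  # ~1.4e10 years

# Hubble's law: apparent recession velocity grows linearly with distance,
# and exceeds c beyond the "Hubble distance" c/H0, today, not just during inflation.
def apparent_velocity(d_mpc):
    return H0 * d_mpc                           # km/s

d_hubble_mpc = C_KM_S / H0                      # ~4300 Mpc
print(apparent_velocity(5000) > C_KM_S)         # True: "superluminal," and perfectly allowed
```

Nothing here is special to inflation; plug in today’s numbers and sufficiently distant galaxies already recede “faster than light.”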
2. There is no well-defined notion of “the velocity of distant objects” in general relativity. There is a rule, valid both in special relativity and general relativity, that says two objects cannot pass by each other with relative velocities faster than the speed of light. In special relativity, where spacetime is a fixed, flat, Minkowskian geometry, we can pick a global reference frame and extend that rule to distant objects. In general relativity, we just can’t. There is simply no such thing as the “velocity” between two objects that aren’t located in the same place. If you tried to measure such a velocity, you would have to parallel transport the motion of one object to the location of the other one, and your answer would completely depend on the path that you took to do that. So there can’t be any rule that says that velocity can’t be greater than the speed of light. Period, full stop, end of story.
Except it’s not quite the end of the story, since under certain special circumstances it’s possible to define quantities that are kind-of sort-of like a velocity between distant objects. Cosmology, where we model the universe as having a preferred reference frame defined by the matter filling space, is one such circumstance. When galaxies are not too far away, we can measure their cosmological redshifts, pretend that it’s a Doppler shift, and work backwards to define an “apparent velocity.” Good for you, cosmologists! But that number you’ve defined shouldn’t be confused with the actual relative velocity between two objects passing by each other. In particular, there’s no reason whatsoever that this apparent velocity can’t be greater than the speed of light.
Sometimes this idea is mangled into something like “the rule against superluminal velocities doesn’t refer to the expansion of space.” A good try, certainly well-intentioned, but the problem is deeper than that. The rule against superluminal velocities only refers to relative velocities between two objects passing right by each other.
3. There is nothing special about the expansion rate during inflation. If you want to stubbornly insist on treating the cosmological apparent velocity as a real velocity, just so you can then go and confuse people by saying that sometimes that velocity can be greater than the speed of light, I can’t stop you. But it can be — and is! — greater than the speed of light at any time in the history of the universe, not just during inflation. There are galaxies sufficiently distant that their apparent recession velocities today are greater than the speed of light. To give people the impression that what’s special about inflation is that the universe is expanding faster than light is a crime against comprehension and good taste.
What’s special about inflation is that the universe is accelerating. During inflation (as well as today, since dark energy has taken over), the scale factor, which characterizes the relative distance between comoving points in space, is increasing faster and faster, rather than increasing but at a gradually diminishing rate. As a result, if you looked at one particular galaxy over time, its apparent recession velocity would be increasing. That’s a big deal, with all sorts of interesting and important cosmological ramifications. And it’s not that hard to explain.
But it’s not superluminal expansion. If you’re sitting at a stoplight in your Tesla, kick it into insane mode, and accelerate to 60 mph in 3.5 seconds, you won’t get a ticket for speeding, as long as the speed limit itself is 60 mph or greater. You can still get a ticket — there’s such a thing as reckless driving, after all — but if you’re hauled before the traffic judge on a count of speeding, you should be able to get off scot-free.
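The decelerating-versus-accelerating distinction can be sketched in a toy model (made-up units and made-up scale factors of my own choosing, not a realistic cosmology): for a galaxy at fixed comoving coordinate x, the apparent velocity is v = H·d = (ȧ/a)·(a·x) = ȧ·x, so it simply tracks ȧ.

```python
import math

# Apparent recession velocity of a galaxy at fixed comoving coordinate x:
# v = H * d = (a_dot / a) * (a * x) = a_dot * x, so v tracks a_dot.

def v_matter(t, x=1.0):
    """Decelerating, matter-dominated toy universe: a(t) = t**(2/3)."""
    return (2.0 / 3.0) * t ** (-1.0 / 3.0) * x   # a_dot * x

def v_desitter(t, x=1.0, H=1.0):
    """Accelerating (inflation / dark energy) toy universe: a(t) = exp(H*t)."""
    return H * math.exp(H * t) * x               # a_dot * x

print(v_matter(2.0) < v_matter(1.0))     # True: recession velocity decreasing
print(v_desitter(2.0) > v_desitter(1.0)) # True: recession velocity increasing
```

In both cases the universe is expanding; what distinguishes inflation is that the apparent velocity of any given galaxy grows with time.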
Many “misconceptions” in physics stem from an honest attempt to explain technical concepts in natural language, and I try to be very forgiving about those. This one, I believe, isn’t like that; it’s just wrongity-wrong wrong. The only good quality of the phrase “inflation is a period of superluminal expansion” is that it’s short. It conveys the illusion of understanding, but that can be just as bad as straightforward misunderstanding. Every time it is repeated, people’s appreciation of how the universe works gets a little bit worse. We should be able to do better.
So now there are T-shirts. (See below to purchase your own.)
It’s a good equation, representing the Feynman path-integral formulation of an amplitude for going from one field configuration to another one, in the effective field theory consisting of Einstein’s general theory of relativity plus the Standard Model of particle physics. It even made it onto an extremely cool guitar.
I’m not quite up to doing a comprehensive post explaining every term in detail, but here’s the general idea. Our everyday world is well-described by an effective field theory. So the fundamental stuff of the world is a set of quantum fields that interact with each other. Feynman figured out that you could calculate the transition between two configurations of such fields by integrating over every possible trajectory between them — that’s what this equation represents. The thing being integrated is the exponential of the action for this theory — as mentioned, general relativity plus the Standard Model. The GR part integrates over the metric, which characterizes the geometry of spacetime; the matter fields are a bunch of fermions, the quarks and leptons; the non-gravitational forces are gauge fields (photon, gluons, W and Z bosons); and of course the Higgs field breaks symmetry and gives mass to those fermions that deserve it. If none of that makes sense, don’t worry — maybe I’ll do it more carefully some other time.
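For the curious, here is a schematic of the sort of expression just described (a rough reconstruction of the structure, not the exact typesetting on the shirt): the Einstein–Hilbert term, the gauge kinetic terms, the fermion kinetic terms, the Higgs kinetic term and potential, and the Yukawa couplings, all inside Feynman’s integral over field configurations:

```latex
W = \int_{k<\Lambda} [\mathcal{D}g]\,[\mathcal{D}A]\,[\mathcal{D}\psi]\,[\mathcal{D}\Phi]\,
    \exp\bigg\{ i \int d^4x \,\sqrt{-g}\, \Big[ \tfrac{m_p^2}{2} R
    - \tfrac{1}{4} F^{a}_{\mu\nu} F^{a\,\mu\nu}
    + i \bar{\psi} \gamma^{\mu} D_{\mu} \psi
    + |D_{\mu}\Phi|^2 - V(\Phi)
    + \big( \bar{\psi}\, Y \Phi\, \psi + \text{h.c.} \big) \Big] \bigg\}
```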
Gravity is usually thought to be the odd force out when it comes to quantum mechanics, but that’s only if you really want a description of gravity that is valid everywhere, even at (for example) the Big Bang. But if you only want a theory that makes sense when gravity is weak, like here on Earth, there’s no problem at all. The little notation k < Λ at the bottom of the integral indicates that we only integrate over low-frequency (long-wavelength, low-energy) vibrations in the relevant fields. (That’s what gives away that this is an “effective” theory.) In that case there’s no trouble including gravity.
The fact that gravity is readily included in the EFT of everyday life has long been emphasized by Frank Wilczek. As discussed in his latest book, A Beautiful Question, he therefore advocates lumping GR together with the Standard Model and calling it The Core Theory.
I couldn’t agree more, so I adopted the same nomenclature for my own upcoming book, The Big Picture. There’s a whole chapter (more, really) in there about the Core Theory. After finishing those chapters, I rewarded myself by doing something I’ve been meaning to do for a long time — put the equation on a T-shirt, which you see above.
I’ve had T-shirts made before, with pretty grim results as far as quality is concerned. I knew this one would be especially tricky, what with all those tiny symbols. But I tried out Design-A-Shirt, and the result seems pretty impressively good.
So I’m happy to let anyone who might be interested go ahead and purchase shirts for themselves and their loved ones. Here are the links for light/dark and men’s/women’s versions. I don’t actually make any money off of this — you’re just buying a T-shirt from Design-A-Shirt. They’re a little pricey, but that’s what you get for the quality. I believe you can even edit colors and all that — feel free to give it a whirl and report back with your experiences.
Once again I have not really been the world’s most conscientious blogger, have I? Sometimes other responsibilities have to take precedence — such as looming book deadlines. And I’m working on a new book, and that deadline is definitely looming!
An alternative subtitle was What Is, and What Matters. It’s a cheerfully grandiose (I’m supposed to say “ambitious”) attempt to connect our everyday lives to the underlying laws of nature. That’s a lot of ground to cover: I need to explain (what I take to be) the right way to think about the fundamental nature of reality, what the laws of physics actually are, sketch some cosmology and connect to the arrow of time, explore why there is something rather than nothing, show how interesting complex structures can arise in an undirected universe, talk about the meaning of consciousness and how it can be purely physical, and finally try to understand meaning and morality in a universe devoid of transcendent purpose. I’m getting tired just thinking about it.
From another perspective, the book is an explication of, and argument for, naturalism — and in particular, a flavor I label Poetic Naturalism. The “Poetic” simply means that there are many ways of talking about the world, and any one that is both (1) useful, and (2) compatible with the underlying fundamental reality, deserves a place at the table. Some of those ways of talking will simply be emergent descriptions of physics and higher levels, but some will also be matters of judgment and meaning.
As of right now the book is organized into seven parts, each with several short chapters. All that is subject to change, of course. But this will give you the general idea.
* Part One: Being and Stories
How we think about the fundamental nature of reality. Poetic Naturalism: there is only one world, but there are many ways of talking about it. Suggestions of naturalism: the world moves by itself, time progresses by moments rather than toward a goal. What really exists.
* Part Two: Knowledge and Belief
Telling different stories about the same underlying truth. Acquiring and updating reliable beliefs. Knowledge of our actual world is never perfect. Constructing consistent planets of belief, guarding against our biases.
* Part Three: Time and Cosmos
The structure and development of our universe. Time’s arrow and cosmic history. The emergence of memories, causes, and reasons. Why is there a universe at all, and is it best explained by something outside itself?
* Part Four: Essence and Possibility
Drawing the boundary between known and unknown. The quantum nature of deep reality: observation, entanglement, uncertainty. Vibrating fields and the Core Theory underlying everyday life. What we can say with confidence about life and the soul.
* Part Five: Complexity and Evolution
Why complex structures naturally arise as the universe moves from order to disorder. Self-organization and incremental progress. The origin of life, and its physical purpose. The anthropic principle, environmental selection, and our role in the universe.
* Part Six: Thinking and Feeling
The mind, the brain, and the body. What consciousness is, and how it might have come to be. Contemplating other times and possible worlds. The emergence of inner experiences from non-conscious matter. How free will is compatible with physics.
* Part Seven: Caring and Mattering
Why we can’t derive ought from is, even if “is” is all there is. And why we nevertheless care about ourselves and others, and why that matters. Constructing meaning and morality in our universe. Confronting the finitude of life, deciding what stories we want to tell along the way.
Hope that whets the appetite a bit. Now back to work with me.
Entropy increases. Closed systems become increasingly disordered over time. So says the Second Law of Thermodynamics, one of my favorite notions in all of physics.
At least, entropy usually increases. If we define entropy by first defining “macrostates” — collections of individual states of the system that are macroscopically indistinguishable from each other — and then taking the logarithm of the number of microstates per macrostate, as portrayed in this blog’s header image, then we don’t expect entropy to always increase. According to Boltzmann, the increase of entropy is just really, really probable, since higher-entropy macrostates are much, much bigger than lower-entropy ones. But if we wait long enough — really long, much longer than the age of the universe — a macroscopic system will spontaneously fluctuate into a lower-entropy state. Cream and coffee will unmix, eggs will unbreak, maybe whole universes will come into being. But because the timescales are so long, this is just a matter of intellectual curiosity, not experimental science.
That’s what I was taught, anyway. But since I left grad school, physicists (and chemists, and biologists) have become increasingly interested in ultra-tiny systems, with only a few moving parts. Nanomachines, or the molecular components inside living cells. In systems like that, the occasional downward fluctuation in entropy is not only possible, it’s going to happen relatively frequently — with crucial consequences for how the real world works.
Accordingly, the last fifteen years or so have seen something of a revolution in non-equilibrium statistical mechanics — the study of statistical systems far from their happy resting states. Two of the most important results are the Crooks Fluctuation Theorem (by Gavin Crooks), which relates the probability of a process forward in time to the probability of its time-reverse, and the Jarzynski Equality (by Christopher Jarzynski), which relates the change in free energy between two states to the average amount of work done on a journey between them. (Professional statistical mechanics are so used to dealing with inequalities that when they finally do have an honest equation, they call it an “equality.”) There is a sense in which these relations underlie the good old Second Law; the Jarzynski Equality can be derived from the Crooks Fluctuation Theorem, and the Second Law can be derived from the Jarzynski Equality. (Though the three relations were discovered in reverse chronological order from how they are used to derive each other.)
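The Jarzynski Equality, ⟨e^(−βW)⟩ = e^(−βΔF), can be checked exactly in its simplest setting: an instantaneous quench, where the Hamiltonian is switched abruptly and the work on each microstate is just its change in energy. The sketch below does this for a hypothetical three-state system (the energies are arbitrary illustrative numbers, not from any real system):

```python
import math

beta = 1.0  # inverse temperature

# Toy three-state system: energies before and after an instantaneous
# "quench" of the Hamiltonian (arbitrary illustrative numbers).
E0 = [0.0, 1.0, 2.0]
E1 = [0.5, 0.3, 2.5]

# Partition functions of the initial and final equilibrium states.
Z0 = sum(math.exp(-beta * e) for e in E0)
Z1 = sum(math.exp(-beta * e) for e in E1)

# Exact free-energy difference between the two equilibrium states.
delta_F = -(1 / beta) * math.log(Z1 / Z0)

# Jarzynski average: the work on microstate x is W(x) = E1(x) - E0(x),
# averaged over the initial thermal distribution p0(x) = e^{-beta E0(x)}/Z0.
jarzynski_avg = sum(
    math.exp(-beta * e0) / Z0 * math.exp(-beta * (e1 - e0))
    for e0, e1 in zip(E0, E1)
)

# <e^{-beta W}> = e^{-beta delta_F} holds exactly, not just on average.
assert abs(jarzynski_avg - math.exp(-beta * delta_F)) < 1e-12
```

Applying Jensen's inequality to the same average gives ⟨W⟩ ≥ ΔF, which is the Second Law statement that you can't extract more work than the free-energy difference.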
Still, there is a mystery lurking in how we think about entropy and the Second Law — a puzzle that, like many such puzzles, I never really thought about until we came up with a solution. Boltzmann’s definition of entropy (logarithm of number of microstates in a macrostate) is very conceptually clear, and good enough to be engraved on his tombstone. But it’s not the only definition of entropy, and it’s not even the one that people use most often.
Rather than referring to macrostates, we can think of entropy as characterizing something more subjective: our knowledge of the state of the system. That is, we might not know the exact position x and momentum p of every atom that makes up a fluid, but we might have some probability distribution ρ(x,p) that tells us the likelihood the system is in any particular state (to the best of our knowledge). Then the entropy associated with that distribution is given by a different, though equally famous, formula:

S = − ∫ ρ(x,p) log ρ(x,p) dx dp

That is, we take the probability distribution ρ, multiply it by its own logarithm, and integrate the result over all the possible states of the system, to get (minus) the entropy. A formula like this was introduced by Boltzmann himself, but these days it is more often associated with Josiah Willard Gibbs, unless you are into information theory, where it’s credited to Claude Shannon. Don’t worry if the symbols are totally opaque; the point is that low entropy means we know a lot about the specific state a system is in, and high entropy means we don’t know much at all.
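For a system with a finite number of states, the integral becomes a sum, and the "knowledge" interpretation is easy to see in a few lines (the distributions below are my own illustrative examples):

```python
import math

def gibbs_shannon_entropy(p):
    """S = -sum p_i log p_i: the discrete analogue of -∫ ρ log ρ."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Knowing the exact state: zero entropy.
certain = [1.0, 0.0, 0.0, 0.0]

# Complete ignorance over four states: maximal entropy, log 4.
uniform = [0.25] * 4

s_certain = gibbs_shannon_entropy(certain)
s_uniform = gibbs_shannon_entropy(uniform)
```

Any distribution in between has entropy between zero and log 4, quantifying exactly how much we don't know.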
In appropriate circumstances, the Boltzmann and Gibbs formulations of entropy and the Second Law are closely related to each other. But there’s a crucial difference: in a perfectly isolated system, the Boltzmann entropy tends to increase, but the Gibbs entropy stays exactly constant. In an open system — allowed to interact with the environment — the Gibbs entropy will go up, but it will only go up. It will never fluctuate down. (Entropy can decrease through heat loss, if you put your system in a refrigerator or something, but you know what I mean.) The Gibbs entropy is about our knowledge of the system, and as the system is randomly buffeted by its environment we know less and less about its specific state. So what, from the Gibbs point of view, can we possibly mean by “entropy rarely, but occasionally, will fluctuate downward”?
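The closed-versus-open contrast can be illustrated with a toy discrete system. Reversible, deterministic dynamics just permutes the states, so the Gibbs entropy of our distribution is exactly conserved; random buffeting by an environment (modeled below as a doubly stochastic map, a simplification I'm assuming for illustration) can only smear the distribution out and raise it:

```python
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

# Our (illustrative) state of knowledge about a three-state system.
p = [0.7, 0.2, 0.1]

# Closed system, reversible dynamics: a permutation of the states.
# Probabilities are just relabeled, so entropy is exactly conserved.
permuted = [p[2], p[0], p[1]]

# Open system: each step, the state stays put with probability 0.8 or hops
# to a uniformly random state with probability 0.2 (doubly stochastic).
noisy = [0.8 * x + 0.2 * sum(p) / len(p) for x in p]

assert abs(entropy(permuted) - entropy(p)) < 1e-12
assert entropy(noisy) > entropy(p)  # our knowledge only degrades
```

Iterating the noisy map drives the distribution toward uniform, i.e. maximal Gibbs entropy, with no downward fluctuations anywhere in sight.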
I won’t hold you in suspense. Since the Gibbs/Shannon entropy is a feature of our knowledge of the system, the way it can fluctuate downward is for us to look at the system and notice that it is in a relatively unlikely state — thereby gaining knowledge.
But this operation of “looking at the system” doesn’t have a ready implementation in how we usually formulate statistical mechanics. Until now! My collaborators Tony Bartolotta, Stefan Leichenauer, Jason Pollack, and I have written a paper formulating statistical mechanics with explicit knowledge updating via measurement outcomes. (Some extra figures, animations, and codes are available at this web page.)
We derive a generalization of the Second Law of Thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter’s knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically-evolving system degrades over time. The Bayesian Second Law can be written as ΔH(ρm,ρ)+⟨Q⟩F|m≥0, where ΔH(ρm,ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρm, and ⟨Q⟩F|m is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the Second Law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of the Jarzynski equality. We demonstrate the formalism using simple analytical and numerical examples.
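A toy version of the measurement-update story goes like this: start with a high-entropy distribution, observe the system through a noisy detector, and apply Bayes' rule. The likelihoods below are my own illustrative numbers, not taken from the paper's examples:

```python
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

# Prior: near-total ignorance over three coarse-grained states.
prior = [1 / 3, 1 / 3, 1 / 3]

# A noisy detector that usually fires only when the system is in state 0
# (illustrative likelihoods of seeing a "click" in each state).
likelihood_click = [0.9, 0.05, 0.05]

# Bayes' rule: posterior ∝ likelihood × prior, given that we saw a click.
joint = [l * p for l, p in zip(likelihood_click, prior)]
posterior = [x / sum(joint) for x in joint]

# Observing the system sharpened our knowledge: entropy went down.
assert entropy(posterior) < entropy(prior)
```

This is the sense in which the Gibbs entropy can "fluctuate downward": the drop happens at the moment of updating on a measurement outcome, and the Bayesian Second Law's generalized heat term is what restores the inequality to ≥ 0 overall.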
It remains embarrassing that physicists haven’t settled on the best way of formulating quantum mechanics (or some improved successor to it). I’m partial to Many-Worlds, but there are other smart people out there who go in for alternative formulations: hidden variables, dynamical collapse, epistemic interpretations, or something else. And let no one say that I won’t let alternative voices be heard! (Unless you want to talk about propellantless space drives, which are just crap.)
So let me point you to this guest post by Anton Garrett that Peter Coles just posted at his blog:
It’s quite a nice explanation of how the state of play looks to someone who is sympathetic to a hidden-variables view. (Fans of Bell’s Theorem should remember that what Bell did was to show that such variables must be nonlocal, not that they are totally ruled out.)
As a dialogue, it shares a feature that has been common to that format since the days of Plato: there are two characters, and the character who sympathizes with the author is the one who gets all the good lines. In this case the interlocutors are a modern physicist, Neo, and a smart, recently resurrected nineteenth-century physicist, Nino. Trained in the miraculous successes of the Newtonian paradigm, Nino is very disappointed that physicists of the present era are so willing to simply accept a theory that can’t do better than predicting probabilistic outcomes for experiments. More in sorrow than in anger, he urges us to do better!
My own takeaway from this is that it’s not a good idea to take advice from nineteenth-century physicists. Of course we should try to do better, since we should always try that. But we should also feel free to abandon features of our best previous theories when new data and ideas come along.
A nice feature of the dialogue between Nino and Neo is the way in which it illuminates the fact that much of one’s attitude toward formulations of quantum mechanics is driven by which basic assumptions about the world we are most happy to abandon, and which we prefer to cling to at any cost. That’s true for any of us — such is the case when there is legitimate ambiguity about the best way to move forward in science. It’s a feature, not a bug. The hope is that eventually we will be driven, by better data and theories, toward a common conclusion.
What I like about Many-Worlds is that it is perfectly realistic, deterministic, and ontologically minimal, and of course it fits the data perfectly. Equally importantly, it is a robust and flexible framework: you give me your favorite Hamiltonian, and we instantly know what the many-worlds formulation of the theory looks like. You don’t have to think anew and invent new variables for each physical situation, whether it’s a harmonic oscillator or quantum gravity.
Of course, one gives something up: in Many-Worlds, while the underlying theory is deterministic, the experiences of individual observers are not predictable. (In that sense, I would say, it’s a nice compromise between our preferences and our experience.) It’s neither manifestly local nor Lorentz-invariant; those properties should emerge in appropriate situations, as often happens in physics. Of course there are all those worlds, but that doesn’t bother me in the slightest. For Many-Worlds, it’s the technical problems that bother me, not the philosophical ones — deriving classicality, recovering the Born Rule, and so on. One tends to think that technical problems can be solved by hard work, while metaphysical ones might prove intractable, which is why I come down the way I do on this particular question.
But the hidden-variables possibility is still definitely alive and well. And the general program of “trying to invent a better theory than quantum mechanics which would make all these distasteful philosophical implications go away” is certainly a worthwhile one. If anyone wants to suggest their favorite defenses of epistemic or dynamical-collapse approaches, feel free to leave them in comments.