Science has an incredibly impressive track record of uncovering nonintuitive ideas about the universe that turn out to be surprisingly accurate. It can be tempting to think of scientific discoveries as being carefully constructed atop a rock-solid foundation. In reality, scientific progress is tentative and fallible. Scientists propose models, assign them probabilities, and run tests to see whether they succeed or fail. In his new book, The Random Universe, cosmologist Andrew Jaffe illustrates how models and probability help us make sense of the cosmos.
Andrew Jaffe received his Ph.D. in physics from the University of Chicago. He is currently a professor of astrophysics and cosmology and Director of the Imperial Centre for Inference and Cosmology at Imperial College London. His research lies at the intersection of theoretical and observational cosmology, including work with the Planck Surveyor, Euclid, LISA, and Simons Observatory collaborations.
0:00:00.2 Sean Carroll: Hello, everyone. Welcome to the Mindscape Podcast. I'm your host, Sean Carroll. One of the ideas that has been very common in intellectual history, at least the parts of intellectual history that I know about, is the search for certainty in knowledge. Rock bottom, 100% reliable knowledge of something, along the lines of a proof in geometry or logic or other areas of mathematics. It took the human race a long time to learn this lesson, but it turns out that scientific knowledge, empirical knowledge about the actual world in which we live, is not like that. That's not achievable in the scientific exploration of the world, because there are a lot of different ideas you might have about the world, and there's no a priori way to reason your way into figuring out which one is the right one. What you have to do is propose lots of different possibilities and sift through them, trying to fit them to the data and understanding that some of them fit better, some of them fit worse, some of them don't fit yet but still have a chance. And all that messy reality of the situation.
0:01:09.4 SC: We call these theories, or if you want to be a little bit less grandiose about it, models of the world. And scientists use models all the time. But it's not something you need to be a grown-up, sophisticated scientist to do. Little children model the world almost as soon as they're born. We mentioned this both in the podcast with Alison Gopnik and with Judea Pearl. Little kids touch things and try to build a causal map of the world around them. Scientists just do the same thing in a more sophisticated way. But like many things, the philosophical underpinnings of an idea, like the fact that scientific knowledge is provisional and probabilistic rather than certain and foundational, mean that we can update our way of thinking about it, and update the actual technical tools that we use to derive scientific ideas and to figure out which ones are doing better. Of course, in actual science, in physics, in cosmology, the areas that I'm most familiar with, this is probability theory applied to all sorts of different models, from the simple models of a child in a crib to large-scale data sets in cosmology or particle physics or gravitational waves or biology or a million other areas of science where you have to let the computer sift through the data to find the patterns in it.
0:02:27.7 SC: So today we're very lucky to have Andrew Jaffe, who is a theoretical cosmologist at Imperial College in London. I've known Andrew for a long time. We're the same sort of demographic. We came of age as theoretical cosmologists in the same era, but he has been a little bit more down to earth than I. He has been thinking about the cosmic microwave background, analyzing data, especially from the Planck satellite, most recently the best view we have from a satellite of the whole cosmic microwave background sky, and trying to fit that to, guess what, models of the universe.
0:03:01.3 SC: And so we're going to talk about the roles of models in cosmology and in physics, sorry, in science, very, very generally, the way that you think about probability to get there and the nature of randomness in the universe. One of the other things that people used to like to hold on to was the idea of a deterministic universe, right? A clockwork, predictable universe. And again, the reality doesn't quite work out that way. Quantum mechanics gets in the way whether you like it or not. And at least in the world that we can observe, there seems to be intrinsic randomness, intrinsically stochastic behavior. You need to talk about probability in order to model that. The words Bayesian and frequentist may come up in this discussion, as you might guess, but it's always good to not just think about these ideas, but to really ground them in the practice of science. And so that's what we're going to do today. Let's go.
[music]
0:04:10.9 SC: Andrew Jaffe, welcome to the Mindscape Podcast.
0:04:12.6 Andrew Jaffe: Thanks, Sean.
0:04:13.8 SC: Your book coming out now called 'The Random Universe' has this subtitle, How Models and Probability Help us Make Sense of the Cosmos. And my guess, I could be wrong, is that many people like I did, are going to jump to the word probability and think, oh, yeah, you talk about probability, random is right there in the title, but the word models is also right there in the subtitle. I think maybe people are going to skip over the importance of that. Like what led you to put that concept right there at the forefront of your book?
0:04:46.2 AJ: Yeah, I started the book with the title 'The Random Universe', but actually, I very quickly realized that models were almost more important than anything else. And really, the idea behind the book is that we can't know anything at all about the world without a model for the world. And that's true for scientists, and that's true for you walking around in Baltimore or me walking around in London and you talking with people you've never met before or you talking with your wife. We have models for all of these things in our heads, and we'd be able to do none of that absent that model. And indeed, I think we're born with some models that enable us to find the milk in our mother's breast from the very beginning.
0:05:32.9 SC: So what... In that sense, it's almost too crazy a question to ask. What is a model? What do you mean?
0:05:38.2 AJ: Well, yeah, a model is a story about the world. It doesn't have to be a story in words, it doesn't have to be a story in math. Although as scientists, we like to get to the mathy stories very quickly. But it's just a way of thinking about the world that helps you navigate it. Sometimes the model, and I guess the jargony term would be isomorphic, is really isomorphic to the world in a very particular way. The map is not the territory is the quote, but the map is really like the territory...
0:06:09.2 SC: Hopefully.
0:06:09.2 AJ: In specific ways and that helps you use it. But I live in London and the Tube, the famous Tube map in London, right, the subway, is like the territory. It's a model. It shows you all of the connections, but it is geographically not right. Things look close together on the map and are not really close together in real life and far apart and vice versa, but they do show all the connections in a really nice geometrical way. So the model that you have is good for certain things, but it doesn't have to be good for everything.
0:06:39.6 SC: So a model kind of spells out a set of connections or relationships between different things in the world in a way that is appropriate to whatever question you care about at the moment.
0:06:51.1 AJ: Yeah. And again, in science, we like to make that mathematical. So Newton's laws were mathematical models for the world. The way things interacted, forces, and what it means to push on something essentially, or to pull on it with gravity. That is a model for the world, and it's a really good model for the world that worked nearly perfectly well for several hundred years and continues to work nearly perfectly well now. But we have some better models for some circumstances, and as cosmologists, those circumstances occur every day for us, where we need to go beyond Newton's really complicated and amazing but still simplified law and use Einstein's even more complicated and amazing laws in order to apply them to different circumstances.
0:07:41.0 SC: You must be aware of the very relevant question right now. Do large language models have a model of the world or are they just... Do they just have such big lookup tables that they can always spit out a sensible sounding answer?
0:07:55.4 AJ: Yeah. And one question is, is there a difference between those two things? If you go back 10 years, and between 10 and about 60 years, I guess, most people who thought about this would have used the Turing test, right, to answer the question of, is that thing on your table that you're talking to, thinking? And the answer to that question according to the Turing test, and, okay, if you go back to the original papers, it's a little more complicated than this but not much, was, if it seems to be talking to you, then it's... If you can't tell the difference, then it is. Now, at some level, there's no way large language models as they are currently constituted, do pass that test, because they don't... They clearly don't think in the same way we do. They don't have the same kind of memory that we do. In particular, that's, I think, a very important thing. But if you ask them about the world, they will explicate the thing that, as far as you're concerned, is their model for the world in specific ways. And as far as I know, Sean, that's all I can do for you.
0:09:00.1 AJ: I can't open up your brain and find out what your model of the world is. All I can do is ask you about it. And so I'm still drawn to this idea that it's only the phenomenology of the intelligence that matters, and the interior part of it isn't quite enough, or we can't know what that is, even if we do. Now, the weird thing about large language models, unlike other people, is we do know a lot about what's going on inside. We don't know everything about it because they're super complicated. And these neural networks are literally millions or hundreds of millions of parameters that are fit by feeding them all this language to begin with. So we know something about it. We know in some sense, therefore, more than we know about how our brains work. But we don't know enough to see if there's anything, to use the word from before, that is really isomorphic to the world inside a large language model.
0:09:56.8 SC: I guess, actually, this brief conversation is already clarifying something about large language models. In a sense, a gigantic lookup table that has all the right answers is a model of the world. It's just that maybe as physicists, we're used to having these vastly compressed models that we find very, very simple, and it's unclear whether an LLM is using such a thing.
0:10:21.3 AJ: Yeah. So maybe you're familiar with the philosopher John Searle, and he has a famous argument, which he claims is a counterexample to the Turing test and to whether machines can think. And that's the famous Chinese room. So his model is exactly a lookup table, where there's a room and there's a person in the room, and the person has access to this giant table of characters on one side and characters on the other. And if the person in the room is given a particular sequence of characters, he finds it in his lookup table and gives out the corresponding thing. And what that is supposed to be is, well, it's Chinese input and Chinese output. But the person in the room doesn't understand Chinese, so he or she doesn't know what's going on. And the equivalent is, well, that's an LLM at some level. It's some lookup table. It is not as simple as just a lookup table, because lookup tables would be really slow, if nothing else. So there's something else that's in between. But there are certainly people who 10 years ago would have said no, the Chinese room does understand. Just because the person in the room doesn't understand, and just because there's no brain there or obvious mind there, that doesn't mean it doesn't understand. And there were partisans on either side. I think your friend Daniel Dennett was on the anti-Searle side, for sure he was on the anti-Searle side in most cases.
0:11:48.4 SC: Sure.
0:11:48.8 AJ: And I think he would have said yes, that... Or did say yes, the Chinese room does understand in some important sense. And maybe that's enough to convince us that when LLMs get to the point where they stop doing things that don't seem like thought but just seem like glitches, which they do now, maybe we'll think that they're thinking more than we do now, or maybe they already are. I'm not sure.
0:12:11.0 SC: I definitely would have been one of those people 10 years ago defending that the Chinese room does perfectly well understand, but I think that my understanding is more nuanced now. So that's a good direction in which to move. I also want to tell a tiny little story, because to me, it crystallized the fact that physicists think about models in this way. I once participated in a debate with William Lane Craig about God and cosmology. And he argued that the universe has a beginning, on the basis of the existence of the Big Bang, but also, you might know, the Borde–Guth–Vilenkin theorem that inflation can't be past-eternal, et cetera. And he has a deductive proof that God exists based on this premise. And I countered by saying, well, look, we don't know whether the universe has a beginning, because here are 15 perfectly viable models in which the universe doesn't have a beginning. And there was this interesting mismatch of presumptions, because he and the people later on in commentaries who were on his side were like, who cares if you have a model? I want an argument. I want a thing that goes from premises to conclusions. And me and the physicists were like, no, we don't know how the universe works. This is how we learn about it. We suggest a model and try to figure out which one works best.
0:13:31.3 AJ: Yeah, that's our job. I mean, that's... I mean, it's funny too that the people against you didn't like models. I mean, it's all a model. That's the argument of the book. Right? Getting back to that. Right? The whole... Literally the only way we can understand the world is with models.
0:13:50.6 SC: But I think that's secretly profound. It's not... There's this rationalist ideal back in the old days where you literally just had true premises and you draw true conclusions. And it's very different than the sort of let's try everything kind of science way.
0:14:10.3 AJ: Yeah. So I mean, I spend a lot of time in the book kind of contrasting deduction and induction. Right. And deduction is just going from true premises to true conclusions. And it's kind of boring. I mean it's... I mean, it's the entire field of mathematics, but it's...
0:14:30.1 SC: I was going to say.
0:14:30.2 AJ: Except that they get to do things like invent new things, invent new terms which then you can prove things about, which is fun, but it's boring in the sense that it really does just follow. Right? From the definitions, then you get the conclusions. You might have to do a lot of work to get there, but they follow. Whereas induction requires making a leap of probability. So you can only ever know that a model corresponds to the world probabilistically.
0:15:03.9 SC: Right.
0:15:05.3 AJ: And that's where the probability comes in. And that's how we choose between models to some extent, is we ask which one of them is more probable given all of the data and all of the other information that we may want to bring to bear to a problem. The nice thing about probabilities, although some would say it's the bad thing about probabilities, is that they depend entirely on what information you want to use as background. So there's not an answer to what is the probability that something will happen. There's an answer, what is the probability that something will happen, given all the things I know.
0:15:40.3 SC: Right. And that leads us directly into, one of the opening chapters in your book is about David Hume, who we all look to, to start a lot of these conversations, not finish them necessarily, but he definitely put a finger on the problem of induction.
0:15:55.4 AJ: Yeah. So the problem of induction is, well, the idea that you should want to prove that you can learn something that is for sure true from observing some finite amount of the world. Right. We observe that things fall when you drop them. They've always fallen. They've fallen every time I've dropped something. So they will continue to do so in the future. And how sure can you be? And if all that you're bringing to bear is the fact that it's only happened in the past, then you can be pretty sure. But you still have to put a model on it. You have to say, well, suppose you initially went into the problem thinking, okay, my prior information, and I'm using that word on purpose, one that people will recognize, is that, well, when I drop something, and of course this isn't the way it really works, there's a 50% chance it's going to drop, or it's going to just stay there or float or something else. Drop or not drop.
0:16:57.1 SC: Perfectly good prior.
0:17:00.7 AJ: 50/50.
0:17:00.8 SC: Yeah.
0:17:00.7 AJ: It is a perfectly good prior. That's right. It doesn't correspond to my actual knowledge of the world, but it's a perfectly good prior. And then you drop something and it falls, and you just keep doing the experiment, and then you can work out probabilistically the probability that there is a rule, essentially, that it will fall every time. Right? Asking that question, you will never get to 100%. Then as scientists, we can do better than that. We can make other observations. We can say, aha, I actually have a reason why things are falling, because I have discovered that there's this thing that I'm going to call gravity, for lack of a better term, in the world that makes things fall. And Galileo did a lot to advance our understanding of things in this way. Not on the probability side, but just on the physics side. And of course, Newton took it all the way and invented, discovered, this inverse square law for gravity that says that things fall in a very particular way. And once you have that model, you can then ask a different question. You can ask, how probable is it that things are going to fall given the model? The answer is 100%.
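The drop-it-and-update reasoning described here can be sketched as a toy Bayesian calculation. This is purely illustrative (the function name and the Beta-distribution framing are my additions, not something from the episode): start from the 50/50 prior, treat each observed fall as evidence, and watch the probability that "things fall" climb toward, but never reach, 100%.

```python
from fractions import Fraction

def posterior_falls(prior_a=1, prior_b=1, drops=0):
    """Beta-Bernoulli update: start from a Beta(a, b) prior on the
    probability that a dropped object falls, observe `drops`
    consecutive falls, and return the posterior mean belief."""
    a = prior_a + drops          # successes: observed falls
    b = prior_b                  # failures: none observed so far
    return Fraction(a, a + b)    # posterior mean of Beta(a, b)

# The 50/50 prior is Beta(1, 1); each observed fall shifts belief,
# but no finite number of drops ever pushes it all the way to 1.
for n in [0, 1, 10, 100]:
    print(n, float(posterior_falls(drops=n)))
```

With zero drops the belief is exactly 1/2; after ten drops it is 11/12; after a hundred it is 101/102, close to certain but, as the conversation emphasizes, never quite 100%.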
0:18:14.9 SC: Sure.
0:18:15.0 AJ: Right? That's easy. But then you can test the model. You could say, well, if I keep having these observations of falling, how likely is that model to be true? Well, it's kind of boring if you don't have an alternative, but Einstein can come along a few hundred years later and say, well, I have an alternative model, and you could try to test it with dropping things. And maybe nowadays dropping things in a lab is something that you can do carefully enough to be able to see that general relativity from Einstein is more correct than Newton. But even in Einstein's time, a hundred-odd years ago, there were observations already saying that things falling in the Newtonian way weren't working, at least on large scales.
0:18:56.0 SC: Yeah.
0:18:56.7 AJ: There's this thing about the orbit of Mercury not being exactly where it needed to be by, I don't know, some fraction of a minute of arc, a minute being one sixtieth of one 360th of a circle around the sun. Right? Newton's theory was getting it slightly wrong. And Einstein's theory got that right, which was kind of amazing. And then it made some other predictions. Perhaps most famously, it made the prediction that light would be bent by massive objects, called gravitational lensing. And it wasn't at all clear whether Newton's theory made this prediction or not.
0:19:32.2 SC: It was a little ambiguous. Yeah.
0:19:33.9 AJ: But it was clear that if Newton's theory predicted this, it would predict it to have a particular value. And it turns out, kind of luckily, that the particular value that it had was off by a factor of two from what Einstein had predicted. And it's funny, because it wasn't that Einstein's relativity came whole cloth out of his head at once, right? It was a series of papers over years. You've written about this, you know it very well. But the very first version of the theory actually had the same prediction as Newton's.
0:20:07.8 SC: I didn't know that actually yeah.
0:20:10.0 AJ: I think in some very early paper, it had the wrong answer for his own theory because it wasn't completely worked out. But then when he got it right, it was realized very quickly that it did make this different prediction. And this was understood very well in particular by Arthur Eddington, a British astrophysicist who, famously, was a Quaker, and so was excused from service in World War I because he was a pacifist. But he was allowed to work on an expedition to go and see this gravitational lensing. And the way it was realized very early on that we could see this gravitational lensing was, well, what's the most massive object around? The most massive object that we're near is the sun. And that's great. The sun would be an amazing thing to see light pass right by, if you could. But the problem is, as most of us realize, the sun is really bright. And so any light that passes by it is just swamped by the brightness of the sun itself. And so, luckily, we live in a place, and this is one of the huge coincidences that make life wonderful for us here on Earth, where we have eclipses, because the apparent size of the sun and the apparent size of the moon are almost identical.
0:21:23.6 AJ: And so you can wait until there's an eclipse, and you can see that the sky is dark, and you can see the things right around where the bright sun would otherwise be. And you can see where the points of light, the stars that are behind it, are. And you can measure them with respect to each other when the sun is nowhere near them in the night sky, and you can measure them with respect to each other when they're near the sun during an eclipse. And you can see if those shift, and they do shift, by a few minutes of arc, or maybe... I don't even remember the number. It might even be some seconds of arc. But it was just observable in the early 20th century. And they observed it, and it was much closer. They did a full error analysis, maybe not with the full probabilistic machinery we have today, but they did a pretty good error analysis. And it was much more consistent with Einstein's theory than with Newton's theory. And so, finally, going back to where we started here, you could then very probabilistically say, given what we know, Einstein's theory is much more probable to be true than Newton's theory. So we were able to really use probability to compare the models, because we were able to make these observations with different predictions for the two different cases. And they weren't...
0:22:48.2 AJ: It's not like we made this measurement with zero error and it was spot on Einstein's number. It was one error bar away or something like that. I could look up the numbers, but it's of order half an error bar or one error bar away. But it was five error bars away from Newton's number. So does that mean that, given only that data, it was completely impossible that Newton was right? No. Again, it's a probabilistic statement. It was just vastly more likely that Einstein's theory was right. And of course, we have other data. The Mercury thing I talked about before was already data in Einstein's favor. And so now we know that Einstein's theory is much more correct than Newton's theory is. Of course, one of our day jobs is figuring out how Einstein's theory might also be wrong.
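The "one error bar from Einstein, five from Newton" comparison can be made quantitative with a likelihood ratio. A hedged sketch with made-up numbers (the measured value, predictions, and error bar below are illustrative stand-ins, not Eddington's actual data): assume Gaussian measurement noise and compare how probable the data is under each model.

```python
import math

def gaussian_likelihood(measured, predicted, sigma):
    """Likelihood of `measured` under a model predicting `predicted`,
    assuming Gaussian measurement noise of width `sigma`."""
    z = (measured - predicted) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Illustrative numbers only: a measurement that lands one error bar
# from "Einstein's" prediction and five error bars from "Newton's".
sigma = 0.3
einstein = gaussian_likelihood(measured=1.9, predicted=1.6, sigma=sigma)
newton = gaussian_likelihood(measured=1.9, predicted=0.4, sigma=sigma)

# With equal prior credence, the posterior odds equal this ratio.
print(einstein / newton)  # enormous, but finite: Newton is improbable, not impossible
```

Being five error bars away does not make Newton's number impossible, just astronomically less probable; the ratio here is exp((25 - 1)/2), roughly 160,000 to 1, which is the probabilistic statement in the conversation made explicit.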
0:23:27.5 SC: We hope. Yeah.
0:23:27.9 AJ: Because we think that it probably isn't right to infinitely many decimal places. So trying to figure out how that works itself out.
0:23:34.6 SC: But it's interesting because I think that a lot of people don't appreciate this aspect of scientific work because they want their knowledge to be foundational. Right? That there's a bedrock knowledge that is a 100% certain, going back to Descartes or whatever. And science of all things, seems so successful and seems so certain about facts that there's a little bit of a lift to tell them, like actually we're not sure about anything.
0:24:03.9 AJ: Yeah, well, and that is, sadly or not, something scientists actually find kind of liberating: A, we'll never be out of a job, but B, it means that there's always something to discover, and yes, it means there are some things that are slippery about the world. And if we do discover what people sadly like to call a theory of everything, a theory that describes lots of stuff in one big nice set of equations, we won't even know that we're done then. Right? We'll keep testing, we'll keep checking on things. We'll certainly measure the numbers that describe the theory better over time. But there's no way we'll ever know that some mathematical or algorithmic or whatever description of the world has to be correct. At least I don't think so. Maybe it somehow works out that something becomes so self-evidently true that there's no other way that the universe could be. But I'm pretty sure the very fact that you can write down a computational model of a space with stuff that moves in it that doesn't look anything like ours, just a cellular automaton or something like that, something that does obey some simple set of laws, and you could imagine a realization of it, is a counterexample: it means that nothing is just self-evidently the right answer to the universe.
0:25:29.1 SC: I definitely agree. And as you already mentioned, the rules could change tomorrow, no matter how self-evident they seemed today. And that's a great, I think, advance in thinking. It goes along with, you've been good about not actually using the word Bayesian yet, but you're definitely giving us a Bayesian view of the world. And did that really help us with the problem of induction, do you think?
0:25:52.2 AJ: I think it is the answer to the problem of induction. Many philosophers and statisticians have tried to answer it by sort of showing that you can prove things, but no, the answer is that you can get more certain of things, in just the way I've been describing, right? You start with some potentially very meager bit of prior information, which is still a model about the world, right? Every bit of information is some model of the world. And you use that to bootstrap up to more and more knowledge, more and more refined models about the world. Now, it's true that you'd love there to be some grand Bayesian program where you started out knowing nothing except one fact or one distribution about the world, one set of probabilities about the world, or postulating them. And then you just systematically add all of the data that you could imagine getting about the world, and that would somehow coalesce into a strict mathematical model. And maybe that would be nice if that were true. But in practice it doesn't work that way. And in practice we don't ask...
0:27:06.3 AJ: Well, maybe it's just that we've been super unlucky, and all of our observations for the last 120 years have been wrong by amounts that are, if you write down the distributions, allowed but highly unlikely. And actually we live in a universe that is completely, perfectly described by Newton's laws, by Newton's gravitational law, and we've just made bad observations of them, just coincidentally enough that we happen to look like we live in an Einsteinian universe. The way we usually write down our probability distributions, that's still not of zero probability. Now, of course, we write them down in ways that just allow things to go infinitely far away because it's mathematically convenient.
0:27:48.8 AJ: And any more realistic thing would discount this very, very substantially. But you can never be sure of anything. And you really can't, because this could be a dream, right? You could be being fooled by a Cartesian demon. You could be a brain in a vat being given stimulations. You could be, and you've written papers on this, a Boltzmann brain, a thing that has popped into existence for just long enough to think your thoughts, and then you're going to pop out of existence or die because you're in the middle of empty space. Those are all logically possible and therefore probabilistically non-zero possibilities. But they are, and I think you've used a great term to describe this, what is it? Ontologically or epistemologically unstable. If you start...
0:28:33.0 SC: Cognitively unstable. Yeah.
0:28:34.5 AJ: Yeah. If you start really putting significant probability on these things, then it's hard to do anything. You sort of... You just have to sit in your room and wait for these bad things to happen because at some level you think they're more probable than anything else. And you're just going to assume that life is too hard. But in order to just get around, you have to impose some simple models that the world is intelligible. You've nicely called this poetic naturalism, I think in some of your books and works. And I think that's a really good description of what we have to do to just get on with our days, much less our lives as scientists, is just to sort of assume that the world makes sense and assume that we can understand it. And that worked out pretty well.
0:29:19.2 SC: Over the course of many Ask Me Anything episodes of Mindscape, we have developed a motto, which is that if a question begins, is it possible that, the answer is always yes. [laughter] But we would like to know relative degrees of likelihood, of credence. But tell me if you have the same impression that I do, which is that when you and I got started as young theoretical cosmologists, which is about exactly the same time, there was still a debate between being a Bayesian and being a frequentist. Most people weren't active Bayesians.
0:29:52.4 AJ: Oh yeah. I think it really was when we were graduate students that the first people in cosmology and astrophysics started taking this Bayesian idea seriously. I was lucky enough, at the University of Chicago, to know this guy Tom Loredo, who sort of made his career on analyzing one particular event. There's a very famous thing that happened in 1987 that I know you remember, which is that light arrived from a supernova not terribly far away, one that had actually exploded hundreds of thousands of years ago. It was in one of the Magellanic Clouds, I think the large one, the LMC.
0:30:39.4 AJ: Which is, I guess, tens of kiloparsecs away, so the light left over a hundred thousand years ago. And the light came, and we happened to have turned on at the time some neutrino experiments. Basically there were big vats of really pure water, and when neutrinos go through big vats of very pure water, there's a small but non-zero chance that they're going to interact and make light. And so around the time of this supernova, some of these experiments saw some extra flashes of light that were not expected. And one of the very first really comprehensive Bayesian analyses in particle astrophysics and cosmology was in fact the analysis of this event. And it was really impossible to do in a coherent frequentist or sort of old-fashioned statistics way.
0:31:42.1 SC: It had never happened before.
0:31:44.0 AJ: It had never happened before, right. So we couldn't think of lots of... You know, particle physics is famous for being able to do frequentist statistics because they have so many events. And sometimes this is used as a philosophical reason why you should be a frequentist in particle physics. That actually isn't true. It's just that frequentist statistics is easy to apply in their case, not that you should apply it. But this was a one-off event. And so it wasn't even clear how to do a coherent frequentist thing when you had to model so many things about it. In particular, Bayesianism allows you to say, here's a bunch of things that I don't know, and rather than bracketing my ignorance and just pretending that I know some particular value, I'm going to, it's called marginalize, over these things that I don't understand.
0:32:35.6 AJ: And in a lot of analyses, it's the ability to do this, more than anything philosophical, that is the technical improvement of Bayesian methods over these more traditional methods, when there's a lot of things about your instrument or about your physical process that you don't understand, but there are still things that you'd like to be able to learn. And Tom Loredo showed in this particular case that by applying Bayesian methods, you could really show that this flux of extra neutrinos was very consistent with coming from the supernova, that it would have been very improbable to see this many neutrinos in that particular window otherwise. And he actually, I think, made an extra claim, which was that one of the instruments had a timing error somewhere.
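The marginalization described here can be sketched numerically. This is a hedged toy model, not Loredo's actual SN 1987A analysis: all the numbers (8 observed counts, a background believed to be around 3 plus or minus 1) are invented for illustration.

```python
import numpy as np
from math import factorial

# Toy illustration (not the real supernova analysis): we observe n counts
# that could come from a signal rate s plus an uncertain background rate b.
# Rather than fixing b at a guess, we marginalize over it.

n_obs = 8  # observed counts (made-up number)

def poisson(n, rate):
    """Poisson probability of n counts given an expected rate."""
    return rate**n * np.exp(-rate) / factorial(n)

s_grid = np.linspace(0.01, 20, 400)   # candidate signal rates
b_grid = np.linspace(0.01, 10, 200)   # candidate background rates

# Prior on the background: we believe it is around 3 counts, give or take 1.
b_prior = np.exp(-0.5 * ((b_grid - 3.0) / 1.0) ** 2)
b_prior /= b_prior.sum()

# Marginalized likelihood: average the likelihood over background uncertainty,
# instead of pretending we know b exactly.
post = np.array([np.sum(poisson(n_obs, s + b_grid) * b_prior) for s in s_grid])
post /= post.sum()  # flat prior on s; normalize to a posterior

s_mean = np.sum(s_grid * post)
print(f"posterior mean signal rate: {s_mean:.2f}")
```

The posterior on the signal automatically carries the background uncertainty with it, which is the practical payoff Jaffe is pointing at.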
0:33:25.6 SC: Oh, yeah. Okay.
0:33:25.6 AJ: Because it wasn't at quite the right time, it was off by a few seconds or something like that. And so I think that came to be believed and understood, but I don't remember the details. But this was a good case. And then, something that's been near and dear to my heart is the analysis of data from the cosmic microwave background, which, as I'm sure many of our listeners know, is light from 400,000 years after the Big Bang. Well, light that we see from that time, but that had been scattering around in the universe since time zero, essentially, or time epsilon; there's no time zero that we know of, for reasons we should have mentioned before. And the best description of that is as a very probabilistic model for how the hot and cold spots of the microwave background came to be there, based on the way fluctuations in the early universe, the things that eventually grew to be galaxies and clusters of galaxies, were laid down at very early times, perhaps after something like inflation had happened. And even though this is a set of physical processes, they are described in very probabilistic ways because of quantum mechanics. Right? Quantum mechanics is this probabilistic theory. And even if you want to... So we could spend probably a whole other hour talking about the probability in quantum mechanics...
0:34:48.2 SC: Oh, we could.
0:34:48.2 AJ: But no matter what you think about them, we sort of all agree that when you have lots of different quantum mechanical observations of the same sort of quantity, they produce something that looks like a probability distribution of those quantities. Right? That's nice and easy. And you want to describe that as a probability distribution that is somehow processed through physics that we understand, to get to lumps in the cosmic microwave background and the galaxies and clusters of galaxies that we see today. And because you have this initial probabilistic thing that happens in the early universe, and because we also want to get out numbers that describe those things, the things that they describe in the microwave background are patterns, but not deterministic ones. It doesn't say, oh, there's going to be a bright spot here and a cold spot here. It says, if there's a bright spot here, you're more likely to have another bright spot nearby it. And it tells you, given how far away, exactly how likely that is, and then eventually it becomes less likely and then becomes more likely again.
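This kind of statement, where the model fixes only how correlated nearby spots are, not where any particular spot goes, can be illustrated with a toy one-dimensional Gaussian random field. The real CMB analysis lives on the sphere, and the spectrum below is entirely made up.

```python
import numpy as np

# Toy 1-D analogue of the CMB setup: draw a Gaussian random field whose
# Fourier modes have a chosen power spectrum, then estimate that spectrum
# back from the single realization.

rng = np.random.default_rng(0)
N = 4096
k = np.fft.rfftfreq(N)                        # mode "wavenumbers"
true_power = 1.0 / (1.0 + (k / 0.01) ** 2)    # made-up smooth spectrum

# Random phases, amplitudes set by the spectrum: the model never says
# "a hot spot goes here", only how correlated nearby points are.
modes = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
modes *= np.sqrt(true_power / 2.0)
field = np.fft.irfft(modes, n=N)              # one "sky" realization

# Estimate the power spectrum from the realization itself.
est_power = np.abs(np.fft.rfft(field)) ** 2
ratio = est_power[1:-1] / true_power[1:-1]    # scatter around 1 per mode
print(f"mean estimated/true power: {ratio.mean():.2f}")
```

Each individual mode scatters a lot (that scatter is the "cosmic variance" of a single realization), but averaged over many modes the estimate tracks the input spectrum.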
0:35:48.2 AJ: And it turns out that you can describe this by one function. It's called the correlation function, or the power spectrum. And you want to measure this power spectrum. And the best way to measure this power spectrum, and to understand its implications for what the universe is like, turns out to be Bayesian. You can actually come up with very good classical statistical or frequentist methods of determining what this shape is, what this curve looks like. But in order to match that onto the models that we have for the early universe, that step is much better cast in a Bayesian way. And it's much better for describing how we know what the expansion rate of the universe is, how we know how likely it is that the overall level of fluctuations was this high. A Bayesian description of the outcome of our CMB experiments just makes sense. And of course, when I'm describing the Bayesian outcome to a non-scientist, it's totally sensible, right? So what do I mean by the Hubble constant is 67 kilometers per second per megaparsec, plus or minus three kilometers per second per megaparsec?
0:37:12.7 AJ: I mean, well, I'm pretty sure it's near 67; if we're about three away, I start getting worried; and it's almost certainly not going to be 10 away from that, right? And I can describe this by a bell-shaped curve or something like that. And people say, of course that's okay, that's what I thought you meant by an error bar. But then somebody comes along with these so-called frequentist error bars, because everybody produces error bars.
0:37:40.4 SC: Sure.
0:37:40.4 AJ: And it can look exactly the same; in fact we print them on the page exactly the same, we say 67 plus or minus three. And the problem is that you can interpret that thing in two different ways. The classical statistical or frequentist way of interpreting it is this very complicated statement: if I repeat the experiment lots of times, and each time I take this function of the data and it gives me a number, then, if the answer really is 67, about 68% of the time the number that comes out is going to be between 64 and 70.
[laughter]
0:38:16.3 AJ: And that's what I mean... That's what a non-Bayesian statistician means by an error bar, a frequentist error bar. And it's not that that doesn't make sense. What they're doing is perfectly sensible, and it really does mean that thing that they say it means. And that's fine. But if you ask them, what does that tell me about the value of the Hubble constant, the expansion rate of the universe, they're like, well, I just told you.
0:38:44.3 SC: Yeah. [laughter]
0:38:44.3 AJ: Right? So if it's 67, then this procedure that you gave me is going to work in this particular way. And so it's asking a different question, and getting sometimes the same, sometimes a different answer. But most of us who are dyed-in-the-wool Bayesians are Bayesians because the question that we want to ask is: what's the Hubble constant? What is the expansion rate of the universe? And Bayesianism gives us a probabilistic way of expressing the answer.
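The two readings of "67 plus or minus 3" can be checked by simulation. Here is the frequentist procedure taken literally: assume a true value, repeat the experiment many times, and count how often the interval covers the truth. All numbers are illustrative.

```python
import numpy as np

# The frequentist error bar as a repeatable procedure: if the true value
# really is H0 = 67 and each "experiment" returns a noisy estimate with
# known sigma = 3, then the interval [estimate - 3, estimate + 3] should
# contain 67 in about 68% of repetitions. That coverage statement, not
# "the probability that H0 is in this range", is what the interval means.

rng = np.random.default_rng(1)
H0_true, sigma, trials = 67.0, 3.0, 100_000

estimates = rng.normal(H0_true, sigma, size=trials)
covered = np.abs(estimates - H0_true) < sigma   # interval contains truth?
coverage = covered.mean()
print(f"coverage: {coverage:.3f}")              # close to 0.683 (one sigma)
```

The Bayesian, looking at one of those intervals, instead asks directly for a probability distribution over H0 given the one dataset actually in hand.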
0:39:09.5 SC: So I'm 100% on your side here, but it's my job as the podcast host to pretend that I'm not. So I'll try to channel my inner frequentist and say, sure, okay, the procedure... The words you just said for adding error bars to your measurement as a frequentist sound very baroque. Like we have to imagine this hypothetical infinite number of measurements or whatever, but at least they're all super objectively defined. Whereas the Bayesian error bar you're giving me, I think you even said it in the original formulation, it's the answer to the question, what do you think the Hubble constant is on the basis of this measurement? It seems more subjective, and is that okay?
0:39:50.0 AJ: Well, I'll start with the second question. Yes, it's okay, and yes, it's subjective, but in a really limited sense, I would say. It's subjective in the sense that if I come to the problem with a different model, I will get a different answer. That's what I mean by subjective. But two people who come to the same problem with the same model will get the same answer, and model includes prior probabilities...
0:40:16.8 SC: Okay, so the models are coming back in here now.
0:40:19.3 AJ: The models are coming back in, right. So if you come with some prior information about the Hubble constant before you even sat down and analyzed your data, then you take that into account. And the Bayesian would say, within the Bayesian formalism anyway, there's no such thing as no information. There are kinds of information that are less informative than other kinds in some specific quantitative sense. But there's no version of coming to the problem and covering my eyes (if you're listening to the podcast, you can't see that I'm covering my eyes) and saying, I don't know anything about this. And maybe you'd like that, but the Bayesian answer is that it's not even a meaningful question to ask how you would express complete ignorance of some quantity. So I have to go in with something, and if we come into the problem with the same information, we get out the same information. So it's objective in that sense. The slogan that I put in the book is, all probabilities are conditional, and that means that all probabilities are given a model. So we come with the same model and we look at the same data, and the model now includes not just what I thought about the Hubble constant before I sat down, but what I thought about how the measurement was made.
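A minimal sketch of "all probabilities are conditional" with conjugate Gaussians: the same measurement, conditioned on two different priors, yields two different posteriors. The priors and the measurement below are made up for illustration.

```python
import numpy as np

# Same data, different priors, different posteriors. With a Gaussian prior
# and a Gaussian likelihood, the Bayesian update is just precision-weighted
# averaging.

def update(prior_mu, prior_sigma, data_mu, data_sigma):
    """Posterior mean and width for Gaussian prior times Gaussian likelihood."""
    w_p, w_d = 1.0 / prior_sigma**2, 1.0 / data_sigma**2   # precisions
    mu = (w_p * prior_mu + w_d * data_mu) / (w_p + w_d)
    return mu, np.sqrt(1.0 / (w_p + w_d))

measurement = (67.0, 3.0)   # an H0-like estimate with its error bar

# Analyst A brings a weak prior; Analyst B brings a tight prior near 72.
mu_a, sig_a = update(70.0, 10.0, *measurement)
mu_b, sig_b = update(72.0, 1.0, *measurement)
print(f"A: {mu_a:.1f} +/- {sig_a:.1f},  B: {mu_b:.1f} +/- {sig_b:.1f}")
```

Analyst A's posterior sits near the data; Analyst B's tight prior pulls the answer toward 72. Neither is wrong: each posterior is correct conditional on its own model.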
0:41:37.9 SC: Yeah.
0:41:38.6 AJ: And all of the quantities that describe that. And if I'm not measuring a very local version of this, it might actually require me to make a model for the whole evolution of the universe. So when those of us who measure things from the cosmic microwave background measure the Hubble constant, it's a very indirect measurement. We don't ever look at something that's moving away from us and say, here's how far away it is, and here's how fast it's going. If we could measure that about any object in the universe perfectly, we would know the Hubble constant from that object, up to some things which are complicated.
0:42:14.4 SC: You and I know. Okay. Yeah. [laughter]
0:42:16.9 AJ: But if we could do that for a lot of objects, we would be able to measure the Hubble constant, and the more objects we got, the better we'd measure it. The problem is that we can't do that perfectly. And for the kind of measurements that I like to make, it's even less true, because we're not measuring that for anything, right? We're just measuring this pattern on the sky that I told you about before, this one function, this one curve, and that curve has encoded in it, in a really complicated way, the value of the Hubble constant. So in order for me to measure the Hubble constant, I don't only have to fit for the Hubble constant itself, I actually have to postulate a model for the evolution of the universe up to the cosmic microwave background and since. And kind of the amazing thing about cosmology today is that we actually get a reasonably self-consistent picture. Now, it so happens that over the last five or 10 years, I don't know if this is something we want to talk about now, the Hubble constant itself has been subject to a bit of controversy. And I particularly chose 67 plus or minus three...
0:43:24.5 SC: You did.
0:43:24.5 AJ: Because when I measure the Hubble constant using the cosmic microwave background, I get 67, not plus or minus three, but plus or minus about one, or less than one now. And people who do something closer to that other measurement that I mentioned, where you actually figure out how far away something is and how fast it's moving, they get something like 72, plus or minus, again, about one or something like that. And those are pretty far apart in terms of those same error bars that I was talking about before, right? They're kind of four, five, six error bars apart. And that is enough to be concerned. When they're that far apart, we're concerned. Now again, we're old, although we're not as old as the people who were our professors when we were young. So when we started out in graduate school, the choice for the Hubble constant wasn't 67 or 72, it was 50 or 100.
0:44:15.6 SC: I know.
0:44:16.1 AJ: And the error bars were not that much bigger than they are now. [laughter] And that's because those people had bad models. Both sets of people, those who vehemently believed it was 50 and those who vehemently believed it was 100, had models that were wrong in significant ways, obviously, because they were both pretty far from the right answer. And by about the year 2000, from the Hubble Space Telescope and measurements that were being made largely from it, but also from other things, we were starting to converge on, ironically, but not completely unexpectedly, I guess, something right in between 50 and 100, which is about 75, which is still about where we are now. We're for sure a little bit lower than 75, and maybe more than a little lower: if we believe the CMB, it's about 67, or if we believe these supernova measurements, which are the ones that measure exactly how far away something is, it's about 72. And I guess given that you're from Johns Hopkins, you probably believe the 72. And given that I do the CMB, I probably believe the 67.
0:45:19.8 AJ: But I also believe that experimental physics, and I think I can say this because neither of us are experimentalists, is really hard, and these people are incredibly heroic and do amazing things that I would never in my career be able to come close to. And so they're probably both wrong is my guess. It's probably going to end up being somewhere in between, probably closer to one than the other, but I don't know which. I mean, I think I know which, but I wouldn't put a lot of money on my side being correct, because I know how hard it is. And the amazing coincidence of it all working really, really well, except that this one number seems to be off, gives me some hope that it's closer to the CMB's number of 67. It would be hard to get that curve I mentioned before to be so spot-on for a viable model, but be wrong in just this one way, to have a viable model with the wrong value for the expansion rate of the universe. That seems unlikely to me. And most of the attempts to find a model in which they're both right, because we are measuring very different things, have not yet totally succeeded.
0:46:37.5 AJ: There are some hints, again from some people at Johns Hopkins, some of my colleagues, who have thought about ways that you basically change the universe between the early times that the CMB measures and the later times that these more direct measurements probe. But those have their problems too. They don't quite get the right answer, and they make other predictions that are not necessarily borne out. There's also some evidence over the last couple of years that some of the calibration that you need to do, because again, we're not really directly measuring the distances of these objects, maybe some of that calibration is off. But people of goodwill can disagree rather vehemently on whether these things are problematic or not. So it's not clear where we are now, but that makes it fun to be a cosmologist right now.
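The "error bars apart" arithmetic above is simple to make explicit. Using rough numbers of the kind quoted in the conversation (not any collaboration's official values), the tension between two independent Gaussian measurements is the difference divided by the quadrature sum of the errors.

```python
import numpy as np

# How many "error bars apart" two measurements are: for independent
# Gaussian estimates, the difference has uncertainty sqrt(s1^2 + s2^2).
# Numbers here are the rough ones quoted above: a CMB-style value near 67
# with a sub-unity error, and a distance-ladder-style value near 72 +/- 1.

h_cmb, s_cmb = 67.0, 0.5
h_ladder, s_ladder = 72.0, 1.0

tension = abs(h_ladder - h_cmb) / np.sqrt(s_cmb**2 + s_ladder**2)
print(f"tension: {tension:.1f} sigma")
```

With these illustrative inputs the discrepancy lands in the "four, five, six error bars" range mentioned above, which is why a five-unit gap that would have been unremarkable in the 50-versus-100 era is now a genuine tension.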
0:47:25.5 SC: Anyone interested in the Hubble tension can listen to previous episodes we did with Adam Riess and, more recently, Marc Kamionkowski, so they'll get their updates there. But it is a good example of the fact that this shift in view of what probability is involves both everyday calculational distinctions and deep philosophical differences. I mean, you have the right as a Bayesian to talk about the probability of events that will only happen once, like who will win the World Cup or who will win an election. Whereas for a frequentist, that almost doesn't make sense. And we do in fact talk about those probabilities all the time. It's kind of important.
0:48:04.9 AJ: Yeah, I think most frequentists are actually Bayesians in some sense. Because, I mean, at some level Bayesian probability is a mathematical theory, and there's nothing mathematically incorrect about what we're doing. They just don't like the interpretation of error bars in the way I described. They don't like the epistemology of it. They don't think it is a good way to describe what we are warranted to believe given data. They agree that it describes a probability in a particular way, but they don't believe we've tested theories in the right way by doing that. And similarly, we Bayesians agree that the probabilistic statements that a frequentist makes are true. We just don't agree that they necessarily give us a good epistemic hold on what we're trying to measure. But luckily, I'm actually a physicist and not a statistician or a philosopher. And so there are some problems for which it's really hard to do a proper Bayesian analysis.
0:49:18.1 AJ: And I'm happy to get my guidance, at least, from a frequentist analysis if that tells me kind of what's going on. Like I said, we don't do this sort of ab initio Bayesian calculation from some state of pristine zero knowledge and figure out what the world is like. I'm willing to take information about what models I want to condition on from anything. Because most statistics done until a few years ago, where a few might be 100 in some cases, was kind of frequentist. But I don't have to go back and re-derive my belief that Newton was right, forget Einstein, for the domain that he was trying to test things in, by redoing a Bayesian analysis of what he did. I can take it as read that that worked.
0:50:17.7 SC: Well, but the history...
0:50:17.7 AJ: Which I'm happy to do.
0:50:17.9 SC: Yeah. I mean the history is super fascinating. You do talk about it in the book. And there was... There were two big things going on, if I can vastly oversimplify. Like in the 19th century we invented statistical mechanics and some people were scandalized that probabilities were coming into our best way of describing the world. Then in the 20th century, we invented quantum mechanics and there was scandal all over again that we're still recovering from. How do those two big shifts fit into your story?
0:50:45.7 AJ: Well, right. So it does seem that at some level the world wants us to use probabilities to describe things. I think statistical mechanics is a really phenomenally interesting case, because if you start thinking about the problem in the naive way, it looks hopeless. The problem is how to describe a medium that is made of, I don't even know the names for numbers that large, one followed by 23 zeros or something like that. That's roughly how many molecules of gas are in a liter. And so you'd need to describe how each of them is moving if you wanted a full description of that entire gas. Right? And it turns out that because there are so many of them, the amount of information you usually need to describe it decreases ridiculously. In the simplest case, it decreases to one piece of information, not one computer bit, but one number. If you know the temperature of your gas, then you know everything there is to know about it, in the simplest case.
0:52:02.3 SC: Yeah.
0:52:02.5 AJ: Even though it's describing trillions and trillions and trillions of particles. Because in the simplest case, the probability distribution that describes the velocities, how fast these particles of gas are going, is perfectly fit by a specific bell curve, the normal distribution. And so it's this weird thing where you seem to need to know all these probabilities, but in terms of things that you can manage to observe about the gas, it collapses down into almost no information compared to how much you had to begin with. Now, in more interesting cases, you need more information to describe it. If you perturb the simplest gas in any way, then you need more and more. But in almost no case do you need all 10 to the 23 pieces of information. That would be useless to you, because how would you possibly act on that much information? And at first, and I think still, many people kind of think of this probability distribution, the probabilities here, as being some sort of objective description of what's going on. And there is a sense in which that's true...
0:53:18.3 AJ: If you made a graph of how fast things are going, counting how many are going between one kilometer per second and two kilometers per second, all the way down to zero, or even negative if they're going in the other direction, and up to very fast movements, it would be this bell-shaped curve, and that would be true. But also, if all you did is measure the temperature, then that's telling you what you should expect to see. That's telling you how probable a given thing would be, given that all you know is the temperature. And if you did other measurements of the gas and they told you something else about it, if you pressed on the gas in a particular way, or you put a candle in it so it heated things up in a particular way, then you could make further predictions based on your knowledge of how that perturbs the probability distributions, and what it says about that. And then the probabilities really come into their own when you talk about entropy.
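The claim that one number fixes the whole distribution can be checked directly in a toy simulation: in the simplest equilibrium case, each velocity component is Gaussian with variance proportional to temperature, so the temperature can be read back off the spread of the velocities. Units below are chosen so that Boltzmann's constant and the particle mass are both 1.

```python
import numpy as np

# One number, the temperature, fixes the entire velocity distribution:
# in the simplest (ideal, equilibrium) case each velocity component is
# Gaussian with variance kT/m. Sample a "gas" and recover the temperature
# from the spread. Toy units: k = m = 1.

rng = np.random.default_rng(2)
T = 2.5                                           # chosen temperature
v = rng.normal(0.0, np.sqrt(T), size=1_000_000)   # one velocity component
                                                  # per particle
T_measured = v.var()                              # <v^2> = kT/m = T here
print(f"recovered temperature: {T_measured:.3f}")
```

A million made-up particle velocities collapse to essentially one observable number, which is the compression of information being described above.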
0:54:21.9 AJ: So entropy is this description of that probability distribution for any substance. And indeed we use entropy now in large language models and other places having nothing to do with mechanics. And there are these laws of thermodynamics, which seem to be laws of physics, which tell us how something that seems to be a property of a probability distribution has to change. But I've just told you that probability distributions are about what I know, and yet this seems to be a law of physics. Right? And both of those things can be true at some level, because in the simplest case, these probabilities collapse down into needing to know so very little about the gas that you can't do very much with this information. And we use this all the time when we build engines and heat pumps and things like that. But if you had more information... Right? So what this entropy tells us, among other things, is how much work is available, how much energy is available in the gas that you can use.
0:55:34.1 AJ: Right? And what the laws of thermodynamics tell us is that in most cases, the energy available to use gets less over time. Entropy increases; you can do less and less useful work with your gas. But if you really did know how all of the particles in the gas were moving at some particular time, then you could arrange a set of paddles at just the right places in the gas to catch all the ones that happen to be moving in some particular direction, and extract their motion as useful work. So if I knew more about the gas, I could extract more work from the gas. So this seemed like a law of physics, and it is a law of physics, because it's telling me what I can get out of the gas. But what I can get out of the gas depends on what I know about the gas.
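The entropy being discussed, as a property of a probability distribution, is the same quantity whether it shows up in statistical mechanics or in language models. A minimal sketch of S = -Σ p log p on made-up four-state distributions:

```python
import numpy as np

# Entropy as a property of a probability distribution: S = -sum p log p.
# The more spread out p is, i.e. the less you know about the state, the
# higher the entropy, peaking at the uniform distribution.

def entropy(p):
    """Shannon/Gibbs entropy (natural log) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log(p))

certain = entropy([1.0, 0.0, 0.0, 0.0])      # state known exactly: S = 0
uniform = entropy([0.25, 0.25, 0.25, 0.25])  # maximal ignorance: S = log 4
print(certain, uniform)
```

Knowing exactly where the system is gives zero entropy and, in the thermodynamic setting, the most extractable work; maximal ignorance gives maximal entropy and the least.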
0:56:24.2 SC: I'll interject very quickly here, just because I can't resist mentioning that David Albert, a philosopher of physics who's been on the show, one of the smartest guys, and who has gotten a lot of things right, does have this stubborn frequentist streak. He has sometimes been known to say, you can't tell me that the reason why coffee cools to room temperature in a room is because of our knowledge of what's going on in it. And I want to say, yes, I can. At least it tells me why it usually does; it makes it not a surprise that it does.
0:56:58.8 AJ: Yeah, yeah, we expect that to happen. And indeed, most of the time we're not surprised because, again, it's 10 to the 23 particles.
0:57:04.5 SC: Right.
0:57:05.0 AJ: If there are six particles, then you'd be surprised a lot because things would happen differently a lot of the time.
0:57:11.7 SC: And in the late 19th century, there's a lot of physicists who resisted the idea that the second law was just a statement about probabilities. They wanted it to be an absolute law. There's always this human hunger for the absolute foundational perfect law and nature doesn't always accommodate us.
0:57:27.5 AJ: Yeah. And to move on to the other question, you asked about quantum mechanics. With thermodynamics, at the end of the day, it's about particles moving, and you could imagine saying, maybe this is just a bad description of the probabilities; you're still thinking about moving things, and, pre-quantum mechanics, you could potentially describe things in a deterministic way. But quantum mechanics, as I'm sure everyone listening knows, is this theory that really does seem to be, at bottom, random, probabilistic, in terms of our experience. That's the crucial bit. And I was...
0:58:08.8 SC: Very careful. Yes, very good.
0:58:11.1 AJ: And exactly how that cashes out ontologically, I don't think we've used that word yet, but it means how things really are, is a subject of very hot debate, despite it being 100 years since people started worrying about these questions. And it's because the fundamental way that the quantities of quantum mechanics interact with our experiments is through probabilities. The fundamental beast in quantum mechanics is this wave function, and it makes probabilistic predictions for how the world will act if we poke at it. Now, we have to be careful: it also makes some definite predictions. The quantum part of quantum mechanics comes in because it says that the hydrogen atom has some particular set of energy levels, and we will only ever find the electron in those particular energy levels. We could be super subtle and say, well, that's only exactly true if you had an infinitely long-lasting hydrogen atom, and because there's always a probability that any given excited atom will decay, the energy levels aren't actually infinitely sharp.
0:59:30.2 AJ: So even for these things that were considered the amazing predictions and correct experimental postdictions of quantum mechanics, at some level even they are probabilistic. But the everyday thing that people worry about is that all quantum mechanics ever tells you is that something has some probability of being somewhere. There is some probability that some arbitrary set of your molecules is going to just tunnel through the wall, and there's a non-zero probability that you, as a whole, will fall through to the floor below you. But it's so small that, as far as we know, it has never happened in the history of the universe. And exactly the meaning of those probabilities is completely unclear, I would say. So again, there's a split now between, I would say, the people who wholeheartedly think of this as a Bayesian probability statement, about what...
1:00:41.0 SC: The quantum uncertainties.
1:00:42.4 AJ: Yeah, the quantum uncertainties are about what individuals know about the universe; and those who think that it is just papering over some unknown thing, but that there's some certainty behind it. And I think the third camp is the somewhat more amorphous idea that something very weird is happening having to do with consciousness, that there's something special about humans, or maybe just very complicated machines, such that when they interact with these objects something else happens, or that quantum mechanics is only approximately true.
1:01:18.3 SC: Sure.
1:01:18.7 AJ: And that there really is something else going on. But those of us, and I think I count you in this category, who really believe that they are Bayesian probabilities about the world, that it's just a statement about what we know... Even then, there are various ways you can cash that out. I think the two main ones are, first, the many-worlds idea: that this wave function, which is the ur-beast of quantum mechanics, the thing it provides for the whole universe in principle, has a very natural way of describing everything that there is as being split into "worlds". And there's also a very natural quantity that comes with quantum mechanics which tells you how probable you are to be in a given world, and that's all you need. All that there is is this wave function and these probabilities, which are part of the wave function, not externally tacked on. And from that you get out all of the predictions of quantum mechanics that you could possibly want. And that's kind of nice, and it's simple in some ways.
1:02:34.9 AJ: I personally am not sure that I buy that the ontology of quantum mechanics follows naturally from the wave function, that there have to be worlds just because there are these terms in the wave function. I'm not sure it's not true, but I'm not sure that it is true. The other competing but very Bayesian interpretation of quantum mechanics is called QBism. QBism used to stand for Quantum Bayesianism, but now its adherents just say it stands for itself. It's spelled Q-B-I-S-M, and I think it intentionally sounds like the artistic movement from the early 20th century, because that was a bit weird at the time too. And it feels more like the quantum mechanics that you learned in college, if you learned quantum mechanics in college, where it's just about what happens when you make a measurement, and your knowledge of the world changes when you make a measurement. In standard quantum mechanics, and even though I may not agree with this way of doing it, when I teach quantum mechanics I teach it in this very old textbook way, which is called the Copenhagen interpretation: when you make a measurement, the world changes in the sense that the wave function collapses, that's the word people use, into something that describes exactly the probabilities of exactly the measurement that you made, and then evolves again in some different way.
1:04:09.0 AJ: And that works. And the fact that you can, to probably not quote Richard Feynman, but to quote people who think they're quoting Richard Feynman, "shut up and calculate" and get the right answer, even with this weird, bizarre, even more than Bayesian, solipsistic view of the world, seems to work. So if you work as if quantum mechanics describes this, you get exactly the same answer that a many worlds theorist will get and exactly the same answer that a QBist will get. And it seems to work just fine, even though at bottom nobody's sure it makes any sense. But we seem to be able to calculate with this, and everybody gets the same predictions and gets the same probabilities. And I think because the physics community was so frequentist until relatively recently, they didn't probe particularly deeply into the meaning of the probabilities.
1:05:06.5 SC: Right.
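Jaffe's point that every interpretation "gets the same probabilities" comes down to the Born rule. As a minimal toy sketch (my illustration, not from the episode): given a quantum state's complex amplitudes in some measurement basis, outcome probabilities are the normalized squared magnitudes. Copenhagen, many worlds, and QBism all agree on these numbers; they disagree only about what the numbers mean.

```python
# Toy "shut up and calculate" recipe: the Born rule turns a state's
# complex amplitudes into outcome probabilities as |amplitude|^2,
# normalized so the probabilities sum to 1.

def born_probabilities(amplitudes):
    """Return Born-rule probabilities for a list of complex amplitudes."""
    norm = sum(abs(a) ** 2 for a in amplitudes)
    return [abs(a) ** 2 / norm for a in amplitudes]

# A qubit in the equal superposition (|0> + |1>) / sqrt(2).
print(born_probabilities([1 + 0j, 1 + 0j]))  # [0.5, 0.5]

# An unequal superposition with amplitudes 1 and 2i.
print(born_probabilities([1 + 0j, 2j]))  # [0.2, 0.8]
```

Whatever story an interpretation tells about measurement, it must reproduce exactly these numbers, which is why, as Jaffe says, everyone calculates the same predictions.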
1:05:07.8 AJ: There was still, I think, the question... Really, even in the physics books that I read as a kid, it was about the role of consciousness in quantum mechanics. Because that seemed like the important bit: that making a measurement was about a conscious being making a measurement, and that was somehow changing the world because it was making this wave function collapse. And I think the emphasis on Bayesian probabilities that all of physics has kind of taken up over the last few decades made it possible to step back from this weird mechanics of it being something to do with consciousness, to just trying to understand what we mean by the probabilities of quantum mechanics. And the quantum Bayesian just says, we just mean probabilities. We just mean Bayesian probabilities. They are... They're not...
1:06:01.3 AJ: They're fundamental in the sense that they're the only tool we have for working with the world, which is what I kind of do argue in the book. They're not fundamental in a physics sense. There's nothing physics about probabilities. They're just in our minds. But that's fine, because the only people that we know that do physics and quantum mechanics are people, and that's their tool for interacting with the world. And so it should be no surprise that our theories are probabilistic, because that's the only way we have for understanding the world. And I kind of tentatively come down to thinking that is good enough. And maybe that's all we need, and we don't need the structure of many worlds to give us even more ability to put probabilities in quantum mechanics. But I'm not sure that that's... That we don't need that either. So maybe the answer comes out to be something in between. I'm not sure.
1:06:51.9 SC: Well, my current conjecture, and I even have a grad student thinking about this, is that QBism is an effective coping strategy for people who don't want to accept many worlds, and they'll eventually get there. The Bayesianness of it is absolutely in common. In fact, I think I've told this story before on the podcast. But David Mermin, who is a famous physicist who is a QBist, wrote a little opinion piece for Physics Today saying there's no such thing as the quantum measurement problem. And what he meant was, I know the answer to the quantum measurement problem, and it's just, it's all Bayesian. And Physics Today asked me to write a response saying, actually, there is a quantum measurement problem, because not everyone agrees, even if you know what the answer is. And so David and I emailed back and forth, and it was very interesting because he was pushing on this idea that De Finetti, who is the hero of the QBists, was this statistician who said, there's no such thing as probability. And what he meant is, probabilities are subjective and Bayesian. He didn't mean there's no such thing. And I told David Mermin, I said, look, that's fine with me. I'm 100% on board with that, because I think that there is a Bayesian probability about what world you're in. And his response was, oh, in that case we're fine. He was happy with it. So I'm optimistic about the future of consilience on our understanding of quantum mechanics.
1:08:14.4 AJ: Yeah, I think this idea that things are Bayesian eliminates a lot of the problems; we'll have to see. I mean, it would be nice... The problem with both QBism and many worlds as they are constituted now, if it's a problem, is that there's no way to test them. Right? So absent some understanding that somehow they make a prediction that isn't standard quantum mechanics, which I don't think is possible given the way they're written down, it's going to be hard to adjudicate between them. And again, then we'll have to be Bayesian in the...
1:08:56.8 SC: There you go.
1:08:56.8 AJ: In a better sense and use our priors to pick between them. And that's sort of a bit sad because physicists do like the idea that there's a right answer. But this is one of those cases where maybe we can't know.
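The Bayesian adjudication Jaffe describes can be made concrete. In a hypothetical Python sketch (the numbers are mine, purely illustrative), posterior odds between two models are the prior odds times the Bayes factor, the ratio of the likelihoods each model assigns to the observed data. If two interpretations, as currently formulated, assign identical likelihoods to every possible observation, the Bayes factor is always 1, and no amount of data ever moves you off your priors.

```python
# Illustrative numbers only: why "use our priors to pick between them"
# is all that is left when two models make identical predictions.

def posterior_odds(prior_odds, likelihood_a, likelihood_b):
    """Posterior odds of model A over model B after seeing the data."""
    bayes_factor = likelihood_a / likelihood_b
    return prior_odds * bayes_factor

# Two empirically distinguishable models: the data shifts the odds.
print(posterior_odds(1.0, 0.8, 0.2))  # 4.0 -- the data favors model A

# Many worlds vs. QBism (as written down today): identical likelihoods
# for all data, so the Bayes factor is 1 and only the prior odds,
# i.e. our prior preference, remain.
print(posterior_odds(2.0, 0.5, 0.5))  # 2.0 -- unchanged by the data
```

This is exactly the "bit sad" situation Jaffe points to: the machinery works, but when the likelihoods are identical, the posterior is just the prior restated.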
1:09:09.7 SC: Well, and I want to sort of close the circle here as we're reaching the end of the podcast. You've already talked about cosmology, obviously is what your day job is, and analyzing fluctuations of the CMB. And there's this super profound fact that our favorite model for where those fluctuations and density perturbations come from is quantum mechanics in the early universe. And so there's a lot you can do that is very down to earth about matching those predictions to the data. And that's a full-time job. Have you thought at all about the slightly more outré problems of the cosmological multiverse and the cosmological measure problem and eternal inflation? I know that Paul Steinhardt, for example, has more or less given up on inflation because it predicts too many things and therefore it predicts nothing. And I'm certainly on the side of thinking that if we think about it carefully, even if it predicts many things, we can have a relative probability for predicting one thing versus another.
1:10:12.7 AJ: Yeah, well, I think there are a lot of facets to that problem, obviously. So one is, whether there is a multiverse. Right? Whether...
1:10:23.9 SC: Sure.
1:10:23.9 AJ: And again, okay, we can enumerate the different kinds of multiverses like our colleague Max Tegmark does, but this specifically: that literally parallel to us in some vaster manifold, there are other things that are expanding and may or may not have objects and things in them. Maybe some of them have similar physics, or maybe identical physics to ours, maybe they don't. And there are models of inflation, as you mentioned, like eternal inflation, that say that sort of "bangs" are happening all over the place. And again, I'd be using air quotes if I could. And things are inflating, and some universes inflate, and sometimes they have the right kind of physics to make stuff. And that is one kind of broader scale model for inflation, which predicts a kind of multiverse in that sense. And yes, it predicts an infinitude of multiverses. And I think our colleague Bill Kinney has a...
1:11:30.0 AJ: What's his book called, 'Infinity of Worlds' or something like that. I can't remember. Where he goes into a lot of this in way more detail. But yeah, so you can use probability to assess how likely these kinds of universes would be. And it's funny, because there are some of our colleagues who actually like this idea because it allows you to be a frequentist, right? Because there really are these universes out there, and so it is just a matter of counting the ones that actually do exist. And so then you can just say the probabilities come out that way. But a Bayesian doesn't need that. And you can ask, well, there could still only be one universe, and there could still be some prior distribution of the possible parameters that describe the universe. Many of our less Bayesian colleagues, or at least some of them, might say, but what's the physics of that prior? Right? Because they want it to not be about our knowledge. They want it to be about some actual set of universes from which things are somehow really physically being chosen. And again, sadly, it might be very hard to adjudicate between these possibilities: that there really is only one universe, let's say a small-u universe, or a bigger universe.
1:12:54.1 AJ: There isn't any other set of things that are really out there in some broader sense. But it might be that there's a mechanism for producing lots of them. And the fact that some of them look a bit like ours, and at least one looks exactly like ours, is something that we can use as data. Right? And this idea has a long and occasionally storied history, the sort of anthropic idea that our existence gives us information about what the universe is like. And of course it does. Right? It tells us that we live in a universe with a particular value of Planck's constant and a particular value of the speed of light. And we can go and ask, well, what could it have been otherwise? And we don't know whether the underlying model of physics means that it could have been otherwise. Right now, I think... So unlike the things that we may not ever have any realistic or even possible physical access to, I think we can in principle have access to models that are more comprehensive than our current standard model of particle physics. And maybe it's really hard, and maybe you need a particle accelerator on the scale of the solar system, or several solar systems, or something like that to do that. But that is different from the many worlds hypothesis, which says you really just can't get between them. Although I guess that's not true. They can re-cohere, can't they? But it's a non-zero probability.
1:14:39.2 SC: It's a tiny, tiny, tiny probability. Yeah, right.
1:14:40.6 AJ: But... Sorry. So maybe in the fullness of time, if you do know more about the full theory of particle physics, I think that does give you some hint about whether things like inflation can produce multiple kinds of universes, or whether there really is only one set of fundamental constants. I think our colleagues who do string theory have come to the understanding that if their model is true, then it really seems to be the case that there must be possible realizations of the theory that don't look like our universe. And then there's a second physics question, which you should still be able to answer within that model, of whether it's possible to realize more than one. Right? Or whether, no, the way it works is, there's one universe, it somehow gets created, and maybe that is something you can answer within the theory. I don't know. And once it's created, it is its own thing. It's all that there is in some more fundamental sense, and it has the particular constants that it does. Maybe there's a reason for that, maybe there's not.
1:15:57.5 AJ: Once you allow the possibility that it is created, and I don't mean by some supernatural being, I mean by some quantum mechanical process, then there's no reason to suspect that if it can happen once, it can't happen again. And I'm going to use words that have to do with time and space, but of course, once you're in a theory like this, our language isn't quite sufficient. So if it happens once in a particular place, then it can happen again in a different place. But I don't really mean another time and another place. And I'm not sure exactly what I do mean, because I don't know what this theory is yet, and neither does anybody else. But I think the idea that there is more than one universe, for some definition of that, is super compelling, because once you have some fundamental theory, it's almost hard to imagine that that isn't possible.
1:16:50.8 SC: And we're going to... Meanwhile, I guess there's just a wide ecosystem of ways of tackling this from people who are just looking at the data and shutting up and calculating and collecting data, to people who are sitting back in their armchairs and thinking wild thoughts about the multiverse.
1:17:05.6 AJ: Yeah, there's lots for us to do still.
1:17:07.2 SC: Lots for us to do. That's a good place to finish. Andrew Jaffe, thanks very much for being on the Mindscape Podcast.
1:17:12.1 AJ: Thank you, Sean. It's been a lot of fun.
[music]
Amazing podcast. It reminds me I would love to see George Lakoff on your podcast, talking about metaphors (which can be models of the world), neuroscience, and the philosophy of maths. In fact, he and Srinivas released a book a few months ago, still in my pile.
As an econometrician, I especially loved the extensive discussion of probability in this episode. Such a good podcast and such a good episode!
Sounds like cosmologist Andrew Jaffe’s new book ‘The Random Universe’ is a must read for anyone interested in a deeper look into just how mysterious the universe really is, and our attempts to make sense of it all.
One of the biggest, perhaps the biggest, unanswered questions is: are there other universes besides the one we inhabit? As Jaffe states near the end of the interview:
“Once you allow the possibility that it (our universe) is created, and I don’t mean by some supernatural being, I mean by some quantum mechanical process, then there’s no reason to suspect that if it can happen once, …, then it can (happen) again in a different place.”
Perhaps an even stronger way to state it is:
‘Since it happened once, it’s almost inconceivable other universes haven’t existed in the past and won’t continue to come into existence in the future’.
The Fibonacci sequence and the Golden ratio
https://www.youtube.com/watch?v=dREpRHgkjsg&t=4s