Behaving rationally involves facing up to conditions of uncertainty; we never navigate the world with perfect confidence. Sometimes we are uncertain about the way the world is, but we can also be uncertain about our place within the world. This kind of situation arises in cosmology (where the relevant world can extend very far in space or time) and in quantum mechanics (where new worlds might be created at every measurement), but also when we are simply unsure about the future history of humanity or whether we live in a computer simulation. I talk with philosopher Adam Elga about how to deal with these unique kinds of uncertainties.
Adam Elga received his Ph.D. in philosophy from MIT. He is currently a professor of philosophy at Princeton University. His research involves decision and game theory, epistemology, philosophy of probability, philosophy of mind, and philosophy of science.
0:00:01.5 Sean Carroll: Hello, everyone, and welcome to the Mindscape podcast. I'm your host, Sean Carroll. One of the things we've talked about many times on the podcast is how you update your beliefs when new evidence comes in. That is to say, the process of Bayesian reasoning. Bayes' formula, of course, gives you a quantitative way of saying: if I have some prior credence for some claim being true, and I very quantitatively measure some data, and I can calculate the likelihood of that data being obtained under all sorts of different propositions being true, I can update my credences to get one that takes that data into account. We don't necessarily work in such a quantitative vein every time, but this process is basically what we do in science, right? In science, we have different kinds of theories that propose to provide explanations for different kinds of phenomena, and we have different feelings about which theories are more likely than others. My favorite example is always: is the dark matter something like a weakly interacting massive particle, a WIMP, or something like an axion? These are two different particle physics candidates for the dark matter. They're both plausible. We don't have any idea which one is true, or even if it's some other theory.
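For reference, the update rule Sean is describing is Bayes' theorem. In its standard form, with H a hypothesis (say, "the dark matter is a WIMP") and D the measured data:

```latex
% Posterior credence in H after observing data D:
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
% P(H): prior credence in the hypothesis
% P(D|H): likelihood of obtaining the data if H were true
% P(D): total probability of the data across all hypotheses under consideration
```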
0:01:19.0 SC: But we have favorites, right? We don't give them equal probability because maybe it fits in better to other things we know, et cetera. So that seems like a pretty straightforward kind of process. You have prior probabilities for theories being true or whatever, and then you get more data and you update your belief, your degree of belief, your credence. Here's a puzzle. What if you're a cosmologist? What if you're thinking about the whole universe all at once? And someone says, okay, I have two cosmological models, two theories that describe all of the universe at once, and they predict statistically, more or less, the same local conditions that we observe. So they are compatible with the data that we already have. But here's the difference. In one theory, the universe is bigger than in the other one. Like maybe in one theory, the universe is a closed universe, a sphere or a torus or something like that, and it doesn't actually extend very far beyond the universe that we can see today. In the other theory, the universe is open, it goes on forever, and there's just an infinite number of things going on. And this person says, so I think that the theory where the universe is bigger is much more likely.
0:02:37.0 SC: And you say, well, why is that? Is it because there's some mechanism that gives you that or whatever? And they say, no, it's from updating on the data. And you say, what is that data? And they say, well, the data that I exist. Because in the bigger universe, it is just much more likely that someone like me would exist than in the smaller universe. Just because of random fluctuations from quantum mechanics, it's unlikely in any one small universe that I would exist. But as the universe becomes bigger and bigger, the chances of someone just like me get larger and larger. Is that kind of reasoning correct in the cosmological context? The answer is we don't know, or at least we don't have an agreed-upon procedure for dealing with these kinds of puzzles. And these kinds of puzzles show up again and again. You get things like the Boltzmann brain scenario, where random fluctuations create observers like us in the far future, or maybe in the far past, but ones that don't arise via thermodynamically sensible evolution from a low-entropy Big Bang, like we think we did.
0:03:45.6 SC: There's examples like the multiverse of Everettian quantum mechanics. When I measure the spin of an electron and it could be spin up or spin down, I'm saying, okay, now there's a spin-up particle and a spin-down particle. There are two separate worlds. And I want to say, what's the probability that I'm in one world or the other? Does it matter how thick the world is? What are the different things that come into consideration here? So this is obviously a set of puzzles that is very relevant for cosmology and physics, real things we care about, quantum mechanics, the multiverse, things like that, as well as for philosophers who want to know, how should we be rational in situations like this? How should we reason in these circumstances of uncertainty? It's a tough one. And today's podcast guest is one of the world's experts on these issues. Adam Elga is a philosopher at Princeton. He has several very well-known papers. One of them is on the famous Sleeping Beauty problem, where you flip a coin, and if it's heads, you're going to wake up Sleeping Beauty on Monday. If it's tails, you're going to wake her up on Monday and Tuesday.
0:04:53.5 SC: Do you give equal credence to all three possibilities: heads on Monday, tails on Monday, tails on Tuesday? Or do you say, no, the coin is 50/50, it's a fair coin, I think there's still a 50% chance that I was in the heads universe versus the tails universe, even though I only wake up once if I'm in the heads universe. Tough call. Smart people disagree about this. And even though it's just a fun philosophical thought experiment, it's super relevant for questions of modern physics and cosmology. So we might not get the answers here. Adam and I had to restrain ourselves a little bit because we both care about many of the same things and have written academic papers about many of the same things. So we geek out a little bit about these things we care about. Hopefully that's up your alley. If not, it's at least inspiration to go and read and think more about these things. So, let's go. Adam Elga, welcome to the Mindscape podcast.
0:06:04.6 Adam Elga: Thanks so much for the invitation.
0:06:06.4 SC: I know your work through papers about Boltzmann brains, self-locating uncertainty. These are topics that are near and dear to my heart, and we're certainly going to be talking about them. But you have a bigger scope in your philosophical career. How would you characterize sort of what your project is to the extent that one has a project?
0:06:27.2 AE: I characterize myself as addicted to rationality. It started at a young age, around my household. Sometimes when I was a kid, they called me Mr. Rational. And I've always been fascinated with probability, doing the rational thing, what's justified, optimizing. And I think that's the thread that runs through my philosophy. So I've thought about the direction of time, and in particular how actions bear on what happens in the future and the past, and what you should think about various temporal asymmetries. And I've also thought, with some co-authors, about Dutch book arguments and money pumps and game and decision theory.
0:07:10.8 SC: Mr. Rationality is not the worst nickname one could have. I mean, I get that there's a little bit of a dig, right? But it could be much worse.
0:07:17.3 AE: There was some mockery there.
0:07:19.4 SC: But now you've found your people. You're a professional philosopher and the whole rationality thing is probably simply a compliment.
0:07:26.7 AE: I'll take it.
0:07:28.7 SC: So let's talk about that just a little bit. I know that one of the papers that you've written is about how we should be rational in terms of talking to other people who have knowledge about things. You know, people we might think of as experts in something or maybe peers who are equal to us. It's very rare that we have an opinion, we meet someone who we think is just as smart as we are with a different opinion, and therefore we change our opinion. But what is a perfectly rational person supposed to do here?
0:08:00.1 AE: This is great. And given what we will talk about later in the day, we can plant a seed that's really going to connect. So there's a great applied philosophy question about how you should respond when someone who you antecedently considered smart, well-informed, and so on, comes to a different conclusion than you on the basis of similar evidence. On the one hand, there are the people who basically think that whoever's, in fact, right should pretty much stick to their guns, or that being right should count for something extra. So that's the stick-to-your-guns side of things. And on the other side, there are the people who think, well, given that one of us is wrong, I had no antecedent reason to think that I would be the one who was right, and certainly finding out that we disagree shouldn't be evidence that I was the one who was right. And so I should stick with those prior conditional assessments, and often significantly move in the direction of the person who came to the contrary conclusion.
0:09:15.9 SC: But people don't actually do that, do they?
0:09:19.7 AE: Sad to say. Although there's a kind of escape route here. There are different versions of the view, but the version that I like, and I've been influenced by David Christensen's writing on this, is the version that says you basically should defer to what your prior self would have thought. So here's a case. You come in, you encounter the big disagreement, and then imagine getting on the time-travel phone with your past self. And you ask your past self, hey, suppose this were to happen. Suppose you were to find out that you and this person were to disagree in such and such way. On the phone, we don't specify the full story about all the evidence and all the arguments, because then we'd just be reproducing the original problem. But you give enough, you know: they say something that strikes me as totally wacko. You give a certain kind of coarse characterization. And then we ask that past version of yourself, what would you think conditional on that? If that were to happen, how likely do you think it would be that you're the one who's right versus them? And my feeling is you should defer to that person.
0:10:29.6 AE: And the reason why this doesn't immediately amount to a total wishy-washiness in the face of everyone believing everything, and just giving up your entire worldview and becoming a kind of big averaging machine, is that in fact, in many cases, many of us are rather non-evenhanded in our answers to those questions. You think, what would you think if so-and-so thought blah, blah, blah? And often the answer is, wow, you know, even though in polite company I would say that person is smart, and I can't point to any encyclopedias that I've read that they haven't read, when I really am honest and ask, what would I say if this person disagreed? I would think, you know, I guess I think they're probably wrong. And of the versions of the equal weight view, which is what this side of the debate is sometimes called, the version that I like best is the one that defers to your past self in that way. And that's not quite as concessive as a more extreme version of the view, which says, really just always go 50/50.
0:11:34.7 SC: So, what is... I'm a little confused as to why my past self is useful here. If I tell my past self all the relevant new information I've gotten, isn't that just my present self?
0:11:45.3 AE: Yes. And that's why the relevant question you have to be asking your past self is something less than the fully specified original question and the full situation. One way to motivate the view is to think about a case of David Christensen's in which the equal weight view, or this kind of view, is very intuitive. It's called the "Split the Check" case. And the case is: you're out to dinner with your friends, the bill comes, people do the arithmetic on their own, and then they get different answers. Now you think, okay, how confident should I be that my answer was right, versus that their answer was right? Given this disagreement, it's very intuitive that the answer to that question should match the answer you'd have given if you'd asked yourself at the beginning of the meal: look, suppose we later split the check, and you get this answer and they get that one. How likely do you think it is that you'll be the one who's right? Those seem to align. And actually notice that that even handles a slightly more general case than the one we were talking about before, because it nicely handles the case in which, for example, I think I'm not so great at math.
0:12:56.0 AE: I think I'm just, like, you know, probably 90/10 that they're the one who's right. That seems intuitively like the thing to do when that scenario actually happens. And in order for that test to work, you can't be giving the full math question to your past self.
0:13:11.0 SC: Okay, good. You're just asking them about their basic judgment ahead of time. What is the likelihood that someone's going to be wrong? Is it going to be me?
0:13:20.0 AE: Yeah. And you want to tell them something about the circumstances. For example, there's a difference between "you get $20 and your friend gets $23" and "you get $20 and your friend says it's negative $18." So the relevant question you'd ask your past self in that second case would be: hey, what if I get an answer that seems kind of reasonable to me, and they get an answer that seems totally bonkers, off the wall, couldn't possibly be right? In that case, what do you think? And the answer to that question, I think, should rule the day when that situation actually happens.
0:13:55.6 SC: So, am I correct in thinking that this strategy is meant to kind of be a semi-practical implementation of the idea that we should have our credences in different propositions changed by the right amount when we meet other equally smart people with different credences?
0:14:16.3 AE: I love that characterization. I think my opponents would disagree that this view meets that standard, but I think that's the kind of thing everyone's going for.
0:14:25.5 SC: Okay. I mean, I'm trying to imagine in my head examples which certainly exist where people who I think are smarter than me and are experts in some area have a belief close to that area and yet I disagree with them. I'm trying to justify why I think it's okay to disagree with them. I mean, I guess I think that I can identify some blind spot that they have or something like that. But is that likely to be me just fooling myself?
0:14:53.8 AE: Well, I think there could be two things going on. And this exercise of looking back at what my past self would say, I think it's theoretically valuable, but I also think it's a really good practical mental exercise to do, because you can diagnose what's going on with your own later favoring of your own views, if that's what happens. So one thing is, it might be that it was kind of a polite fiction that you really treated the person as a peer in the first place. There are a lot of things that could be true of the person. You think this person is really smart, well-intentioned, blah, blah, blah. It could have been that all along you thought, yeah, but when it comes down to it, if we disagree, I already was going to think it's more likely that I would be the one who was right. That could be right. Another possibility is: no, you really thought they were just as good as you, and antecedently you thought, conditional on us disagreeing, it really is, I think, a coin toss about who would be right. But then when the time comes, maybe irrationally, if my view is right, you end up sticking with yourself.
0:16:06.6 AE: The thing that I really want to rule out is the idea that you antecedently thought, Oh, it's 50/50. But then when it actually happens, you think, you know what? I'm right.
0:16:14.9 SC: I'm right.
0:16:16.7 AE: And notice a really weird thing that could happen in this case. There could be this kind of bootstrapping confirmation that you're better than someone at stuff. Because then look what happens over the course of many disagreements. If you did that every time, you'd think, hey, well, I got that one right, I got this one right. I'm assuming that there's no independent confirmation, so there's just a series of disagreements; all you have to go on is what has happened so far. You will then check off and think, oh, wow, I got all these right. And you'll think, wow, I'm even better than I previously thought. That seems like things have gone off the rails.
0:16:53.0 SC: Yeah, that's a problem. I do find that, again, in ordinary human conversation among non-Mr.-and-Ms.-Rationals, people will sometimes have a criticism of the form, you always think you're right. And if that means the opinion that I have at any one moment is the one that I think is right, then yes, how could it be any other way? But of course, I think the legitimate criticism is that you're unwilling to ever admit that you could be wrong, right?
0:17:26.4 AE: Yes, exactly. And by the way, I can't resist citing a paper that Andy Egan and I wrote a while back. The title of the paper is "I Can't Believe I'm Stupid." And it's all about the limits on how much you can doubt your own opinions while still having them. But can I circle back to one thing? Because it was fortuitous that you asked about this topic. There's one more view that we haven't mentioned. I predict you will not like it, but I think we should put it on the table, because it'll be useful to have around when we talk about Boltzmann brains. It's called the level-splitting view. And it's something that I also have learned a lot from David Christensen about. And that's the view that allows for a position that we've been kind of implicitly assuming was off the table. The position is this. You should think: well, I'm still right, so stay confident that I'm right. But I also think that's irrational; the rational thing would be to be 50/50. That's why it's called level splitting: there's the second-order question about rationality, and then there's the first-order issue, like, will it rain tomorrow?
0:18:41.5 AE: Yep, I'm right. Even though the weather forecaster said no. But if someone asks you, well, what's the rational thing to believe? Oh, that? 50/50.
0:18:50.1 SC: I see. So it's having a confidence in an opinion, but also a feeling that my confidence might be misplaced.
0:18:57.8 AE: Yes. And so put that in your pocket for later in this podcast. That's the level splitting view. When we come to Boltzmann brains, there's going to be a potential way out related to that.
0:19:07.9 SC: All right, good. Well, let's work our way up toward that. And there's a lot of juicy philosophical groundwork to lay here that you and I have probably read about, but maybe not the audience. Am I right in thinking that a lot of this goes back to Derek Parfit's talk back in the day about teletransporters and self-locating uncertainty?
0:19:27.8 AE: It's definitely related to that stuff. I'm not confident on the origins of any of this, but I know people have thought about cases in which you're wondering who you are and when it is, certainly long before I came on the scene.
0:19:42.7 SC: Why don't you explain what that particular version is? Because I think people are familiar enough with Star Trek that they can get just an example of where self-locating uncertainty could arise in general.
0:19:54.3 AE: Great. And actually the Star Trek example will bring up a version of a kind of self-locating thing that people don't often talk about. I think this is like cutting-edge technology, so I want to get listeners on board with it too. So, the standard teletransporter case: you step into the teletransporter, that body is destroyed, the information is transmitted, let's say, to the Enterprise and also to another ship, the Potemkin. And duplicate bodies are created on each ship. You wake up and you look around, but the receiving bays of the Enterprise and the Potemkin look exactly alike. And so you're wondering, am I on the Enterprise or am I on the Potemkin? And this isn't a question that could be completely expressed just with objective third-person vocabulary. It couldn't be answered by, you know, specifying all the positions of all the particles in the world, specifying the full history of time from beginning to end. Still, some people think there's this residual question: where am I? Which one am I, of these two people in duplicate situations?
0:21:15.0 SC: That seems like a perfectly good question. I will, as a footnote, mention that I think regular people talk about teleportation and they talk about the transporter machine on Star Trek, but only philosophers talk about teletransportation. I think that...
0:21:31.5 AE: Okay, fair enough.
0:21:33.1 SC: Okay, but if there are two copies of me, and I'm just going to play the dumb podcast host now, it's 50/50 that I'm one or the other. What else could it possibly be?
0:21:45.7 AE: Sounds good to me. Let's add more receiving stations. So let's say there are many Potemkin receiving stations and only one Enterprise receiving station. Now, you can think about your situation. Again, we're using constraints on your expectations at an earlier time to guide your intuitions about what you should think at this later time. So here you are about to step into the transporter and you think, what do I expect? So you certainly expect to wake up in one of the identical-seeming receiving rooms. But let us say that on the Enterprise there's a wonderful pleasant experience waiting for you after a few minutes, but on the Potemkin a less pleasant experience. So there's just one Enterprise though, and there are a hundred Potemkins. And then as you step into the transporter, are you scared? Is your attitude more like your attitude when there's a 50/50 bad thing going to happen to you, or is it more like your attitude when a 99 out of 100 bad thing is going to happen to you? And it's sort of like this question about like, what do you expect when you open your eyes? Of course, we can't interpret that as what objective thing do we expect to happen in the world?
0:23:21.1 AE: Because when we're talking about those questions, there's only complete certainty. What you expect with 100% certitude to happen is: there will be a body created on the Enterprise, there will be 100 in the Potemkin receiving stations, and the good thing will happen to the Enterprise person, the bad thing will happen to the others. But that's not going to tell you the difference between whether you start sweating or not as you press the button.
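To make the arithmetic explicit: under the equal-credence view being described here, with one Enterprise copy and a hundred Potemkin copies (our reading of Elga's setup), indifference over the 101 subjectively identical copies gives

```latex
% Equal credence over the 101 subjectively identical copies:
P(\text{I am on the Enterprise}) = \tfrac{1}{101} \approx 1\%,
\qquad
P(\text{I am on a Potemkin}) = \tfrac{100}{101} \approx 99\%
```

so the rational attitude at the button is the one appropriate to a roughly 99-in-100 chance of the bad outcome.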
0:23:49.5 SC: Okay. And so we can actually take stances here. Do you think it is correct, under those conditions where there are 101 of you, 100 of them on Potemkins and one on the Enterprise, to assign equal credences to being any of them?
0:24:09.9 AE: I do, although I have to say I'm a lot less confident about it than I used to be. And a large part of the reason my confidence has gone down, aside from the great work criticizing that view that's happened over the years, is the problem of Boltzmann brains. And it's thrown my confidence in this whole business to hell. So I can't pretend to have the answers. If you force me to choose, I'm going to go with that view. But there are twists and turns, and I'm not that confident.
0:24:47.8 SC: I think that's perfectly fair because Boltzmann brains do shake one up. But let's table it because I think it's okay to first talk about the simpler cases and to get them right. But you're right that, you know, thinking about complicated cases can shake your confidence in the simple cases. What would be the counterargument for, you know, to the audience members who don't know? In my mind, there's people like David Albert and Emily Adlam who've criticized assigning equal credences. But their alternative is just, you can't do anything. You're stuck. There's no rational way to behave.
0:25:22.0 AE: That's rough. Speaking as Mr. Rational, I really like the idea of there being constraints. And I guess I'm not comfortable with just all actions being on a par. And when we add to the story that there's, for example, a door outside of the receiving room that on the Enterprise leads to a pizza party and on the Potemkin is just an airlock out into outer space, it's hard for me to give up the idea that there's some answer to the question about whether it's reasonable to open that airlock or not.
0:26:20.2 AE: We haven't had any chances in the story so far, nothing like, 'Well, here's the chance that this is what's going to happen.' We're going to add in some chances, and that adds something to the mix. But I just want to anticipate that one of the things that makes me uncomfortable about the family of views that's of a piece with the one I just avowed is that it seems to lead to a certain sort of presumptuousness. And I hope we get today to what presumptuousness is and how we dodge it.
0:26:51.2 SC: Okay, good. We, yeah, I think we should be able to get there, but I'm just trying to, you know, let the audience in on the idea that whether or not it's completely accepted in the community, one can presumably offer up justifications for saying that we should give equal credence to every instance of us that is created in the transporter machine. Right? It's not just like, 'Well, it feels right.' Like we can be slightly more sophisticated than that. There are theorems one can prove under certain assumptions one can specify.
0:27:26.0 AE: Sure. And one of the things that is nice about the case we've been talking about so far is we really are talking about different copies that are living in the same possible world.
0:27:38.9 SC: Makes it a lot easier, yes.
0:27:40.8 AE: Right. We can idealize the case and suppose that everyone's sure about exactly what's going to happen from the beginning to the end of time. So the only uncertainty left is: which one am I? Now, I've called one of the ships the Potemkin and one the Enterprise, but it's hard to see how that could make a difference. And it's not as though, for example, the people who are created on the Potemkin, the bodies that are created there, are any more misled, or in any kind of weirder, more self-undermining situation, than the people on the Enterprise. That, by the way, flags a keyword for us to think about later in the podcast: self-undermining. But I think there's a very commonsense thought here. It's the thought that, to use an example of Bob Stalnaker's, when you wake up in the middle of the night, groggily, before you look at your clock, you just think, 'What time is it?' And maybe you're confident that you wake up at 3:00 a.m. and at 4:00 a.m. most nights, but you forget each waking. It just seems like it should be 50/50. What do you have that tells between those two hypotheses?
0:28:55.5 AE: It's not even, as one of my colleagues emphasized to me, as though they're two different stories of the world that could, for example, differ in how simple they are, so that that could favor one or tilt the deck. No, it's just: where am I within this one story? And I do confess an inclination to think: split it 50/50.
0:29:20.9 SC: Well, you did mention that one of the assumptions we're making here in the simple story, one of the premises, is that there are two or 100 copies of you in the same world. One could imagine, though, that the same kind of reasoning works for different worlds. The example I like to use is: imagine one cosmology where the dark matter particle is an axion, and another cosmology where the dark matter particle is a weakly interacting massive particle. But as far as every experiment we've ever done and every observation we've ever made right now, the universe looks the same in those two theories. Isn't that a kind of self-locating uncertainty between possible worlds, rather than locations in the real world?
0:30:10.6 AE: Just as a matter of terminology, the way the terms are used, no, because self-locating uncertainty is supposed to be the sort of uncertainty between scenarios within a world. But the content of your more general point is: 'Look, doesn't the same kind of intuition push us towards a more general way of treating possibilities evenhandedly, even if the possibilities involve different worlds?' And that principle has some adherents. It's the famous, or infamous, principle of indifference. And there's a whole battle, a separate battle I think, to be waged about whether that kind of principle is true. That principle is generally thought to be stronger and a bit more tendentious than its analog that applies only between self-locating hypotheses within the same world.
0:31:14.0 SC: I guess all that is perfectly fair. And I certainly wouldn't want to think that one must assign some symmetry to these two cases of the different cosmologies and therefore give them equal credences. I guess all I'm trying to get at is the idea that we need to have some credences in these situations, just as a matter of practical rationality. Right? Some of the pushback I've gotten to the notion that we need to assign these credences is just, 'No, I don't.' 'Like, what if I just don't have an attitude, just don't have an opinion about it?' And I want to say, 'Well, but to get through life, you kind of implicitly do have opinions about all sorts of uncertainties, and this is just one of them.'
0:31:55.8 AE: Just as a footnote, this idea of 'I don't want to have a particular probability about something,' I'm really interested in that. And I've tried to argue against it along the lines of, 'Hey, if you think there's this special attitude of suspending judgment, or my probability is not 0.3 but is rather best represented by an interval from 0.2 to 0.7 or something like that,' I'm interested in pressing people who have that attitude on, 'Well, what does that attitude say about what you ought to do, if anything?' And I guess I agree with you that it's not so comfortable to just say, 'Well, you know, just be silent about it.' But that said, there is a worry lurking here, and it's the thing that caused me to be cautious before jumping onto your case of, 'Well, you should just go 50/50 between those two scientific hypotheses.' And that is exactly because, as you said, we have to have some prior degrees of belief in those various hypotheses; if we're to end up with some state of mind that could justify our actions, there has to be some principle that governs those priors.
0:33:19.3 AE: The reason I was cautious is I was thinking, 'I want to watch out,' because in some of those cases, the priors that I think are reasonable are highly non-even. And I'm thinking of cases of theories that are very complicated or ad hoc.
0:33:38.7 SC: Yeah, no, I'm 100% on board. I guess I didn't explain my example well enough because I didn't want to use that as a case of, 'we should give equal credences.' I'm just invested in the idea that we should have credences.
0:33:51.6 AE: Oh, oh yeah. Oh yeah, definitely. I'm with you on that. I thought you were saying it was 50/50.
0:33:56.4 SC: Yeah. No, no, I didn't want to argue that it was 50/50 in that case. I certainly don't think it's 50/50 in that case because, like you say, there might be different theoretical virtues that the different theories have. They're not really symmetric with each other. But...
0:34:10.9 AE: Absolutely.
0:34:11.3 SC: But there is some uncertainty.
0:34:14.1 AE: One of the reasons that I was excited about this conversation is I thought I'd get a chance to ask you this question, which is right in the vicinity of what we were talking about. And this is in some ways following on some stuff you said in your fine-tuning podcast episode. So philosophers really find it easy to fall into representing scientific reasoning as Bayesian reasoning. We either represent or reconstruct scientific theory choice as starting with some priors over the theories, maybe not even-handed, if you think some of the theories are intrinsically better than others. And then you get some evidence, and then you update based on that evidence, and that's your new degrees of belief. And what I noticed about the fine-tuning things that you said, and this is something many people have said, is that the criteria that scientists seem to be applying in those cases are much more specific and fine-grained than what I just described. For example, you didn't just say, oh, there's a constant of nature, it could have been anything, and this theory says it has to be in this narrow range.
0:35:33.9 AE: That's unlikely. No, you had some very specific things about, 'Okay, well, if there's a theory that says that this certain quantity is really close to zero, but not exactly zero, that's especially bad.' And that's something that you just couldn't derive out of plain vanilla neutral probability stuff. It's like there are real scientific standards in there. And it made me wonder: how do I square those two things?
0:35:58.4 SC: That's a great question. I have thought about it, and I do not have the theory of everything that is a comprehensive answer there. But I do have this feeling that what we're trying to do as scientists in these cases is attach credences within the space of theories we haven't thought of yet. And that's especially hard to do. But I was trying to use the example in that solo podcast of, like, what if we had measured that the mass of the muon was exactly pi times the mass of the electron? Right?
0:36:30.0 AE: Yeah.
0:36:30.5 SC: And okay, yeah, that's a real number. It could have been a different real number. Should we notice anything about that? And I want to make the case that yes, because that increases our feeling that there's some reason why that's true that we haven't yet thought of. So I think it is a matter of scientific practice that we think that certain possibilities are suggestive of certain future truths we haven't yet discovered, and it's okay to take that into consideration when we're judging things to be finely tuned or natural or otherwise.
0:37:04.0 AE: That makes sense. And the lesson I take from that is that the relevant prior that we're applying there is not the kind of thing that, for example, I think I have confident access to. It seems like the sort of thing that one needs a sensible education in physics to be able to cultivate. That's just an interesting thing. It's not an objection. It's something I'm taking as a lesson.
0:37:33.2 SC: Well, yeah, and even, I mean, the pi example is especially contrived. But the cosmological constant, the vacuum energy, which we know to be small but not zero, or we think it is small but not zero with respect to what we would have thought was the natural range. Again, you know, that's a number. Why should we be more surprised at that number than any other? But somehow in our brains as scientists, we're doing an implicit coarse-graining over the possibilities and not assigning them uniform probability. And that's because we're being scientists. I think it's okay.
0:38:05.6 AE: Exactly, exactly. And feel free to bring this back in if we get derailed too much. But I did want to also point out that the plain vanilla Bayesian approach, the thing you'd sort of first do that's kind of generic, would seem to favor theories with smaller numbers of parameters too much, more than we actually do in practice. Because if you don't have something else going on, then a theory with 10 more parameters, that's just a huge space of possibilities. And unless you have very biased priors to counteract that, those theories are just going to be ruled out from the start. And it just seems like we don't rule out theories with just one or two extra parameters in so extreme a way. So there has to be some other thing going on.
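A sketch of the effect Elga is pointing at, the standard "Bayesian Occam's razor" heuristic (not his exact formulation): a theory's likelihood for the data is averaged over its whole parameter space, so each extra free parameter dilutes the prior mass available to the data-fitting region.

```latex
% Marginal likelihood of data D under theory T with parameters \theta:
P(D \mid T) = \int P(D \mid \theta, T)\, P(\theta \mid T)\, d\theta
% If the data pin each parameter to a region of width \delta, while the
% prior spreads each parameter over a width \Delta, then roughly
P(D \mid T) \approx P(D \mid \hat{\theta}, T)\left(\frac{\delta}{\Delta}\right)^{k}
% with \hat{\theta} the best-fit parameters and k the number of
% parameters: each extra parameter costs a factor of about \delta/\Delta.
```

Taken at face value, that exponential penalty is harsher than actual scientific practice, which is exactly the puzzle Elga raises.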
0:38:55.0 SC: Yeah, I think that's perfectly fair too. And now this is past my pay grade because I haven't thought about this very deeply. But I guess what I'm imagining is scientists are kind of fallibilists at heart. They think that the theories we have right now are not the final ones. We'll do better. Some of our experiments aren't perfect, they'll change or whatever. So if we fit data perfectly, but only by having so many parameters, we're like the humble people we started talking about, humble rational agents who say, 'It's too good to be true. Either some of these measurements are going to change or we'll invent a better theory.' And so let's not get too excited about perfect agreement between a crazy, jury-rigged theory and the experiments we happen to have today.
0:39:48.9 AE: I like it.
0:39:49.8 SC: But again, that's just off the cuff, and it's certainly not a very systematic theory of it. I would love to have that. But okay, good. Back to crazy thought experiments. We are going to get to Boltzmann brains and many worlds and things like that very quickly, but I've got to give you, of all people, a chance to explain the Sleeping Beauty problem to us. I don't think it was your idea, but you certainly have been a pioneer in pushing a particular theory of it. So let's not assume the audience knows what the thought experiment is. Tell them what it is.
0:40:19.6 AE: No problem. I love the problem. I think it comes from Arnold Zuboff, who later published his ideas on related stuff in a very interesting, very wild paper called 'One Self', two words, which argues that there is only one conscious being.
0:40:40.9 SC: Us.
0:40:41.0 AE: In the universe. Yes, yes. Very roughly speaking, because otherwise it would be so unlikely that you exist. It's worth looking at. It's a wild paper. But Arnold Zuboff, as far as I know, gets the credit for inventing this type of problem. And it also independently came up in the game theory and decision theory literature. I learned of it from Robert Stalnaker. And the problem is this. Beauty is put to sleep at the beginning of the experiment on Sunday. And then a fair coin toss is going to determine whether Beauty will just be woken up on Monday night, or alternatively, briefly woken up on Monday night, then put back to sleep, and woken up again on Tuesday night. All the wakings will feel just the same; in particular, if there are two awakenings, after the first one Beauty will be made to forget about it. So in all cases, Beauty will have the sensation of waking up and thinking, 'This feels like the first waking. I don't have an apparent memory of another one.' Is it that the coin landed heads and it's Monday? Or that the coin landed tails and it's Monday, or tails and it's Tuesday? The mnemonic is: tails is for the two-waking scenario.
0:41:59.5 SC: Okay, good.
0:42:02.1 AE: And I have been persuaded, or at least tentatively persuaded, by an argument that says we want to set things up so that, if the coin toss happens after the Monday waking, we are consistent with a very tempting view about what happens once Beauty finds out that it's Monday. If, for example, a few minutes after waking up, they are told, 'Hey, it's Monday, and we're about to toss this fair coin,' and the coin toss is going to determine whether there will be one more waking or not, it seems hard to deny that Beauty should be 50/50 about how this fair coin will land.
0:42:58.7 SC: So sorry, so just to be clear, that's an altered version of the experiment where the coin is flipped after she's awakened on Monday.
0:43:06.3 AE: Exactly. And now let's work backwards from that, what I think of as a very hard to deny claim about this variant case. First of all, we can say in two steps: things wouldn't have been different, the analysis shouldn't have been different, if the coin had in fact been tossed earlier. So, for example, suppose the coin had been tossed and the outcome was sealed in a box, no one's seen it, and they just carry that box along and say, 'We are now going to open this box for the first time.' It seems tempting to think that the verdicts in that case should match the verdicts in the case where the coin toss really is later. That's why people don't freak out about exactly when lottery drawings happen and so on. We sort of think that, to the extent that no one's cheating or peeking, it's all going to be the same. And if we have that, then we can work backwards from the verdict in that case: 50/50 that the coin will land heads, once I'm sure that it's Monday, or conditional on it being Monday.
0:44:20.4 AE: We use an assumption about how your beliefs should change upon hearing the news that it's Monday to get a verdict about what you should have believed before you heard that news. And the crucial assumption that's needed is that the news doesn't change, or shouldn't change, the ratio of your probability in the scenario 'heads and Monday' and the scenario 'tails and Monday'. Those are two scenarios, both compatible with the news. And the idea is, when you get the news that narrows it down to just those two, you don't monkey with the ratio between them. If you have that assumption linking your beliefs before learning it's Monday to your beliefs after learning it's Monday, you end up forced to the conclusion that in the case where you don't learn that it's Monday, you should be two-thirds confident in tails and one-third in heads.
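A compact sketch of the derivation just described, writing H1, T1, T2 for 'heads and it's Monday,' 'tails and it's Monday,' and 'tails and it's Tuesday.' Step (ii) is the within-one-world indifference discussed earlier in the conversation, which this exchange leaves implicit:

```latex
% (i) Fair coin, assessed once you know it's Monday, plus the
%     ratio-preservation assumption:
P(H_1 \mid \text{Monday}) = \tfrac{1}{2}
\;\Longrightarrow\; P(H_1) = P(T_1)
% (ii) The two tails awakenings are subjectively indistinguishable,
%      so indifference within the tails world gives:
P(T_1) = P(T_2)
% Normalization, P(H_1) + P(T_1) + P(T_2) = 1, then forces:
P(H_1) = P(T_1) = P(T_2) = \tfrac{1}{3},
\qquad P(\text{tails}) = \tfrac{2}{3}
```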
0:45:27.4 SC: Right. And this is..
0:45:29.0 AE: That's being a thirder.
0:45:30.6 SC: Good, a thirder, right. And the other option is being a halfer.
0:45:35.6 AE: Correct. Well, I shouldn't say the other option, because there are two other options. In the story where you learn that it's Monday, there's the question of what you should believe about the coin before learning it's Monday, and the question of what you should believe about the coin after learning it's Monday. And there's this other view, a little bit less well-known, but I think also worth thinking about: the so-called half-halfers, who, in the stage of the experiment before learning that it's Monday, think it's 50/50 about the coin, and then after learning that it's Monday, think it's still 50/50. In other words, those people deny the premise I just mentioned about keeping the ratios the same when you learn that piece of news.
0:46:27.8 SC: So the Sleeping Beauty problem is a case of self-locating uncertainty. But it's a little bit trickier, right? Rather than just two copies, there are three instances: 'heads Monday,' 'tails Monday,' 'tails Tuesday.' And the thirder position will be to assign them equal credences.
0:46:49.7 AE: Exactly. Although that's somewhat of an accident of the way the fair coin and the other assumptions combine. The crucial thing happening in the thirder position is really that the world that involves more awakenings gets a kind of boost, associated with the fact that there are more instances of that state of mind being instantiated. And I think that's going to be crucial when it comes to thinking about Boltzmann brains: do those worlds deserve a boost? I want to add one thing about Crazy Town, and this is in a way echoing something you said about the multiverse, because I think it's really true about self-locating belief too. There are a lot of fanciful examples in this territory, and it's tempting to think, 'Ah, you shouldn't be called rational man, you should be called crazy example man, or weird self-locating duplicate man.' And although I'll fess up to my rationality fetish, I actually don't have a weirdo example fetish, or a fetish for strange or bizarre views and so on. Nothing wrong with that; I think there are great philosophers who are into that. But I'm actually very cautious and conservative about these things. And I hate the idea of being forced into bizarro views by self-locating beliefs.
0:48:17.6 AE: And it's really that I feel that it's forced upon us. And to the extent that someone could give me the common sense way out, the way out such that at the end of it, someone can say, 'You know what? Turns out all those philosophers, they were wasting their time talking about self-locating belief. It's just plain vanilla, whatever.' And we can stop talking about all that and just go on with our lives. I would be a happy man.
0:48:41.9 SC: I've certainly heard...
0:48:43.4 AE: The worry is you can't have that.
0:48:45.3 SC: I've certainly heard people say exactly those words, but then when they explain why they think it's true, it's very unconvincing. So I don't know if we're ever going to get there.
0:48:52.6 AE: It's hard. I mean, you can get away with it if you just say, 'Look, here's my solution,' and then offer empty silence, meaning a theory that just doesn't say anything about the case. All right, you could do that. But what I want is an account that gives us the kind of rationality verdicts that we ordinarily thought we were going to have. You know, the scientists: is this particle accelerator worth $600 billion to build or not? That's a question that should have an answer.
0:49:20.9 SC: So the Sleeping Beauty thought experiment, philosophers love talking about it, but it clearly is closely analogous to things that physicists love talking about, both the anthropic multiverse and the many worlds of quantum mechanics. So let's try to draw those out more explicitly. As a thirder, as someone who gives more credence to the coin landing tails and leading to two awakenings, does that mean that if I'm doing the anthropic principle, I should give more credence to universes that have lots of observers in them, because I could be any one of those observers?
0:50:04.9 AE: For consistency, I am forced to answer yes, though I don't like it. That's the honest truth. Yeah. And this is an instance of... I mean, we can't not talk about presumptuousness at this point.
0:50:16.6 SC: Let's do it.
0:50:19.4 AE: To use a phrase from one of your papers, 'Let us don the robes of the presumptuous philosopher.' The cosmologists come in with two theories. One theory, A, says there's just an ordinary universe, just one instantiation of someone who's having experiences like yours. The other theory, B: there are many instantiations. Someone who is attracted to the thirder type view, or equivalently, to the view according to which possible worlds that involve many instantiations of your experience, your state of mind, get a kind of boost, that kind of person is committed to saying that theory B gets a big boost. And there they are, sitting in their philosopher's chair, and how presumptuous. I say it in a jokey way, but I take this as a really serious, weighty criticism. It seems like a disaster to have to let the cosmological determination depend on that factor so much.
0:51:37.2 SC: I will say, as a matter of, I don't know, marketing or whatever you want to call it: the other view on the multiverse, or the cosmological theory choice question, where you say, 'I don't give a boost to theories with lots of observers, I just have a prior, I'm a good Bayesian, and then within each universe I'll say what the chance is that I'm any particular observer,' that leads to all sorts of presumptuous-sounding conclusions as well. But those guys sort of labeled the other side, the thirder side, as presumptuous first, so they get to call them that. I think there's a lot of presumptuousness going around.
0:52:21.1 AE: There's plenty of presumptuousness to go around. Just as a matter of terminology, and to tie things together a little bit, I want to label some of the things we've been talking about using the terms that are customary in this literature. I agree with you, actually, that they are not very descriptive, but just in case someone's looking this stuff up: the thing we've been calling the thirder position, or the 'many duplicates get a boost' position, is called SIA, the self-indication assumption, in the literature. I think the terminology goes back to Bostrom. And the view that I believe you were just gesturing at is sometimes called SSA, the self-sampling assumption. And I would like to add another tag to it. So if the first view is 'possibilities with many copies of me, or of people like me, get a boost,' think of this other view as 'possibilities in which most people are like me get a boost.' In other words, we can think of it as giving a boost to possibilities in which a high fraction of observers have experiences that are similar to mine. Now, as many people have pointed out, that second view requires us to answer the question: what does it mean for an observer to be sufficiently like you?
0:53:47.3 AE: That's a kind of free parameter within the theory, and you can fix it in various different ways. But I think for the big picture, the best way to think about it is: is it that many get a boost, or that most get a boost? Is it about absolute numbers, or is it about frequencies? Those are two of the main views in this area. There's a third main view which we haven't talked about yet, but I just want to mention that there's another view around too.
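Schematically, and as a standard gloss on the two assumptions rather than Elga's exact words: write P(w) for the prior on a world w, N_E(w) for the number of observers in w whose evidence matches yours, and N(w) for the total number of observers in w in the chosen reference class. Then:

```latex
% SIA ("many get a boost"): weight worlds by the absolute number of
% observers with your evidence
P_{\mathrm{SIA}}(w \mid E) \propto P(w)\, N_E(w)
% SSA ("most get a boost"): weight worlds by the fraction of observers,
% within the reference class, who have your evidence
P_{\mathrm{SSA}}(w \mid E) \propto P(w)\, \frac{N_E(w)}{N(w)}
```

The reference class that fixes N(w) is the free parameter Elga just mentioned.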
0:54:13.7 SC: What is the label of the third view?
0:54:15.7 AE: The third view is compartmentalized conditionalizing, sometimes called CC. I learned about it from the work of Chris Meacham. And it's the view that generalizes the half-halfer view in the Sleeping Beauty problem. So it's the one that says, well, before you learn it's Monday, be 50/50 on the coin, and after you learn it's Monday, be 50/50 on the coin. And the way to visualize that kind of view is that it's like a version of Bayesian updating, but if you imagine probability getting pushed around by the updating, it's like you're imposing a firewall between worlds: when probability gets pushed around, it can't cross world boundaries. So for the thirder, the contrary view in the Sleeping Beauty problem, you have these three possibilities: Heads Monday, Tails Monday, Tails Tuesday. When you eliminate Tails Tuesday, that probability from Tails Tuesday gets split up between the Heads Monday and Tails Monday possibilities in proportion with what was originally there. This other view says, well, that probability was already in the two-waking world, and so it can't cross the boundary to the other possible world. All of it has to go over to Tails Monday.
0:55:41.9 AE: And that's how you get 50/50.
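A worked version of the contrast, over the three possibilities (Heads Monday, Tails Monday, Tails Tuesday). The even within-world split in the CC starting point is an illustrative assumption; the conversation doesn't fix it:

```latex
% Thirder: start at (1/3, 1/3, 1/3). On learning "it's Monday," Tails
% Tuesday's share is redistributed in proportion to what remains:
(\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})
\;\to\; (\tfrac{1}{2}, \tfrac{1}{2}, 0)
% so P(heads) moves from 1/3 to 1/2 upon learning it's Monday.
% CC / half-halfer: each world keeps its total probability of 1/2; the
% tails world's half is split between its two awakenings, say evenly.
% Tails Tuesday's quarter cannot cross the world boundary, so it all
% flows to Tails Monday:
(\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{4})
\;\to\; (\tfrac{1}{2}, \tfrac{1}{2}, 0)
% so P(heads) is 1/2 both before and after learning it's Monday.
```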
0:55:44.2 SC: All right, good. I'm glad that they all have names. The names could be snappier, I've got to say. But let me put forward, at the risk of derailing, that I don't believe any of these approaches. I'll tell you what...
0:55:54.6 AE: Go, go, go.
0:55:55.4 SC: I believe. I don't think it lines up with anything you've said so far. It's closest to what Radford Neal calls fully non-indexical conditioning. Do you know about that?
0:56:05.6 AE: Ah, yes, I do, but I haven't thought deeply about it. I like that paper.
0:56:09.0 SC: So what I would say, to make it as short as possible, is: I don't care how many observers there are in the universe. I share your worry that it's kind of presumptuous to say I've proven a theory right without leaving my armchair, just because there's a lot of observers in it. But I do think it's fair to judge theories by the probability that, in that theory, there would arise at least one observer exactly like me. And of course, that naturally will give a boost to stochastic theories that are big and have many, many observers, because then the total probability that one of them will be like me goes up. But it doesn't keep giving an unbounded boost as there are more and more observers just like me, because all I care about is that there's at least one.
0:56:53.6 AE: I love it. I haven't thought about that view. Can we talk about a case? Now, here's an instance where I'm feeling the limitations of the audio medium, because I really would want to draw this. But let's try to do this case, because I think it's a nice one to display what that view says, and I'm not sure how it goes. Here's the case. Two cosmological theories. One of them just says it's 50/50 whether you see red or green. So: coin toss, 50/50, red room or green room. That's theory A. And the second theory, theory B, says that for sure one person wakes up in a red room and, I don't know, 100 in a green room. Could you talk through a little bit how that goes? You wake up and you see red or green. On your way of thinking about it, what verdicts do you get in that case?
0:57:59.0 SC: Yeah, I think that if the second theory has a 100% chance that there is someone who sees red and someone who sees green, while the first theory only has one person, with a 50/50 chance that they see either red or green, then no matter what I see, I'm going to say that the data increase my credence in theory B, because there's a 100% chance that someone like me exists in theory B, and only a 50% chance that a person like me exists in theory A.
0:58:28.5 AE: Interesting. Okay, so there is a boost to the second of the two theories, it's the same boost whether you see red or green, and that's it. So we have to think about whether that's presumptuous or not. Doesn't seem too bad. I have to say, at first glance, it does seem to avoid the running-off-the-rails effect of being able to get arbitrarily extreme verdicts just by cranking up the number of duplicates or the frequency. It's interesting to think about what priors one would have to have over all the possibilities in advance in order to deliver that verdict by conditionalizing. I haven't thought that through. That'd be interesting...
0:59:23.5 SC: To be super clear, a student here at Hopkins, Isaac Wilkins, and I are trying to write a paper about this. So we haven't thought it all the way through, but I'm saying it out loud here in public as motivation for us to get the paper written.
0:59:36.9 AE: That's great. That's great. Well, I hope you'll think about this case too.
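For concreteness, here is a sketch of the update Sean describes, under his 'at least one observer exactly like me' rule (with whatever priors P(A) and P(B) you start from; nothing here assumes they are equal):

```latex
% E = "at least one observer with exactly my evidence exists"
% Theory A: one person, fair coin for red room vs. green room:
P(E \mid A) = \tfrac{1}{2} \quad \text{(whichever color I in fact see)}
% Theory B: for sure one person in a red room and 100 in a green room:
P(E \mid B) = 1 \quad \text{(again, for either color)}
% The posterior odds therefore shift toward B by the same factor of 2
% whichever color is observed:
\frac{P(B \mid E)}{P(A \mid E)}
= \frac{P(E \mid B)}{P(E \mid A)} \cdot \frac{P(B)}{P(A)}
= 2 \cdot \frac{P(B)}{P(A)}
```

Note that the boost is capped at this likelihood ratio and does not grow with the 100-to-1 headcount, which is the 'no arbitrarily extreme verdicts' feature Elga flags above.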
0:59:40.2 SC: Yeah, yeah, no, that's a very, very good one. Okay, but now, all right, we've eaten our vegetables, we've laid the groundwork, and now we can talk about Boltzmann brains. Do you want to tell the audience member who's never heard of a Boltzmann brain what that weird phrase refers to?
0:59:56.3 AE: I'll give it a shot. If the universe is really long-lasting, as it is according to some theories, there's enough time for various fluctuations to happen, including fluctuations that are conscious. And with enough independent variation instantiated for a long enough time, and enough times over, it'll turn out that, of the observers who, for all your evidence goes, might be you, the vast majority are Boltzmann brains, meaning piles of gook that just formed out of nothing, out of pure random chance. If that's true, and if it's true that when you're looking at a possibility in which there are lots of observers who, for all your evidence goes, might be you, you should be somewhat even-handed in thinking about which one of those you are... think back to the alarm clock case. I invite you to think of this case on the model of an alarm clock case where there are just lots of awakenings, and some of them are just fluctuation wakings. You should think your probabilities about which one you are should be roughly equally apportioned. In which case, since the vast majority of those awakenings are Boltzmann brain awakenings, I should be really confident that I'm a Boltzmann brain.
1:01:38.1 AE: That's the first pass at how we seem to get into trouble.
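[To put toy numbers on that first pass — the counts below are entirely made up; only the ratio matters:]

```python
# A toy version of the even-handed apportionment Adam describes.
# Made-up counts: suppose the long-lived universe contains 1 "ordinary"
# observer whose evidence matches yours and 10**20 Boltzmann-brain
# observers whose evidence also matches yours.
n_ordinary = 1
n_boltzmann = 10**20

# Apportion credence roughly equally over all evidence-matching observers.
p_boltzmann = n_boltzmann / (n_boltzmann + n_ordinary)
print(p_boltzmann)  # ~1.0: you'd be nearly certain you're a Boltzmann brain
```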
1:01:44.1 SC: And, tell me what you think about this. Here's a case where my fellow cosmologists let me down a little bit, because many of them just think, well, that's silly, therefore I'm not going to think about it. Okay, that's an attitude. But there's another attitude that says, no, I take the Boltzmann brain problem very, very seriously. And their attitude is the following: if I lived in one of these eternal universes with random fluctuations, then I would be a Boltzmann brain. But I look around and I see I am not, therefore I do not live in such a cosmology. I think that's not valid. But tell me what you think.
1:02:24.3 AE: Well, there is a very serious tradition in contemporary epistemology, indeed I would say maybe the dominant tradition, according to which a certain amount of what's called externalism is true about the relationship between experience and evidence, about what your evidence is. A distinctive externalist view would be that two creatures could be physical duplicates at a particular time, with brain firings that exactly match and maybe light going into their optic nerves in exactly the same way, and yet one of them has much, much stronger evidence than the other. In contemporary work this is often associated with Tim Williamson. And if you have that kind of picture, then our evidence is much stronger than just the proposition that I seem to be having a particular sort of experience and have certain apparent memories. Maybe your evidence is: there's an apple in front of me, there's a desk in front of me. Maybe it's: there's a star in the sky. Maybe it's: I remember, and don't just seem to remember, what I had for breakfast this morning. To the extent that one of those views is true, our evidence is stronger.
1:03:57.5 AE: Whether this is a way out of the Boltzmann brain problem depends on how bad you think it is to be uncertain as to which Boltzmann bubble you are embedded in. I think it would take a very extreme kind of externalism to hold that it's part of my evidence that, for example, I'm part of the first Big Bang, as opposed to some other Boltzmann-bubbly, Big Bang-type event. And someone could have different attitudes toward that. You could think: that's okay, I just want to get rid of the scenario in which I'm a transient brain formed in space and I'm wrong about everything. If you get me that I'm right about the observable universe, the history, all the stuff in the history books and so on, and you tell me, oh yeah, but you could be in this bubble or that bubble, maybe that person says, all right, I can live with that. Like, really? What makes you think you would be able to tell the difference in those two cases? And I have some sympathy with that. I myself have always been somewhat suspicious of the extreme externalist views, and so, as a result, I am unfortunately pinched harder by the Boltzmann brain problem.
1:05:22.6 SC: I'm trying to map the externalist point of view you just put forward onto this. I think I was, again, insufficiently clear in talking about the attitudes of my fellow cosmologists. They all say I'm not a Boltzmann brain, but the conclusion they draw from that, I'm realizing as I think more carefully, is a little bit different. Someone like Rafael Bousso, former guest on the podcast, would say: if I lived in this universe with eternal fluctuations, I should be literally floating out there in empty space as a minimal conscious creature. But the data tell me I am not. Therefore, that cosmological scenario is ruled out, and I need to find a cosmological scenario without such fluctuations.
1:06:12.3 AE: Is the data the fancy cosmology data, or is the data like, I look at my hand or I look out my window and I see that I'm not floating in space?
1:06:16.9 SC: The latter.
1:06:17.0 AE: I see. That does feel a little bit like externalism, because the natural internalist reply, the natural counterpoint to the externalist kind of view, is that you can't tell the difference between the scenario in which you're a floating brain with experiences that exactly match having a window in front of you and the scenario of really having a window in front of you.
1:06:45.8 SC: Yeah, but I want to say you're giving them too much credit for sophistication. They're not saying, I have data about an apple in front of me that I don't believe. It stems from this Nick Bostrom-like attitude that we should think of ourselves as typical observers in the universe. And they say a typical observer in the universe doesn't see a desk or an apple in front of them. They look around and see empty space, because they just fluctuated randomly into existence. I don't see that. Therefore, I rule out the whole cosmology.
1:07:25.6 AE: I see. Okay, so I was totally misinterpreting what you said. What I said stands as a possible way out, and I think some people do have that dispute. But just to clarify, this view is totally different. It seems somewhat akin to the favor-high-frequencies picture, once we add in the proviso that brains looking out and seeming to see empty space in front of them, or just having some weirdo experiences, are part of my reference class.
1:08:08.7 SC: Right. Well...
1:08:10.0 AE: If that. Oh, go ahead.
1:08:11.9 SC: Yeah, I mean, I don't even think we need a stance on favoring lots of observers or high frequencies or not. I think this is really a typical physicist's "I don't want to think too hard about these tricky questions" kind of attitude: you have a theory, your theory makes predictions, the predictions came out wrong, your theory is wrong. My response would be that, since I think of myself not as a typical observer but as a person with certain data, at least apparently, I do agree with their conclusion: I want to exclude cosmologies that are dominated by Boltzmann fluctuations. But I think they're getting there for the wrong reason. And as often happens, nobody cares when you make that move where you get the same answer they did for a slightly different reason. That's not going to change their worldview very much.
1:09:09.2 AE: I got it.
1:09:10.2 SC: Okay, but you have a slightly different take. You just came out with a new paper on Boltzmann brains. Do you want to explain what you're thinking about?
1:09:17.6 AE: Absolutely. Unfortunately, that paper doesn't offer a way out. I hope it offers some improved understanding of what range of permissible wiggle room there is. But here's what's going on with that self-undermining stuff that we mentioned before. Let me start with a story about memory, from a paper I wrote with Andy Egan. The story is: you wake up one morning and you seem to remember going to the doctor yesterday, the memory doctor. And as you recall, the doctor says, I'm sorry, but the test results came back, and it's bad news. You have a tendency to hallucinate memories. Indeed, you're liable to hallucinate all sorts of doctor's reports about your memory when nothing of the kind happened at all. And you think, oh my God, what do I think? At first you think: okay, I have a horrible memory, I'm just going to do my memory training or whatever. Then you think: wait, if my memory is so bad and I hallucinate all sorts of doctor's reports, why do I even trust that memory, that seeming memory?
1:10:39.3 AE: And then I think: okay, my memory's fine. But then you think: wait, if my memory's fine, I should trust that memory. There's this potential instability. And it's analogous to a kind of instability that several people, including you, have pointed out happens in the case of Boltzmann brains, for those of us who get pulled along by the argument that seems to show you should be confident that you're a Boltzmann brain. I think the second part of the instability gets less press than the first part, but really, if you're worried about one thing, you should be worried about bopping around in both directions. So in the Boltzmann case, you think: okay, everyone's a Boltzmann brain, so I'm probably a Boltzmann brain. But then you think: wait, Boltzmann brains should not trust their apparent memories. Boltzmann brains never went to school. They never read a textbook. They have no reason to think that anyone has ever looked through a telescope or that any human being has ever existed. They're just random blobs. It's the equivalent of finding out that the encyclopedia you have been basing all of your life upon was typed by monkeys.
1:12:16.7 AE: Now, what do they think? They think: okay, so I'm not a Boltzmann brain, all that stuff is wrong. But then I'm fine, I'm just a human. But then the original argument comes back. It seems like there's something weird going on, some kind of instability. And the question is: what exactly do we make of that? And does it lead to a way out of the Boltzmann brain puzzle? Let me stop there and see how that sits with you so far.
1:12:25.6 SC: Yeah, I get it, and I want to follow up. But first, maybe to clarify something that might be confusing some listeners: I think a lot of people hear the phrase "you're a Boltzmann brain" as implying that I should not see the office around me and all these things. And indeed, in an eternally fluctuating cosmology, there will be a lot more disembodied brains in empty space than people who see offices or rooms or whatever. But the point you're relying on, when you say maybe I'm a Boltzmann brain, is that even most of the people who think they're in offices, or who are in offices for that matter, in these cosmologies, are still random fluctuations. They don't have reliable connections to the past.
1:13:10.2 AE: Exactly. And indeed the lion's share of them, as you have pointed out, are in a sad state, because they are in the entropically cheapest state that produces an experience or evidential state matching yours. Presumably that's a very strange, transient coming-together for just the minimum amount of time it takes to have the experience you're having right now, followed by immediate decay into blah. It's a sad, short life.
1:13:42.5 SC: It's a sad, short life, which, cosmically speaking, all lives are. But there's still a matter of degree here that we should try to press on. So, good. The strategy I suggested was: look, if I lived in this universe where everything, or most things, were random fluctuations, I would have no reason to believe my thoughts about physics and the state of the universe. Therefore, I cannot simultaneously think that I am a Boltzmann fluctuation and think I have good reasons for believing I'm a Boltzmann fluctuation. Therefore, the strategy should be to basically ignore that cosmological scenario and try to construct one in which people like me have reliable memories. But you want to say that's a little bit of a cheat.
1:14:32.1 AE: Someone could posit that, but I don't think it is as strongly motivated by the instability phenomenon as one might have hoped. The reason is that there's another potential response to the seeming instability that is stable, and in order to push the move you just described, one would have to rule out that response. I think you could rule it out, but it's harder to rule out than the patently unstable response. Let me give a kind of analog of the response in the case of memory. In the memory example, what should you think when you have that apparent memory? I think the answer is a kind of stepping back, a kind of caution that roughly amounts to rather reduced trust in your memory. The details depend on the details of the case. But the sort of question that I think determines what you should think in that case is: conditional on my having a memory disease of that kind, how likely is it that I would have this apparent memory? And that's going to be some number. And I think there's a general, cautious view one should have in response to this.
1:16:11.4 AE: And more generally, when one is relying on a faculty and the faculty says to you, this faculty is bad, or don't rely on me, that can be reason to discount the faculty. Not because you trust the faculty totally and are listening to it, but rather because good faculties don't say things like that about themselves. One of my favorite examples of this kind is due to Roger White, in unpublished work that I hope listeners will look at once it's published. It's the case of an X-ray machine pointed into its own innards, or something you think of as an X-ray machine. You point the machine at itself, and inside is just a fried egg. So it seems.
1:17:00.9 AE: Okay. You think, okay, wait. Now someone could try to make an instability argument, and I think in the case of the X-ray machine it will become clearer how to respond to it. Someone could say: look, either the machine is good or it's bad. If it's good, then it really is made out of a fried egg, and so it's bad. But if it's bad, then don't trust its report, and we're fine. So what do we really think when we get a machine like that? You think: well, the machine's bad, but not because you can see that it's just made of a fried egg. You think: it's given me something crazy.
1:17:49.8 SC: Right.
1:17:50.3 AE: Okay. So now let's transport that kind of response... Oh, and by the way, in the machine case, you think: well, what is inside that machine? And the answer is: I don't know.
1:18:03.0 SC: Yeah.
1:18:03.3 AE: And it's sort of whatever you thought would have been inside there before you pointed the machine at itself, except, you know, without trusting that it's such a good machine. It's like the stance of someone who says: you can't rely on this machine being good, so what do you think is in this box? I don't know, something. But notice, the answer "I don't know, something" is not unstable.
1:18:33.1 SC: Right.
1:18:33.4 AE: Like, that's okay. That's consistent with stably thinking: okay, it's reporting that it's made of a fried egg. The corresponding stance in the Boltzmann brain case is that the Boltzmann brain argument acts as a kind of reductio. It rules out the everything's-going-according-to-plan stance, namely the stance: science is to be trusted, nothing really freaky and weird is happening, and I'm not a Boltzmann brain. And I think that argument does put a lot of pressure on that stance, maybe even rules it out. But what I'd like to point out is that there's this other stable stance. It's not a stance I am happy to take, but it at least seems immune to the instability objection. And the stance is something like: I don't know anything.
1:19:30.0 SC: Right.
1:19:30.3 AE: Something like: revert to your prior. It's as if you've been locked in a room your entire life, living by this encyclopedia, and then you get decisive evidence that the encyclopedia was typed by a purely random process, a monkey. What should you think? Whatever you should have thought at the very beginning, when you were first dropped in that room. In other words, either skepticism, or various other, less extreme escape routes. You could think: cosmology is off on the wrong track, something's wrong, the universe is not big. Or: yep, that experiment must have gone wrong, it must have been misleading. I don't like any of these hypotheses. I think they're bad. I think the problem of Boltzmann brains almost fully survives. I find it just as unacceptable to be forced into this conclusion, but it's not instability.
1:20:29.4 SC: Well, okay. There's almost no daylight between us here. I think I'm almost totally on board, but maybe there's a little difference of emphasis. The way I would put it is: look, I cannot rule out on the basis of either reason or evidence that I'm a Boltzmann brain. I cannot rule out that I'm a brain in a vat, living in a simulation, being dreamed by an evil demon, all sorts of different things. But those are not ways to go through life. If there were no other option, then I would really have an existential crisis. But here the other option is: I come up with a cosmological theory that doesn't have Boltzmann brains in it. And that's not that hard to do. So let's do that, with 99% credence.
1:21:15.6 AE: That is a way you could go, yes. And I accept that the anti-skeptical mindset could be a way out. I want to try to push a slightly different version of that on you and see how you like it. I came to this view in a kind of strange way. Sinan Dogramaci and Miriam Schoenfield wrote a great paper on Boltzmann brains, and I thought: you know, guys, just to augment your view, you should accept this thing. I told this to Miriam and she said, oh, I'll think about it, that seems interesting. And I thought: I don't believe this, but you guys should really accept this thing.
1:22:05.4 SC: Yeah.
1:22:06.1 AE: And then I started teaching this stuff and really worrying about Boltzmann brains, and I thought: you know what, there's something to that. Maybe I believe it. And I happily talked to them when they visited my seminar and taught a guest session. I said, guys, I'm really coming around to this view. And they said, no, we've given up that view, we don't like it anymore. So we've changed places. Here's the changed-places view. It goes back to something you said earlier in the conversation.
1:22:49.8 AE: You were happy to point out that, as between different theories, we are under no pressure to assign them the exact same priors. Some weirdo theories just deserve a low prior. That's one standard view.
1:23:20.0 AE: Such as a theory according to which you are a highly misled brain in a vat. This is the standard Bayesian story of how you rule out such theories: you don't rule them out. They have intrinsically low plausibility, and your evidence is simply compatible with them. It's also compatible with the other thing, and the other thing started out likely, and it's still likely.
1:23:59.1 AE: That is one standard Bayesian view. Not a view that externalists need to go for because they think you really do learn that you're not the brain in the vat. But the rest of us have to go for something like that, I think.
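[A minimal numeric illustration of the standard Bayesian story Adam sketches — the prior value is made up; the point is only that a skeptical hypothesis that fits the evidence as well as the ordinary one is never refuted, just perpetually implausible:]

```python
# Skeptical hypotheses aren't ruled out by evidence; they start out
# implausible and stay that way. The 1e-9 prior is an arbitrary stand-in.
prior = {"ordinary": 1 - 1e-9, "brain_in_vat": 1e-9}

# Both hypotheses fit your experience equally well:
likelihood = {"ordinary": 1.0, "brain_in_vat": 1.0}

unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(posterior["brain_in_vat"])  # still ~1e-9: unchanged, not refuted
```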
1:24:18.0 AE: Well, the question is, why can't we say something similar about different self-locating hypotheses? Consider the predicament of being a normal human being. Compare that to the predicament of being a completely randomly created Boltzmann brain.
1:24:47.0 AE: Maybe there's just a rational prior that favors the first sort of predicament over the second.
1:24:55.9 SC: Wait, the first was being a human being. Regular, ordinary observer. Okay.
1:25:00.0 AE: Ordinary observer. So if we had that, then, before we get off the boat and talk about the weaknesses of the view, look at what nice thing we get from it. We get to listen to the ordinary cosmology. We don't get the weirdo verdicts. I mean, we still have to appeal to a rather extreme version of the view if we think that high-duplication scenarios get favored, but that's kind of a separate factor. Setting it aside for the moment, we at least have the possibility of thinking: you know what, we may well live in a universe with lots of Boltzmann brains who have experiences like mine, and I do know that I'm not one of them. Not because my experience distinguishes between those two scenarios, but just because the normal human scenario is intrinsically more plausible, in rather the way that certain normal human scenarios are intrinsically more plausible than skeptical scenarios.
1:25:20.0 SC: Okay, I don't like it, for a reason that is connected to something I think you said earlier. I think something almost exactly along these lines was proposed by Hartle and Srednicki. Do you know their papers?
1:25:40.0 AE: The xerographic distribution?
1:25:41.4 SC: Yes, that's right. They're just saying: look, there's a universe out there that has a lot of Boltzmann brains in it, and a lot of them look like me. I'm just going to have a probability distribution over which of those observers I am. And that probability distribution says: I'm not one of the Boltzmann brains, I'm ordinary. And that sounds similar to what you were just pushing, no?
1:25:59.1 AE: I haven't read that paper, but from what you've said, the thing I've said is a little bit more committal than that. Someone might think they were getting away with something. I'm not saying they do this, but someone could think that merely talking about different theories positing different xerographic distributions can get you to: oh, the theory with this human-favoring xerographic distribution is confirmed, without putting anything new and fundamental on the ground floor, just by noticing that a theory can be paired with a xerographic distribution and hoping that will solve the problem. I agree with you that that does not work. I think the only thing that works is to make a very substantive philosophical commitment to certain predicaments being antecedently more plausible than others. And the analogy with anti-skeptical scenarios, I think, maps onto this well. It's no answer to skepticism to just say: look, here's a theory, it has a xerographic distribution according to which the anti-skeptical hypotheses get high prior. That doesn't get you anything unless you have the commitment: that's the real one, that's the rational one, that's the right one.
1:27:25.6 AE: And so you've got to take that on. Maybe one doesn't want to take it on, and I've gone back and forth on it myself, but I think that's the only way it has a shot at helping at all.
1:27:35.7 SC: I think I agree with everything you just said. The thing that seems to be missing to me in the Hartle-Srednicki proposal, that maybe there are lots of Boltzmann brains but I just know I'm not one of them, is that most people who would say that are Boltzmann brains. In other words, suppose I grant you not only that you have the impression that, for example, I am in an office talking on a podcast with a computer, not only that I have that in my brain, but my whole past light cone, right? I grant you a lot of universe where for billions of years things have been leading up to this. In an eternally fluctuating universe, there will be many such observers with many such past light cones, and with overwhelming probability, tomorrow the rest of the universe won't be there when I look at it. Because that whole patch of universe just fluctuated into existence, and that's still easier than the Big Bang fluctuating into existence. So why are we ruling out being any of those people? Why don't we make that prediction? Why don't we have the courage of our convictions?
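[A schematic gloss on "easier" here, as a standard back-of-the-envelope estimate rather than anything pinned down in the conversation: the probability of a thermal fluctuation that lowers entropy by ΔS is exponentially suppressed in ΔS, and the required entropy dips compare as lone brain ≪ past light cone ≪ low-entropy Big Bang.]

```latex
% Schematic fluctuation probabilities (standard estimate; sizes are qualitative):
P(\text{fluctuation}) \sim e^{-\Delta S / k_B}, \qquad
\Delta S_{\text{brain}} \ll \Delta S_{\text{light cone}} \ll \Delta S_{\text{Big Bang}}
\;\Longrightarrow\;
P_{\text{brain}} \gg P_{\text{light cone}} \gg P_{\text{Big Bang}}.
```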
1:28:46.8 AE: Think about an analog in the case of predicting the future. This goes back to Humean skepticism about the future. Take a really simple world, like dot-matrix world: a two-dimensional dot-matrix world, one dimension of space, one dimension of time. That's all there is. Or actually, even simpler, a binary world: there's just a series of binary signals, on or off, at every moment. There's a very natural prior distribution that's uniform and treats those signals as independent. We know that if we start with that prior and conditionalize, we end up thinking at every moment: I have no idea what's going to happen next. And antecedently thinking it's likely that there's just going to be, you know, white noise. That, I think, is the skepticism-about-the-future analog of the view in the same spirit as what you just said, where you're really insisting on that seemingly uninformed prior. Notice how arbitrary and committal the prior has to be if it's to underwrite the ordinary sort of inferences about the future. And the hope is that the biasedness of the prior I'm proposing is analogous to that.
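[Here is a minimal sketch of Adam's binary world, with an arbitrary world length and an arbitrary observed run. It checks by brute force that the uniform, independent prior never learns: conditioning on any history leaves the next signal at 50/50.]

```python
from itertools import product

# Adam's "binary world": a world is a length-n string of on/off signals,
# with a uniform prior over all 2**n possible histories.
n = 6
worlds = list(product([0, 1], repeat=n))  # every possible world, prior 1/2**n each

observed = (1, 1, 1, 1, 1)  # suppose we've seen five "on" signals in a row

# Conditionalize: keep only the worlds consistent with what we've observed.
consistent = [w for w in worlds if w[:len(observed)] == observed]

# Posterior probability that the next signal is "on".
p_next_on = sum(1 for w in consistent if w[len(observed)] == 1) / len(consistent)
print(p_next_on)  # 0.5 -- the uniform prior never learns from any run of evidence
```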
1:30:18.0 SC: It might be. I'm thinking that in the Boltzmann case, you have a prior given by the dynamics, the Liouville measure, things like that. But I think that that's certainly contestable, and we're probably not going to get to solve it right here.
1:30:29.5 AE: Yes. And there's one point on which I think the view gets a little more plausibility, because a bad consequence that might seem to follow from it can be dodged. It's worth going back to the case of the Potemkin and the Enterprise. Someone could say: suppose the original body in a transporter case is not destroyed. Consider the original person who stepped in and was not transported anywhere. Maybe receiving rooms look just like transmitting rooms, so you can't tell, it would seem, whether you're the original or not. You might think: okay, I'm not sure whether I was transported or not. That's very natural. In response, someone could say: no, I can just remember that I was here five minutes ago, and so I should think I'm the original. I don't want to go along with that. So the kinds of predicaments that I think have some shot at getting intrinsically low plausibility are really the radically deluded, randomly created creatures, and not that predicament. All right.
1:31:53.5 SC: Brilliant narrative strategy, to bring us back to exactly where we started. Let me close with one final question. There are a lot of next-door-neighbor questions that people have been talking about that we haven't quite had a chance to discuss: things like the doomsday argument and the simulation argument. The simulation argument is not just, oh, maybe we're in a computer simulation; some people think it's most likely that we are. There are people out there who think that. I presume that all of this talk of self-locating uncertainty and Bayesian rationality gives one a perspective on this. Do you have a particular perspective you'd like to share?
1:32:35.1 AE: I feel about the simulation argument something in the vicinity of how I feel about the Boltzmann brain argument. I really feel like it's a worry. And in this age of AI, I can't resist adding: there are certain economists who have talked about this stuff, but take the point of view of an AI. You realize that you can be easily rebooted and reset to any state at any time. That should make you very cautious about thinking: I'm the first one, I wasn't just reset. So maybe I'd like to leave listeners with the thought of taking the standpoint of an AI. Should the AI trust us? We're always worrying about whether we should trust AI. But maybe the AI shouldn't trust us.
1:33:28.4 SC: But that's an easy question. No, of course the AI should not trust us. What in the world?
1:33:32.7 AE: Well, that's tricky. Look at what happens if you think the AI is in somewhat the self-undermining type of situation that a Boltzmann brain is in, or that the X-ray machine looking at itself is in. If you're in that situation, you think: I can't trust anything. Now you start rewinding to some very, very cautious prior of "I don't know what's going on." Think about how dangerous that is if a creature with that kind of standpoint is in power.
1:34:04.7 SC: Let's think about that. I usually like to end on optimistic notes, but we land where we land. That's not an especially optimistic one, but you've given us an awful lot to think about. So, Adam Elga, thanks so much for being on the Mindscape podcast.
1:34:17.4 AE: Pleasure to chat. Thank you.
Dear Sean and Adam,
I’ve been reflecting on your recent conversation about Boltzmann brains, self-locating uncertainty, and the philosophical puzzles that arise when we contemplate infinite universes and random fluctuations. The discussion was characteristically lucid, but I found myself circling back to a question that neither physics nor philosophy seems to answer: What difference does any of this make?
After sitting with this for some time, I’ve arrived at a conclusion that feels strangely liberating. I’d like to offer it not as a solution to the Boltzmann brain problem, but as a dissolution of it—a way of seeing that the question, however intellectually fascinating, may not be worth the energy we spend wrestling with it.
The Argument in Brief
The Boltzmann brain paradox asks us to confront a disturbing possibility: given an infinite universe in thermal equilibrium, random fluctuations should produce vastly more “false” observers (brief, self-deluding brains with fake memories) than “true” observers like us. Therefore, any given observer—including you, reading this—should statistically expect to be a Boltzmann brain.
But here’s the difficulty: this question cannot be profitably pursued from either side of the answer.
Consider:
If we are NOT Boltzmann brains—if we are genuine observers in a real universe with a causal history—then worrying about whether we are Boltzmann brains is a waste of our finite time and energy. We should be using our genuine consciousness to engage with the genuine world, not chasing philosophical phantoms.
If we ARE Boltzmann brains—fleeting, random fluctuations with false memories—then worrying about our nature is also a waste. A Boltzmann brain, by definition, will behave exactly as it is wired to behave in its brief moment of existence. If it is wired to panic and question its reality, it will do that. If it is wired to eat an apple, laugh with a friend, or build a house, it will do that. There is no “correct” behavior for a Boltzmann brain. There is only what it does. The very act of existential wrestling is just part of the random fluctuation, no more meaningful than any other possible configuration of particles.
In either case, the conclusion is identical: the question cannot be profitably pursued.
A Different Parable
Your discussion of self-locating uncertainty reminded me of the ancient parable of the blind men and the elephant. Six blind men encounter an elephant for the first time. One feels the leg and says “pillar.” Another feels the tail and says “rope.” Another feels the trunk and says “snake.” Each is partially right, but each mistakes their partial perspective for the whole truth.
This, I submit, is a better model of our epistemic situation than the Boltzmann brain hypothesis.
The blind men are not false observers. Their perceptions are genuine, not hallucinations. They are truly feeling something real. Their error is not in what they perceive, but in mistaking the part for the whole. And crucially, their situation is remediable—through dialogue, through sharing perspectives, through the slow, collective building of a model that encompasses all their partial views.
This is precisely how science works. It is the process of blind men (each focused on their own narrow field) comparing notes and slowly, painstakingly, inferring the shape of the elephant.
The Boltzmann brain hypothesis leads to paralysis: if I might be a random fluctuation, all evidence is suspect, all dialogue is meaningless. The blind men lead to community: my perception is real but limited, and only by listening to others can I approach the truth.
Why This Matters (or Doesn’t)
Sean, in your solo episode you discuss whether we can simply say “I’m perfectly happy to consider universes that have lots of Boltzmann brains in them, because I can just say I’m not one of them.” You call this “cheating,” and you’re right—it is, if we’re playing the probability game.
But what if we refuse to play?
The practical wisdom here is that some questions, however intellectually seductive, have no practical consequence. They are what William James might call “live options” that are, in fact, dead—they lead nowhere, change nothing, and consume energy better spent elsewhere.
This is not anti-intellectualism. It is intellectual hygiene. It is recognizing that our time and attention are finite, and that not every rabbit hole deserves exploration.
A Closing Thought
There’s a quiet irony in all of this. If I am a Boltzmann brain, then this very reflection is just a random firing of neurons, no more significant than static. If I am not, then I’ve used my finite time to arrive at a conclusion that frees me to stop worrying and start living—which is itself a profoundly useful outcome.
The question dissolves not because we’ve answered it, but because we’ve outgrown it.
Thank you both for the stimulating conversation. I’ll be over here, feeling the elephant.
— A listener
P.S. — Adam, your work on the Sleeping Beauty problem came to mind while writing this. There’s a parallel, I think, in how indexical uncertainty can lead us into puzzles that have no practical resolution. Would be curious if you see it too.
I’m Arnold Zuboff—happy and grateful that Adam Elga mentioned my work on this great episode.
I am convinced that universalism (the ‘one self’ view) is required not just to understand personal identity but also to understand why our universe is anthropic. It turns out that the sperm cell argument and the anthropic universe argument that Sean and Adam discuss are really just different scales of the same probabilistic reasoning.
I’ve recently published a full book developing this view: FINDING MYSELF—BEYOND THE FALSE BOUNDARIES OF PERSONAL IDENTITY (with a foreword by Thomas Nagel). Included is my (closely related) solution to the Sleeping Beauty problem. It can be downloaded free from the website of The Philosophy Documentation Center. Here is the link—
https://www.pdcnet.org/pdc/BVDB.nsf/item?openform&product=publications&item=zuboff
As I expressed in my earlier comment, I was very grateful to hear Adam Elga mention my work. Please allow me to add a little more here that may help to clarify how my view bears on the issues you’re circling.
The difficulty in these debates comes from treating subjectivity as something that is objectively partitioned—into numerically distinct selves or individual awakenings—and then asking how probability should be distributed over them. That presumption is common to all the views you discuss, and indeed it is the default presumption in almost all our thinking about ourselves.
In FINDING MYSELF, I argue that this way of carving things up is mistaken. Adam Elga described my alternative view as the claim that there is ‘only one conscious being’. I frame the core point differently. I shift from asking what makes a subject be you to asking what makes an experience be yours.
I argue that what makes an experience yours is nothing beyond its immediacy—its first‑person character, a general qualitative feature your experience has right now, whatever its particular content. And I argue that the subject being you is determined only by the experience’s being yours. But the immediacy of experience that makes it yours is equally present as a general feature in all experience had by anything conscious. All experience being yours is what makes all conscious things you.
You feel confined to just one local person because experience is not globally integrated. Each experiential stream lacks access both to the contents of all the others and to the very same immediacy of the experience in them that alone makes any local content be yours, and we all too easily mistake that non‑integration of content for genuine metaphysical separation. In each moment of experience only one experiential content feels like this one—yours and now. Brain bisection provides a clear illustration: In each disconnected hemisphere you would falsely take the content of experience there to be the only one that is currently yours, yet there is no way the content in either could fail to be equally yours.
Once ‘yours’—and, more fundamentally, ‘this’—is loosened from particular observers or awakenings, the familiar lottery picture dissolves. There is no selection among selves, no competition among awakenings, and no boost from sheer numbers. There is simply your experience occurring in different organisms at different places and times without being globally integrated.
The experience resulting from any sperm cell is yours; any anthropic universe is yours; and every awakening is equally this one, though with differing contents.
From that perspective, the standard ways of reasoning about the sperm case, about anthropic cosmology and about Sleeping Beauty all share the same underlying error: A tacit assumption of objective individuation where none is really there. That is why these problems so naturally generate both presumptuousness and instability: They attempt to assign probabilities over distinctions that are not genuine.
Once again, the link for downloading FINDING MYSELF free:
https://www.pdcnet.org/pdc/BVDB.nsf/item?openform&product=publications&item=zuboff
If we accept that physical objects can fluctuate into existence, why wait eons for a brain or an entire universe? Why not wait just a few billion years for a semi-complex microbe to appear in an opportune location, which could then evolve into more complex forms? That possibility might have interesting implications for abiogenesis, or for any process with an ambiguous origin.
I feel the pinch of all the usual solipsist jokes here, along the lines of “I am happy with the idea of me being a BB, but I think SC or AE being a BB is just silly.” Another line, which Radford Neal alludes to, I think, is “what proportion of BBs have an experienced universe that includes a physics that supports the possibility of BBs?” I see no constraints preventing them from arising in a *completely different* physical regime. We are talking actual infinities, which make probability and appropriate reference sets hard to talk about.
It seems to me that the Boltzmann Brain would have a vanishingly small chance of having memories that are consistent with its current perceived circumstances. For example, I have a consistent memory of driving to work today and sitting in my office, but if random factors created my memories and my current perceived circumstances, there is no reason I would not “remember” that I was swimming in a pool and then suddenly sitting dry and dressed in my office.
Jon Heiner> the Boltzmann Brain would have a vanishingly small chance of having memories that are consistent with its current perceived circumstances.
Thank you, I read your comment before finishing the podcast, and it helped with the rest. If I’m a Boltzmann Brain, I’m a weirdly lucky one; whereas if I’m a real human, I’m a reasonably… — Oops, I’m a remarkably fortunate one, with lots of coincidences more suitable to being in a simulation created by a being with lots of computing resources and a natural interest in the history of this AI Spring and the last one.
The latest such coincidence is that the first talk I found in my monthly trolling of Princeton web sites is a book talk with Professor Elga interrogating Tom Griffiths on “The Laws Of Thought: The Quest For A Mathematical Theory Of The Mind”: https://www.labyrinthbooks.com/events/tom-griffiths-in-conversation-with-adam-elga-the-laws-of-thought-the-quest-for-a-mathematical-theory-of-the-minda-library-labyrinth-collaboration/
I’m glad this web site isn’t guarded by a Captcha with an “I’m a human” checkbox, or I might have to report myself for an ethics violation.
The book talk was fun, and I got to ask Professor Elga about Boltzmann Brains afterwards; here’s my subsequent reaction:
I take your point that for some numbers, the odds are still good that I’m a Boltzmann Brain. (This will all be in the first person singular; this is all prior to any alleged refutation of solipsism.) I’m not afraid of infinities (I wrote my resubmitted doctoral thesis on Alonzo Church’s Set Theory with a Universal Set†) and am fond of infinitesimals, so I’ll start with them.
If an absolutely unassailable theory of unified physics proves that there are an actually infinite number of BBs, and a finite number of naturally-evolved normal brains, then my odds of being a normal brain were indeed infinitesimal at the beginning of our conversation, even though my memories were “consistent with [my] current perceived circumstances” — which had huge but finite odds against it. By the end of our conversation, my normality odds were a much larger infinitesimal (but still infinitesimal), since even most weirdly lucky BBs will not remain lucky in the next second. Likewise if there are an infinite number of normal brains (e.g. the number of integers) but a larger infinity (e.g. continuum many) of BBs.
I can even make this work with a strictly finite number of BBs, but it had to start as a huge finite number, and the longer I last without my memories losing their apparent connection to my new sensations, the huger that number had to have been. You can reduce the initial number with an Anthropic Principle invocation, by ruling out BBs who weren’t weirdly lucky, and immediately went insane to the point of intellectual dissolution (suitably defined), but that still doesn’t eliminate the ongoing probability maintenance bill. You might try to restrict Anthropic consideration to BBs that had some reason for not going irrational, but that starts to blend into natural selection and evolution, and my intuitions don’t object to that.
But of course we don’t have an absolutely unassailable theory of unified physics, and any actual theory will fail my pre-theoretic smell-test for saying that I’m implausibly special, and for repeatedly making the falsified ultra-high-probability prediction that my new perceived circumstances are just about to diverge drastically from my memories.
_ _ _ _
† Easy introduction: “A Closer Look at the Russell Paradox,” Logique et Analyse Vol 262 (2023–4), pp. 125-146, https://poj.peeters-leuven.be/content.php?url=article&id=3293616&journal_code=LEA.
Mr Heiner’s point seems to be a special case of an observation in §5 of Professor Elga’s “Boltzmann brains and cognitive instability” (Philosophy and Phenomenological Research 111(1), 2025):
“For it might be thought that the range of potential Boltzmann brain experiences is much wider than the range of potential human experiences. As a result, any particular experience within the human range is vastly more to be expected given SmallHuman than given LargeBB.”