Turtles Much of the Way Down

Paul Davies has published an Op-Ed in the New York Times, about science and faith. Edge has put together a set of responses — by Jerry Coyne, Nathan Myhrvold, Lawrence Krauss, Scott Atran, Jeremy Bernstein, and me, so that’s some pretty lofty company I’m hob-nobbing with. Astonishingly, bloggers have also weighed in: among my regular reads, we find responses from Dr. Free-Ride, PZ, and The Quantum Pontiff. (Bloggers have much more colorful monikers than respectable folk.) Peter Woit blames string theory.

I post about this only with some reluctance, as I fear the resulting conversation is very likely to lower the average wisdom of the human race. Davies manages to hit a number of hot buttons right up front — claiming that both science and religion rely on faith (I don’t think there is any useful definition of the word “faith” in which that is true), and mentioning in passing something vague about the multiverse. All of which obscures what I think is his real point, which only pokes through clearly at the end — a claim to the effect that the laws of nature themselves require an explanation, and that explanation can’t come from the outside.

Personally I find this claim either vacuous or incorrect. Does it mean that the laws of physics are somehow inevitable? I don’t think that they are, and if they were I don’t think it would count as much of an “explanation,” but your mileage may vary. More importantly, we just don’t have the right to make deep proclamations about the laws of nature ahead of time — it’s our job to figure out what they are, and then deal with it. Maybe they come along with some self-justifying “explanation,” maybe they don’t. Maybe they’re totally random. We will hopefully discover the answer by doing science, but we won’t make progress by setting down demands ahead of time.

So I don’t know what it could possibly mean, and that’s what I argued in my response. Paul very kindly emailed me after reading my piece, and — not to be too ungenerous about it, I hope — suggested that I would have to read his book.

My piece is below the fold. The Edge discussion is interesting, too. But if you feel your IQ being lowered by long paragraphs on the nature of “faith” that don’t ever quite bother to give precise definitions and stick to them, don’t blame me.

***

Why do the laws of physics take the form they do? It sounds like a reasonable question, if you don’t think about it very hard. After all, we ask similar-sounding questions all the time. Why is the sky blue? Why won’t my car start? Why won’t Cindy answer my emails?

And these questions have sensible answers—the sky is blue because short wavelengths are Rayleigh-scattered by the atmosphere, your car won’t start because the battery is dead, and Cindy won’t answer your emails because she told you a dozen times already that it’s over but you just won’t listen. So, at first glance, it seems plausible that there could be a similar answer to the question of why the laws of physics take the form they do.
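That first answer even comes with numbers attached: Rayleigh-scattered intensity goes as one over the fourth power of the wavelength. A quick sketch (the particular wavelength values are illustrative choices of mine, not anything canonical):

```python
# Rayleigh scattering: scattered intensity is proportional to 1/wavelength^4.
# Illustrative wavelengths: blue ~450 nm, red ~650 nm.
blue_nm, red_nm = 450.0, 650.0
ratio = (red_nm / blue_nm) ** 4
print(ratio)   # ~4.35: blue light is scattered several times more strongly than red
```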

But there isn’t. At least, there isn’t any as far as we know, and there’s certainly no reason why there must be. The more mundane “why” questions make sense because they refer to objects and processes that are embedded in larger systems of cause and effect. The atmosphere is made of atoms, light is made of photons, and they obey the rules of atomic physics. The battery of the car provides electricity, which the engine needs to start. You and Cindy relate to each other within a structure of social interactions. In every case, our questions are being asked in the context of an explanatory framework in which it’s perfectly clear what form a sensible answer might take.

The universe (in the sense of “the entire natural world,” not only the physical region observable to us) isn’t like that. It’s not embedded in a bigger structure; it’s all there is. We are lulled into asking “why” questions about the universe by sloppily extending the way we think about local phenomena to the whole shebang. What kind of answers could we possibly be expecting?

I can think of a few possibilities. One is logical necessity: the laws of physics take the form they do because no other form is possible. But that can’t be right; it’s easy to think of other possible forms. The universe could be a gas of hard spheres interacting under the rules of Newtonian mechanics, or it could be a cellular automaton, or it could be a single point. Another possibility is external influence: the universe is not all there is, but instead is the product of some higher (supernatural?) power. That is a conceivable answer, but not a very good one, as there is neither evidence for such a power nor any need to invoke it.

The final possibility, which seems to be the right one, is: that’s just how things are. There is a chain of explanations concerning things that happen in the universe, which ultimately reaches to the fundamental laws of nature and stops. This is a simple hypothesis that fits all the data; until it stops being consistent with what we know about the universe, the burden of proof is on any alternative idea for why the laws take the form they do.

But there is a deep-seated human urge to think otherwise. We want to believe that the universe has a purpose, just as we want to believe that our next lottery ticket will hit. Ever since ancient philosophers contemplated the cosmos, humans have sought teleological explanations for the apparently random activities all around them. There is a strong temptation to approach the universe with a demand that it make sense of itself and of our lives, rather than simply accepting it for what it is.

Part of the job of being a good scientist is to overcome that temptation. “The idea that the laws exist reasonlessly is deeply anti-rational” is a deeply anti-rational statement. The laws exist however they exist, and it’s our job to figure that out, not to insist ahead of time that nature’s innermost workings conform to our predilections, or provide us with succor in the face of an unfeeling cosmos.

Paul Davies argues that “the laws should have an explanation from within the universe,” but admits that “the specifics of that explanation are a matter for future research.” This is reminiscent of Wolfgang Pauli’s postcard to George Gamow, featuring an empty rectangle: “This is to show I can paint like Titian. Only technical details are missing.” The reason why it’s hard to find an explanation for the laws of physics within the universe is that the concept makes no sense. If we were to understand the ultimate laws of nature, that particular ambitious intellectual project would be finished, and we could move on to other things. It might be amusing to contemplate how things would be different with another set of laws, but at the end of the day the laws are what they are.

Human beings have a natural tendency to look for meaning and purpose out there in the universe, but we shouldn’t elevate that tendency to a cosmic principle. Meaning and purpose are created by us, not lurking somewhere within the ultimate architecture of reality. And that’s okay. I’m happy to take the universe just as we find it; it’s the only one we have.


110 thoughts on “Turtles Much of the Way Down”

  1. But there would still be an invariant algorithm somewhere.

    Sure. But my point is that it’s conceivable that the underlying regularity would be completely inaccessible to us, and once you allow Turing machines in the Tegmark hypothesis, you’ve predicted essentially every set of (computable) observations, however chaotic and irregular.

    For example, if you found that there were two types of mathematical structure in which intelligent life could potentially evolve (structure A and structure B), and 10^10 more intelligent observers would evolve in A than in B, but we observe that we live in B, then we expect that there’s something wrong with our theory.

    This is intuitively appealing, but I think it’s wrong. You stress probability, but Tegmark’s hypothesis leads to all its outcomes with certainty. If Tegmark’s hypothesis predicts our universe with certainty (and I think it does), and also another universe in which life is far more common by a factor of 10^10 (which I expect it also does, since you get to fiddle not only with the Drake equation but also fundamental constants), then why does our failure to live in the second universe falsify, or even lower the likelihood of, Tegmark’s hypothesis? When we make an observation, it is not preceded by a process in which we are selected at random from the pool of all conscious beings. When there are multiple universes containing conscious life, there are a multitude of observations — and the observations we make don’t need to be picked from some giant cosmic barrel in order for us to make them. Under such hypotheses, we are not testing (and can not test) what a “typical” observer will see.

    I think the false intuition here arises by comparison with probabilistic theories that deal with a finite number of trials. If theory A tells us that we have a fair coin, while theory B tells us that it’s biased for heads, then if we toss the coin 10 times and see 10 heads, we are entitled to favour theory B. But what’s different here is that only 1 in 2^10 of the possible outcomes is generated. Tegmark’s hypothesis generates all outcomes, not a random sample. The conditional probabilities are all equal to 1, so Bayes’s theorem gives us nothing with which to refine our prior probabilities. (Well, there might be some conditional probabilities of 0, if you can think of something that literally can’t happen within any mathematical structure.)
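    The likelihood bookkeeping in that coin example can be made concrete (a toy sketch; the bias value p = 0.9 for theory B is my illustrative assumption, not something specified above):

```python
# Theory A: fair coin (p = 0.5). Theory B: biased for heads (illustrative p = 0.9).
# Data: 10 heads in 10 independent tosses.
p_data_given_A = 0.5 ** 10            # about 1 in 1024
p_data_given_B = 0.9 ** 10            # about 0.35
bayes_factor = p_data_given_B / p_data_given_A
print(bayes_factor)                   # ~357: the data strongly favour theory B

# Under Tegmark's hypothesis, by contrast, every outcome occurs with
# conditional probability 1, so the analogous ratio is 1/1 and the
# data cannot shift the priors at all.
tegmark_factor = 1.0 / 1.0
print(tegmark_factor)
```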

    There’s a similar flawed intuition behind the Boltzmann Brain argument, which claims that we should favour cosmological parameters that rule out the future evolution of Boltzmann Brains, on the basis that if they eventually come into existence and outnumber us, that would render us vastly atypical. To which I think the correct response is “so what?” If theories with parameter set A and parameter set B both give comparable probabilities for planet-bound life like us to exist at all, then it’s irrelevant whether or not set A also predicts a far future full of 10^10 more conscious vacuum fluctuations than there ever were instances of conscious planetary life.

    Sean mentioned a paper a while back, Are We Typical? by Hartle and Srednicki that deals with some of these issues, though since it doesn’t address Tegmark it doesn’t confront head-on what it means when a hypothesis predicts essentially all possible observations.

  2. There’s an old African saying that if you want to travel fast, go alone, but if you want to travel far, go with a group.

    The problem, especially in the chaos of the modern world, is defining and motivating the group. That’s where arbitrary beliefs are useful. They separate the true believers from everyone else.

    Science does this subconsciously as well. That is what four-dimensional spacetime is: the frame and the direction, with no conflict or paradox. Remember, Christianity didn’t become the state religion of Rome because Jesus was such a nice guy, but because Constantine had a vision of the cross as a war totem. What better to get everyone focused and moving in the same direction than the crosshairs of a two-dimensional coordinate system?

    Look at cosmology. It has coalesced around a theory of the universe as a single entity, going from start to finish, and anytime the data are contradictory, some new energy, force, or additional theory is assumed; the theory itself is never questioned. Is that the epitome of true belief, or what?

    The problem is that it is just another house of cards or bubble. Usually these grow until they just can’t anymore; then it all falls flat as a pancake and everyone wonders how they ever thought pets.com, or that house, or that theory could have ever been that important, or that expensive. Someday, in the not-too-distant future, there will be a lot of Christians wondering why they didn’t get raptured before everything fell apart, but today’s cosmologists won’t have that epiphany. The economy supporting their endeavors to find the edge of the universe will collapse before it gives them that next bigger telescope they need, and they will go to their graves believing, hoping someday the proper instruments are built.

  3. The parameters and constraints of a given model determine the reasonable possibilities for conscious existence in the cosmology.

    An eternal universe of infinite mass in tandem with flat space can eventually, by conjecture of a certain kind, produce anything and everything…everything is possible.

    An eternal universe of finite mass with closed space and invariant frames of reference still has great potential to produce complexity, and possesses built-in engineering constraints which make the development of complexity plausible.

    Any cosmology which is finite in time, finite in mass and limited in spatial extent is highly unlikely to contain high levels of informational complexity.

    Note in paragraph 2…“conjecture of a certain kind”. I imply that such a cosmology is fatally flawed. Complexity requires well-defined cosmological constraints to be formed, conserved, and evolve, just as organic evolution on Earth is only possible for fish while bodies of water exist, or for higher animals so long as the atmosphere contains oxygen…or the fact that the initial origin of life required certain substances (amino acids etc.) in pools of water and probably lightning.

    The idea that overall universal entropy can decline and informational complexity increase in an open soup of quantum fluctuations is highly suspect…infinite or not, perhaps ESPECIALLY if it is infinite!

    It is interesting that when we have so little understanding of the development of information and complexity in our own universe, we try to escape our dilemma by imagining infinite additional sets of universes, the existence of which is only suggested by assuming our present hypotheses are correct.

    Any scientist needs to develop a profound respect for the universal existence of inorganic information and organic high complexity in the universe. We also need to remind ourselves, as we construct models, of the necessary quantum connection between observation and existence.

  4. I am not a scientist, but I’m not sure I recall anywhere it is said that scientists have faith that the universe behaves according to “rational” physical laws. Quantum mechanics seemingly defies rationality, as do aspects of General Relativity (black holes). Of course, our understanding of rationality is biased by our own evolution. We’ve identified a few rules which the universe obeys. But the fact that the universe obeys them does not make them rational; it just means it obeys them. We don’t need faith to believe this because we have evidence.

    Five hundred years ago, the reigning rationality was that the Sun revolved around the Earth. It is only rational in retrospect that the Earth revolves around the Sun. In this case, religion had to bend to science, but the science did not change. Religion will have to continue to redraw the bounds of faith to explain the new truths of science.

    Why does our universe obey these rules? The potential explanations are innumerable, at least as many as there are habitable planets in the universe, but the observation, the rule, remains valid in each case. It doesn’t matter if the rule is rational; it only matters that you can observe it.

  5. Davies’ blather exemplifies the fact that a sentence that has the syntactic form of an interrogative is not necessarily a sensible question.

  6. Pingback: Woit’s loaded terms; Sean’s self-professed unbias; both a facade of hard science « Society with Jimmy Crankn

  7. Sean, it seemed to me your post treated the questions “Could the fundamental laws of physics possibly be otherwise?” and “Does the universe have meaning and/or purpose?” as equivalent. I wonder if this equivalency is of necessity, i.e., even if we learn that the laws of physics could not be otherwise, does this necessarily bear at all on whether the universe has meaning or purpose?

    My instinct would be to answer in the negative, but I must admit I haven’t thought much about it.

  8. The universe does not behave according to our pre-conceived ideas. It continues to surprise us.

    One might not think it mattered very much, if determinism broke down near black holes. We are almost certainly at least a few light years, from a black hole of any size. But, the Uncertainty Principle implies that every region of space should be full of tiny virtual black holes, which appear and disappear again. One would think that particles and information could fall into these black holes, and be lost. Because these virtual black holes are so small, a hundred billion billion times smaller than the nucleus of an atom, the rate at which information would be lost would be very low. That is why the laws of science appear deterministic, to a very good approximation. But in extreme conditions, like in the early universe, or in high energy particle collisions, there could be significant loss of information. This would lead to unpredictability, in the evolution of the universe.

    To sum up, what I have been talking about, is whether the universe evolves in an arbitrary way, or whether it is deterministic. The classical view, put forward by Laplace, was that the future motion of particles was completely determined, if one knew their positions and speeds at one time. This view had to be modified, when Heisenberg put forward his Uncertainty Principle, which said that one could not know both the position, and the speed, accurately. However, it was still possible to predict one combination of position and speed. But even this limited predictability disappeared, when the effects of black holes were taken into account. The loss of particles and information down black holes meant that the particles that came out were random. One could calculate probabilities, but one could not make any definite predictions. Thus, the future of the universe is not completely determined by the laws of science, and its present state, as Laplace thought. God still has a few tricks up his sleeve.

    That is all I have to say for the moment. Thank you for listening.

    Does God Play Dice? by Professor Stephen Hawking

  9. I would usually consider such questions as something beyond science, so for a typical scientist it’s not their job to ask such an overwhelming question. What I think is that if we keep on asking ourselves questions that bore deeper and deeper and become more and more fundamental, we end up falling into a bottomless pit. It’s just like this: once we find a set of general rules, we question them and form new sets of rules to explain those. Then we go further, question these rules, and form another set of more fundamental rules to explain them. It’s like proving things in mathematics: you need theorems to prove theorems, and axioms to prove the more fundamental theorems. At the end of the day, you start questioning those axioms and find that you are digging into an infinite cycle of proofs. This is how we study the universe. But we have to know where to stop, or we end up falling into a bottomless pit of reasoning.

  10. Tegmark’s hypothesis generates all outcomes, not a random sample. The conditional probabilities are all equal to 1, so Bayes’s theorem gives us nothing with which to refine our prior probabilities.

    Exactly as I said in an earlier thread “Everybody’s got to be somewhere”.

  11. Greg, it is still the case that the same observer (observer = mathematical model = universe in its own right) can be found embedded in different larger universes. When we do physics we try to infer something about this larger universe.

    We can ask about the probability distribution of some variable we are about to observe, given all the knowledge stored in our brains so far. This should be well defined in principle…

  12. Santa does not exist

    Not true, see here 🙂

    If this sentence is true, then Santa Claus exists.
    We need not believe, beforehand, that the sentence is true or that Santa Claus exists. But we can ask, hypothetically, if the sentence is true, then does Santa Claus exist?

    If the sentence is true, then what it says is true, namely that if the sentence is true, then Santa Claus exists. Therefore the answer to the hypothetical question must be yes: Santa Claus does exist if the sentence is true. However, that is exactly what the sentence states: not that Santa Claus exists, but that he exists if the sentence is true, which is just the hypothetical answer just established. Therefore the sentence is true after all, and since we have established that Santa Claus exists if the sentence is true, and that it is true, it follows that Santa Claus must exist.
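    This is Curry’s paradox. Writing S for the sentence and C for “Santa Claus exists”, the argument can be set out formally as follows (a standard textbook formalization, not part of the original comment):

```latex
\begin{aligned}
&S \equiv (S \rightarrow C) && \text{what the sentence asserts}\\
&\text{Assume } S;\ \text{unfolding it gives } S \rightarrow C,\ \text{so } C && \text{modus ponens}\\
&\therefore\; S \rightarrow C && \text{discharging the assumption}\\
&\therefore\; S && \text{since } S \equiv (S \rightarrow C)\\
&\therefore\; C && \text{modus ponens: Santa Claus exists}
\end{aligned}
```

    The derivation goes through in any logic with unrestricted self-reference and the usual conditional rules, which is why blocking it requires giving one of those up.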

  13. Why does anyone even refer to our universe as being *representable* by a mathematical structure, much less “being” one? I mean really, look at the wave function and its collapse: you can’t even model it properly because of simultaneity problems, maybe the issues of Renninger negative-result measurements, unreliable detectors, etc. And don’t tell me it’s OK because it’s just a representation, etc.: no, if you really “believe in electrons” then “something” comes out of a nucleus or electron gun, and then appears at some spot and not anywhere else – and yet multiple shots show an interference pattern. It is absurd as a realist/mathematical concept. (BTW, multiple worlds and decoherence are BS anyway, since they still don’t get us to the actual localization itself from waves; they just hypocritically work off the collapse already being taken for granted and then work it into their treatment of wave interaction and evolution.)

    Also: I wasn’t implying that everyone who is against the idea of God etc. is a hack, any more than I think that everyone who is a “conservative” is a hack – but the parallel is right there: like the difference between sincere strict constructionists and cynical neocons. It’s much like the difference between sincere atheists or doubters (many of you here), with what appear to them to be perfectly good arguments (and some aren’t bad), versus manipulators (yes, often unconsciously of course) of notions of multiple worlds etc. to discredit God concepts, while pretending to uphold the old rational/positivist tradition that would have rejected both as unknowable or “meaningless.” I think Stenger is one of the worst at that – he wants to admit other worlds, while conveniently corralling possible alternatives into all being kind of like ours, instead of appreciating that unleashing “existability” opens a huge can of worms – maybe even God Herself.

  14. how surprising is it that people who argue about god and science without having a real understanding of what either one is get confused?

  15. Pingback: What They're Saying About Davies' Op-Ed - Telic Thoughts

  16. Egan, your arguments concerning the misuse of probabilistic arguments are the most cogent I have ever encountered. You have greatly helped me clarify my thinking on this important issue and I thank you for that. The Boltzmann argument, and its many parallels and restatements in other areas of modern debate, have always struck me as profoundly missing the point, and I appreciate your ability to articulate why this is true so clearly.

  17. Count Iblis (#62), if you want to define all observers with the same subjective history to be “one observer”, that’s fine by me — it’s really a semantic issue, but this is a supportable choice of definition.

    But when this “one observer” with N different “threads” in N different universes makes a fresh observation, under Tegmark’s hypothesis you end up with exactly the same result as ever: there are now m different observers for the m results of the experiment, consisting of N_1, N_2, N_3, … N_m “threads” respectively, and the values of N_i are irrelevant, because even if N_1 << N_2 << … N_m, so long as N_1 is non-zero it is still a certainty — not an outcome with probability N_1/N — that result 1 of the experiment will be observed by someone. When I do this experiment and find result 1, I still have nothing to go on except:

    P(there will be someone with the history I had prior to the experiment who sees result 1 | Tegmark’s hypothesis) = 1

    I never had empirical access to N before, and I have no empirical access to N_1 now, or to any of the N_i, or to m, the number of ways I’ve been split. If I am trying to figure out whether the universe is governed by Tegmark’s hypothesis, or by some other model that predicts at least one observer seeing result 1 for the experiment I just did, then I have no basis for rejecting Tegmark’s hypothesis.

    I am not entitled to say “Gosh, under Tegmark’s hypothesis what are the odds that *my* consciousness ended up in this minority branch of my subjective future?”; that’s as misguided as saying “What are the odds that *I* get to be this particular one of the six billion humans on Earth that I actually am?”

    As Chemicalscum put it, “Everybody’s got to be somewhere”. Unless they’re nowhere. The only way you can try to falsify Tegmark is by constructing a potential observation (or set of observations, as large as you like) that Tegmark would predict literally no observer seeing. Given that we expect any real model of physics to be a sub-case of Tegmark’s grand catalogue, I’d be amazed if such a test can be devised, and even more amazed if the result falsified Tegmark’s hypothesis!

  18. Greg, it’s not that hard to cast doubt on Tegmark’s hypothesis of (apparently) radical modal realism (or at least, of “mathematical structures.”) As I’ve said before, the number (roughly) of describable universes (“possible worlds”, PWs) is much larger than the number of nice clean ones with simple and continued laws of physics. In other words, there are many more PWs with sloppy laws of attraction like 1/r^2.1223, not even consistent between particles or in time (since we can describe that – I just did!), or filled with “electrons” of slightly or greatly varying masses, etc. Well, even if we have to “find ourselves” in a PW conducive to life, the chances are that even then, many features would still be sloppy, and not elegant “laws of physics.” Even worse, once we got to this point, there are many more PWs where things wouldn’t continue as they had before (just as with toss-regimens of coins: having gotten to 50 heads in a row out of 100 tosses total, there are many more continuations where the remaining tosses vary in all kinds of ways than the one which keeps coming up heads, etc.) And don’t tell me, as some did hereabouts, that those aren’t really “mathematical structures” because I guess they aren’t continuous functions: matrices are real math, and so are unrelated numbers, and there’s Fourier analysis, which can handle one function spliced onto another or chunks of unrelated hills and valleys, etc.
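    The coin-toss count in that parenthetical can be made explicit (a toy tally using the numbers from the comment):

```python
# After 50 heads in the first 50 of 100 tosses, the remaining 50 tosses
# have 2^50 possible continuations; exactly one of them is all heads.
continuations = 2 ** 50
all_heads = 1
fraction_varying = (continuations - all_heads) / continuations
print(continuations)       # 1125899906842624
print(fraction_varying)    # overwhelmingly close to 1
```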

    Our being in a “nice elegant universe” is absurdly unlikely from the point of view of wild pan-realism for PWs, so I say there’s “Management” of some sort, regardless of just what sort of thing that is.

    PS: since it’s easier here: I still have trouble with that extra acceleration of a lateral moving particle in the planar mass field being consistent with the equivalence principle. After all, the components of acceleration can be compared separately, so the total being adjusted to conserve energy wouldn’t keep the accelerations of relatively moving bodies from being “different.” Also, what if it’s a stationary (relative to the plane) but rotating ring – how fast does that fall? At the rate appropriate to the rim velocity? Doesn’t that have problems? tx

  19. Neil, you haven’t engaged at all with my argument on Tegmark. I agree with you that his hypothesis predicts many “non-elegant” universes. Unfortunately the numbers are invisible to us, and hence irrelevant. Tegmark’s hypothesis is untestable metaphysics; it is not refuted by the elegance of the universe, but given that it’s irrefutable in principle I am obviously not arguing that it should be treated as science. Equally, your claim about “Management” is untestable metaphysics. Go ahead and believe whatever you feel like believing, but the bottom line is you have no logical basis with which to persuade anyone else to share those beliefs.

    On the equivalence principle and different accelerations, consider this analogy. Pick a point P on a sphere, and consider all the geodesics — all the great circles — that pass through P. You should find it trivially easy to prove that the rate at which they “accelerate away from” each other, evaluated at P, is zero.

    To make this more precise: take two geodesics through P, and travel a distance s away from P along both of them, reaching points Q and R. Compute the length of the geodesic QR (this isn’t actually unique, but there’s a unique sensible choice when close to P). The second derivative of the length of QR with respect to s, evaluated at s=0, is zero.

    This is not something special about the sphere, it is a property of all geodesics on smooth manifolds, including the world lines of free-falling particles in GR. When two particles in free fall pass each other, if you then ask how far apart they are after a proper time of tau has passed for both of them, the second derivative of that distance wrt tau, evaluated at tau=0, will be zero. That is what the equivalence principle demands, and that’s what basic differential geometry guarantees.

    How, then, can someone using a certain coordinate system measure different accelerations for the coordinates of these particles?

    Go back to the sphere, and adopt the usual latitude and longitude for the coordinate system. Let P be the point with longitude 0, latitude 45 degrees south. Adam, travelling due south from P will have a constant longitude 0, and a latitude that is a linear function of the distance s that he has travelled. Both coordinates, in this case, obviously have second derivatives wrt s of zero.

    Now look at the coordinates of Eve, travelling from P along a great circle that also passes through, say, the point on the equator at 45 degrees east. The second derivative of Eve’s latitude and longitude wrt s will not be zero at P.

    Yet despite having these different “coordinate accelerations”, Adam and Eve measure no mutual acceleration away from each other at P.
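    The sphere construction above is easy to check numerically. A quick sketch (NumPy; the function names and finite-difference step are my choices) places Adam and Eve as described, then estimates the second derivative at P of their great-circle separation and of Eve’s longitude:

```python
import numpy as np

def sph(lat_deg, lon_deg):
    """Unit vector for a latitude/longitude given in degrees."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

P = sph(-45.0, 0.0)  # the point P: longitude 0, latitude 45 degrees south

def tangent_toward(A, B):
    """Unit tangent at A along the great circle from A toward B."""
    t = B - np.dot(A, B) * A
    return t / np.linalg.norm(t)

t_adam = tangent_toward(P, sph(-90.0, 0.0))  # Adam heads due south
t_eve  = tangent_toward(P, sph(0.0, 45.0))   # Eve heads for the equator at 45 E

def point(s, t):
    """Point reached after arc length s along the geodesic through P with tangent t."""
    return np.cos(s) * P + np.sin(s) * t

def separation(s):
    """Great-circle distance between Adam and Eve after each travels arc length s."""
    Q, R = point(s, t_adam), point(s, t_eve)
    return np.arccos(np.clip(np.dot(Q, R), -1.0, 1.0))

def eve_lon(s):
    """Eve's longitude after arc length s."""
    x, y, _ = point(s, t_eve)
    return np.arctan2(y, x)

# One-sided finite-difference second derivatives at s = 0:
h = 1e-3
d2_sep = (separation(2*h) - 2*separation(h) + separation(0.0)) / h**2
d2_lon = (eve_lon(2*h) - 2*eve_lon(h) + eve_lon(0.0)) / h**2
print(d2_sep)   # ~0: no mutual acceleration at P
print(d2_lon)   # clearly nonzero: Eve's coordinate "accelerates"
```

    The separation grows linearly in s near P (its second derivative vanishes), even though Eve’s longitude has a plainly nonzero second derivative there.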

  20. Neil, I suspect what’s causing a lot of your confusion regarding GR is that you’re expecting the second rates of change of a particle’s coordinates to be the components of some kind of acceleration vector. That’s only true of Cartesian coordinates in flat spacetime. In GR, the only meaningful acceleration vector is the covariant derivative of the 4-velocity, and its components differ from the second derivatives of the particle’s coordinates by terms which involve the Christoffel symbols. Free-falling particles all have zero acceleration vectors, while of course their coordinates will generally have non-zero second derivatives. What’s more, you can’t take those coordinate second derivatives for two different particles and subtract them to find a “relative acceleration”, as if you were dealing with vectors.
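    In symbols, the distinction being drawn is the geodesic equation (standard GR notation; nothing here beyond what the comment states):

```latex
a^{\mu} \;=\;
\underbrace{\frac{d^{2}x^{\mu}}{d\tau^{2}}}_{\text{coordinate second derivatives}}
\;+\;
\underbrace{\Gamma^{\mu}{}_{\alpha\beta}\,
\frac{dx^{\alpha}}{d\tau}\,\frac{dx^{\beta}}{d\tau}}_{\text{Christoffel terms}}
\;=\; 0 \quad \text{(free fall)},
```

    so in free fall the coordinate second derivatives equal minus the Christoffel terms and are generally nonzero, while the acceleration vector itself vanishes.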

    If this is all Greek to you, I’m afraid it’s not very practical for me or anyone else to try to explain the whole framework of GR in a series of off-topic blog comments. If you ever get serious about GR, try reading Sean’s online lecture notes.

  21. Greg, yes the modal realist type claims are indeed untestable metaphysics, but I can still critique on the basis of, “If we assumed so-and-so was right, what can we say then…” I wasn’t replying to your specific take, just per your apparent general question: can we find consequences of Tegmark’s view that don’t sit with what we find ourselves in?
    Your description of GR is baffling, and all I can say is: the semi-popular discussions sure don’t make any of those weird issues about acceleration etc. clear. I still would like to know: what happens to the free-falling rotating ring, or even more complex objects with parts moving at different speeds?

  22. I am not entitled to say “Gosh, under Tegmark’s hypothesis what are the odds that *my* consciousness ended up in this minority branch of my subjective future?”; that’s as misguided as saying “What are the odds that *I* get to be this particular one of the six billion humans on Earth that I actually am?”

    Not only six billion humans, but all consciousnesses (great and small) in all multiverses 😉

    Whether or not it’s misguided depends on whether you view it phenomenologically or “objectively”. If you imagine someone else asking themselves the why-am-I-me question, it seems silly – “of course you’re you, who else would you be?” However, if you ask the question of yourself, it becomes much more involved. So which is the right perspective? Unfortunately, reality is not, and has never been, separate from your consciousness. The perspective where you imagine someone else asking the question is only a model in *your* mind, whereas asking it of yourself is much more immediate and real.

    I like the why-I-am-me question because it leaves you between a rock and a hard place. You can make the question go away by accepting solipsism, but that is shocking and profound in its own right. If you reject solipsism, then you have a very difficult (if not impossible) question to answer.

  23. Reply to Greg #68,

    I agree that for each of the possible outcomes there is an observer experiencing it (with probability 1). However, we can apply this reasoning to any stochastic experiment (multiverse or no multiverse), and I think you would not argue against the use of conventional probabilities in those cases.

    E.g., let’s consider a single observer defined as some algorithm/model embedded in some larger mathematical structure which is not uniquely defined when we specify the observer.

    Suppose the observer throws a coin 10,000 times and records how many times the coin lands heads up. The possible values range from zero to 10,000. In a multiverse setting all these possible outcomes are realized. For any n ranging from zero to 10,000 there is an observer with probability 1 who finds that the number of times the coin landed heads up is n.

    The reason why all the outcomes will be realized is that the information stored in the observer’s brain before he throws the coin does not contain enough information to fix the outcomes of the coin throws. So, the same observer will be located in different universes where the initial conditions yield different outcomes.

    In a single universe setting, only one of the possible outcomes will be realized, of course. In that case there will therefore be an observer that observes whatever he is observing with probability 1. All other outcomes are observed with probability zero.

    So, Multiverse or no Multiverse, this notion of probability is not of much use. Clearly we do need a notion of certain states being more likely than others. In this case we know that the number of heads is approximately distributed according to a normal distribution with mean 5000 and standard deviation 50. The observer will observe a result between 4800 and 5200 with more than 99.99% certainty.
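    The 99.99% figure checks out exactly, not just via the normal approximation. A quick sketch (my own illustration): sum the exact binomial probabilities for 4800 to 5200 heads out of 10,000 fair flips, using Python's arbitrary-precision integers.

```python
import math

# Exact binomial probability of between 4800 and 5200 heads in 10,000
# fair flips (mean 5000, standard deviation sqrt(10000 * 0.25) = 50,
# so the range is +/- 4 sigma).
favourable = sum(math.comb(10_000, k) for k in range(4800, 5201))
prob = favourable / 2 ** 10_000  # exact big-int arithmetic, one final division
print(prob)  # > 0.9999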
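```

Both the numerator and denominator are integers with thousands of digits; Python's true division still produces the correctly rounded float.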

    In the Multiverse case we can say that more than 99.99% of the observers will observe an outcome in the range from 4800 to 5200. Since you don’t know in which universe you are before you throw the coins (i.m.o., you really are everywhere), you can be 99.99% sure that you’ll end up as one of the observers who observe an outcome in the range from 4800 to 5200.

    Now, the case of the single universe is slightly more awkward: you have to appeal to “equally likely” but counterfactual initial conditions to justify saying that you’ll observe an outcome between 4800 and 5200 with 99.99% certainty. One could therefore argue that the probability distribution is more natural in the multiverse setting.

    Another way of looking at this is in terms of entropy. An outcome of 5000 heads can be realized in 10000!/(5000!)^2 = approximately 1.6 * 10^(3008) ways, while an outcome of zero can be realized in only one way. All these possible realizations are equally likely.
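    The multiplicity counting is easy to verify directly (my own check, using the same 10,000-flip experiment as above): the number of equally likely sequences giving exactly 5000 heads is the binomial coefficient 10000!/(5000!)^2, versus exactly one sequence giving zero heads.

```python
import math

# Number of coin-flip sequences realizing each macrostate.
ways_5000 = math.comb(10_000, 5_000)  # = 10000! / (5000!)^2
ways_0 = math.comb(10_000, 0)         # only the all-tails sequence

print(len(str(ways_5000)))  # 3009 digits: about 1.6 * 10^3008
print(ways_0)               # 1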
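```

That enormous ratio of multiplicities is exactly why nearly all observers see a result close to 5000.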

    Perhaps one can justify doing statistics with all the observers, even though each observer can find himself in only one state, because the outcomes do not affect the observers in such a way as to change them irreversibly. If you observe 5023 heads, you can imagine forgetting about that the next day.

    If you think about the number of heads the next day, you will do a measurement on your long-term memory and you’ll recall the number 5023. Before you recall that number, you are identical to all the other copies who observed different outcomes and are not constantly aware of the number (and who are identical to you in all other respects that you are aware of). So it is in fact a new measurement.

    At any time when you do not think of the number you are not located in one universe where there was a definite outcome. The ensemble of different outcomes is thus always relevant.
