Turtles Much of the Way Down

Paul Davies has published an Op-Ed in the New York Times, about science and faith. Edge has put together a set of responses — by Jerry Coyne, Nathan Myhrvold, Lawrence Krauss, Scott Atran, Jeremy Bernstein, and me, so that’s some pretty lofty company I’m hob-nobbing with. Astonishingly, bloggers have also weighed in: among my regular reads, we find responses from Dr. Free-Ride, PZ, and The Quantum Pontiff. (Bloggers have much more colorful monikers than respectable folk.) Peter Woit blames string theory.

I post about this only with some reluctance, as I fear the resulting conversation is very likely to lower the average wisdom of the human race. Davies manages to hit a number of hot buttons right up front — claiming that both science and religion rely on faith (I don’t think there is any useful definition of the word “faith” in which that is true), and mentioning in passing something vague about the multiverse. All of which obscures what I think is his real point, which only pokes through clearly at the end — a claim to the effect that the laws of nature themselves require an explanation, and that explanation can’t come from the outside.

Personally I find this claim either vacuous or incorrect. Does it mean that the laws of physics are somehow inevitable? I don’t think that they are, and if they were I don’t think it would count as much of an “explanation,” but your mileage may vary. More importantly, we just don’t have the right to make deep proclamations about the laws of nature ahead of time — it’s our job to figure out what they are, and then deal with it. Maybe they come along with some self-justifying “explanation,” maybe they don’t. Maybe they’re totally random. We will hopefully discover the answer by doing science, but we won’t make progress by setting down demands ahead of time.

So I don’t know what it could possibly mean, and that’s what I argued in my response. Paul very kindly emailed me after reading my piece, and — not to be too ungenerous about it, I hope — suggested that I would have to read his book.

My piece is below the fold. The Edge discussion is interesting, too. But if you feel your IQ being lowered by long paragraphs on the nature of “faith” that don’t ever quite bother to give precise definitions and stick to them, don’t blame me.

***

Why do the laws of physics take the form they do? It sounds like a reasonable question, if you don’t think about it very hard. After all, we ask similar-sounding questions all the time. Why is the sky blue? Why won’t my car start? Why won’t Cindy answer my emails?

And these questions have sensible answers—the sky is blue because short wavelengths are Rayleigh-scattered by the atmosphere, your car won’t start because the battery is dead, and Cindy won’t answer your emails because she told you a dozen times already that it’s over but you just won’t listen. So, at first glance, it seems plausible that there could be a similar answer to the question of why the laws of physics take the form they do.

But there isn’t. At least, there isn’t any as far as we know, and there’s certainly no reason why there must be. The more mundane “why” questions make sense because they refer to objects and processes that are embedded in larger systems of cause and effect. The atmosphere is made of atoms, light is made of photons, and they obey the rules of atomic physics. The battery of the car provides electricity, which the engine needs to start. You and Cindy relate to each other within a structure of social interactions. In every case, our questions are being asked in the context of an explanatory framework in which it’s perfectly clear what form a sensible answer might take.

The universe (in the sense of “the entire natural world,” not only the physical region observable to us) isn’t like that. It’s not embedded in a bigger structure; it’s all there is. We are lulled into asking “why” questions about the universe by sloppily extending the way we think about local phenomena to the whole shebang. What kind of answers could we possibly be expecting?

I can think of a few possibilities. One is logical necessity: the laws of physics take the form they do because no other form is possible. But that can’t be right; it’s easy to think of other possible forms. The universe could be a gas of hard spheres interacting under the rules of Newtonian mechanics, or it could be a cellular automaton, or it could be a single point. Another possibility is external influence: the universe is not all there is, but instead is the product of some higher (supernatural?) power. That is a conceivable answer, but not a very good one, as there is neither evidence for such a power nor any need to invoke it.

The final possibility, which seems to be the right one, is: that’s just how things are. There is a chain of explanations concerning things that happen in the universe, which ultimately reaches to the fundamental laws of nature and stops. This is a simple hypothesis that fits all the data; until it stops being consistent with what we know about the universe, the burden of proof is on any alternative idea for why the laws take the form they do.

But there is a deep-seated human urge to think otherwise. We want to believe that the universe has a purpose, just as we want to believe that our next lottery ticket will hit. Ever since ancient philosophers contemplated the cosmos, humans have sought teleological explanations for the apparently random activities all around them. There is a strong temptation to approach the universe with a demand that it make sense of itself and of our lives, rather than simply accepting it for what it is.

Part of the job of being a good scientist is to overcome that temptation. “The idea that the laws exist reasonlessly is deeply anti-rational” is a deeply anti-rational statement. The laws exist however they exist, and it’s our job to figure that out, not to insist ahead of time that nature’s innermost workings conform to our predilections, or provide us with succor in the face of an unfeeling cosmos.

Paul Davies argues that “the laws should have an explanation from within the universe,” but admits that “the specifics of that explanation are a matter for future research.” This is reminiscent of Wolfgang Pauli’s postcard to George Gamow, featuring an empty rectangle: “This is to show I can paint like Titian. Only technical details are missing.” The reason why it’s hard to find an explanation for the laws of physics within the universe is that the concept makes no sense. If we were to understand the ultimate laws of nature, that particular ambitious intellectual project would be finished, and we could move on to other things. It might be amusing to contemplate how things would be different with another set of laws, but at the end of the day the laws are what they are.

Human beings have a natural tendency to look for meaning and purpose out there in the universe, but we shouldn’t elevate that tendency to a cosmic principle. Meaning and purpose are created by us, not lurking somewhere within the ultimate architecture of reality. And that’s okay. I’m happy to take the universe just as we find it; it’s the only one we have.


110 thoughts on “Turtles Much of the Way Down”

  1. Greg Egan (#51):

    The sort of reasoning you advocate here (and that Hartle and Srednicki wrote so nicely about) is seductive due to the annoying problems it solves, but I think it also severely (and I suspect unnecessarily) curtails our ability to use a given piece of data to distinguish between cosmological models. For example, as I argue here, by this reasoning no set of data that you have in hand can distinguish between our universe and a 500 kg thermal ball of gas that exists forever. (That is, while the usual Boltzmann’s brain problem is banished, another one appears to take its place.)

    Anthony

  2. Correction:

    An outcome of 5000 can be realized in 10000!/(5000!)^2 = approximately 1.59*10^(3008) ways, while an outcome of zero can be realized in only 1 way. All these possible realizations are equally likely.
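
    That count is easy to verify with a short Python sketch (math.comb needs Python 3.8 or later):

        import math

        # Ways to get exactly 5000 heads in 10000 tosses: C(10000, 5000) = 10000!/(5000!)^2
        ways = math.comb(10000, 5000)
        print(math.log10(ways))      # ~3008.2, i.e. roughly 1.59 * 10^3008 ways
        print(math.comb(10000, 0))   # an outcome of zero heads can be realized in exactly 1 way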

  3. Neil Bates (#73) wrote:

    Your description of GR is baffling, and all I can say is: the semi-popular discussions sure don’t make any of those weird issues clear about acceleration etc. I still would like to know: what happens to the free-falling rotating ring, or even more complex objects with parts moving at different speeds?

    I’m afraid you’ve proved several times that any effort I put into calculating something like that is wasted, because you don’t know how to interpret results in GR — after complaining that “nobody told you it was like this”, you then go and ram the answers into yet another Newtonian kludge. If you’re genuinely curious, go and learn what you need to learn.

  4. Pingback: » Davies on blogosphere

  5. Greg Egan,

    This is intuitively appealing, but I think it’s wrong. You stress probability, but Tegmark’s hypothesis leads to all its outcomes with certainty. If Tegmark’s hypothesis predicts our universe with certainty (and I think it does), and also another universe in which life is far more common by a factor of 10^10 (which I expect it also does, since you get to fiddle not only with the Drake equation but also fundamental constants), then why does our failure to live in the second universe falsify, or even lower the likelihood of, Tegmark’s hypothesis?

    Here’s the thing. No matter what the real multiverse is like, if it is a multiverse, the vast majority of observers will exist in what we might call a “typical” universe. Therefore, it is not unreasonable to put good money on us being in a “typical” universe. It may be wrong, but as long as we can show that the probabilities are suitably astronomical before throwing out a hypothesis, this is unlikely to be the case.

    I think the false intuition here arises by comparison with probabilistic theories that deal with a finite number of trials. If theory A tells us that we have a fair coin, while theory B tells us that it’s biased for heads, then if we toss the coin 10 times and see 10 heads, we are entitled to favour theory B. But what’s different here is that only 1 in 2^10 of the possible outcomes is generated. Tegmark’s hypothesis generates all outcomes, not a random sample. The conditional probabilities are all equal to 1, so Bayes’s theorem gives us nothing with which to refine our prior probabilities. (Well, there might be some conditional probabilities of 0, if you can think of something that literally can’t happen within any mathematical structure.)

    Right. So it’s exactly the same as this situation:

    Imagine that Sean has a red ball and a blue ball. For whatever reason, he wants to give us each one of the two, without us knowing beforehand which one. Given our ignorance of Sean’s decision-making process, we should each naturally bet on a 50/50 chance of getting either the red or the blue ball.

    Now imagine that Sean has one million blue balls, but only one red ball. He gives out all of his balls to various people, and you and I are each one of those people. What probability would you place on obtaining the red ball? Remember that every single ball is handed out. But we are still forced to think of it in a probabilistic manner because we each see only *one* of them. And then, if you obtain the red ball, what are you going to think? With a 1 million to 1 chance of obtaining the ball purely randomly with uniform weighting, would it not make more sense if Sean’s method of choosing who to give the ball to somehow favored you?

    Think about it this way: if Sean gives the balls out randomly, then you have a 1/10^6 chance of obtaining the red ball. But if Sean decides that he wants to give the ball out to only frequent commenters on his blog, then your chances were much better to begin with. If you got the red one, then, isn’t this scenario more likely?

    Of course, as I said, we might be misled by this, so it requires rather careful analysis of the probabilities to ensure that being misled is as unlikely as possible. In this case, for instance, there’s around a 1/10,000 chance or so (assuming 100 frequent posters) that one of the frequent posters would have been chosen randomly, so we don’t really have much reason to favor the idea that that was the method Sean used. Furthermore, one would want to have independent confirmation of the theory. One does expect there to be a few exceptional things about anybody’s life, and there’s no reason to necessarily expect one to be related to another.
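
    Put as a rough Bayes update, using the numbers from the example (a minimal Python sketch; the 50/50 prior between the two hypotheses is an assumption made purely for illustration):

        # Hypothesis R: Sean hands out the balls at random.
        # Hypothesis F: Sean gives the red ball to one of ~100 frequent commenters (you among them).
        n_people, n_frequent = 10**6, 100
        p_red_given_R = 1 / n_people       # chance that you, specifically, get the red ball
        p_red_given_F = 1 / n_frequent     # chance you get it if frequent commenters are favored
        prior_R = prior_F = 0.5            # assumed priors, purely for illustration
        post_F = prior_F * p_red_given_F / (prior_R * p_red_given_R + prior_F * p_red_given_F)
        print(post_F)                      # ~0.9999: getting the red ball strongly favors hypothesis F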

    In sum, let’s take the following scenario, comparing two competing theories using the weak anthropic principle:

    Theory A:
    1. 10^10 times as many observers in a type of universe that is very different from our own.

    Theory B:
    1. 10 times as many observers in a type of universe that is very different from our own.

    If the theories are otherwise equivalent, we would have strong reason to suspect that theory B is correct. But it would be foolish to stop there: we should seek other, independent methods of distinguishing between the two theories.

  6. Anthony, Count Iblis, Jason, thanks for the comments. You’ve made me think twice about Hartle & Srednicki, though I’m not yet prepared to renounce their approach. Anthony wrote (here):

    So in my mind, the question of how we can reason in ‘multiverse’ cosmology in a way that (a) actually allows us to effectively discriminate between models, but (b) does not lead to any weird paradoxes, is still very much open.

    which makes me less convinced than before that there’s necessarily one right answer here. I’d recommend that people read Anthony’s whole post, along with Hartle & Srednicki, before concluding that this matter is all neatly tied up, either way.

    Count Iblis, when it comes to coin tosses in a single universe, I have no problem with adopting the strategy that I should assume a biased coin if I see an improbable run of heads and tails. Under either Copenhagen QM or classical mechanics with initial conditions free of weird conspiracies, if Alice and Bob are both shown multiple runs of 10,000 coin tosses that are sometimes from fair coins and sometimes from biased ones, then if Alice adopts a strategy of guessing that the coin is biased when the results are sufficiently improbable for a fair coin, she will certainly guess correctly more often than Bob, who ignores the data. And we all want to adopt a strategy that helps us guess the truth as often as possible.

    But even when you simply switch to a multiverse version of the coin toss scenario, I don’t think everything stays exactly the same, and it’s certainly not quite as obvious what our goal should be. If I stick to Alice’s strategy, I believe that will maximise the number of versions of me across the multiverse who guess the fairness of the coin correctly — with the versions weighted equally by microstate, i.e. each version who sees a different head/tail run gets counted separately. But what if these versions of me aren’t shown the run sequence, just told the total number of heads? Does that change anything? Should someone who was told there were 5000 heads really count 10^3008 times more than someone who was told there were none? I’m not saying I can’t see some logic in doing so, but it begins to seem a lot more subjective to me at this point. It’s obvious that when I’m one person I want to guess correctly as often as possible, but when I am (or there are) many people, it’s not quite so compelling a case to say “I want as many people in the multiverse as possible to guess correctly”, especially when there are different ways open to us to count the numbers of people.

    And when we switch from guessing whether a coin was fair or not to guessing which of two multiverse theories is true, maximising the number of people who guess the correct theory can lead to absurd results.

    Suppose theory A predicts that there is a single universe much like ours, while theory B predicts that there are 10^10 universes much like ours. In all other respects, they make identical predictions. In order to compare strategies for guessing which is true, we invoke a higher-level multiverse, in which the level 1 multiverse obeys theory A 50% of the time, and theory B 50% of the time. For concreteness, suppose the level-2 multiverse contains 10 level-1 multiverses obeying theory A, and 10 obeying theory B.

    The strategy “Always guess theory B” will lead to 10 * 10^10 universe-populations of people who guessed the true theory of their multiverse correctly. Random guesses would yield only half that number (plus a relatively tiny additional amount for the correct guesses of theory A). But despite the success on those terms of the “Always guess theory B” strategy, I do not believe there is any good reason to prefer theory B.
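
    In numbers, the counting goes like this (a throwaway Python sketch of the toy level-2 multiverse above):

        # 10 level-1 multiverses obey theory A (1 universe each), 10 obey theory B (10^10 universes each).
        n_A, n_B = 10, 10
        size_A, size_B = 1, 10**10

        always_guess_B = n_B * size_B                               # universe-populations guessing correctly
        random_guessing = 0.5 * n_A * size_A + 0.5 * n_B * size_B   # expected correct guessers
        print(always_guess_B, random_guessing)                      # 10^11 versus about half that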

    Jason wrote:

    But it would be foolish to stop there: we should seek other, independent methods of distinguishing between the two theories.

    That’s good advice, and with luck it will eventually settle the Boltzmann brain issue (though some cosmologists go so far as to say that if contemporary evidence ends up pointing to a future full of Boltzmann brains, there must then be some unknown mechanism that will come along and destroy the universe before that happens.) A lot of the silliness of anthropic reasoning stems from the complete absence of all other data; it’s the first iteration of a process of refinement that needs to run for tens or hundreds of cycles to lead anywhere meaningful.

    But I’m beginning to suspect that even the issue of what evidence would count as falsifying Tegmark will remain disputed for centuries.

  7. But I’m beginning to suspect that even the issue of what evidence would count as falsifying Tegmark will remain disputed for centuries.

    Sure enough, but I still think you folks don’t really “get” the full scope of the modal realist concept: “everything exists” – I mean, every stinking configuration of whatever, that is an element of the Platonic mindscape. But the comments here seem to show the provincialism of still imagining you’re reliably in a universe with physical consistency and are just fiddling with details (like those playing with “landscapes” which depend on particular physical theories.) Note that a real “physical theory” involves a certain expectation that the substrate of the/a universe has some dependable character, instead of the sort of “describable thing” I mentioned that just acts any old way and need not be consistent from one time to another. (BTW “time” per se versus just 4-d structures of world lines, isn’t really logically definable: the latter is just like a tinkertoy sitting on a table with no “past-present-future”…) Well, that sort of assumption is just what Davies meant IIUC, and so he’s pretty right on target and shouldn’t be calumniated so much.

    Greg, if you can suffer just one more point about GR and the EP, and this is more about permissible torturing of semantics than physics per se: Sure, there’s an “acceleration” defined as you describe, which is zero for a free-falling particle (what an accelerometer measures, from relative inertial forces between adjacent masses like test body and spring-container.) But if you had completely avoided coordinate accelerations of the type I meant (equivalent to progress relative to floor levels, such as the significant banality of whether they hit a floor at the same time), then your statement that “the acceleration” of the transversely-moving body was larger by the factor (1 - v^2/c^2) would ironically have no meaning! But as I *read* the descriptions of the EP as a “semanticist” who expects clear exposition, not hidden behind “we know that what it really means is different from what it sounds, and that’s our secret to find out from grinding away at upper crust references”: it says that “gravitational fields can be transformed away in tiny regions by accelerations.” Well, that means that if I have a falling little chamber, the particles passing right by each other at the top will not reach the floor at the same time if they have relative motion. No matter how you defined “acceleration” for upper-class consumption, that *is* a way to tell the difference because it is a definable result. And I still want to know how a spinning ring or disk falls, please. Thank you.

  8. As for the Boltzmann Brain issue, part of the difficulty is how to measure probabilities in the first place. For example, let’s say that the laws of the universe are such that as a given region of the universe ages, Boltzmann Brains will eventually vastly outnumber real observers. But suppose we have eternal inflation, and the “proper” measure of probability is to take the number of observers (real or BB) in an equal-time slicing of the universe. In this situation, real observers will vastly outnumber BB observers at any given time, because young regions always vastly outnumber old ones in equal-time slicings of the universe in the context of eternal inflation.

    Another possible solution would be to look at the rate of generation of new patches of inflation. Provided the production rate is high enough, even if inflation is not eternal, young regions will still vastly outnumber old ones because new ones are always being generated.

  9. Greg: BTW I don’t imply that you are responsible for any confusing or crusty use of terms and framing in GR, or that it’s bad faith in that tradition either – it’s just rough on the non-insiders.

  10. Update on the Hartle & Srednicki paper. One of the central claims of the paper is the following:

    We have data that we exist in the universe, but we have no evidence that we have been selected by some random process. We should not calculate as though we were.

    Resorting to probabilistic descriptions is not a statement that we are selected by a random process. Rather, it is a method of encapsulating our ignorance.

    To argue by analogy, consider quantum mechanics. Within quantum mechanics, there is no need to resort to a probabilistic description. The theory is, without the assumption of wave function collapse, a perfectly deterministic theory. But, as Everett showed in 1957, one can derive the appearance of wave function collapse from this perfectly deterministic theory. Resorting to probabilities is our method of encapsulating our ignorance as to what portion of the wave function of the universe represents “us”.

    By a similar token, we do not know which universe we should have found ourselves in, and, as a result, the use of probabilistic methods in the context of the weak anthropic principle encapsulates this ignorance.

  11. Jason: Sure, we don’t *know* what sort of universe we “should have found ourselves in,” but, given some background assumptions, we can guestimate some things – and why should we expect to do any better, or why should critics consider that a fatal flaw? I consider theoretical perfectionism to be a version of the straw-man fallacy. All of this is just guestimation – I say, accept that and just play with it, instead of either pretending it’s a slam dunk or playing anal-retentive logical priss.

    As for anyone “showing” that collapses can be incorporated into the deterministic wave function: I humbly submit (on the most general terms of proper semantics and logical hygiene) that he could not have done so. We still don’t know how the wave converts into or is also manifested as localization (hey, it’s just a wave, there is no inherent *mathematical* connection to localizations.) Just saying the localizations are spread over every possible place and we just end up in/as one of them is a copout, based on taking the observed effect for granted to begin with and then pretending one is explaining it from above (it’s a form of circular reasoning, even if rather subtle.) The same goes for the BS operations of decoherence and “many worlds” as “explanations” of “apparent” collapse. And BTW, how do those concepts deal with the wave redistribution forced by Renninger negative result measurements, and even worse, how does the wave respond to reports by unreliable detectors? I’m still waiting for a good answer to the last question.

  12. Neil wrote:

    [The Equivalence Principle] says that “gravitational fields can be transformed away in tiny regions by accelerations.” Well, that means that if I have a falling little chamber, the particles passing right by each other at the top will not reach the floor at the same time if they have relative motion. No matter how you defined “acceleration” for upper-class consumption, that *is* a way to tell the difference because it is a definable result.

    Tiny regions of space-time, not tiny regions of space.

    At a given event, E, in space-time, you can adopt a coordinate system based on all the geodesics that pass through that event. These are known as Riemann normal coordinates.

    All timelike geodesics through E, in this coordinate system, are linear functions of proper time:

    x^i = c^i tau

    just as they’d be in flat space-time. The connection coefficients at E in this coordinate system vanish, along with the derivatives of components of the metric at E. In other words, by choosing a coordinate system at E based on how particles move in free-fall, you demonstrate that an infinitesimal region of space-time resembles flat space-time, in essentially the same way as a small part of the Earth’s surface resembles the flat geometry of a plane.

    On the Earth, if you pick a lamp post L and draw all the great-circle geodesics through it, you can use that to map points in a small piece of Euclidean space to a small region around L. [In detail: Pick two orthogonal directions at L that you want to be your x and y directions. For a point P in Euclidean space with Cartesian coordinates (x,y), find the unit vector u at L with coordinates (x,y)/|(x,y)|, then follow the geodesic at L whose tangent is u for a distance |(x,y)|. That takes you to f(P), where f is our map from a small neighbourhood of the origin in Euclidean space to a neighbourhood of L. Essentially the same construction works in space-time, but you have to do things slightly differently to take account of the existence of spacelike, null, and timelike directions.]

    If you take two geodesics at L, there is no reason to expect the second derivatives of their latitude or longitude (wrt distance along the geodesics) to be equal; it’s only in the geodesic-based coordinate system at L that everything is linear. Similarly, if you take two test particles that pass through the event E, there is no reason to expect the second derivatives of their coordinates (wrt proper time) in some arbitrary coordinate system to be equal. If you follow geodesics far from L, the absence of relative acceleration between them at L is not a promise of any special relationship between their latitude and longitude after you’ve gone some distance s along both. Similarly, after two test particles have passed at E, the absence of relative acceleration between them at E is not a promise that they will strike some distant third object (e.g. the planar mass we’ve been considering) “at the same time” in some particular coordinate system.

    Now suppose we have an elevator in free fall above a planar mass, and two test particles A and B. At a certain time, t=0, adopt coordinates based on the centre of the elevator and the orientation of its walls. For a short interval of time, particle A will just sit at x=y=z=0. Particle B, with transverse motion, will (for the same short interval) have elevator coordinates well approximated by x=vt, y=z=0. It will not hit the elevator floor at any small value of t (and of course neither will particle A). Rather, it will fly in a straight line until it hits the wall of the elevator, just as it would in flat space-time.

    The equivalence principle is a local statement about events in space-time close to E in both space and time. If someone in a bad popular science book has written something incompatible with this, take it up with the author, but please stop hallucinating conspiracies by relativists to lock you out of their club. Relativists have bent over backwards to make the subject accessible even to people too tight to buy a single textbook.

    And I still want to know how a spinning ring or disk falls, please.

    Then learn how to calculate it yourself. If you study enough GR to be able to do that, there’s a reasonable chance you’ll also end up understanding what the results of such a calculation would mean.

  13. OK, Greg, I will look into this and at least avoid griping about GR as such (here to you anyway) until I know enough to do that much. I do feel compelled to point out something about the falling elevator above the planar mass, which I see as a general consistency issue:
    Given “g”, the elevator reference floor will of course have the down-falling mass just resting right at that level. But I think you misdescribed the behavior of the particle with transverse motion. It will, relative to the floor, have “an acceleration” in that common sense of d^2y/dt^2 (well, the sense by which you originally defined the difference!) of: -(v^2/c^2)g, and thus describe a classic parabolic curve. That is not appropriately characterized as “flying straight” like it would in flat space-time, even momentarily, any more than it would be appropriate for a bullet just fired straight out in a gravity already equal to that same value, but without such a velocity-dependent effect.

    Yes, you talk about tiny intervals and space and time together etc, but “accelerations” are respectable and measurable instantaneous conditions of functions. So we could *know* that we weren’t just “floating in space” without using tidal differences. Well, maybe that doesn’t violate the EP anyway, but I will continue to consider it thus to be a rather shaky principle until I am satisfied that the whole shebang really justifies its value and framing.

  14. But I think you misdescribed the behavior of the particle with transverse motion.

    You’re wrong. The equivalence principle says x=vt, y=z=0. These formulas are correct to second order in t: the instantaneous second derivatives of the elevator coordinates for the test particle are all zero, at t=0.

    You’re making false assumptions about general coordinate systems when you claim this is incompatible with different second derivatives for the two test particles’ coordinates, in the coordinate system fixed to the planar mass.

    Why don’t you try verifying the claims I made about geodesics on a sphere? You only need some simple vector geometry and calculus to do that.

    (1) Consider two geodesics passing through the point P. Travel a distance s along each geodesic, arriving at the points Q and R respectively. Compute the great-circle distance from Q to R. Prove that its second derivative as a function of s, evaluated at s=0, is zero.

    Hint: Without loss of generality, make P the north pole, and the first geodesic the prime meridian. The great-circle distance between two points on a unit sphere is the arccos of the dot product of the vectors from the centre of the sphere to the points. So I am asking for the second derivative wrt s of the function:

    arccos[ (sin s, 0, cos s).(sin s cos r, sin s sin r, cos s) ]

    where r is the constant longitude of the second geodesic (in radians), and the distance s we’ve travelled from the north pole gives us the co-latitude of both Q and R, because we’re assuming a unit sphere.

    (2) Compute the latitude and longitude of Q and R, as functions of s [here it would be a loss of generality to put P at the north pole, so that makes the calculation a bit harder]. Compute the second derivatives of these latitudes and longitudes, evaluated at s=0. Note that in general these will not be the same for Q and R.

    Once you believe both these results (and if you don’t currently believe them, do the calculations), then you ought to understand why there is no contradiction between the equivalence principle and the different coordinate accelerations of the test particles.
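
    If it helps, claim (1) can also be checked numerically in a few lines (a Python sketch using the dot-product formula from the hint, with r = 1 radian and a crude one-sided finite difference at s = 0):

        import math

        r = 1.0                     # longitude of the second geodesic, in radians

        def d(s):
            # great-circle separation after travelling a distance s along both geodesics:
            # (sin s, 0, cos s) . (sin s cos r, sin s sin r, cos s) = sin^2(s) cos(r) + cos^2(s)
            dot = math.sin(s)**2 * math.cos(r) + math.cos(s)**2
            return math.acos(min(1.0, dot))

        h = 1e-3
        print((d(2*h) - 2*d(h) + d(0.0)) / h**2)      # ~ -7e-4: zero second derivative, up to O(h) truncation
        print((d(h) - d(0.0)) / h, 2*math.sin(r/2))   # the first derivative, 2 sin(r/2), is not zero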

  15. As for anyone “showing” that collapses can be incorporated into the deterministic wave function: I humbly submit (on the most general terms of proper semantics and logical hygiene) that he could not have done so.

    He did. It’s called quantum decoherence:
    http://en.wikipedia.org/wiki/Quantum_decoherence

    That you call this “BS” in no way impacts the result. Take quantum mechanics, don’t include any wave function collapse axiom, and you get the appearance of wave function collapse as a result of interactions.

  16. OK Jason, I will replicate my comment from “Things Happen, Not Always for a Reason” for your convenience:

    40.

    Jason, go and read and understand my critiques of the decoherence scam elsewhere on these threads (compare to what I just said about the equally wooly “multiple worlds” racket.) Specific localizations are what we actually observe, unless you are BSing us with that “illusion” and “appearing” conceit which violates all classic standards of empirical frankness. A wave which doesn’t collapse is just a wave, period, forever, not one or even a bunch of localizations (separated from each other by literally God only knows what – do you?) If they decohere, they would just forever stay “waves” which aren’t in the same relationship as before, unless you assume the consequences to begin with, that you were trying to prove. None of the mathematics of waves per se does or even *can* express or contain the localizations (since mathematical structures can’t produce true randomness, they are in effect “deterministic”! – so-called “random variables” are fiat entities of discourse about probabilities in general, not a genuine, formed machinery that can give us actual sequences.)

    Collapses/localizations are a bizarre and logically absurd feature of the like-it-or-not *universe* we actually live in, for honest folk to acknowledge first and foremost even if *maybe* explainable in a sincere sense someday. Decoherence is a circular argument using the surreptitious putting in by hand of the very events it is presuming to explain.

    BTW, what do you think of Greg Egan’s argument, first: that the acceleration of transversely moving bodies in the field of a very extended/”infinite” planar mass is higher than that of bodies falling straight down, and second: that those kinds of accelerations really don’t matter for purposes of defining comparative acceleration in the equivalence principle, despite our clear ability to use that to show the distinction in terms of progression relative to a “floor”?

  17. Neil

    I’ll make one last attempt to get you to actually think about curved spacetime and non-Cartesian coordinates.

    Consider all the geodesics that pass through the north pole. That’s easy: they are the lines of longitude. Call longitude phi and co-latitude (the angle measured from the north pole) theta.

    We can set up some nice coordinates in which these geodesics are locally linear. Define x = theta cos(phi), y = theta sin(phi). Geodesics are lines of fixed phi, phi=phi_0, so in our new (x,y) coordinates they will take the form y = tan(phi_0) x. In other words, they look just like straight lines. Taking the second derivative of y wrt x, we get zero. All geodesics look straight in this way, but note that we’re not just using the vacuous observation that “every smooth curve is a straight line to first order”. A non-geodesic curve passing through the north pole would not have the equation y = k x to second order; there would be a quadratic term as well.

    You can perform an analogous construction in space-time. In place of geodesics through the north pole of a sphere, use geodesics through the event E in space-time where the world lines of two free-falling test particles intersect. The spacetime coordinates you get this way will let you describe every geodesic through E with a linear equation, with no quadratic terms. That’s what anyone inside a free-falling elevator would measure: all the test particles would be seen to travel along straight lines with uniform velocities.

    Now, go back to the sphere, and look at all the geodesics that pass through another point: call it P, and place it at 0 degrees longitude, 45 degrees latitude (and 45 degrees co-latitude). Of course the north pole wasn’t special, and we could construct the same kind of nice x and y coordinates here that made the geodesics linear, with a bit more work, but we won’t do that. Instead, we want to know how the geodesics look using phi and theta as coordinates. On a map of a small region, latitude and longitude look almost Cartesian, so you might think you’d get linear equations for the geodesics in those coordinates too.

    But you don’t. Suppose you take a geodesic that passes through P, and hits the equator at longitude phi_E. Close to P (phi=0, theta=45 deg), to second order in phi such a geodesic is described by:

    theta(phi) = 45 deg + (1/2) cot (phi_E) phi + (1/4) cosec(phi_E)^2 phi^2

    So not only is there a non-zero quadratic term, it’s different for different geodesics.

    Of course the geodesics through P are no different from those through the north pole. They are still locally linear when described in sufficiently nice coordinates — the kind of coordinates a town planner might use if she doesn’t care about latitude and longitude but just wants to make distances and straight lines as easy to describe mathematically as possible.

    But anyone who has a reason to insist on describing things in terms of the coordinates phi and theta will find that these geodesics are quadratic functions theta(phi), with different quadratic coefficients for different geodesics.

    Equally, in the case of the falling test particles, anyone who is fixed relative to the planar mass and using coordinates in which the mass is stationary at z=0 will find z(t) for the falling test particles to be quadratic, and the quadratic coefficient in each case will be different.

    This is neither “torturing semantics” nor logically contradictory. It is simply a description of different measurements made with different coordinate systems. Is the second derivative of co-latitude as a function of longitude different for different geodesics that pass through P? Absolutely. Do people travelling along these geodesics close to P observe any mutual “acceleration” as a consequence of this fact? Absolutely not; as far as they’re concerned, everything measurable is linear to second order.

    If you understand (and preferably verify by your own calculations) what’s happening on the sphere, what’s happening in curved spacetime will no longer seem so strange.
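
    The quoted expansion is also easy to check numerically. Here is a minimal Python sketch (it uses the fact that this geodesic lies in the plane through the sphere’s centre containing P and the equator-crossing point, which gives cot(theta) = cos(phi) - cot(phi_E) sin(phi)):

        import math

        def theta(phi, phi_E):
            # co-latitude along the geodesic through P (co-latitude 45 deg, longitude 0)
            # that meets the equator at longitude phi_E
            return math.pi/2 - math.atan(math.cos(phi) - math.sin(phi) / math.tan(phi_E))

        h = 1e-4
        for phi_E in (0.5, 1.0, 2.0):                 # three different geodesics through P
            linear = (theta(h, phi_E) - theta(-h, phi_E)) / (2*h)
            quadratic = (theta(h, phi_E) - 2*theta(0.0, phi_E) + theta(-h, phi_E)) / (2*h*h)
            print(round(linear, 6), round(0.5 / math.tan(phi_E), 6),        # agrees with (1/2) cot(phi_E)
                  round(quadratic, 6), round(0.25 / math.sin(phi_E)**2, 6)) # agrees with (1/4) cosec(phi_E)^2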

  18. Neil B.,

    How does the appearance of collapse “violate all standards of empirical frankness”? There really is no question that quantum mechanics, without any axiom of collapse, has the appearance of collapse upon interaction of a wave function with the environment. Furthermore, we have seen this appearance of collapse turn on slowly through experimental tests.

    And yes, this is a fully deterministic view of quantum mechanics. The randomness stems directly from the appearance of collapse, and is purely an artifact of us viewing the world from within the system described. The “frog” view, if you will, has this appearance of randomness, while the “bird” view has nothing of the sort.

  19. Jason, talking about “the appearance” of collapse is sophistry. There *is* collapse, by any honest accounting. That is just what happens in our public, shared experience, which is the basis for all genuine science. It is not an “axiom” because it actually happens, and it is not “deterministic” by definition because we can’t predict where the hits will be or when (or can *you*?) If theorists want to matherbate (sic) with ideas inside their self-absorbed little heads, I don’t care, but I don’t want them telling me that the empirical structure is just “an illusion” (whatever that means) because they in their infinite wisdom know what the universe “ought” to be like. That is as repulsive as the Aristotelian Scholastics who discarded experimental evidence that didn’t fit into the teachings of Aristotle. That is one of the things science was supposed to rise above. How ironic.

  20. “I’ll make one last attempt to get you to actually think about curved spacetime and non-Cartesian coordinates.”

    Imagine this sentence said by a wizened old schoolmaster with spectacles, slowly drumming his smacking ruler against his palm, while a sheepish Neil stands there in an oversized British schoolboy uniform, looking at his own nervously shuffling feet and biting his lip.

    Ok, don’t, because it’s silly. But I couldn’t help imagining it.

  21. Bad, I gotta admit that’s cute, but really: what do you know or think about Greg’s specific claim of the higher acceleration of the transverse-moving body in the field of the extended planar mass (said not to be like a simple uniform field), not just the GR concepts in general (no pun intended.) I hadn’t heard of that, and I’m just trying to get a second opinion. Greg can rest and not feel like going another round (just yet, heh.) Did you at least try to appreciate my objections to that idea, and to the consequences? Sometimes the students make good Socratic pokes.

  22. Jason, talking about “the appearance” of collapse is sophistry. There *is collapse, by any honest accounting.

    The thing you have to recognize is that all we have are the results of experiments. That is, all that we can be sure of is the appearance of collapse. The thing to do, then, is that when two different theories predict the same experimental outcome, we should apply Occam’s Razor: consider the theory with fewer hypothetical entities as the more likely.

    And so, if quantum mechanics without an axiom of collapse can explain all experiments that show collapse, then it is highly unlikely for the axiom of collapse to describe reality. But fortunately, it doesn’t end there. In fact, quantum decoherence is not always sudden: often the number of states that the interaction decomposes the system into is small enough, and the change is small enough, that the decoherence is merely partial instead of total. So we can see the “collapse” turn on slowly by carefully dialing the interaction that causes the decoherence. There is no way that this could happen with an axiom of collapse, as you either measure something or you don’t. There is no in between. And this has been tested.

  23. Human beings have a natural tendency to look for meaning and purpose out there in the universe.

    Actually, meaning and purpose questions can be posed only in a religious framework (religion here indicates a system with Christianity as an exemplar); and cultures that don’t have religion (e.g., Buddhism is not a religion) don’t pose “meaning of life” questions.

    Therefore human beings don’t have a natural tendency to look for meaning and purpose in the universe, because those questions did not arise universally, but only in religious cultures, simultaneous with or after the rise of the religion.

    (The idea that religion is a human cultural universal is a nice piece of theology masquerading as knowledge; the argument is too big to fit in the margin, but it is now available on the web in the ebook here:

    http://colonial.consciousness.googlepages.com/theheatheninhisblindness
    )

  24. Arun,

    I might agree that people may not have any tendency to pose general “meaning of life” questions, but it seems perfectly clear that each of us is very much interested in the meaning and purpose of our own life. Of course, this meaning is whatever we make it to be: there is no meaning imposed externally. There is no objective meaning. Our purpose is what we choose it to be. And this is, I think, a far more uplifting sense of purpose than one imposed from the outside by some inexplicable deity.

  25. Jason sayeth:

    The thing you have to recognize is that all we have are the results of experiments. That is, all that we can be sure of is the appearance of collapse.

    Yes, that’s what we have, but no sensible thinker considers the objective results to be merely “an appearance” in any sane sense. Collapse is not “an axiom,” it is what happens.

    The thing to do, then, is that when two different theories predict the same experimental outcome, we should apply Occam’s Razor: consider the theory with fewer hypothetical entities as the more likely.

    If anything deserves to be called “hypothetical” it is the wave, not the collapse which is the “given.” We don’t even know what it means to say that the wave functions “exist” per se, but the collapses are little spots right there on a screen etc. How could “shut up and calculate” folks be brushed off so glibly, regardless of whether you agree with them?

    QM without an “axiom” of collapse is, as I said, just waves staying waves forever and in one universe – if MW and decoherence say otherwise, then they are playing tricks with the logical and semantic framing of the issues. The experiments you mention are worth reflecting on, but I think they just show that the tendency to collapse (which is still an actual event each time) is a variable based on interactive parameters, which is not any big deal.

    PS: My regards to you and others for gracefully bearing the brunt of my orneriness and florid language at times. Oh – I want your opinion on the differential acceleration wrangle as well!

    As for your second post, you’d probably like the sentiments in “The Fall of Freddie the Leaf” by Leo Buscaglia, which I favorably review in the Huckabee thread.

