Quantum Mechanics and Decision Theory

Several different things (all pleasant and work-related, no disasters) have been keeping me from being a good blogger as of late. Last week, for example, we hosted a visit by Andy Albrecht from UC Davis. Andy is one of the pioneers of inflation, and these days has been thinking about the foundations of cosmology, which brings you smack up against other foundational issues in fields like statistical mechanics and quantum mechanics. We spent a lot of time talking about the nature of probability in QM, sparked in part by a somewhat-recent paper by our erstwhile guest blogger Don Page.

But that’s not what I want to talk about right now. Rather, our conversations nudged me into investigating some work that I have long known about but never really looked into: David Deutsch’s argument that probability in quantum mechanics doesn’t arise as part of a separate ad hoc assumption, but can be justified using decision theory. (Which led me to this weekend’s provocative quote.) Deutsch’s work (and subsequent refinements by another former guest blogger, David Wallace) is known to everyone who thinks about the foundations of quantum mechanics, but for some reason I had never sat down and read his paper. Now I have, and I think the basic idea is simple enough to put in a blog post — at least, a blog post aimed at people who are already familiar with the basics of quantum mechanics. (I don’t have the energy in me for a true popularization at the moment.) I’m going to try to get to the essence of the argument rather than being completely careful, so please see the original paper for the details.

The origin of probability in QM is obviously a crucial issue, but becomes even more pressing for those of us who are swayed by the Everett or Many-Worlds Interpretation. The MWI holds that we have a Hilbert space, and a wave function, and a rule (Schrödinger’s equation) for how the wave function evolves with time, and that’s it. No extra assumptions about “measurements” are allowed. Your measuring device is a quantum object that is described by the wave function, as are you, and all you ever do is obey the Schrödinger equation. If MWI is to have some chance of being right, we must be able to derive the Born Rule — the statement that the probability of obtaining a certain result from a quantum measurement is the squared modulus of the amplitude — from the underlying dynamics, not just postulate it.

Deutsch doesn’t actually spend time talking about decoherence or specific interpretations of QM. He takes for granted that when we have some observable X with some eigenstates |xi>, and we have a system described by a state

|\psi\rangle = a |x_1\rangle + b |x_2\rangle ,

then a measurement of X is going to return either x1 or x2. But we don’t know which, and at this stage of the game we certainly don’t know that the probability of x1 is |a|^2 or the probability of x2 is |b|^2; that’s what we’d like to prove.

In fact let’s just focus on a simple special case, where

a = b = \frac{1}{\sqrt{2}} .

If we can prove that in this case, the probability of either outcome is 50%, we’ve done the hard part of the work — showing how probabilistic conclusions can arise at all from non-probabilistic assumptions. Then there’s a bit of mathematical lifting one must do to generalize to other possible amplitudes, but that part is conceptually straightforward. Deutsch refers to this crucial step as deriving “tends to” from “does,” in a mischievous parallel with attempts to derive ought from is. (Except I think in this case one has a chance of succeeding.)

The technique used will be decision theory, which is a way of formalizing how we make rational choices. In decision theory we think of everything we do as a “game,” and playing a game results in a “value” or “payoff” or “utility” — what we expect to gain by playing the game. If we have the choice between two different (mutually exclusive) actions, we always choose the one with higher value; if the values are equal, we are indifferent. We are also indifferent if we are given the choice between playing two games with values V1 and V2 or a single game with value V3 = V1 + V2; that is, games can be broken into sub-games, and the values just add. Note that these properties make “value” something more subtle than “money.” To a non-wealthy person, the value of two million dollars is not equal to twice the value of one million dollars. The first million is more valuable, because the second million has a smaller marginal value than the first — the lifestyle change that it brings about is much less. But in the world of abstract “value points” this is taken into consideration, and our value is strictly linear; the value of an individual dollar will therefore depend on how many dollars we already have.
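To make the additivity of value concrete, here is a toy sketch in Python; the logarithmic utility is just a hypothetical choice for illustration, not anything taken from Deutsch’s paper.

```python
# Money has diminishing marginal utility, while abstract "value points"
# are additive by assumption.
import math

def utility_of_money(dollars):
    # a hypothetical concave utility: each extra dollar matters less
    return math.log(dollars)

# The second million is worth less than the first:
print(utility_of_money(2_000_000) < 2 * utility_of_money(1_000_000))  # True

# "Value points" simply add: two games worth V1 and V2 are together
# worth the same as one game worth V1 + V2.
V1, V2 = 3.0, 4.0
V3 = V1 + V2
```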

There are various axioms assumed by decision theory, but for the purposes of this blog post I’ll treat them as largely intuitive. Let’s imagine that the game we’re playing takes the form of a quantum measurement, and we have a quantum operator X whose eigenvalues are the values we obtain by measuring it. That is, the value of an eigenstate |x> of X is given by

V[|x\rangle] = x .

The tricky thing we would like to prove amounts to the statement that the value of a superposition is given by the Born Rule probabilities. That is, for our one simple case of interest, we want to show that

V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] = \frac{1}{2}(x_1 + x_2) . \qquad\qquad(1)

After that it would just be a matter of grinding. If we can prove this result, maximizing our value in the game of quantum mechanics is precisely the same as maximizing our expected value in a probabilistic world governed by the Born Rule.

To get there we need two simple propositions that can be justified within the framework of decision theory. The first is:

Given a game with a certain set of possible payoffs, the value of playing a game with precisely minus that set of payoffs is minus the value of the original game.

Note that payoffs need not be positive! This principle captures what it’s like to play a two-person zero-sum game: whatever one person wins, the other loses. In that case, the values of the game to the two participants are equal in magnitude and opposite in sign. In our quantum-mechanics language, we have:

V\left[\frac{1}{\sqrt{2}}(|-x_1\rangle + |-x_2\rangle)\right] = - V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] . \qquad\qquad (2)

Keep that in mind. Here’s the other principle we need:

If we take a game and increase every possible payoff by a fixed amount k, the resulting game is equivalent to playing the original game and then receiving value k.

If I want to change the value of playing a game by k, it doesn’t matter whether I simply add k to each possible outcome, or just let you play the game and then give you k. I don’t think we can argue with that. In our quantum notation we would have

V\left[\frac{1}{\sqrt{2}}(|x_1+k\rangle + |x_2+k\rangle)\right] = V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] +k . \qquad\qquad (3)

Okay, if we buy that, from now on it’s simple algebra. Let’s consider the specific choice

k = -x_1 - x_2

and plug this into (3). We get

V\left[\frac{1}{\sqrt{2}}(|-x_2\rangle + |-x_1\rangle)\right] = V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] -x_1 - x_2.

You can probably see where this is going (if you’ve managed to make it this far). Use our other rule (2) to make this

-2 V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] = -x_1 - x_2 ,

which simplifies straightaway to

V\left[\frac{1}{\sqrt{2}}(|x_1\rangle + |x_2\rangle)\right] = \frac{1}{2}(x_1 + x_2) ,

which is our sought-after result (1).
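
For readers who like to let the computer check the algebra, here is a minimal sympy sketch of the manipulation above; it encodes nothing beyond the two value axioms and the choice of k.

```python
# Combining rule (2) with rule (3) at k = -x1 - x2 pins down the value V
# of the equal superposition (1/sqrt(2))(|x1> + |x2>).
import sympy as sp

x1, x2, V = sp.symbols('x1 x2 V')

# Rule (3) with k = -x1 - x2 relabels the state as the negated game,
# and rule (2) says the negated game is worth -V; hence -V = V - x1 - x2:
print(sp.solve(sp.Eq(-V, V - x1 - x2), V))  # [x1/2 + x2/2], i.e. result (1)
```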

Now, notice this result by itself doesn’t contain the word “probability.” It’s simply a fairly formal manipulation, taking advantage of the additivity of values in decision theory and the linearity of quantum mechanics. But Deutsch argues — and on this I think he’s correct — that this result implies we should act as if the Born Rule is true if we are rational decision-makers. We’ve shown that the value of a game described by an equal quantum superposition of states |x1> and |x2> is equal to the value of a game where we have a 50% chance of gaining value x1 and a 50% chance of gaining x2. (In other words, if we acted as if the Born Rule were not true, someone else could make money off us by challenging us to such games, and that would be bad.) As someone who is sympathetic to pragmatism, I think that “we should always act as if A is true” is the same as “A is true.” So the Born Rule emerges from the MWI plus some seemingly-innocent axioms of decision theory.

While I certainly haven’t followed the considerable literature that has grown up around this proposal over the years, I’ll confess that it smells basically right to me. If anyone knows of any strong objections to the idea, I’d love to hear them. But reading about it has added a teensy bit to my confidence that the MWI is on the right track.


63 thoughts on “Quantum Mechanics and Decision Theory”

  1. I’m disappointed that CU Phil thinks that my objections in the Many Worlds@50 volume are less vociferous than those of Adrian Kent and David Albert! My piece is here: http://philsci-archive.pitt.edu/3886/

    I think that the plausibility of the Deutsch-Wallace axioms actually presupposes what needs to be shown, viz that there is some analogue of classical uncertainty in the MW picture. Moreover, if we assume that the argument is a good one with that assumption made explicit, then we can exploit a point noticed by Hilary Greaves to show the assumption must be false.

    Here’s how. Let P = “There is a suitable analogue of classical uncertainty in MW”, and Q= “Rationality requires that any Everettian agent should maximise her expected in-branch utility, using weights given by the Born Rule”.

    Then if the Deutsch argument works, it establishes:

    1. If P then Q.

    But Greaves’ observation shows that Q simply can’t be true, because MW introduces a new kind of outcome that an agent may have preferences about, namely the shape of the future wave function itself (or at least, the portion of it causally downstream from the agent’s current choice). In effect, Q is telling us that rationality requires us to prefer future wave functions with a characteristic feature, that of maximising Born-rule-weighted in-branch utility. But this is obviously and trivially wrong, in the case of an agent who has preferences about the shape of the wave function itself, and just prefers (i.e., assigns a higher utility to) some other kind of future wave function. Decision-theoretic rationality tells us what to do, given our preferences. It doesn’t tell us what our preferences should be. (But wouldn’t such an agent already be crazy, for some other reason? No — see the paper for details.)

    Given Greaves’ observation, then, there are only two possibilities: either P is false, or the Deutsch argument fails — either way, it’s bad news for the project of making sense of MW probabilities in terms of decision-theoretic considerations.

  2. Sean:

    It seems you’re approaching this ‘problem’ as a Platonist, looking for a model(s) which comes closest to some preconceived (and not widely entertained) concept of an absolutely true representation of reality – rather than from the general scientific requirement that a model’s status must be judged by how closely the empirical record can be matched?

    For the Platonist ‘test’, the issue is whether or not Born’s added ‘axioms’ and his formulation fit together with QM better than do the construction of Decision Theory and ITS unique axioms – perhaps an Occam’s Razor type question.

    For the more generally accepted scientific requirement – match to the empirical record – you have so far offered no arguments to distinguish the ‘performance’ of Born’s probability interpretation from that of a Decision Theoretical approach!

  3. So… suppose that we have N identical systems, each with state |x1>+|x2>, where x1 and x2 are eigenvalues of operator X. And suppose we have an operator Y which represents a simultaneous measurement of X in all of the N systems. Operator Y gives the value 1 if nearly half of the measurements of X result in x1. Otherwise, operator Y gives the value 0.

    If I understand Deutsch’s paper, we cannot say that a measurement of Y has a high probability of returning 1. But if we are rational decision makers, we will treat the expected value of Y as being close to 1 (and getting even closer to 1 as N goes to infinity).

    This may not prove that the results actually follow the frequency distribution given by Born’s rule, but it sure seems like the next best thing.
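
    A quick numerical illustration of this frequency point, with the caveat that the simulation must assume the very 50/50 statistics under discussion, so it only illustrates the law-of-large-numbers step:

    ```python
    # The probability that the fraction of x1-outcomes falls near 1/2 (i.e.
    # that "Y returns 1") approaches 1 as N grows, given 50/50 outcomes.
    import numpy as np

    rng = np.random.default_rng(42)
    for N in (10, 100, 1000, 10000):
        counts = rng.binomial(N, 0.5, size=5000)         # x1-counts in 5000 runs
        frac_x1 = counts / N
        print(N, np.mean(np.abs(frac_x1 - 0.5) < 0.05))  # tends to 1 with N
    ```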

  4. Huw – I’ll admit I haven’t read your paper or Greaves’s, but that objection doesn’t seem very convincing at face value. Can’t we just say that preferences are something that people have about outcomes of measurements, not about wave functions? Outcomes are what we experience, after all.

  5. Sean, I don’t think that response is going to help Deutsch and Wallace, who are trying to establish a claim about any rational agent, not just about agents with the kind of preferences we happen to have. But in any case, it is easy to think of examples of preferences for wave functions of the kind my objection needs, which are themselves grounded on what the wave functions imply about the experiences of people in the branches of those wave functions — e.g., a preference for a wave function in which I don’t get tortured in any branches (even very low weight branches), over a wave function in which I do get tortured in a very low weight branch, but get rich in all the high weight branches. (My Legless at Bondi example in the paper is much like this, and I discuss why MW makes such a difference, compared to the classical analogue.)

  6. The key equation seems to be asserting that the “Value” (expectation?) of an observation of the observable X-(x1+x2), where the possible values are -x2 and -x1, is the same as subtracting (x1+x2) from the Value of an observation of X, where the possible values are x1 and x2. And then the application of (2) seems to be saying that the Value of that observation of X-(x1+x2) is the negative of the Value of an observation of X. But if (2) is being applied this way, i.e. without regard to which observable is involved, and so without regard to which of the two terms is associated with which value, then isn’t that essentially assuming that equal probabilistic weights are being assigned to each of the two outcomes? That amounts to begging the question of the probabilistic weights being equal when the vector magnitudes are.

    (After all, the principle that negating the payoffs negates the expectation requires keeping the same probabilities, and switching cases only works if the probabilities are equal:
    e.g. 1/3(-x1) + 2/3(-x2) = -{1/3(x1) + 2/3(x2)}, but {1/3(-x2) + 2/3(-x1)} is not the same.)
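
    That arithmetic point is easy to verify symbolically; a small sympy sketch:

    ```python
    # Negating payoffs negates the expectation only if each probability stays
    # attached to its own outcome; swapping them works only when p = 1/2.
    import sympy as sp

    x1, x2, p = sp.symbols('x1 x2 p')
    E = p * x1 + (1 - p) * x2                   # original expectation
    E_neg_same = p * (-x1) + (1 - p) * (-x2)    # negated payoffs, same weights
    E_neg_swap = p * (-x2) + (1 - p) * (-x1)    # negated payoffs, weights swapped

    print(sp.simplify(E_neg_same + E))          # 0 for every p
    print(sp.solve(sp.Eq(E_neg_swap, -E), p))   # [1/2]
    ```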

  7. I think the Greaves/Price objection is a serious worry for probability in EQM in general, and for the decision-theory strategy in particular. Assigning objective probabilities to outcomes does seem to presuppose the possibility of uncertainty about which of the outcomes will occur. But EQM seems to say they all occur. So there’s a prima facie problem here. (Greaves’ response is: so much the worse for probability in Everett, but Everettians can do without it.)

    Wallace doesn’t think the problem is too serious these days (in contrast to his older papers, which argue that Everettians must make sense of ‘subjective uncertainty’) – roughly, he now thinks that the objection appeals to pre-theoretic intuitions about the nature of uncertainty, and that intuition is unreliable in such areas. However, in his new book he does provide a semantic proposal which allows us to recover the truth of ordinary platitudes about the future (like ‘I will see only one outcome of this experiment’), by interpreting them charitably as referring only to events in the speaker’s own world.

    I have a new paper forthcoming in the British Journal for the Philosophy of Science which argues that the Greaves/Price objection can be met on its own terms, by leaving the physics, the epistemology and the semantics alone and instead tinkering with the metaphysics. Here’s the link: http://alastairwilson.org/files/opieqmweb.pdf

    Sean’s remarks above capture the spirit of my suggestion nicely: if Everett is right, then our ordinary thought and talk about alternative possibilities *just is* thought and talk about other Everett worlds. To reply to Huw’s last points from this perspective: a) if Everett worlds are (real) alternative possibilities then any possible rational agent (not just one with preferences like ours) is going to be an agent with in-branch preferences, b) the kinds of ‘preferences for wave-functions’ that you describe can be made sense of on this proposal, though I would describe them differently; they correspond to being highly risk-averse with respect to torture.

  8. Philip – I think it’s an interesting idea, although the chances that it’s right are pretty small. Andy takes the requirement of accounting for the arrow of time much more seriously than most cosmologists do, which is a good thing. But his intuition is that the real world is somehow finite, while my intuition is the opposite. (Intuition can’t ultimately carry the day, of course, but it can guide your research in the meantime.)

  9. Alastair, Thanks for the link, though as you know, I prefer to tinker with metaphysics as little as possible 😉

    Concerning your (a), my point doesn’t depend at all on denying that we have in-branch preferences, but only on pointing out that the new ontology of the Everett view makes it possible for us to have another kind of preference, too — a preference about the shape of the future wave function. Concerning (b), any ordinary notion of risk-aversion is still a matter of degree, whereas the worry about low weight branches isn’t a matter of degree. So you’ll need infinite risk aversion, won’t you? And in any case, what does the response buy you? A demonstration that the choices of an ordinary agent in an Everett world should be those of a highly risk-averse agent in a classical world? That doesn’t seem good enough, for the Deutsch-Wallace program. They want to show that the ordinary agent should make the same choices in the two cases.

  10. Daryl McCullough

    Alan,

    I’m not sure I understand what you’re saying.

    In Sean’s derivation, all the states are eigenstates of the X operator. The notation |x> means the eigenstate of the X operator with eigenvalue x; |x+k> is the eigenstate of the X operator with eigenvalue x+k.

    Sean’s assumptions might make more sense to you if we explicitly introduce some additional operators.

    Let T(k) be the operator (the translation operator) defined by T(k) |x> = |x+k>.
    Let P be the operator (the parity operator) defined by P |x> = |-x>.
    We assume that they are linear, which means
    T(k) (|Psi_1> + |Psi_2>) = T(k) |Psi_1> + T(k) |Psi_2>
    P (|Psi_1> + |Psi_2>) = P |Psi_1> + P |Psi_2>

    So Sean’s assumptions about the value function V(|Psi>) are basically:

    (1) V(|x>) = x
    (2) V(T(k) |Psi>) = V(|Psi>) + k
    (3) V(P |Psi>) = -V(|Psi>)

    (2) and (3) follow from (1) for eigenstates of the X operator, but we need the additional assumption that they hold for superpositions of eigenstates, as well.
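
    A toy check of that last remark on a finite set of integer eigenvalues; the extension to superpositions is exactly what has to be assumed:

    ```python
    # On eigenstates |x>, assumptions (2) and (3) follow from (1) alone.
    def V(x):        # (1): the value of the eigenstate |x> is x
        return x

    def T(k, x):     # translation acting on eigenvalue labels: T(k)|x> = |x+k>
        return x + k

    def P(x):        # parity acting on eigenvalue labels: P|x> = |-x>
        return -x

    for x in range(-5, 6):
        for k in range(-3, 4):
            assert V(T(k, x)) == V(x) + k   # (2) on eigenstates
        assert V(P(x)) == -V(x)             # (3) on eigenstates
    ```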

  11. ok – Maybe trying three times is considered rude, but I would really appreciate it if someone could explain what I have wrong here. In the Deutsch paper we have “It follows from the zero-sum rule (3) that the value of the game of acting as ‘banker’ in one of these games (i.e. receiving a payoff -xa when the outcome of the measurement is xa) is the negative of the value of the original game. In other words”, followed by your equation (2). But acting as banker is *not* the same as just having a *set* of outcome values which are the negatives of those of the player. They also have to be matched to the outcomes – i.e. it is the *ordered* sets which must be negatives. And in the case with Y = X-(x1+x2), it is in the situation where X sees x2 that Y sees -x1, and in the situation where X sees x1, Y sees -x2. This is not the same as Y being the “banker” when X is the “player”, so I don’t see why the values should sum to zero. Please, what am I missing?

  12. Daryl McCullough

    Alan,

    I don’t understand what you mean when you say “in the case with Y=X-(x1+x2) it is in the situation where X sees x2 that Y sees -x1 and in the situation where X sees x1, Y sees -x2”

    That doesn’t agree with the meaning of the “game” as described. I think you’re confusing a sum of states with a tensor product of states.

    There is no need to talk about X and Y. You only need to talk about one operator, X. The game works by starting in a state |Psi>, measuring X in that state to get a value x. If x > 0, then the banker pays the player x dollars. If x < 0, then the player pays the banker -x dollars. So it's not that the banker measures one observable and the player measures a different one. There is only one measurement, and that determines who pays who. The banker's winnings are always the negative of the player's winnings.

  13. Dear Sean,

    It seems to me that assuming the “two simple propositions” is just a way of putting the Born rule in through a backdoor. Of course, they seem very intuitive, but how sure can we be that nature upholds them? I’m reminded of von Neumann’s proof of the impossibility of deriving QM from a deterministic theory. As Bell pointed out, von Neumann made seemingly innocuous assumptions which may not be true. After all, why does V|x+k> have to be V|x>+k? Why can’t it be V|x>+k^2? I understand that these are justified using decision theory. However, decision theory is a theory of decision making by rational agents – why should it have any relevance in the natural world?

    I admit that I haven’t looked at Deutsch’s paper or at Zurek’s paper mentioned in the comments.

    On a related note, do you know of any attempt at defining what constitutes a measurement in the context of MWI? As the wave function branches, it seems to me that a fully formulated theory should explain where those branchings occur.

  14. Anonymous Coward

    @Sudip:
    As far as I understood MWI (correct me if I’m wrong; I didn’t read Everett’s paper, just a couple of graduate textbooks), the words “branching” and “measurement” should be viewed as a heuristic description of the following process and theorem:
    Suppose you do a measurement (for simplicity, of the spin of an electron); the measurement is described by a unitary operator $U_M$ (time propagation of your apparatus). You call it a branching into two possible worlds (orthogonal subspaces spanning the entire Hilbert space of the MWI-world) $+$ and $-$, if the time propagation for all later times leaves these subspaces almost invariant. If this is the case, we can simplify all further calculations by projecting onto one of the subspaces and calculating the future evolution of each of these branches (“collapse the wave function”). What a nifty trick to get approximate results!

    Everett’s contribution was to show that for suitable limits (larger Hilbert space, many particles, suitable definition of “almost invariant”) and actual measurement devices (full QM toy models of amplifiers), this does in fact occur. Therefore, Schrödinger’s equation alone implies the very good heuristic of collapsing wave functions. Furthermore, if we should ever wish to assign weights to different branches, the only way to do this consistently is the Born rule, where consistently means “if I collapse after two measurements and calculate the evolution until the second measurement in full QM, I get roughly the same result as if I collapsed after the first measurement and again after the second one”.

    This way, even if we believed in a magical “Copenhagen collapse induced only by human observers”, Everett has shown that “occasional collapse + Born rule” yields very good approximate methods for calculating the time evolution until the “magical collapse”.

    From here it is not far-fetched to postpone the “magical collapse” into the far future or *gasp* remove it altogether. Furthermore, we can set out to precisely define “branching in the sense of invariance of subspaces up to $\varepsilon$” or “up to order such and so”. However, the words “branching” or “measurement” without further qualifiers should remain an (undefinable but not meaningless) heuristic, like “two points are close”.
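
    The consistency property described here can be illustrated in a toy model. A minimal numpy sketch, assuming two commuting projective measurements on a random two-qubit state:

    ```python
    # Collapsing after each measurement and collapsing once at the end assign
    # the same Born weight to a sequence of outcomes: the rule chains consistently.
    import numpy as np

    rng = np.random.default_rng(1)
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)

    P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))  # first qubit found in state 0
    Q0 = np.kron(np.eye(2), np.diag([1.0, 0.0]))  # second qubit found in state 0

    # collapse twice: weight of the first outcome, renormalize, then the second
    w1 = np.vdot(psi, P0 @ psi).real
    psi1 = P0 @ psi / np.sqrt(w1)
    w2 = np.vdot(psi1, Q0 @ psi1).real

    # collapse once at the end: joint weight of both outcomes
    w_joint = np.vdot(psi, Q0 @ P0 @ psi).real

    print(np.isclose(w1 * w2, w_joint))  # True
    ```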

  15. Daryl McCullough

    Sudip writes:

    “After all, why does V|x+k> have to be V|x>+k? Why can’t it be V|x>+k^2?”

    The meaning of |x> is that it is a state such that the measurement of operator X is certain to produce result x. So the expected result of an X-measurement is V(|x>) = x.
    Similarly, |x+k> is a state such that the measurement of X is certain to produce result x+k. So V(|x+k>) is x+k.

  16. What leaves me unsatisfied about this approach is that you are postulating the existence of an operator V with a complete set of states that behaves in the manner indicated, and then applying the inferred “Born’s rule” to the rest of quantum mechanics.

    Can you make the argument work for real quantum operators that we have some reason to believe in? Like the z-component of spin-1/2?

  17. I am not entirely sure why the Born rule is hard to understand.

    The point of the process is to allow one to use the computational flexibility associated with functions on the order of the reals, and to extract certain features of those functions (like the peaks and valleys… or extrema of the function). Remember, the wave function itself is a continuous deterministic function.

    More specifically, we operate in the complex plane in order to exploit the computational power associated with manipulating systems with uncountable bases.

    If we accept the information extraction interpretation, the question is how to economise that process. Since we are dealing with complex numbers, and we are dealing with countable features of the wave function, we can ask the question what happens when we take the function to other powers.

    Since 2 is the smallest prime number, we can interpret any even-numbered power as simply being a rescaling of the information associated with squaring the number.

    If we consider odd powers, we can interpret the effect as being a rescaling of the wave function by some real number.

    If we consider all the potential combinations, one quickly realizes that what we are really doing is trying to capture all the information in the wave function, and essentially building a type of matrix that should be recognizable as an operator in a type of transformation procedure.

    In any case, squaring the amplitude is a process that economizes the information extraction from the complex plane into a series of integer indexed real numbers.

  18. It makes me wonder if one can make an argument that if all the trivial zeros of the zeta function lie on the real line, then all the non-trivial ones have to be on the Re(s) = 1/2 line. Interesting.


  20. @Anonymous Coward Thanks, that’s helpful.

    @Daryl Sorry, I didn’t mean to say that. Of course V|x+k> = V|x> + k by definition. What I intended to ask was: why should V act linearly on a superposition of kets?

  21. The biggest criticism of Sean’s post is that the argument fails to explain why the Born Rule must obey a squared power relation rather than a quartic or higher one.

    Even Pauli recognized this problem back in 1933 (republished in English translation in his ‘General Principles of Quantum Mechanics’, Ch. 2, p. 15), where he deduced that the Born Rule must be a positive definite quadratic form in ‘psi’; anything not involving the product psi.psi* would not be conserved by the Schrödinger evolution, so we only have terms in psi.psi* = |psi|^2 and higher powers as possibilities.

    Pauli, being a genius, realised that only Nature then determines that the rule is a squared one (rather than a higher even power) and ultimately the rule is fixed by experimental observation – not deduced from anything simpler (and certainly not from obfuscating arguments involving rational beings and decision theory!)

    I mentioned above that there is a Bohmian argument for how the absolute squared law is emergent from the dynamics of the evolution (e.g. http://arxiv.org/abs/1103.1589) – this is true unless your initial distribution was a higher power invariant one – so the squared power one seems favoured on a positive measure set of starting distributions (maybe even measure 1).

    But you can just chuck away all the troublesome baggage that the Bohmian model entails and accept fundamental randomness – then the squared power rule is the most likely outcome, a large-numbers result – i.e. it is a thermodynamic property of the evolution.
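
    Pauli’s conservation point is easy to see numerically. A sketch with a random Hamiltonian on a four-dimensional Hilbert space:

    ```python
    # Unitary (Schroedinger) evolution conserves sum |psi_i|^2 in any fixed
    # basis, but not higher powers such as sum |psi_i|^4.
    import numpy as np

    rng = np.random.default_rng(7)
    H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    H = (H + H.conj().T) / 2          # Hermitian Hamiltonian

    evals, evecs = np.linalg.eigh(H)  # unit-time evolution U = exp(-iH)
    U = evecs @ np.diag(np.exp(-1j * evals)) @ evecs.conj().T

    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)
    phi = U @ psi

    print(np.sum(np.abs(psi)**2), np.sum(np.abs(phi)**2))  # equal (conserved)
    print(np.sum(np.abs(psi)**4), np.sum(np.abs(phi)**4))  # generically differ
    ```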

  22. Hakon Hallingstad

    Sean @ 16 and Moshe,

    If one carries out the calculation for a|-x2> - a|-x1>, one comes to the equation:
    V[a|x2> - a|x1>] + V[-a|x2> + a|x1>] = x1 + x2.

    However, under the assumptions above, we are not allowed to assume
    V[a|x2> - a|x1>] = V[-a|x2> + a|x1>],
    and so the derivation stalls at this point. It is absolutely crucial for the argument that the coefficients in a|x1> + b|x2> are equal, contrary to QM which allows an arbitrary phase.

    For instance,
    V[a|x1> + b|x2>] = (|a| x1 + |b| x2) / (|a| + |b|)
    would be consistent with the two axioms. As far as I can tell, the axioms imply
    - V is linear in x1 and x2
    - The coefficient of x1 is some function f(a, b), with f(1, 0) = 1
    - The coefficient of x2 is f(b, a) = 1 - f(a, b)

    In order to show V is the expected average value of a measurement of X, one will have to prove f(a, b) = |a|^2/(|a|^2 + |b|^2), so there is still a lot of derivation left to be done. And showing that the coefficient goes as |a|^2 is the hard part of the Born rule.
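
    The counterexample checks out numerically; a quick sketch, with V_alt the linear-weight value function proposed above:

    ```python
    # The linear-weight value function satisfies both of the post's axioms
    # (equations (2) and (3)) for arbitrary amplitudes, yet it is not the
    # Born-rule average.
    import numpy as np

    def V_alt(a, b, x1, x2):
        return (abs(a) * x1 + abs(b) * x2) / (abs(a) + abs(b))

    rng = np.random.default_rng(0)
    for _ in range(1000):
        a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
        x1, x2, k = rng.normal(size=3)
        # shifting every payoff by k adds k to the value (equation (3))
        assert np.isclose(V_alt(a, b, x1 + k, x2 + k), V_alt(a, b, x1, x2) + k)
        # negating every payoff negates the value (equation (2))
        assert np.isclose(V_alt(a, b, -x1, -x2), -V_alt(a, b, x1, x2))
    ```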

  23. Huw – actually, I’d have thought that freely modifying metaphysics in situations like this is congenial to pragmatism. The ‘harder’ scientific claims of physics, confirmation theory, natural language semantics, etc. aren’t meddled with; we just pick (on a pragmatic basis) whichever metaphysical framework allows the harder claims to hang together most naturally.

    On a) – I was suggesting that any possible agent is going to be an agent with *only* in-branch preferences – sorry for being unclear. From the perspective I advocate, the whole state of the wavefunction is a non-contingent subject-matter: the only contingency is self-locating. On a functionalist account of mental states, it makes no sense to ascribe preferences defined over non-contingent subject-matters. (What’s going on here is that the modal framework is helping reinforce Wallace’s ‘pragmatic’ argument for his principle Branching Indifference.)

    On b) – yes, the equivalent of wanting to avoid torture in any world, in the limiting case of infinitely many worlds, will be infinite risk aversion. Is that a problem? (In any case, the limiting case might turn out to be metaphysically impossible – that’s an empirical matter.) What the response is meant to buy is a translation between ‘preferences over wavefunctions’ and ordinary preferences. Everettians who take this line can explain away the apparent coherence of preferences over wavefunctions by showing that they’re just ordinary kinds of preferences (i.e. preferences about self-location) under an unfamiliar mode of presentation.

  24. Just one last note.

    Using ‘ to represent an index, an equation that makes some of the previous comments clearer is

    <E> = Sum (E’ |z’|^2)

    which is understood as meaning that the probability of seeing eigenvalue E’ is the squared absolute value |z’|^2 of the complex amplitude z’.

    Now Dirac has some interesting points that should be considered in ‘The Principles of Quantum Mechanics 4th ed’.

    p. 35

    “One might think one could measure a complex dynamical variable by measuring separately its real and imaginary parts. But this would involve two measurements or two observations, which would be all right in classical mechanics, but would not do in quantum mechanics, where two observations in general interfere with one another-it is not in general permissible to consider that two observations can be made exactly simultaneously,..”

    p. 38

    “In the special case when the real dynamical variable is a number, every state is an eigenstate and the dynamical variable is obviously an observable. Any measurement of it always gives the same result, so it is just a physical constant, like the charge of an electron.”

    p. 74

    “Even when one is interested only in the probability of an incomplete set of commuting observables having specified values, it is usually necessary first to make the set a complete one by the introduction of some extra commuting observables and to obtain the probability of the complete set having specified values (as the square of the modulus of a probability amplitude), and then to sum or integrate over all possible values of the extra observables.”

    So an observer cannot make two simultaneous measurements of the same observable, physical constants are real numbers, and if you don’t have enough indices to fully describe the state you add more indices and consider all potential values.

    Since this procedure can continue indefinitely one begins running into the same problems with the continuum.

    The point in this rambling is that although we cannot know whether such a higher-order hierarchy has real existence, we have to resort to it from a computational standpoint.

