Guest Post: Joe Polchinski on Black Holes, Complementarity, and Firewalls

If you happen to have been following developments in quantum gravity/string theory this year, you know that quite a bit of excitement sprang up over the summer, centered around the idea of “firewalls.” The idea is that an observer falling into a black hole, contrary to everything you would read in a general relativity textbook, really would notice something when they crossed the event horizon. In fact, they would notice that they are being incinerated by a blast of Hawking radiation: the firewall.

This claim is a daring one, which is currently very much up in the air within the community. It stems not from general relativity itself, or even quantum field theory in a curved spacetime, but from attempts to simultaneously satisfy the demands of quantum mechanics and the aspiration that black holes don’t destroy information. Given the controversial (and extremely important) nature of the debate, we’re thrilled to have Joe Polchinski provide a guest post that helps explain what’s going on. Joe has guest-blogged for us before, of course, and he was a co-author with Ahmed Almheiri, Donald Marolf, and James Sully on the paper that started the new controversy. The dust hasn’t yet settled, but this is an important issue that will hopefully teach us something new about quantum gravity.


Introduction

Thought experiments have played a large role in figuring out the laws of physics. Even for electromagnetism, where most of the laws were found experimentally, Maxwell needed a thought experiment to complete the equations. For the unification of quantum mechanics and gravity, where the phenomena take place in extreme regimes, they are even more crucial. Addressing this need, Stephen Hawking’s 1976 paper “Breakdown of Predictability in Gravitational Collapse” presented one of the great thought experiments in the history of physics.

The experiment that Hawking envisioned was to let a black hole form from ordinary matter and then evaporate into radiation via the process that he had discovered two years before. According to the usual laws of quantum mechanics, the state of a system at any time is described by a wavefunction. Hawking argued that after the evaporation there is not a definite wavefunction, but just a density matrix. Roughly speaking, this means that there are many possible wavefunctions, with some probability for each (this is also known as a mixed state). In addition to the usual uncertainty that comes with quantum mechanics, there is the additional uncertainty of not knowing what the wavefunction is: information has been lost. As Hawking put it, “Not only does God play dice, but he sometimes confuses us by throwing them where they can’t be seen.”

Density matrices are much used in statistical mechanics, where they represent our ignorance of the exact situation. Our system may be in contact with a thermal bath, and we do not keep track of the state of the bath. Even for an isolated system, we may only look at some macroscopic variables and not keep track of every atom. But in both cases the complete description is in terms of a definite wavefunction. Hawking was arguing that for the final state of the black hole, the most complete description was in terms of a density matrix.
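To make the distinction concrete, here is a minimal Python sketch (an added illustration, not from the original post) contrasting a definite wavefunction with a 50/50 mixture of two wavefunctions; the purity Tr(rho^2) equals 1 only in the pure case.

    import numpy as np

    up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    # Pure state: a definite wavefunction, here the superposition (|+> + |->)/sqrt(2)
    psi = (up + down) / np.sqrt(2)
    rho_pure = np.outer(psi, psi.conj())

    # Mixed state: 50% chance the wavefunction is |+>, 50% chance it is |->
    rho_mixed = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)

    print(np.trace(rho_pure @ rho_pure).real)    # 1.0: pure, nothing missing
    print(np.trace(rho_mixed @ rho_mixed).real)  # 0.5: mixed, information is missing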

Hawking had thrown down a gauntlet that was impossible to ignore, arguing for a fundamental change in the rules of quantum mechanics that allowed information loss. A common reaction was that he had just not been careful enough, and that, as for ordinary thermal systems, the apparent mixed nature of the final state came from not keeping track of everything, rather than being a fundamental property. But a black hole is different from a lump of burning coal: it has a horizon beyond which information cannot escape, and many attempts to turn up a mistake in Hawking’s reasoning failed. If ordinary quantum mechanics is to be preserved, the information behind the horizon has to get out, but this is tantamount to sending information faster than light.

I have always been in awe of Hawking’s paper. His argument stood up to years of challenge, and subtle analyses that only sharpened his conclusion. Eventually it came to be realized that quantum mechanics in its usual form could be preserved only if our understanding of spacetime and locality broke down in a big way. In fact, as I will describe further below, this is now widely believed. So Hawking may have been wrong about what had to give (and he conceded in 2004, perhaps prematurely), but he was right about the most important thing: his argument required a change in some fundamental principle of physics.

Black hole complementarity

To get a closer look at the argument for information loss, suppose that an experimenter outside the black hole takes an entangled pair of spins |+-> + |-+> and throws the first spin into the black hole. The equivalence principle tells us that nothing exceptional happens at the horizon, so the spin passes freely into the interior. But now the outside of the black hole is entangled with the inside, and by itself the outside is in a mixed state. The spin inside can’t escape, so when the black hole decays, the mixed state on the outside is all that is left. In fact, this process is happening all the time without the experimenter being involved: the Hawking evaporation is actually due to production of entangled pairs, with one of each pair escaping and one staying behind the horizon, so the outside state always ends up mixed.
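The outside mixedness can be seen in a short numerical sketch (again an added illustration, not from the post): tracing the infalling spin out of the pair |+-> + |-+> leaves the outside spin in the maximally mixed state.

    import numpy as np

    up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    # The entangled pair |+-> + |-+>, normalized; the first spin falls in
    psi = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

    # Trace over the inside spin: this is all an outside observer can access
    rho_outside = np.einsum('ajad->jd', rho)
    print(rho_outside)    # [[0.5, 0.], [0., 0.5]]: maximally mixed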

A couple of outs might come to mind. Perhaps the dynamics at the horizon copies the spin as it falls in and sends the copy out with the later Hawking radiation. However, such copying is not consistent with the superposition principle of quantum mechanics; this is known as the no-cloning theorem. Or, perhaps the information inside escapes at the last instant of evaporation, when the remnant black hole is Planck-sized and we no longer have a classical geometry. Historically, this was the third of the main alternatives: (1) information loss, (2) information escaping with the Hawking radiation, and (3) remnants, with subvariations such as stable and long-lived remnants. The problem with remnants is that these very small objects need an enormous number of internal states, as many as the original black hole, and this leads to its own problems.
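Returning to the first of these outs, the no-cloning obstruction is easy to make concrete (an illustrative sketch, not from the post): a unitary that copies the two basis states of a spin necessarily fails on their superpositions, by linearity.

    import numpy as np

    zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    # A "copier" U with U|s>|0> = |s>|s> for the basis states s = 0, 1
    # (a CNOT gate does exactly this)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    plus = (zero + one) / np.sqrt(2)

    wanted = np.kron(plus, plus)       # what cloning would require: |+>|+>
    got = CNOT @ np.kron(plus, zero)   # what linearity gives: (|00> + |11>)/sqrt(2)

    print(np.allclose(got, wanted))    # False: the superposition was not cloned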

In 1993, Lenny Susskind (hep-th/9306069, hep-th/9308100), working with Larus Thorlacius and John Uglum and building on ideas of Gerard ’t Hooft and John Preskill, tried to make precise the kind of nonlocal behavior that would be needed in order to avoid information loss. Their principle of black hole complementarity requires that different observers see the same bit of information in different places. An observer outside the black hole will see it in the Hawking radiation, and an observer falling into the black hole will see it inside. This sounds like cloning, but it is different: there is only one bit in the Hilbert space, and we simply can’t say where it is. Locality is given up, not quantum mechanics. Another aspect of the complementarity argument is that the external observer sees the horizon as a hot membrane that can radiate information, while an infalling observer sees nothing there. In order for this to work, it must be that no observer can see the bit in both places, and various thought experiments seemed to support this.

At the time, this seemed like an intriguing proposal, but not (for most of us) convincingly superior to information loss or remnants. But in 1997 Juan Maldacena discovered AdS/CFT duality, which constructs gravity in a particular kind of spacetime box, anti-de Sitter space, in terms of a dual quantum field theory. (Hawking’s paradox is still present when the black hole is put in such a box.) The dual description of a black hole is in terms of a hot plasma, supporting the intuition that a black hole should not be so different from any other thermal system. This dual system respects the rules of ordinary quantum mechanics, and does not seem to be consistent with remnants, so we get the information out with the Hawking radiation. This is consistent too with the argument that locality must be fundamentally lost: the dual picture is holographic, formulated in terms of field theory degrees of freedom that are projected on the boundary of the space rather than living inside it. Indeed, the miracle here is that gravitational physics looks local at all, not that this sometimes fails.

A new paradox?

AdS/CFT duality was discovered largely through trying to solve the information paradox. After Andy Strominger and Cumrun Vafa showed that the Bekenstein-Hawking entropy of black branes could be understood statistically in terms of D-branes, people began to ask what happens to the information in the two descriptions, and this led to seeming coincidences that Maldacena crystallized as a duality. As with a real experiment, the measure of a thought experiment is whether it teaches us about new physics, and Hawking’s had succeeded in a major way.

For AdS/CFT, there are still some big questions: precisely how does the bulk spacetime emerge, and how do we extend the principle out of the AdS box, to cosmological spacetimes? Can we get more mileage here from the information paradox? On the one hand, we seem to know now that the information gets out, but we do not know the mechanism, the point at which Hawking’s original argument breaks down. But it seemed that we no longer had the kind of sharp alternatives that drove the information paradox. Black hole complementarity, though it did not provide a detailed explanation of how different observers see the same bit, seemed to avoid all paradoxes.

Earlier this year, my students Ahmed Almheiri and Jamie Sully and I set out to sharpen the meaning of black hole complementarity, starting with some simple ‘bit models’ of black holes that had been developed by Samir Mathur and Steve Giddings. But we quickly found a problem. Susskind had nicely laid out a set of postulates, and we were finding that they could not all be true at once. The postulates are (a) Purity: the black hole information is carried out by the Hawking radiation, (b) Effective Field Theory (EFT): semiclassical gravity is valid outside the horizon, and (c) No Drama: an observer falling into the black hole sees no high energy particles at the horizon. EFT and No Drama are based on the fact that the spacetime curvature is small near and outside the horizon, so there is no way that strong quantum gravity effects should occur. Postulate (b) also has another implication, that the external observer interprets the information as being radiated from an effective membrane at (or microscopically close to) the horizon. This fits with earlier observations that the horizon has effective dynamical properties like viscosity and conductivity.

Purity has an interesting consequence, which was developed in a 1993 paper of Don Page and further in a 2007 paper of Patrick Hayden and Preskill. Consider the first two-thirds of the Hawking photons and then the last third. The early photons have vastly more states available. In a typical pure state, then, every possible state of the late photons will be paired with a different state of the early radiation. We say that any late Hawking photon is fully entangled with some subsystem of the early radiation.
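Page’s counting is easy to check numerically (a sketch added for illustration, not from the post): for a random pure state on an early-times-late system, the smaller factor comes out very nearly maximally entangled with the larger one.

    import numpy as np

    rng = np.random.default_rng(0)
    n_early, n_late = 10, 5                  # the early radiation has far more states
    d_early, d_late = 2**n_early, 2**n_late

    # A random pure state on the full system, viewed as an early x late matrix
    psi = rng.normal(size=(d_early, d_late)) + 1j * rng.normal(size=(d_early, d_late))
    psi /= np.linalg.norm(psi)

    # Entanglement entropy of the late photons, from the Schmidt (singular) values
    p = np.linalg.svd(psi, compute_uv=False) ** 2
    S_late = -np.sum(p * np.log(p))

    print(S_late, np.log(d_late))            # within about 1% of the maximum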

However, No Drama requires that this same Hawking mode, when it is near the horizon, be fully entangled with a mode behind the horizon. This is a property of the vacuum in quantum field theory, that if we divide space into two halves (here at the horizon) there is strong entanglement between the two sides. We have used the EFT assumption implicitly in propagating the Hawking mode backwards from infinity, where we look for purity, to the horizon where we look for drama; this propagation backwards also blue-shifts the mode, so it has very high energy. So this is effectively illegal cloning, but unlike earlier thought experiments a single observer can see both bits, measuring the early radiation and then jumping in and seeing the copy behind the horizon.
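The tension here is the monogamy of entanglement. A toy check (my own construction, with an arbitrary interior state): if the late mode B forms a Bell pair with the early radiation A, its mutual information with the interior mode C is forced to vanish, so it cannot also carry the vacuum entanglement that No Drama requires.

    import numpy as np

    def entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return -np.sum(w * np.log(w))

    up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    bell_AB = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
    psi_C = (up + 2 * down) / np.sqrt(5)    # arbitrary state of the interior mode
    psi = np.kron(bell_AB, psi_C)           # full pure state on A, B, C

    rho = np.outer(psi, psi.conj()).reshape([2] * 6)
    rho_B  = np.einsum('abcadc->bd', rho)                  # trace out A and C
    rho_C  = np.einsum('abcabd->cd', rho)                  # trace out A and B
    rho_BC = np.einsum('abcade->bcde', rho).reshape(4, 4)  # trace out A

    print(entropy(rho_B) + entropy(rho_C) - entropy(rho_BC))   # I(B:C) = 0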

After puzzling over this for a while we started to ask other people about it. The first one was Don Marolf, who remarkably had just come to the same conclusion by a somewhat different argument, mining the black hole by lowering a box near to the horizon and then pulling up some thermal excitations, rather than looking at the late Hawking photon. This is nicely complementary to our argument: it is a bit more involved, but it shows that if there is drama then it is everywhere on the horizon, whereas the Hawking radiation argument is only sensitive to photons in nearly spherically symmetric states. So if drama breaks down, it breaks down in a big way, with a firewall of Planck-energy photons just behind the horizon.

As we spoke to more and more people, no one could find a flaw in our reasoning. Eventually I emailed Susskind, expecting that he would quickly straighten us out. But his reaction, a common one, was first to tell us that there must be some trivial mistake in our reasoning, and a bit later to realize that he was as confused as we were. He is now a believer in the firewall, though we are still debating whether it forms at the Page time (half the black hole lifetime) or much faster, the so-called fast-scrambling time. The argument for the latter is that this is the time scale over which most black hole properties reach equilibrium. The argument for the former is that self-entanglement of the horizon should be the origin of the interior spacetime, and this runs out only at the Page time.
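For a sense of scale, here are the two time scales for a solar-mass black hole, from the standard semiclassical formulas (the numbers are an added illustration, not from the post).

    import numpy as np

    G, hbar, c, kB = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23
    M = 1.989e30                                    # one solar mass, in kg

    t_evap = 5120 * np.pi * G**2 * M**3 / (hbar * c**4)    # evaporation time, s
    S = 4 * np.pi * G * M**2 / (hbar * c)                  # B-H entropy, in nats
    T_H = hbar * c**3 / (8 * np.pi * G * M * kB)           # Hawking temperature, K
    t_scr = (hbar / (kB * T_H)) * np.log(S) / (2 * np.pi)  # fast-scrambling time, s

    print(t_evap / 2 / 3.15e7)   # Page time: ~1e67 years
    print(t_scr)                 # scrambling time: a few milliseconds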

Actually, over the years many people have suggested that the black hole geometry ends at the horizon. Most of these arguments are based on questionable dynamics, with perhaps the most coherent proposal being Mathur’s fuzzball, the horizon being replaced by a shell of branes (though Samir himself is actually advocating a form of complementarity now).

If we want to avoid drama, we have to give up either purity or EFT. I am reluctant to give up purity: AdS/CFT is a guide that I trust, but even the earlier arguments for purity were strong. Giving up EFT is not so implausible. AdS/CFT tells us that locality, the basis for EFT, has to break down, and this need not stop at the horizon. Indeed, Giddings has recently been arguing for a nonlocal interaction that transfers bits from the inside of the black hole to a macroscopic distance outside. But it is hard to come up with a good scenario: the violation of EFT is much larger than might have been anticipated (it is an order one effect in the two-particle correlator). One might try to appeal to complementarity, since drama is measured by an infalling observer and purity by an asymptotic one, but these two can communicate. Also, the breakdown that is needed is subtle and difficult to implement, a ‘transfer of entanglement’ (this is particularly a problem for the nonlocal interaction idea). This transfer is reminiscent of an idea that Gary Horowitz and Maldacena put forward a while back, that there is a future boundary condition, a final state, at the black hole singularity. Several authors have now proposed that some form of complementarity is operating, but it is telling that some of them have withdrawn their papers for rethinking, and there is no agreed picture among them.

Where is this going? So far, there is no argument that the firewall is visible outside the black hole, so perhaps no observational consequences there. For cosmology, one might try to extend this analysis to cosmological horizons, but there is no analogous information problem there, so it’s a guess. Do I believe in firewalls? My initial intuition was that EFT would break down and complementarity would save the day, but a nice scenario has not emerged, while the arguments for the firewall as arising from a loss of entanglement are seeming more plausible. But the main thing is that I am now as puzzled about the information paradox as I ever was in the past, and it seems like a good kind of puzzlement that may lead to new insights into quantum gravity.

62 thoughts on “Guest Post: Joe Polchinski on Black Holes, Complementarity, and Firewalls”

  1. Tom,

    Re: your further query on experimental outcomes. I simply don’t know. If you are interested, I’d search arXiv for Clifford Johnson.

    e.

  2. @Joe: Thanks for the answer. I’m only just learning about these things, and have a little way to go before I have a firm opinion.

    @Tom: The questions you’re asking, although important, don’t really have anything to do with the issue at hand. For the current purposes, Prof. Polchinski is assuming that AdS/CFT is correct (for which there is a huge amount of evidence), and using it to provide guidance on quantum gravity more generally. You may think that is the wrong approach, but that’s fine; this is trying to push back the boundaries of knowledge, and there will be disagreements (and mistakes) along the way.

    Your questions about applications of AdS/CFT to condensed matter are pertinent, even though they’re irrelevant here. For what it’s worth, I’ve heard quite a bit about this, and I’m very skeptical. What people seem to do in practice is pick a simple gravitational theory in AdS, do some fairly easy classical calculations, re-interpret the results in terms of a hypothesised dual field theory, and then go looking for a complicated condensed matter system which it might match. It doesn’t seem to offer any real understanding, in my opinion.

    (Also, the fluid/gravity correspondence mentioned by Dilaton is really completely different, and purely classical.)

  3. There were several comments above arguing that black holes might not exist, that Hawking radiation is a conjecture, and that experiments at the Planck scale, probing quantum gravity, are needed to properly describe the Hawking radiation.

    This is all far-fetched, to say the least. First, black holes are predicted to exist by general relativity, and by now there are some generally accepted astronomical observations of such objects out there. Second, Hawking radiation is far from being a conjecture. It is a prediction of Standard Model physics near the black hole horizon. There is no Planck-scale physics involved, just SM and classical GR. The quantum gravity effects are important only near the black hole singularity, while the horizon is quite well described with the classical theory.

    Therefore, the BH information paradox is a real puzzle to solve, not just some conjectured scenario. Even without ever doing any real experiments near the horizon of any astronomical BH, we still have two theories (SM and GR) which lead to a paradoxical situation when combined to describe information loss in black holes. This is a conceptual problem with one of the two theories (or both) and needs to be resolved, regardless of any lack of experimental data. The so-far-unknown theory of quantum gravity is expected to give a resolution of the paradox, so thought-experiments on this topic are extremely useful for people doing research in quantum gravity.

    The only conjecture in the whole story is whether AdS/CFT does or does not have anything to do with non-AdS quantum gravity. While personally I don’t believe AdS/CFT is applicable to real-world gravity, it is a legitimate research avenue to assume otherwise, and to discuss its implications for the BH information problem.

    So please, folks, this isn’t some conjectures-all-around-non-realistic-theoretical-mumbo-jumbo-made-up-for-easy-paycheck problem. It is a quite real theoretical incompatibility between QM and GR, and needs to be addressed, one way or another.

    HTH 🙂

  4. Joe: Regarding remnants. The whole problem rests on the use of effective field theory. Remnants necessarily bring in regions of Planck scale curvature, and can contain regions with a large volume and a small surface area. In contrast to the small curvatures at which you want to give up eft, it is perfectly reasonable to expect eft to break down when dealing with such remnants. And there goes the so-called “pair-production problem.” Regarding giving up AdS/CFT and the statistical interpretation of the BH entropy: Well, your own paper says that if you want to cling to this, you have to take rather drastic measures to modify eft in the small curvature regime to achieve consistency. I find this very unappealing. I’d rather deal with remnants. Either way, I guess this is personal taste to some extent; I’m just saying it’s a possibility that shouldn’t be neglected.

  5. Joe: Regarding the traced-back mode. The b-mode you trace back isn’t just entangled with the early radiation. That you can trace it back on its own (without considering other contributions to the state at the horizon) is actually a consequence of the measurement the observer does at I^+. If it was just entangled with the late radiation, what would prevent you from mirroring the outside modes to negative energy modes on the inside and having them cancel pairwise, as is usually the case?

    I’ll save you the effort of replying to this and add what Don explained to me: You’ve assumed the inside modes to be independent of the outside modes, so whatever goes on with the outside state by making the measurement at I^+ doesn’t affect the negative energy modes, so you can’t expect them to cancel. (Did I finally get this straight?) There are two problems I have with that. First, that assumption isn’t explicitly stated. And second, I still don’t see why a reshuffling of occupation numbers is the only way to encode information in the outgoing radiation. And if that’s not what you do, how would the infalling observer notice the difference?

  6. Captain Not-So-Obvious

    vmarko, I don’t know about Tom, but Christian has a problem with special relativity; he thinks it needs more work. So consider this hypothesis: when a discussion about advanced theoretical physics sputters to a halt because of noisy expressions of skepticism about whether it applies to reality, at least half of the skeptics will also be skeptical about matters that are far more basic than what’s under discussion.

  7. This thread seems to have ground to a halt, but I still have some issues, and don’t know where to turn to clear them up, so let me write them here just in case anybody has some light to shed on them. This all comes from pages 3-5 of the firewall paper, where the main argument is presented.

    – AMPS state that since the energy is finite, the Hawking radiation can be considered as living in a finite-dimensional Hilbert space. I don’t see how this is the case. In a theory with massless particles (e.g. photons), the initial ‘stuff’ which made the black hole could have consisted of arbitrarily many quanta (of sufficiently small energy), giving an infinite-dimensional Hilbert space. This point might not actually be important though…

    – I don’t understand the relation between B and C. B is some field mode, corresponding to part of the ‘late’ Hawking radiation. C is then “…its interior partner mode.” What does that mean?

    – As far as I can tell, AMPS (and Bousso in his follow-up) are taking “A and B are maximally entangled” to be equivalent to S_AB = 0. That doesn’t correspond to my understanding of entanglement. A two-qubit system in the state |00> has zero entropy, but the two qubits are not entangled. For the same reason, I don’t see how different field modes are entangled in (say) the Minkowski vacuum; all the oscillators are in their ground state.

    – There is a part of the (seemingly crucial) discussion about entropy at the bottom of page 4 which I don’t understand. We have a three-part system ABC, and strong sub-additivity of entropy, S_AB + S_BC >= S_B + S_ABC. The inequality S_AB < S_A is argued for, and this seems fine (the Hawking radiation becomes 'more pure' as more of it is emitted). But then the following sentence appears: "The absence of infalling drama means that S_BC = 0 and so S_ABC = S_A." Let's take S_BC = 0 for granted; I don't see why that implies the second equality. The general inequality is S_ABC <= S_A + S_BC, with equality only if A and BC are uncorrelated, in the sense that the density matrix for the whole system is just the tensor product of those for the two subsystems, A and BC.
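    (Appended numerical aside, mine rather than AMPS’s: the step I am asking about follows from the standard fact that S_BC = 0 means BC is in a pure state, and a subsystem in a pure state necessarily factorizes from the rest, rho_ABC = rho_A x rho_BC, giving S_ABC = S_A. A quick consistency check of that factorized form:)

        import numpy as np

        def entropy(rho):
            w = np.linalg.eigvalsh(rho)
            w = w[w > 1e-12]
            return -np.sum(w * np.log(w))

        rng = np.random.default_rng(1)

        # A random mixed state for A ...
        X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        rho_A = X @ X.conj().T
        rho_A /= np.trace(rho_A).real

        # ... tensored with a pure state for BC (the no-drama condition S_BC = 0)
        phi = rng.normal(size=4) + 1j * rng.normal(size=4)
        phi /= np.linalg.norm(phi)
        rho_BC = np.outer(phi, phi.conj())

        rho_ABC = np.kron(rho_A, rho_BC)
        print(entropy(rho_BC))                    # 0
        print(entropy(rho_ABC), entropy(rho_A))   # equal: S_ABC = S_A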

  8. I am puzzled by the separation of radiation into “early” and “late”— you are saying “wait for 99% of the radiation to come out, then consider something dumped into the remaining black hole, and it is entangled with the early radiation, and this means that the late radiation is determined”, but this implicitly assumes that you can do detailed experiments on the precise entangled quantum state of the outgoing radiation, even after knowing it is early (this is a brutal measurement already— you have learned to a certain extent when the radiation has come out! Why should you still be able to extract anything now? This measurement already ruins the phase coherence of the late state), and you also implicitly assume you know the late black hole thermal ensemble (you know about where the black hole is at late times, and about how big it is— this is also an extremely brutal measurement).

    Given that you assume you have these brutal bits of knowledge, namely which photons are early, and which are late, and how big the black hole is and where it is, I don’t see any reason to suppose you can entangle the remaining coherence in the early radiation distant from the black hole and learn anything at all about the emissions of the late-classical black hole from the radiation. I think all you have shown is that the separation “early” and “late” is just not compatible with a unitary S-matrix for the black hole.

    Just by measuring a black hole’s approximate position and approximate horizon location, you are restricting its thermal ensemble in a way that prevents certain kinds of entanglement from surviving. While I don’t see any proof that what I am saying is right, I also don’t see any guarantee that the implicit entanglement involved in measuring that the radiation is early leaves the early radiation state pure enough to do the measurements you need. Obviously if it does, your argument goes through, but the very fact that you have a paradox must mean it is not so— in order to determine the late radiation, you need to make measurements on the “early” radiation over such a very long time that you aren’t even sure when you are done if it is early or late. This means that you are working over an entire black hole S-matrix event, not separating it into a two-step scattering where you know something about the intermediate black hole state (that it is a certain size, with a certain amount and kind of early radiation).

    So, as far as I see, there is an additional unjustified assumption here, which is common to all the referenced literature about the Page time, namely that it is possible to simultaneously produce semiclassical black hole states entangled with pure-enough early Hawking radiation to make measurements on the whole set of early Hawking radiated particles which determine something about the late radiation. This is a heuristic assumption, and I think all you are doing is showing that it is false.

    The only case in which I can see this early/late separation is completely justified is if you throw something into a highly charged black hole, and wait for the hole to decay to extremality, and look at _all_ the radiation emitted during this process (so all the radiation is “early” in this definition, since once the black hole is extremal again, it is a cold asymptotic S-matrix state). In this case, the argument is surely completely coherent, and the end-state of the black hole is a known pure state: an extremal black hole with charge Q and velocity V (assuming a perfectly BPS model black hole, so there is no further decay). Then you can measure the outgoing radiation state, and determine the Q and V of the final state.

    But in this case, the end result is no longer decaying at all, so there is no paradox, no thermal horizon and no Hawking radiation. The only time you get a paradox is when the late-state black hole is truly thermal and truly macroscopically entropic, so any intuition that associates the GR solution to a quantum state of some sort is not particularly clear (you have to associate the GR solution to a thermal ensemble).

    So I can’t internalize the argument enough to see whether it is correct, it seems obviously wrong (but that’s only because the holographic complementarity seems obviously right to me), and the sticking point in understanding the argument for me is the heuristic regarding describing hugely entropic black holes using some sort of unknown quantum state for the black hole alone, rather than an entangled state of the black hole and all the radiation, early and late, with no way to make the distinction between early and late without completely ruining the ability to measure anything interesting at all about the late state.

    So while I don’t find the argument persuasive, it’s only because I don’t buy the assumptions in the related literature on Page times (assumptions which don’t appear in Page’s original paper, I should add). I am questioning these obscure assumptions, not the detailed stuff in the latest paper.

    In Susskind’s reply, since he at times made similar arguments about early and late radiation, he also ends up using a classical black hole picture, and pretends that you can talk about the state of infalling matter and outgoing early/late radiation separately and coherently. So Susskind already implicitly internalized this framework, and perhaps this is the reason the argument was persuasive for him. I would not give up complementarity for this, or honestly, just about anything barring someone taking an instrument and throwing it into a black hole and getting a contradiction with complementarity. It’s just too obviously correct to be false.

    Regarding the “firewall” resolution, it is not satisfactory, because the firewall stress, in the same semiclassical approximation, is nonzero on the horizon, and falls inward along with anything else. This means that the singularity needs to constantly replenish the firewall with new stress by some crazy mechanism, something which is not really reasonable at early times. To see this, consider charged black holes, because the domain of communication with the singularity does not extend past the Cauchy horizon (which degenerates to r=0 in the neutral limit).

    I really think that this is finding an inconsistency in the implicit assumptions in the Page time literature, not in black hole theory itself. This is very interesting and important, but please don’t discard complementarity, as I think it is almost surely fine as is.

  9. @Kaleberg #50: the blueshift doesn’t approach infinity as you approach the outer event horizon of a black hole according to general relativity, nor would you see the entire future history of the universe in fast forward (though these things would theoretically be true for someone attempting to cross the inner horizon of an ideal rotating black hole). See the section titled “Will you see the universe end?” in the following section of an online physics FAQ (from the website of physicist John Baez): http://math.ucr.edu/home/baez/physics/Relativity/BlackHoles/fall_in.html

  10. I have a suggestion for a new thought experiment, which someone might enjoy analysing. Take a black hole that’s happily emitting Hawking radiation, shrinking, and on the road to vanishing eventually. (Make it big, so the radiation is very soft [low-energy photons etc.] and gentle and slow.) Now surround it with mirrors, so that the Hawking radiation bounces back into the hole. (Maybe after several bounces, hole -> mirror-shell -> mirror-shell -> … -> hole – it doesn’t matter, the space between the hole and the mirrors will fill up to just the extent required to equalise the inflow and outflow.) The black hole’s lifetime is now infinite. In fuzzball terms, it can be an exact energy eigenstate – whereas without the mirrors, it’s changing (shrinking) with time, and so can’t be.

    Can someone perhaps study this exact energy eigenstate, which in the quantum sense is static, with tools not available for dynamically evolving states (which are awkward superpositions of energy eigenstates and often incredibly complicated to analyse)? I’m thinking, in particular, of the possibility of calculating the frequency spectrum of the Hawking radiation – which we should perhaps rename “the Hawking standing-wave pattern” since it’s not really radiating any more. Would this help to reveal whether the near-horizon environment is “violent” – a firewall – or “gentle” – a classical-GR-like quiet place?

    Just a thought! I hope this thought experiment is useful to someone!

  11. The argument for a firewall has several problems that need to be solved, in order for it to be taken seriously.

    The first one is that it is impossible to determine the entanglement of a particular quantum state. The proof is as follows:

    Assume that you have a detector that can detect if the state of two spins is entangled or not. So we have (here |0> is the “null” state of the detector and U is the unitary operator determining the interaction of the detector with the spins)

    U |PSI0>|0> = |PSI0>|:-)>,

    if the state |PSI0> of the two spins is entangled, and

    U |PSI1>|0> = |PSI1>|:-(>,

    if the state |PSI1> of the two spins is not entangled.

    The state |+>|+> is not entangled, so we have

    U |+>|+>|0> = |+>|+>|:-(>,

    and also

    U |->|->|0> = |->|->|:-(>.

    Now consider the entangled state |+>|+>+|->|->. Because of the linearity of quantum evolution, we must have

    U( |+>|+>+|->|->)|0> = ( |+>|+>+|->|->)|:-(>.

    This proves that the “entanglement detector” cannot work in general.
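    (A numerical restatement of the contradiction, added for illustration: the superposition really is entangled, yet linearity forces the detector into the “not entangled” outcome.)

        import numpy as np

        def entanglement_entropy(psi):            # psi: a pure two-spin state
            p = np.linalg.svd(psi.reshape(2, 2), compute_uv=False) ** 2
            p = p[p > 1e-12]
            return -np.sum(p * np.log(p))

        up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
        superpos = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

        print(entanglement_entropy(superpos))     # ln 2: maximally entangled

        # Linearity gives U(superpos)|0> = superpos|:-(>, but a working detector
        # would have to output superpos|:-)>. The two are orthogonal:
        frown, smile = np.array([1.0, 0.0]), np.array([0.0, 1.0])
        by_linearity = np.kron(superpos, frown)
        required = np.kron(superpos, smile)
        print(abs(by_linearity @ required.conj()))   # 0: contradiction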

    The second problem has to do with information storage. If you say that “a single observer can see both bits, measuring the early radiation and then jumping in and seeing the copy behind the horizon,” you have to account for how the observer can store the information describing the state of the early radiation. The number of micro-states of the early radiation is larger than exp(A/4), where A is the present area of the black-hole event horizon. The information required to describe a particular state (which is the log of the number of states) is therefore larger than A/4. Now, the holographic principle implies that the observer, in order to be able to store the information, has to be surrounded by an area larger than A. The observer has to be larger than the black hole! This is an alternative explanation of why the observer cannot just “jump in.”
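    (To put a number on this, here is a rough added sketch using the standard Bekenstein-Hawking formula; the solar-mass figure is an illustration, not part of the original argument.)

        import numpy as np

        G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
        M = 1.989e30                          # one solar mass, in kg

        # Bekenstein-Hawking entropy S = A/4 in Planck units, in nats
        S = 4 * np.pi * G * M**2 / (hbar * c)
        print(S)                              # ~1e77: more information than any
                                              # observer smaller than the hole can hold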
