If you happen to have been following developments in quantum gravity/string theory this year, you know that quite a bit of excitement sprang up over the summer, centered around the idea of “firewalls.” The idea is that an observer falling into a black hole, contrary to everything you would read in a general relativity textbook, really would notice something when they crossed the event horizon. In fact, they would notice that they are being incinerated by a blast of Hawking radiation: the firewall.

This claim is a daring one, which is currently very much up in the air within the community. It stems not from general relativity itself, or even quantum field theory in a curved spacetime, but from attempts to simultaneously satisfy the demands of quantum mechanics and the aspiration that black holes don’t destroy information. Given the controversial (and extremely important) nature of the debate, we’re thrilled to have Joe Polchinski provide a guest post that helps explain what’s going on. Joe has guest-blogged for us before, of course, and he was a co-author with Ahmed Almheiri, Donald Marolf, and James Sully on the paper that started the new controversy. The dust hasn’t yet settled, but this is an important issue that will hopefully teach us something new about quantum gravity.

**Introduction**

Thought experiments have played a large role in figuring out the laws of physics. Even for electromagnetism, where most of the laws were found experimentally, Maxwell needed a thought experiment to complete the equations. For the unification of quantum mechanics and gravity, where the phenomena take place in extreme regimes, they are even more crucial. Addressing this need, Stephen Hawking’s 1976 paper “Breakdown of Predictability in Gravitational Collapse” presented one of the great thought experiments in the history of physics.

The experiment that Hawking envisioned was to let a black hole form from ordinary matter and then evaporate into radiation via the process that he had discovered two years before. According to the usual laws of quantum mechanics, the state of a system at any time is described by a wavefunction. Hawking argued that after the evaporation there is not a definite wavefunction, but just a density matrix. Roughly speaking, this means that there are many possible wavefunctions, with some probability for each (this is also known as a mixed state). In addition to the usual uncertainty that comes with quantum mechanics, there is the additional uncertainty of not knowing what the wavefunction is: information has been lost. As Hawking put it, “Not only does God play dice, but he sometimes confuses us by throwing them where they can’t be seen.”
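For readers who want the pure/mixed distinction spelled out, here is the standard textbook statement (notation is mine, not Hawking's):

```latex
% A definite wavefunction (pure state) versus a density matrix (mixed state):
\rho_{\rm pure}  = |\psi\rangle\langle\psi| ,
\qquad \operatorname{Tr}\rho_{\rm pure}^{2} = 1 ,
\qquad\quad
\rho_{\rm mixed} = \sum_i p_i\, |\psi_i\rangle\langle\psi_i| ,
\qquad \sum_i p_i = 1 ,
\qquad \operatorname{Tr}\rho_{\rm mixed}^{2} < 1 .
% The p_i are the probabilities of the possible wavefunctions |psi_i>:
% this is the "additional uncertainty" beyond ordinary quantum mechanics.
```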

Density matrices are much used in statistical mechanics, where they represent our ignorance of the exact situation. Our system may be in contact with a thermal bath, and we do not keep track of the state of the bath. Even for an isolated system, we may only look at some macroscopic variables and not keep track of every atom. But in both cases the complete description is in terms of a definite wavefunction. Hawking was arguing that for the final state of the black hole, the most complete description was in terms of a density matrix.

Hawking had thrown down a gauntlet that was impossible to ignore, arguing for a fundamental change in the rules of quantum mechanics that allowed information loss. A common reaction was that he had just not been careful enough, and that as for ordinary thermal systems the apparent mixed nature of the final state came from not keeping track of everything, rather than being a fundamental property. But a black hole is different from a lump of burning coal: it has a horizon beyond which information cannot escape, and many attempts to turn up a mistake in Hawking’s reasoning failed. If ordinary quantum mechanics is to be preserved, the information behind the horizon has to get out, but this is tantamount to sending information faster than light.

I have always been in awe of Hawking’s paper. His argument stood up to years of challenge, and subtle analyses that only sharpened his conclusion. Eventually it came to be realized that quantum mechanics in its usual form could be preserved only if our understanding of spacetime and locality broke down in a big way. In fact, as I will describe further below, this is now widely believed. So Hawking may have been wrong about what had to give (and he conceded in 2004, perhaps prematurely), but he was right about the most important thing: his argument required a change in some fundamental principle of physics.

**Black hole complementarity**

To get a closer look at the argument for information loss, suppose that an experimenter outside the black hole takes an entangled pair of spins |+-> + |-+> and throws the first spin into the black hole. The equivalence principle tells us that nothing exceptional happens at the horizon, so the spin passes freely into the interior. But now the outside of the black hole is entangled with the inside, and by itself the outside is in a mixed state. The spin inside can’t escape, so when the black hole decays, the mixed state on the outside is all that is left. In fact, this process is happening all the time without the experimenter being involved: the Hawking evaporation is actually due to production of entangled pairs, with one of each pair escaping and one staying behind the horizon, so the outside state always ends up mixed.
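To spell out the tracing step (normalization added by me; the post writes the state unnormalized):

```latex
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{+}{-}\rangle + |{-}{+}\rangle\bigr) ,
\qquad
\rho_{\rm out} = \operatorname{Tr}_{\rm in}\,|\psi\rangle\langle\psi|
              = \tfrac{1}{2}\,|{+}\rangle\langle{+}| + \tfrac{1}{2}\,|{-}\rangle\langle{-}| .
% Tracing over the spin behind the horizon leaves the exterior maximally
% mixed: the relative phase of the two terms is inaccessible from outside.
```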

A couple of outs might come to mind. Perhaps the dynamics at the horizon copies the spin as it falls in and sends the copy out with the later Hawking radiation. However, such copying is not consistent with the superposition principle of quantum mechanics; this is known as the no-cloning theorem. Or, perhaps the information inside escapes at the last instant of evaporation, when the remnant black hole is Planck-sized and we no longer have a classical geometry. Historically, this was the third of the main alternatives: (1) information loss, (2) information escaping with the Hawking radiation, and (3) remnants, with subvariations such as stable and long-lived remnants. The problem with remnants is that these very small objects need an enormous number of internal states, as many as the original black hole, and this leads to its own problems.

In 1993, Lenny Susskind (hep-th/9306069, hep-th/9308100), working with Larus Thorlacius and John Uglum and building on ideas of Gerard ‘t Hooft and John Preskill, tried to make precise the kind of nonlocal behavior that would be needed in order to avoid information loss. Their principle of *black hole complementarity* requires that different observers see the same bit of information in different places. An observer outside the black hole will see it in the Hawking radiation, and an observer falling into the black hole will see it inside. This sounds like cloning but it is different: there is only one bit in the Hilbert space, but we can’t say where it is: locality is given up, not quantum mechanics. Another aspect of the complementarity argument is that the external observer sees the horizon as a hot membrane that can radiate information, while an infalling observer sees nothing there. In order for this to work, it must be that no observer can see the bit in both places, and various thought experiments seemed to support this.

At the time, this seemed like an intriguing proposal, but not (for most of us) convincingly superior to information loss, or remnants. But in 1997 Juan Maldacena discovered AdS/CFT duality, which constructs gravity in a particular kind of spacetime box, anti-de Sitter space, in terms of a dual quantum field theory. (Hawking’s paradox is still present when the black hole is put in such a box.) The dual description of a black hole is in terms of a hot plasma, supporting the intuition that a black hole should not be so different from any other thermal system. This dual system respects the rules of ordinary quantum mechanics, and does not seem to be consistent with remnants, so we get the information out with the Hawking radiation. This is consistent too with the argument that locality must be fundamentally lost: the dual picture is *holographic*, formulated in terms of field theory degrees of freedom that are projected onto the boundary of the space rather than living inside it. Indeed, the miracle here is that gravitational physics looks local at all, not that this sometimes fails.

**A new paradox?**

AdS/CFT duality was discovered largely from trying to solve the information paradox. After Andy Strominger and Cumrun Vafa showed that the Bekenstein-Hawking entropy of black branes could be understood statistically in terms of D-branes, people began to ask what happens to the information in the two descriptions, and this led to seeming coincidences that Maldacena crystallized as a duality. As with a real experiment, the measure of a thought experiment is whether it teaches us about new physics, and Hawking’s had succeeded in a major way.

For AdS/CFT, there are still some big questions: precisely how does the bulk spacetime emerge, and how do we extend the principle out of the AdS box, to cosmological spacetimes? Can we get more mileage here from the information paradox? On the one hand, we seem to know now that the information gets out, but we do not know the mechanism, the point at which Hawking’s original argument breaks down. But it seemed that we no longer had the kind of sharp alternatives that drove the information paradox. Black hole complementarity, though it did not provide a detailed explanation of how different observers see the same bit, seemed to avoid all paradoxes.

Earlier this year, with my students Ahmed Almheiri and Jamie Sully, we set out to sharpen the meaning of black hole complementarity, starting with some simple `bit models’ of black holes that had been developed by Samir Mathur and Steve Giddings. But we quickly found a problem. Susskind had nicely laid out a set of postulates, and we were finding that they could not all be true at once. The postulates are (a) Purity: the black hole information is carried out by the Hawking radiation, (b) Effective Field Theory (EFT): semiclassical gravity is valid outside the horizon, and (c) No Drama: an observer falling into the black hole sees no high energy particles at the horizon. EFT and No Drama are based on the fact that the spacetime curvature is small near and outside the horizon, so there is no way that strong quantum gravity effects should occur. Postulate (b) also has another implication, that the external observer interprets the information as being radiated from an effective membrane at (or microscopically close to) the horizon. This fits with earlier observations that the horizon has effective dynamical properties like viscosity and conductivity.

Purity has an interesting consequence, which was developed in a 1993 paper of Don Page and further in a 2007 paper of Patrick Hayden and Preskill. Consider the first two-thirds of the Hawking photons and then the last third. The early photons have vastly more states available. In a typical pure state, then, every possible state of the late photons will be paired with a different state of the early radiation. We say that any late Hawking photon is fully entangled with some subsystem of the early radiation.
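A hedged numerical sketch of Page's point (my own toy example, not from the post): for a random pure state of a large "early" system joined to a small "late" system, the late subsystem comes out almost maximally entangled with the early one. The qubit counts below are hypothetical, chosen only to make the computation fast.

```python
import numpy as np

# Toy illustration of Page's 1993 observation: in a typical pure state of a
# large bipartite system, the smaller part is nearly maximally entangled with
# the larger part. "Early radiation" = 8 qubits, "late" mode = 2 qubits.
rng = np.random.default_rng(0)
d_early, d_late = 2**8, 2**2

# A random pure state of the combined system: complex Gaussian amplitudes,
# normalized. As a matrix, rows index early states and columns late states.
psi = rng.normal(size=(d_early, d_late)) + 1j * rng.normal(size=(d_early, d_late))
psi /= np.linalg.norm(psi)

# Reduced density matrix of the late radiation: rho = psi^dagger psi.
rho_late = psi.conj().T @ psi

# Von Neumann entropy S = -Tr(rho log2 rho), in bits of entanglement.
evals = np.linalg.eigvalsh(rho_late)
evals = evals[evals > 1e-12]
S_late = float(-np.sum(evals * np.log2(evals)))

print(f"S_late = {S_late:.3f} bits")  # very close to the maximal 2 bits
```

Page's average-entropy formula puts the deficit from maximal entanglement at roughly d_late / (2 d_early ln 2) bits, here under 0.02, which is the quantitative sense in which "any late Hawking photon is fully entangled with some subsystem of the early radiation" in a typical pure state.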

However, No Drama requires that this same Hawking mode, when it is near the horizon, be fully entangled with a mode behind the horizon. This is a property of the vacuum in quantum field theory, that if we divide space into two halves (here at the horizon) there is strong entanglement between the two sides. We have used the EFT assumption implicitly in propagating the Hawking mode backwards from infinity, where we look for purity, to the horizon where we look for drama; this propagation backwards also blue-shifts the mode, so it has very high energy. So this is effectively illegal cloning, but unlike earlier thought experiments a single observer can see both bits, measuring the early radiation and then jumping in and seeing the copy behind the horizon.
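A sketch of how the clash can be made precise via strong subadditivity of the von Neumann entropy (the labels are mine: A = early radiation, B = the late Hawking mode, C = its partner mode behind the horizon):

```latex
% Strong subadditivity holds for any state of three subsystems:
S_{AB} + S_{BC} \;\ge\; S_{B} + S_{ABC} .
% No Drama: B and C sit in the local vacuum, a pure entangled state, so
S_{BC} = 0 \quad\Rightarrow\quad S_{ABC} = S_{A} ,
% and the inequality becomes  S_{AB} \ge S_{A} + S_{B}.
% Purity: after the Page time each late mode must *reduce* the entropy of
% the radiation, S_{AB} < S_{A}.  The two are inconsistent unless S_B = 0,
% i.e. unless the near-horizon entanglement demanded by No Drama is absent.
```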

After puzzling over this for a while we started to ask other people about it. The first one was Don Marolf, who remarkably had just come to the same conclusion by a somewhat different argument, mining the black hole by lowering a box near to the horizon and then pulling up some thermal excitations, rather than looking at the late Hawking photon. This is nicely complementary to our argument: it is a bit more involved, but it shows that if there is drama then it is everywhere on the horizon, whereas the Hawking radiation argument is only sensitive to photons in nearly spherically symmetric states. So if drama breaks down, it breaks down in a big way, with a firewall of Planck-energy photons just behind the horizon.

As we spoke to more and more people, no one could find a flaw in our reasoning. Eventually I emailed Susskind, expecting that he would quickly straighten us out. But his reaction, a common one, was first to tell us that there must be some trivial mistake in our reasoning, and a bit later to realize that he was as confused as we were. He is now a believer in the firewall, though we are still debating whether it forms at the Page time (half the black hole lifetime) or much faster, the so-called fast-scrambling time. The argument for the latter is that this is the time scale over which most black hole properties reach equilibrium. The argument for the former is that self-entanglement of the horizon should be the origin of the interior spacetime, and this runs out only at the Page time.

Actually, over the years many people have suggested that the black hole geometry ends at the horizon. Most of these arguments are based on questionable dynamics, with perhaps the most coherent proposal being Mathur’s fuzzball, the horizon being replaced by a shell of branes (though Samir himself is actually advocating a form of complementarity now).

If we want to avoid drama, we have to give up either purity or EFT. I am reluctant to give up purity: AdS/CFT is a guide that I trust, but even the earlier arguments for purity were strong. Giving up EFT is not so implausible. AdS/CFT tells us that locality, the basis for EFT, has to break down, and this need not stop at the horizon. Indeed, Giddings has recently been arguing for a nonlocal interaction that transfers bits from the inside of the black hole to a macroscopic distance outside. But it is hard to come up with a good scenario: the violation of EFT is much larger than might have been anticipated (it is an order one effect in the two-particle correlator). One might try to appeal to complementarity, since drama is measured by an infalling observer and purity by an asymptotic one, but these two can communicate. Also, the breakdown that is needed is subtle and difficult to implement, a `transfer of entanglement’ (this is particularly a problem for the nonlocal interaction idea). This transfer is reminiscent of an idea that Gary Horowitz and Maldacena put forward a while back, that there is a future boundary condition, a final state, at the black hole singularity. Several authors have now proposed that some form of complementarity is operating, but it is telling that some of them have withdrawn their papers for rethinking, and there is no agreed picture among them.

Where is this going? So far, there is no argument that the firewall is visible outside the black hole, so perhaps no observational consequences there. For cosmology, one might try to extend this analysis to cosmological horizons, but there is no analogous information problem there, so it’s a guess. Do I believe in firewalls? My initial intuition was that EFT would break down and complementarity would save the day, but a nice scenario has not emerged, while the arguments for the firewall as arising from a loss of entanglement are seeming more plausible. But the main thing is that I am now as puzzled about the information paradox as I ever was in the past, and it seems like a good kind of puzzlement that may lead to new insights into quantum gravity.

Didn’t we already expect that EFT would break down from e.g. Lowe, Polchinski, Susskind, Thorlacius and Uglum (1995)?

If the nice slice argument fails, presumably locality goes with it, no?

Interesting article. Thank you.

Sociology question: is this business convincing *anybody* to go back to believing in information loss?

Curious. I am a bit confused, however, why there is opposition to the firewall idea at all. I had thought that a firewall (though I’d never heard that specific term before) was a natural consequence of Hawking radiation: that the Hawking radiation would be so dramatically blue-shifted for an infalling observer that they would be hit by an intense blast of radiation upon crossing the horizon.

I mean, I get that the curvature at the event horizon of a black hole isn’t anything special. But why should this mean that nothing happens there for an infalling observer?

Wow, this is so nice indeed!

It is such a nice reading and perfectly accessible even to me.

I’ve just asked about these firewalls in the context of black holes at physics SE some days ago …

Thanks so very much for this awesome guest post

“why should this mean that nothing happens there for an infalling observer?”

Short answer: Hawking’s derivation of black hole radiation assumes that the quantum field is in a vacuum state for an infalling observer. So he (and people following him) interpreted this as proof that someone falling into a black hole will see nothing out of the ordinary while passing through the event horizon.

Fascinating post, especially having read Susskind’s book “The Black Hole War” a few years ago; guess that one will need some updating! Also, a thought: I’ve read that in several GR solutions which in pure GR could lead to closed timelike curves (traversable wormholes, the Alcubierre warp bubble) there are arguments suggesting that vacuum fluctuations would build up to infinity (or Planck energy density) on the boundary of the region where CTCs would become possible, suggesting that in quantum gravity there will be effects that make the CTCs of GR impossible (this is Hawking’s “chronology protection conjecture”; see this paper on the effect for wormholes, and this one for the effect for the Alcubierre bubble). So I wonder, could the “firewall” suggested above possibly be related in some way, even though a non-rotating black hole doesn’t contain any CTCs in GR?

As one whose day job is actually selling “firewalls” (the internet protective kind), I wonder if the analogy can reasonably be expanded. The purpose of an internet firewall is not to completely eliminate information flow in/out but to filter it, allowing “permissible data” to flow while blocking that which should not. I’m wondering if there is a scenario where you have what I am going to describe, in the language above, as “constrained or controlled drama”. I’m way, way over my head here, so this may be a totally nonsensical attempt to connect this to something I understand and am familiar with. Anyway, just felt like chiming in. I genuinely appreciate this post and the attempt to make this conceptually accessible to those of us who don’t do this for a living.

e.

“But in 1997 Juan Maldacena discovered AdS/CFT duality…”

I’m shocked by the implicit statement that AdS/CFT duality has been proven, which I hadn’t heard about. If it hasn’t been proven, Maldacena can hardly be argued to have “discovered” a statement that may not be true. If it has been proven, could I have a reference?

Now, this post seems to claim that this “firewall” business is derived in part by using the AdS/CFT duality (if it exists). To what extent can I trust this result if it is dependent on an unproven conjecture? I’d appreciate a direct email answer to this question.

Kernel,

Stating that someone has discovered a conceptual framework does not imply proof. So there is really nothing to be shocked about. It has been shown to be a very useful technique in other areas of physics such as analyzing condensed matter. If a different way of looking at the world provides new insights I’m not sure what the downside is with or without formal proof.

The earlier papers asserting that something breaks down at the event horizon usually had the flaw that they insisted event horizons couldn’t exist at all, which creates all kinds of problems because an event horizon is defined anticausally by the future behavior of lightcones.

Jacques Distler responded to one of them (which had gotten some “black holes do not exist” media attention) by talking about a thought experiment in which you pass through the event horizon of a black hole that’s going to be created by a collapsing shell of radiation that has, itself, just been created surrounding you millions of light-years away. If you feel anything out of the ordinary, it seems to be a means of instantaneous or possible backwards-in-time communication.

But it seems to me that the firewall argument gets around this kind of problem by not being so strict about removing event horizons from the universe. If the firewall forms at the fast-scrambling time or the Page time, that’s way after the situation Distler was concerned about, right? The stuff that’s collapsing to make the event horizon actually has to get into the event horizon, at least.

I still have no idea what I think about this: it bears such a startling *superficial* resemblance to old crackpot papers and erroneous arguments I thought up in college, while being apparently valid enough to get support from leading lights of the field.

Thank you all for your kind comments.

1. Physicalist: a good question. Many discussions of the information problem use `nice’ coordinate slices that extend deep into the black hole interior. One surprising feature of the current argument is that it is based only on observations outside and just inside the black hole horizon. So ideas like LPSTU that are based on the geometry of nice slices seem irrelevant.

3. Sam: I don’t know if anyone has changed their mind, but those who had stuck with information loss all along are saying “I told you so.”

4. Jason: Physicalist has answered this. Essentially, if you evolve forward in time, you only ever see a red shift. But in our case the consequences of purity tell us something about the future state, and so the blue shift enters.

7. Jesse: Susskind has been drawing an analogy between chronology protection and the firewall, essentially to prevent an observer from falling through the horizon and ending up in the earlier Hawking radiation. A paper may be forthcoming.

8. Elliot: We adopted the name firewall without really contemplating its other connotations, but like your firewalls it does provide a kind of protection.

9. Kernel: In physics, and especially in QFT, what we can prove is vastly less than what we can understand. Forty years after QCD, we still can’t prove mathematically that it confines quarks, but we have many physical arguments. Right now I am puzzled enough to say that everything is up in the air, and I hope the current puzzle leads to a deeper understanding of AdS/CFT, but falsifying it would be surprising in light of all the evidence for it.

11. Matt: Exactly, the just-formed horizon would not yet have a firewall. And yes, I agree with your last paragraph.

Or give up the strong interpretation of the BH entropy (your postulate 3), let information come out in the Planckian phase. See paper w/ Lee http://arxiv.org/abs/0901.3156

Jason: In the normal case the infalling observer doesn’t notice any highly energetic modes; you can actually go and calculate that. There is unfortunately a lot of confusion about this in the pop sci literature. If the infalling observer did notice something remarkable, you’d have thrown out the equivalence principle already with the Hawking radiation, which would have been worrisome indeed (people discussed that in the late 70s I think, but it was quickly resolved).

As a curious side note, when an internet attack occurs, such as a massive denial of service attempt to swamp a web site, one remedy is to redirect the flow of information via its IP address to basically nowhere. This has been given the industry name blackhole filtering.

I am not surprised. The difference in the rate at which time passes at the centre of a black hole and at its event horizon must be very big. This implies a very big difference in temperature, with all the thermodynamic effects resulting from it.

Joe: I’m still confused about the physical relevance of tracing back the late mode. How’s this give you an observable for the infalling observer? I mean, the infalling observer doesn’t know what’s being measured at I^+, so how can this matter for him? I’d think he sits in a state that must still contain all possible measurements at I^+. Now mixing a mixed state doesn’t give you a pure state, but then you still have to take into account the modes inside the horizon. I fail to see how this is necessarily inconsistent.

Just a remark. There is reason to be suspicious of arguments that deduce supposedly fundamental properties of gravity from the AdS/CFT duality. And these reasons have nothing to do with the fact that it is still a conjecture. Rather, unless a miraculous coincidence occurs, the gravitational theory (i.e., one that includes a metric among its possibly many other dynamical fields) on the AdS side of the duality is likely not equivalent to GR, nor GR + SM, nor GR + SM + promising-elementary-particle-DM-candidate.

So, whatever properties the AdS/CFT duality exhibits, they may be true for some specific gravitational theory, but one that need not have much to do with gravity in our own universe (at least as far as the experimental evidence goes).

On the other hand, experimental considerations and minimalism lead to something like GR + SM as fundamental theory. In that scenario, states of quantum fields are most definitely non-firewall like, at least restricted to the neighborhood of the horizon far away from where it meets the spacelike singularity, and information traverses the horizon irreversibly. The information mystery still remains: What happens at late evaporation stages or for small black holes? Unfortunately, to the best of my knowledge the only answer we can give is a big question mark. These regimes are simply intractable with current calculational methods. And the biggest challenge at the moment is to improve on that.

@Joe: A couple of questions, if you get the chance:

– My gut feeling was that Bousso’s paper (http://arxiv.org/abs/arXiv:1207.5192) cleared up the firewall issue pretty neatly. You obviously disagree; can you say why in a few sentences?

– Your argument seemed to hinge on a technical (although perhaps elementary) result that a quantum subsystem can be maximally entangled with at most one other subsystem. I can’t find a nice discussion of this anywhere (I’m not even quite sure what it means; I only know how to define the ‘amount of entanglement’ when a system is split into exactly two pieces); are you able to give a reference?

@Jason: I’m not sure how satisfactory other answers have been, but I like to think about it like this. Suppose I am freely-falling into a black hole. I get very close to the horizon, and haven’t experienced any unusual effects (assuming the equivalence principle holds). Now, I want to return to infinity to report my findings. To do so, I would have to undergo a huge acceleration to avoid falling through the horizon, and this acceleration generates Unruh radiation at a high temperature. By the time I get back out to infinity, this radiation is heavily red-shifted, and just looks like part of the Hawking radiation.

There is much much more that could be said along these lines (and there are calculations to be done — presumably they are out there in the literature), but I think the above is a reasonably clear physical picture.

Very clear explanation of the information paradox. Thank you.

Does entropy depend on the rest frame? It is widely believed that the black hole entropy is a concept defined in the rest frame of an external observer. What about an infalling observer? If an infalling observer can determine an entropy, can she detect a temperature?

Observers outside the black hole can’t see a particle crossing the event horizon. Then in the example of the entangled pair of spins |+-> + |-+>, the external observer never sees his pure state describing an entangled pair evolve into a single spin in a mixed state.

It seems like this argument hinges on an infalling observer being able to cross the event horizon. How does this work in the context of an evaporating black hole? In the reference frame of a far away observer, the evaporation proceeds extremely slowly, but still happens in a finite time. On the other hand, the same observer will calculate that an infalling particle will take even longer to fall in (infinite for a non-radiating hole, shorter for an evaporating one). So to a far-away observer, the hole will evaporate before the particle has a chance to fall in.

What does this look like from the point of view of an infalling observer? In his description, the fall takes a finite amount of time, and he will not see any significant amount of radiation while falling in, but that does not mean that the hole isn’t evaporating. The thought experiments I have seen for infalling observers have always assumed a static black hole, not a shrinking one, but aren’t these cases very different from the point of view of an infalling observer? By comparing the situation to a remote observer’s description, it seems like an infalling observer would see the black hole lose mass at an ever growing pace as he approaches the horizon, and that the horizon would therefore recede in front of him. If so, the “passing through the horizon and then observing the other part of the entangled modes” part of the argument fails, doesn’t it?

“- My gut feeling was that Bousso’s paper (http://arxiv.org/abs/arXiv:1207.5192) cleared up the firewall issue pretty neatly. You obviously disagree; can you say why in a few sentences?”

+1

Perhaps black holes do not evaporate. Hawking radiation is only a conjecture as to what the correct theory of `quantum gravity’ would predict. If black holes do not evaporate then there is no information paradox because the information is frozen in place under the event horizon. What we call black holes really are what the older literature calls frozen stars.

Elliot Tarabour,

“It has been shown to be a very useful technique in other areas of physics such as analyzing condensed matter.” Really? I thought real condensed matter physicists don’t use AdS/CFT. Use it for what? Calculating numbers that agree with experiment. Ha.

I love how these people are trying to find out more about a speculative theory with no experimental evidence in favor of it, and with no derivations of the actual world we live in (Standard Model with the precise masses and coupling constants physicists have measured) by using *thought experiments* taking place near black hole horizons. And these people are being *paid* to do this nonsense?? Until we become smart enough to figure out how to do *real* experiments at the Planck scale (which may never happen), how about we acknowledge defeat and do something more useful?

How many angels can dance on the head of a pin?

What would falling into a hypothetical entity we have never truly observed feel like based off an incomplete physics of relativity combined with computer modeling based on unproven mathematical conjecture which has no underpinning in reality?

What could possibly go wrong with this… pretense of physics being called a thought experiment? Why not instead do something remotely useful, like finishing up a few loose ends in relativity, such as a transform from v to v’? Surely if you’re ready to resolve such important cosmic concerns facing humanity… such as imaginary joyrides across imaginary event horizons into imaginary gravitational singularities which you just might or might not imagine to be evaporating, you can figure out a relativistic transform of velocity?

Hope someone will answer Jose’s and Sigurd’s questions. They are similar to questions that have also puzzled me.

Tom,

http://arxiv.org/abs/1002.2947

e.

13. Bee: You are discussing remnants, which indeed are an alternative to firewalls. But one has to give up a lot: as you say, the statistical interpretation of the Bekenstein-Hawking entropy, and AdS/CFT, and on top of this there is the old problem of infinite production of virtual remnants. This is still a logical possibility, but not one that I would bet on.

17. Bee: Purity tells us that if an observer measures the mode at I^+ they get a state entangled with the early radiation. EFT tells us how b propagates, so we know it would still have been entangled with the early radiation if it had been measured earlier and closer to the horizon.

18. Igor: I think you are saying there may be more than one theory of quantum gravity, in which the black hole behaves in different ways. This is possible, but given the difficulty in finding even a single consistent outcome, I expect that in the end there can be only one.

19. Rhys, 23. rfp: Bousso (and others) want to say that an infalling observer sees the mode entangled with a mode behind the horizon, and the asymptotic observer sees it entangled with the early radiation. This is an appealing idea, and was what I initially expected. The problem is that the infalling observer can measure the mode and send a signal to infinity, giving a contradiction. Bousso now realizes this, and is trying to find an improved version. The precise entanglement statement in our paper is an inequality known as strong subadditivity of entropy, discussed with references in the wikipedia article on Von Neumann entropy.

20. Kostiantyn: the entropies used in our argument are always Von Neumann entropies (wikipedia again), which measure the purity of a state and are frame independent. They are not the coarse-grained thermal entropies.

21. Jose, Sigurd: The Penrose diagram for an evaporating black hole (wikipedia again, Black hole information paradox) shows an interior region into which one can freely fall. The last photons one emits before passing the horizon stay close to the horizon for a very long time (even as it shrinks due to evaporation) and then emerge much later. It is strange (though already in special relativity there is a lot that is counterintuitive), but doesn’t seem to help with this.

24. R.L.: The calculations showing black hole evaporation are pretty simple (looked at in the right way) and robust. One can also see this as ordinary evaporation in the CFT dual.
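(An aside for readers following along: replies 19 and 20 above lean on the Von Neumann entropy as a frame-independent measure of the purity of a state and of entanglement. A minimal numerical sketch of those two facts, in Python with NumPy; the function name and the example state are mine, purely illustrative.)

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # 0*log(0) = 0 by convention
    return float(-np.sum(evals * np.log2(evals)))

# Bell state (|00> + |11>)/sqrt(2): pure overall, maximally mixed in parts.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                          # full two-qubit density matrix
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out qubit B

print(von_neumann_entropy(rho))    # 0: the full state is pure
print(von_neumann_entropy(rho_A))  # 1: one qubit alone is maximally mixed
```

The first number being zero says the total state is pure; the second being one bit says each half, viewed alone, is maximally mixed, which is exactly the sense in which the two subsystems are entangled.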

@Tom and Christian @26

Could you please stop trolling and spoiling the nice and interesting discussion we are actually having here, and make use of your option to go away if you have nothing constructive to contribute?

Thanks !

@Dilaton,

Just because my comment was something you very much disagree with, doesn’t mean I’m trolling. Can’t you take a little criticism?

@Elliot Tarabour,

Thanks for the paper reference. I read the abstract, which refers to the claim that “insights” were obtained into that condensed matter problem by AdS/CFT. Do you know if those insights have been confirmed by experiment? Do you know if any numerical values of any observables were calculated from AdS/CFT in that paper and compared with experiment? I’m just asking. I’m not insinuating anything! Thanks again.

@Tom

I don’t know why this is, but I can’t get rid of the suspicion that you are not really interested in what you are asking, but rather have a preconceived dismissive opinion (probably obtained from reading a particular well known blog ;-)) about AdS/CFT, or more generally about the whole framework it is embedded in, that nothing and nobody in the world could ever change.

And note that valuable insights do not exclusively consist of exact calculations of particular numbers; being able to explain the reason or mechanism leading to observed phenomena (in particular if this has not been possible before) is worthwhile by itself too.

@Dilaton,

What’s wrong with criticism and asking questions? What’s wrong with trying to hold a conjecture accountable by asking if it conforms to experimentation? Isn’t that what science is about?

And how good is an explanation of the reason or mechanism behind an observed phenomenon if the basis behind your explanation is a conjecture with no experimental support, and with no actual calculations of a numerical value of an observable that can be measured in the laboratory?

Why don’t you ask that question to the writer of the blog you probably read?

“18. Igor: I think you are saying there may be more than one theory of quantum gravity, in which the black hole behaves in different ways. This is possible, but given the difficulty in finding even a single consistent outcome, I expect that in the end there can be only one.”

Dear @Joe #29,

I think “consistent” in the way you use it is a loaded word. There is a perfectly physically reasonable way to relax this criterion such that the number of gravity + matter Lagrangians that one could use is not only non-unique but essentially infinite (drop perturbative renormalizability, regularize/renormalize EFTs without cutoffs, allow EFTs to be their own UV completions). The trouble is not that any of these models are inconsistent; it’s that we have no idea what the late-stage dynamics of black holes in these theories are, with few exceptions.

I’m sure you are well aware of this point of view, though you seem to disagree with it. However, I thought it important to clarify where the disagreement would be, that is, hidden behind the otherwise inconspicuous word “consistent”.

@Tom

What upsets me is not criticism or asking questions per se (which is indeed needed in science) but the horrible destructive attitudes (they want certain research to completely disappear, as you admitted yourself) of most people criticizing or asking questions. Their questions are not real questions, since they have preconceived opinions and are not ready to listen to or accept any serious and convincing argument that contradicts them. Explaining things to such people is pointless and a waste of time.

Saying in your previous comment that people should not be allowed to think about or investigate Planck scale physics, black holes (which we observe!), etc. IS indeed trolling, and very offensive to the author of this nice guest post, for example.

By bringing up as a reason for this outrageous claim that the Planck scale is not DIRECTLY accessible today, you belittle the work of thousands of experimental, phenomenological, and theoretical physicists, who are actually working on how to find INDIRECT hints of higher or, yes, even Planck scale BSM physics at the LHC, in cosmological data, or by other means nobody has thought about yet.

What people are doing at the LHC, for example, is nicely explained by Prof. Strassler: http://profmattstrassler.com/. This site makes for better and more balanced reading, if you are really interested in fundamental physics, than the one you obviously come from.

Cheers

@Dilaton,

Questions are questions no matter who asks them and regardless of whether you choose to answer them or not. Do my questions not have answers? If so, do you know the answers? If not, I’d greatly appreciate the response of someone who can answer them. Cheers.

Tom,

I do not know if any specific values were calculated in this instance. I know that Professor Clifford Johnson at USC has been actively engaged in similar research, and direct conversations with him indicated that the insights obtained from using the AdS/CFT framework did conform very closely to experimental values obtained in some RHIC experiments. Whether there is a direct causal relationship is an open question.

Regards,

Elliot

Here’s the thing that has always gotten me:

Shrinking black holes have apparent horizons that form timelike surfaces that are two-way traversable. If the black hole evaporates completely, there is no singularity at timelike future infinity. Any information that enters the apparent horizon can eventually leave the apparent horizon. Where is the paradox?

@Kellsy Largo,

Hey, Kellsy. You have just lost any credibility of any argument you may or may not have had. Ad hominem (even if badly misspelled profanity) is a fallacy of reasoning; you might want to brush up on your logic after you wash out your mouth with soap or get medical help with your coprolalia.

@ Dilaton, Joe Polchinski

Quite seriously, there is no way to do any testable calculations or predictions on what happens when one dives face-first into a black hole. No one, including Hawking, has ever examined, created, or presently has the ability to run tests on what they believe to be a black hole. “The experiment that Hawking envisioned was to let a black hole form from ordinary matter and then evaporate into radiation via the process that he had discovered two years before.” Good grief, ‘let a black hole form from ordinary matter and then evaporate’? You might as well say “And then a miracle occurred and I was proven correct!”… You can’t stack this many free-floating assumptions and speculations on top of each other with no means of refutability and call it a ‘thought experiment’, much less science or physics.

Has anyone ever observed (not speculated) a black hole shrinking? Has anyone ever observed (not speculated) a black hole evaporating? Has anyone ever demonstrated (not speculated) that a singularity is even physically possible, outside of the math that created it? Even the ‘Schrödinger’s cat’ thought experiment had a cat you initially put in the box with known parameters before the uncertainty kicked in. You don’t have a box, a cat, a black hole, or a way of knowing if a black hole is even what you speculate it to be. All you have is lots and lots of uncertainty and pure conjecture; you have nothing you can even test with.

Having noticed these discrepancies in your till, and then having them pointed out to you by Tom, you respond with “…horrible destructive attitudes…” and, even better, “Their questions are not real questions since they have preconceived opinions and are not ready to listen to or accept any serious and convincing argument that contradicts them”. Listen to your own words. The questions Tom was asking were ‘real questions’. You didn’t answer them; you simply avoided them and dismissed the person asking them, which is another form of ad hominem, or an appeal-to-authority argument. @Dilaton, don’t libel. Tom never said “Saying in your previous comment that people should not be allowed to think about or investigate Planck scale physics, black holes (which we observe!)…” What he did say was “And these people are being *paid* to do this nonsense?? Until we become smart enough to figure out how to do *real* experiments at the Planck scale (which may never happen), how about we acknowledge defeat and do something more useful?” He made a healthy suggestion that there are far more useful things that might be discovered with actual experimentation than presently untestable speculation.

@Dilaton,

As for your snarky ‘trolling’ comments: if you think sceptical questions upset your delightful conversations about Black Hole Complementarity, firewalls, etc., you really must be a stranger to any kind of intellectual rigour, and find yourself offended quite often when other people don’t simply nod and agree with you. Did you actually read the article above? It did say “Given the controversial (and extremely important) nature of the debate”. Notice the key words ‘controversial’ and ‘debate’. These words actually mean things: namely, there is nowhere close to agreement on the subject, and there has been quite a bit of sceptical criticism directed at this discussion already, by others in the same field… which I can hopefully assume will not be labeled ‘trolls’ as well, or something worse by Kellsy.

Christian, Kellsy is undoubtedly a real troll, pretending to take Dilaton’s side.

@Tom

Even though this will not satisfy or appease you either (because nothing in the world ever would), my favorite application of the AdS/CFT business is the fluid dynamics – gravity correspondence: http://physics.stackexchange.com/q/28371/2751. It gives new means to calculate turbulence coefficients, or to derive the observed(!) slopes in spectra of turbulence kinetic energy. The slopes of the spectra are conventionally only motivated by hand-waving and terribly unsatisfying arguments.

@Christian

Saying that people should not be paid for investigating certain topics is exactly equivalent to demanding that such research should not be allowed and forbidden. And this IS a horribly trolling claim, insulting many people, including Joe Polchinski. It IS a horribly destructive claim, since it does not even give people any chance or the time they need to figure out the things that are still difficult to calculate but in principle possible to achieve, but instead wants to destroy any research into such topics immediately.

@Tom

You are trolling because your comments are condescending and snarky, with the overall objective of criticizing every topic posted. That’s the definition of it. You need to be polite. You are lucky enough to have found a website with some of the most respected physicists in physics, and you’re being an asshole to them; I think it is only natural that people jump all over you for it, because we want to defend those who have graciously taken time out of their day to write a comprehensive and thoughtful topic. You remind me of a character in a cartoon I used to watch 20 years ago when I was growing up, called The Simpsons: the Comic Book Guy.

In regard to your first comment posted above: you blatantly have no idea what you’re talking about, not just on condensed matter physics, but on physics overall. YES, they are getting paid to try and tackle problems via thought experiments. Thought experiments are how we “become smart enough to perform *real* experiments at the Planck scale”. And that’s why you are an ignorant troll, because you don’t just throw darts at a dart board to come up with an experiment; you think it over very carefully. I mean, after all, the LHC was just the result of a bunch of frat brothers getting drunk one night, right? No thought or planning had to go into that multibillion dollar project, right? Don’t come here to be a prick. I’m sorry your peers don’t like you in whatever the hell it is you do in life, but that’s no reason to dump your bad personality on us.

Dilaton and Elliot,

Thanks for your responses. I will check out Johnson’s papers and the website you both referenced.

Elliot,

When you say “did conform very closely to experimental values”, does this mean that the value is close enough to the experimental value in that it matches the value within the uncertainty of the experiment, or that it’s “close” but still incompatible after taking the uncertainty of the experimental result into account? Everyone, this is an honest question, I’m not insinuating anything. I just want to know what “close” means in this particular case, so please relax.

Brett and Kellsy,

Relax. I’m getting actual answers and learning the status of these methods. That’s all I wanted.

@Captain Obvious,

Thank you, nice to know some still know bad manners when they see it.

@ Dilaton,

I honestly have to say, observing your various reactions, that you seem to make up your own meanings for words depending on how you feel about a subject, which is not a good thing. You just said “Saying that people should not be paid for investigating certain topics is exactly equivalent to demanding that such research should not be allowed and forbidden.” Tom was suggesting that it was a waste of time and money to perform thought experiments on something so far removed from testable experimentation (much less application) when other avenues of testable research are available. That is not ‘exactly equivalent’ to saying such research ‘should not be allowed or forbidden’. Those were your words, not his. The simple reality is that funding is finite, and growing more so in these harsh economic times. All kinds of research are being reevaluated (even at the LHC) and are having to compete for many of the same lines of funding. If funding dries up, or is withdrawn for whatever reason, it does not mean, nor has it ever meant, that such research ‘should not be allowed or forbidden’. Please read more carefully and quote more accurately.

@Brett,

You are the one being rude; I’m not going to repeat your profanity back to you, so keep it in your own mouth, please. If you want to be a fawning physics groupie, fine, but please: “graciously taken time out of their day to write a comprehensive and thoughtful topic”? Um, the whole point of the debate (yes, it’s a debate, the author even says so) raging in the HEP community about this topic is that it’s not comprehensive, nor thoughtful. It is not even falsifiable conjecture, so it is not really in the realm of science or physics; at best it could be called unrigorous speculation. If you really want to know what an ‘ignorant troll’ or ‘bad personality’ sounds like, please read Stephen Hawking’s book “The Grand Design”, where Hawking (the person whose work Joe Polchinski so admires) gives up on the scientific method completely, and excuses any wrong predictions of his theories with the multiverse… he basically says that if he’s not correct, it is only because it isn’t so in this insignificant universe, but in some other universe he is actually correct. Woit nailed Hawking to the wall for it, thank goodness. See for yourself.

http://www.math.columbia.edu/~woit/wordpress/?p=3141

@Brett

Thanks for your supporting comment. It is really a shame to see that not more people have the courage to vocally stand up and defend a top physicist if he, after having written such a nice guest post and even having stayed around to answer questions, gets nothing but insults and his work gets condemned.

@Christian

If, in your opinion, fundamental physics, such as that done even at the LHC for example, should be abolished because we live in tight financial times today, then to be consistent you should in the same vein fight against all non-natural-science academic activities, such as research into history, philosophy, linguistics, the arts, etc. (you know what I mean), and against everything that does not immediately return a bang of great importance for the money invested.

The site you are cheering for, heavily trolling against Stephen Hawking (I have not and I will not read it), is, as I’ve mentioned elsewhere, the worst thing I have ever seen on the internet. It is always the source of the most unfair, horrible, and destructive sourballs that spread out into the whole physics blogosphere to prohibit any nice and constructive discussion about fundamental physics. The owner of this site and his fans heavily undermine the natural scientific process by spreading incorrect statements or even blatant dishonest lies (even in the media, if they can) on purpose, which threatens the ongoing work of thousands of experimental, phenomenological, and theoretical particle physicists, for example.

So @fans of this site Christian is cheering for: please stay there, and let people on other physics sites, such as Cosmic Variance, Prof. Strassler’s site, and Phil Gibbs’s site for example, enjoy learning and discussing things without being constantly terrorized by trolls!

I will now leave this thread since things have gone far enough off topic in a very ugly and nonconstructive direction. People who wanted to ask about or discuss the topic the actual CV article is about, are certainly driven away now anyway, and Joe Polchinski will probably never again write a guest blog anywhere …

Bye

My physics knowledge (QFT at the level of Srednicki, GR at the level of Carroll, string theory at the level of Zwiebach) stops way short of being able to follow all these arguments, but one thing that concerns me is that when I went to the arXiv and looked at the 8 recent papers on these subjects, almost none had any equations. There were a couple of spacetime diagrams and a lot of words. The arguments seemed almost like legal briefs rather than recent physics developments. Of course I didn’t have time to go through all the past references, and I am sure the further back you went, the more detail and equations there would be. All the papers seemed to have many assumptions, which were treated like axiom systems more appropriate to mathematics. And many of the papers seemed to start with different sets of assumptions. If I were not a wannabe theoretical physicist but just an educated layman, the arguments would seem more like the “how many angels can dance on the head of a pin” type.

At least it was a useful night’s diversion.

Dilaton,

I never insulted Prof. Polchinski. Also, I was only being critical of how promising these investigations are at teaching us anything about quantum gravity. We don’t know anything about particle physics above the electroweak scale, a region which spans many, many orders of magnitude until the Planck scale is reached. I’m also interested in knowing whether numbers calculated using AdS/CFT match experimental values *within the uncertainty* allowed by the experiment. I’m definitely not an expert in these methods, but if you find a gravitational theory that is dual to the system being experimentally investigated and you get numbers which are “close” but not compatible with experimental values, then it seems to me like you’ve either picked the wrong dual theory, or you picked the right dual theory but AdS/CFT is wrong. That’s why I was asking Elliot that question in my previous comment.

The other possibility is that the experimental system under investigation doesn’t have a gravitational dual because the correspondence does not include that particular system. So, another question I have is, should AdS/CFT or its generalizations, apply to these condensed matter systems under investigation? Or do the correspondences not include these systems?

When I first heard that an object falling into a black hole would observe the outside universe to be increasingly blue-shifted, I wondered if it would get fried by the cosmic background radiation. While a few degrees kelvin isn’t all that dangerous to us on earth, it would be dramatically blue-shifted for anyone falling into a black hole. Don’t you have to integrate the energy across all of time to reach the boundary? It sounded like a really neat way to deal with the problem of what goes in and what comes out of a black hole. Sure, you can fall in, but you’d be too crispy to be much of an observer.

Then I realized that there was a problem with this. The cosmic background radiation isn’t constant. It is fading as the universe expands, so the integral, even with a big blue shift, might be little more than a faint glow as the outside universe expands to infinity in the rear view mirror. (I suppose it’s a race against time.) If the author of this blog is correct, you might also notice the rapidly rising entropy in your rear view mirror though you wouldn’t be able to email him about it.

Tom,

Re: your further query on experimental outcomes. I simply don’t know. If you are interested, I’d search the arXiv for Clifford Johnson.

e.

Maybe Dilaton knows. Do you, Dilaton?

@Joe: Thanks for the answer. I’m only just learning about these things, and have a little way to go before I have a firm opinion.

@Tom: The questions you’re asking, although important, don’t really have anything to do with the issue at hand. For the current purposes, Prof. Polchinski is assuming that AdS/CFT is correct (for which there is a huge amount of evidence), and using it to provide guidance on quantum gravity more generally. You may think that is the wrong approach, but that’s fine; this is trying to push back the boundaries of knowledge, and there will be disagreements (and mistakes) along the way.

Your questions about applications of AdS/CFT to condensed matter are pertinent, even though they’re irrelevant here. For what it’s worth, I’ve heard quite a bit about this, and I’m very skeptical. What people seem to do in practice is pick a simple gravitational theory in AdS, do some fairly easy classical calculations, re-interpret the results in terms of a hypothesised dual field theory, and then go looking for a complicated condensed matter system which it might match. It doesn’t seem to offer any real understanding, in my opinion.

(Also, the fluid/gravity correspondence mentioned by Dilaton is really completely different, and purely classical.)

There were several comments above arguing that black holes might not exist, that Hawking radiation is a conjecture and that experiments on Planck scale and quantum gravity are needed to properly describe the Hawking radiation.

This is all far-fetched, to say the least. First, black holes are predicted to exist by general relativity, and by now there are some generally accepted astronomical observations of such objects out there. Second, Hawking radiation is far from being a conjecture. It is a prediction of Standard Model physics near the black hole horizon. There is no Planck-scale physics involved, just SM and classical GR. The quantum gravity effects are important only near the black hole singularity, while the horizon is quite well described with the classical theory.

Therefore, the BH information paradox is a real puzzle to solve, not just some conjectured scenario. Even without ever doing any real experiments near the horizon of any astronomical BH, we still have two theories (SM and GR) which lead to a paradoxical situation when combined to describe information loss in black holes. This is a conceptual problem with one of the two theories (or both) and needs to be resolved, regardless of any lack of experimental data. The so-far-unknown theory of quantum gravity is expected to give a resolution of the paradox, so thought-experiments on this topic are extremely useful for people doing research in quantum gravity.

The only conjecture in the whole story is whether AdS/CFT does or does not have anything to do with non-AdS quantum gravity. While personally I don’t believe AdS/CFT is applicable to real-world gravity, it is a legitimate research avenue to assume otherwise and discuss its implications for the BH information problem.

So please, folks, this isn’t some conjectures-all-around-non-realistic-theoretical-mumbo-jumbo-made-up-for-easy-paycheck problem. It is a quite real theoretical incompatibility between QM and GR, and needs to be addressed, one way or another.

HTH

Joe: Regarding remnants. The whole problem rests on the use of effective field theory. Remnants necessarily bring in regions of Planck scale curvature, and can contain regions with a large volume and a small surface area. In contrast to the small curvatures at which you want to give up eft, it is perfectly reasonable to expect eft to break down when dealing with such remnants. And there goes the so-called “pair-production problem.” Regarding giving up AdS/CFT and the statistical interpretation of the BH entropy: Well, your own paper says that if you want to cling on to this, you have to take rather drastic means to modify eft in the small curvature regime to achieve consistency. I find this very unappealing. I’d rather deal with remnants. Either way, I guess this is personal taste to some extent, I’m just saying it’s a possibility that shouldn’t be neglected.

Joe: Regarding the traced-back mode. The b-mode you trace back isn’t just entangled with the early radiation. That you can trace it back on its own (without considering other contributions to the state at the horizon) is actually a consequence of the measurement the observer does at I^+. If it was just entangled with the late radiation, what would prevent you from mirroring the outside modes to negative energy modes to the inside and have them cancel pairwise as it is usually the case?

I’ll save you the effort of replying to this and add what Don explained to me: You’ve assumed the inside modes to be independent of the outside modes, so whatever goes on with the outside state by making the measurement at I^+ doesn’t affect the negative energy modes, so you can’t expect them to cancel. (Did I finally get this straight?) There’s two problems I have with that. First, that assumption isn’t explicitly stated. And second, I still don’t see why a reshuffling of occupation numbers is the only way to encode information in the outgoing radiation. And if that’s not what you do, how would the infalling observer notice the difference?

vmarko, I don’t know about Tom, but Christian has a problem with special relativity; he thinks it needs more work. So consider this hypothesis: when a discussion about advanced theoretical physics sputters to a halt because of noisy expressions of skepticism about whether it applies to reality, at least half of the skeptics will also be skeptical about matters that are far more basic than what’s under discussion.

This thread seems to have ground to a halt, but I still have some issues, and don’t know where to turn to clear them up, so let me write them here just in case anybody has some light to shed on them. This all comes from pages 3-5 of the firewall paper, where the main argument is presented.

– AMPS state that since the energy is finite, the Hawking radiation can be considered as living in a finite-dimensional Hilbert space. I don’t see how this is the case. In a theory with massless particles (e.g. photons), the initial ‘stuff’ which made the black hole could have consisted of arbitrarily many quanta (of sufficiently small energy), giving an infinite-dimensional Hilbert space. This point might not actually be important though…

– I don’t understand the relation between B and C. B is some field mode, corresponding to part of the ‘late’ Hawking radiation. C is then “…its interior partner mode.” What does that mean?

– As far as I can tell, AMPS (and Bousso in his follow-up) are taking “A and B are maximally entangled” to be equivalent to S_AB = 0. That doesn’t correspond to my understanding of entanglement. A two-qubit system in the state |00> has zero entropy, but the two qubits are not entangled. For the same reason, I don’t see how different field modes are entangled in (say) the Minkowski vacuum; all the oscillators are in their ground state.

– There is a part of the (seemingly crucial) discussion about entropy at the bottom of page 4 which I don’t understand. We have a three-part system ABC, and strong sub-additivity of entropy, S_AB + S_BC >= S_B + S_ABC. The inequality S_AB < S_A is argued for, and this seems fine (the Hawking radiation becomes 'more pure' as more of it is emitted). But then the following sentence appears: "The absence of infalling drama means that S_BC = 0 and so S_ABC = S_A." Let's take S_BC = 0 for granted; I don't see why that implies the second equality. The general inequality is S_ABC <= S_A + S_BC, with equality only if A and BC are uncorrelated, in the sense that the density matrix for the whole system is just the tensor product of those for the two subsystems, A and BC.
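(An aside that may help with the entropy bookkeeping in the last two points: both the strong subadditivity inequality and the |00> example can be checked numerically on toy systems. Below is a small sketch, assuming Python with NumPy; all function and variable names are mine, not from the AMPS paper. It verifies S_AB + S_BC >= S_B + S_ABC on a random three-qubit pure state, and also the |00> point: that state has zero entropy yet is unentangled, since entanglement of a pure state is diagnosed by the entropy of a *reduced* density matrix, not of the whole.)

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy -Tr(rho log2 rho), in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]  # drop zero (and tiny negative round-off) eigenvalues
    return float(-np.sum(p * np.log2(p)))

def reduced(rho, keep, dims):
    """Partial trace of rho down to the subsystems listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims + dims)  # row indices, then column indices
    drop = [i for i in range(n) if i not in keep]
    for k, i in enumerate(sorted(drop, reverse=True)):
        # after k traces, the column axis of subsystem i sits at i + (n - k)
        rho = rho.trace(axis1=i, axis2=i + n - k)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

# Random pure state of three qubits A, B, C.
rng = np.random.default_rng(0)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
dims = [2, 2, 2]

S_B   = entropy(reduced(rho, [1], dims))
S_AB  = entropy(reduced(rho, [0, 1], dims))
S_BC  = entropy(reduced(rho, [1, 2], dims))
S_ABC = entropy(rho)  # 0 for a pure state

# Strong subadditivity, as used in the AMPS argument.
assert S_AB + S_BC >= S_B + S_ABC - 1e-9

# The |00> example: zero total entropy, but also an unentangled product state,
# as witnessed by the reduced state of one qubit being pure as well.
e0 = np.array([1.0, 0.0])
prod = np.kron(e0, e0)
rho00 = np.outer(prod, prod)
print(entropy(rho00))                       # 0: total state pure
print(entropy(reduced(rho00, [0], [2, 2])))  # 0: reduced state also pure
```

For a pure tripartite state S_ABC = 0, S_AB = S_C, and S_BC = S_A, so strong subadditivity reduces to the triangle inequality S_A + S_C >= S_B, which the random-state check confirms.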

I am puzzled by the separation of radiation into “early” and “late”. You are saying: “wait for 99% of the radiation to come out, then consider something dumped into the remaining black hole; it is entangled with the early radiation, and this means that the late radiation is determined”. But this implicitly assumes that you can do detailed experiments on the precise entangled quantum state of the outgoing radiation even after knowing it is early (this is already a brutal measurement: you have learned, to a certain extent, when the radiation came out! Why should you still be able to extract anything now? This measurement already ruins the phase coherence of the late state), and you also implicitly assume you know the late black hole thermal ensemble (you know roughly where the black hole is at late times, and how big it is; this is also an extremely brutal measurement).

Given that you assume you have these brutal bits of knowledge, namely which photons are early, which are late, and how big the black hole is and where it is, I don’t see any reason to suppose you can exploit the remaining coherence in the early radiation, distant from the black hole, and learn anything at all about the emissions of the late classical black hole from the radiation. I think all you have shown is that the separation into “early” and “late” is just not compatible with a unitary S-matrix for the black hole.

Just by measuring a black hole’s approximate position and approximate horizon location, you are restricting its thermal ensemble in a way that prevents certain kinds of entanglement from surviving. While I don’t see any proof that what I am saying is right, I also don’t see any guarantee that the implicit entanglement involved in measuring that the radiation is early leaves the early radiation state pure enough to do the measurements you need. Obviously if it does, your argument goes through, but the very fact that you have a paradox must mean it is not so: in order to determine the late radiation, you need to make measurements on the “early” radiation over such a very long time that you aren’t even sure, when you are done, whether it is early or late. This means that you are working over an entire black hole S-matrix event, not separating it into a two-step scattering where you know something about the intermediate black hole state (that it is a certain size, with a certain amount and kind of early radiation).

So, as far as I see, there is an additional unjustified assumption here, which is common to all the referenced literature about the Page time, namely that it is possible to simultaneously produce semiclassical black hole states entangled with pure-enough early Hawking radiation to make measurements on the whole set of early Hawking radiated particles which determine something about the late radiation. This is a heuristic assumption, and I think all you are doing is showing that it is false.

The only case in which I can see this early/late separation being completely justified is if you throw something into a highly charged black hole, wait for the hole to decay to extremality, and look at _all_ the radiation emitted during this process (so all the radiation is “early” in this definition, since once the black hole is extremal again, it is a cold asymptotic S-matrix state). In this case, the argument is surely completely coherent, and the end-state of the black hole is a known pure state: it is an extremal black hole with charge Q and velocity V (assuming a perfectly BPS model black hole, so there is no further decay). Then you can measure the outgoing radiation state, and determine the Q and V of the final state.

But in this case, the end result is no longer decaying at all, so there is no paradox, no thermal horizon and no Hawking radiation. The only time you get a paradox is when the late-state black hole is truly thermal and truly macroscopically entropic, so any intuition that associates the GR solution to a quantum state of some sort is not particularly clear (you have to associate the GR solution to a thermal ensemble).

So I can’t internalize the argument enough to see whether it is correct; it seems obviously wrong (but that’s only because holographic complementarity seems obviously right to me). The sticking point in understanding the argument, for me, is the heuristic of describing hugely entropic black holes using some sort of unknown quantum state for the black hole alone, rather than an entangled state of the black hole and all the radiation, early and late, with no way to make the distinction between early and late without completely ruining the ability to measure anything interesting at all about the late state.

So while I don’t find the argument persuasive, it’s only because I don’t buy the assumptions in the related literature on Page times (assumptions which don’t appear in Page’s original paper, I should add). I am questioning these obscure assumptions, not the detailed stuff in the latest paper.

In Susskind’s reply, since he at times made similar arguments about early and late radiation, he also ends up using a classical black hole picture, and pretends that you can talk about the state of infalling matter and outgoing early/late radiation separately and coherently. So Susskind already implicitly internalized this framework, and perhaps this is why the argument was persuasive for him. I would not give up complementarity for this, or honestly, for just about anything barring someone taking an instrument, throwing it into a black hole, and getting a contradiction with complementarity. It’s just too obviously correct to be false.

Regarding the “firewall” resolution, it is not satisfactory, because the firewall stress, in the same semiclassical approximation, is nonzero on the horizon, and falls inward along with anything else. This means that the singularity needs to constantly replenish the firewall with new stress by some crazy mechanism, something which is not really reasonable at early times. To see this, consider charged black holes, because the domain of communication with the singularity does not extend past the Cauchy horizon (which degenerates to r=0 in the neutral limit).

I really think that this is finding an inconsistency in the implicit assumptions in the Page time literature, not in black hole theory itself. This is very interesting and important, but please don’t discard complementarity, as I think it is almost surely fine as is.

@Kaleberg #50: the blueshift doesn’t approach infinity as you approach the outer event horizon of a black hole according to general relativity, nor would you see the entire future history of the universe in fast forward (though these things would theoretically be true for someone attempting to cross the inner horizon of an ideal rotating black hole). See the section titled “Will you see the universe end?” in the following section of an online physics FAQ (from the website of physicist John Baez): http://math.ucr.edu/home/baez/physics/Relativity/BlackHoles/fall_in.html

I have a suggestion for a new thought experiment, which someone might enjoy analysing. Take a black hole that’s happily emitting Hawking radiation, shrinking, and on the road to vanishing eventually. (Make it big, so the radiation is very soft [low-energy photons etc.] and gentle and slow.) Now surround it with mirrors, so that the Hawking radiation bounces back into the hole. (Maybe after several bounces, hole -> mirror-shell -> mirror-shell -> … -> hole; it doesn’t matter, the space between the hole and the mirrors will fill up to just the extent required to equalise the inflow and outflow.) The black hole’s lifetime is now infinite. In fuzzball terms, it can be an exact energy eigenstate, whereas without the mirrors it’s changing (shrinking) with time, and so can’t be.

Can someone perhaps study this exact energy eigenstate, which in the quantum sense is static, with tools not available for dynamically evolving states (which are awkward superpositions of energy eigenstates and often incredibly complicated to analyse)? I’m thinking, in particular, of the possibility of calculating the frequency spectrum of the Hawking radiation – which we should perhaps rename “the Hawking standing-wave pattern” since it’s not really radiating any more. Would this help to reveal whether the near-horizon environment is “violent” – a firewall – or “gentle” – a classical-GR-like quiet place?

Just a thought! I hope this thought experiment is useful to someone!

The argument for a firewall has several problems that need to be solved, in order for it to be taken seriously.

The first one is that it is impossible to determine the entanglement of a particular quantum state. The proof is as follows:

Assume that you have a detector that can detect if the state of two spins is entangled or not. So we have (here |0> is the “null” state of the detector and U is the unitary operator determining the interaction of the detector with the spins)

U |PSI0>|0> = |PSI0>|:-)>,

if the state |PSI0> of the two spins is entangled, and

U |PSI1>|0> = |PSI1>|:-(>,

if the state |PSI1> of the two spins is not entangled.

The state |+>|+> is not entangled, so we have

U |+>|+>|0> = |+>|+>|:-(>,

and also

U |->|->|0> = |->|->|:-(>.

Now consider the entangled state |+>|+>+|->|->. Because of the linearity of quantum evolution, we must have

U( |+>|+>+|->|->)|0> = ( |+>|+>+|->|->)|:-(>.

This proves that the “entanglement detector” cannot work in general.
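One can check numerically that the superposition above really is entangled. A small NumPy sketch (my own illustration): |+>|+> + |->|-> is just the Bell state |00> + |11>, whose one-qubit reduced density matrix is maximally mixed, so a linear detector answering |:-(> on it would indeed be wrong.

```python
import numpy as np

# Single-qubit basis states.
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus  = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

# The superposition of two product states from the argument above.
psi = np.kron(plus, plus) + np.kron(minus, minus)
psi /= np.linalg.norm(psi)

# It equals the Bell state (|00> + |11>)/sqrt(2) ...
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
assert np.allclose(psi, bell)

# ... which is entangled: the reduced state of either qubit is maximally mixed.
m = psi.reshape(2, 2)
rho_a = m @ m.conj().T
print(np.round(rho_a, 3))  # 0.5 * identity matrix
```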

The second problem has to do with information storage. If you say that “a single observer can see both bits, measuring the early radiation and then jumping in and seeing the copy behind the horizon,” you have to account for how the observer can store the information describing the state of the early radiation. The number of the micro-states of the early radiation is larger than exp(A/4), where A is the present area of the black-hole event horizon. The information required to describe a particular state (which is the log of the number of the states) is larger than A/4. Now, the holographic principle implies that the observer, in order to be able to store the information, has to be surrounded by an area larger than A. The observer has to be larger than the black hole! This is an alternative explanation of why the observer cannot just “jump in.”