The firewall puzzle is the claim that, if information is ultimately conserved as black holes evaporate via Hawking radiation, then an infalling observer sees a ferocious wall of high-energy radiation as they fall through the event horizon. This is contrary to everything we’ve ever believed about black holes based on classical and semi-classical reasoning, so if it’s true it’s kind of a big deal.

The argument in favor of firewalls is based on everyone’s favorite spooky physical phenomenon, quantum entanglement. Think of a Hawking photon near the event horizon of a very old (mostly-evaporated) black hole, about to sneak out to the outside world. If there is no firewall, the quantum state near the horizon is (pretty close to) the vacuum, which is unique. Therefore, the outgoing photon will be completely entangled with a partner ingoing photon — the negative-energy guy who is ultimately responsible for the black hole losing mass. However, if information is conserved, that outgoing photon must also be entangled with the radiation that left the hole much earlier. This is a problem because quantum entanglement is “monogamous” — one photon can’t be maximally entangled with two other photons at the same time. (Awww.) The simplest way out, so the story goes, is to break the entanglement between the ingoing and outgoing photons, which means the state is not close to the vacuum. Poof: firewall.
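Monogamy itself is easy to verify numerically in a toy model. Here is a minimal sketch (mine, not from the AMPS paper; qubits A, B, C stand in for the outgoing photon, its ingoing partner, and the early radiation, and the helper functions are purely illustrative):

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits: S(rho) = -Tr(rho log2 rho)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop zeros (0 log 0 = 0)
    return float(-np.sum(evals * np.log2(evals)))

def reduced(psi, keep):
    """Reduced density matrix of a 3-qubit pure state on the axes in `keep`."""
    t = psi.reshape(2, 2, 2)              # axes 0, 1, 2 = qubits A, B, C
    traced = [a for a in range(3) if a not in keep]
    rho = np.tensordot(t, t.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

# Case 1: A maximally entangled with B, C a bystander: (|000> + |110>)/sqrt(2).
bell = np.zeros(8)
bell[0b000] = bell[0b110] = 1 / np.sqrt(2)
S_A  = entropy(reduced(bell, [0]))        # 1 bit: A-B entanglement is maximal
I_AC = (entropy(reduced(bell, [0])) + entropy(reduced(bell, [2]))
        - entropy(reduced(bell, [0, 2]))) # 0 bits: A shares nothing with C

# Case 2: GHZ state (|000> + |111>)/sqrt(2): A is now correlated with C too,
# but the price is that the A-B pair is mixed, no longer a pure Bell pair.
ghz = np.zeros(8)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
S_AB = entropy(reduced(ghz, [0, 1]))      # 1 bit of mixedness in the AB pair

print(S_A, I_AC, S_AB)   # roughly 1, 0, 1 (up to floating-point noise)
```

Maximal A–B entanglement forces the A–C mutual information to zero; demanding A–C correlations (as unitary evaporation seems to) means giving up the purity of the A–B pair, which is exactly the step that produces the firewall.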

You folks read about this some time ago in a guest post by Joe Polchinski, one of the authors (with Ahmed Almheiri, Don Marolf, and James Sully, thus “AMPS”) of the original paper. I’m just updating now to let you know: almost a year later, the controversy has not gone away.

You can read about some of the current state of play in An Apologia for Firewalls, by the above authors plus Douglas Stanford. (Those of us with good Catholic educations understand that “apologia” means “defense,” not “apology.”) We also had a physics colloquium by Joe at Caltech last week, where he masterfully explained the basics of the black hole information paradox as well as the recent firewall brouhaha. Caltech is not very good at technology (don’t let the name fool you), so we don’t record our talks, but Joe did agree to put his slides up on the web, which you can now all enjoy. Aimed at physics students, so there might be an equation or two in there.

**Fire**, slides from **Joe Polchinski**

Just to point out a couple of intriguing ideas that have come along in response to the AMPS proposal, one paper that has deservedly received a lot of attention is An Infalling Observer in AdS/CFT by Kyriakos Papadodimas and Suvrat Raju. They consider the AdS/CFT correspondence, which relates a theory of gravity in anti-de Sitter space to a non-gravitational field theory on its boundary. One can model black holes in such a theory, and see what the boundary field theory has to say about them. Papadodimas and Raju argue that they don’t see any evidence of firewalls. It’s suggestive, but like many AdS/CFT constructions, comes across as a bit of a black box; even if there aren’t any firewalls, it’s hard to pinpoint exactly what part of the original AMPS argument is at fault.

More radically, there was just a new paper by Juan Maldacena and Lenny Susskind, Cool Horizons for Entangled Black Holes. These guys have tenure, so they aren’t afraid of putting forward some crazy-sounding ideas, which is what they’ve done here. (Note the enormous difference between “crazy-sounding” and “actually crazy.”) They are proposing that, when two particles are entangled, there is actually a tiny wormhole connecting them through spacetime. This seems bizarre from a classical general-relativity standpoint, since such wormholes would instantly collapse upon themselves; but they point out that their wormholes are “highly quantum objects.” They claim there is evidence that such a conjecture makes sense, although they can’t confidently argue that it gets rid of the firewalls.

I suspect further work is required. Good times.

I’m about to speak out of (you know where), but …

It seems to me like a lot of the arguments rely on the picture of particle/antiparticle production at the horizon. While a nice picture, having these “particle states” depends on a choice of basis for the Hilbert space, which is slicing-dependent … and in a curved spacetime that choice is not trivial, as it is in Minkowski space. Should I believe these arguments? Can somebody tell me that there is actually some solidly grounded QFT behind it, and that people are doing their Bogoliubov transformations?
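For what it's worth, the standard treatments do exactly that (this is textbook material, e.g. Birrell & Davies; sign conventions vary between references). Schematically, one expands the same free field in “in” modes $f_i$ and “out” modes $g_j$, $\phi = \sum_i (a_i f_i + a_i^\dagger f_i^*) = \sum_j (b_j g_j + b_j^\dagger g_j^*)$, and relates the two sets of operators:

```latex
\begin{align}
  b_j &= \sum_i \left( \alpha_{ji}^{*}\, a_i - \beta_{ji}^{*}\, a_i^{\dagger} \right),
  \qquad
  \sum_i \left( |\alpha_{ji}|^{2} - |\beta_{ji}|^{2} \right) = 1 , \\
  \langle 0_{\mathrm{in}} |\, b_j^{\dagger} b_j \,| 0_{\mathrm{in}} \rangle
      &= \sum_i |\beta_{ji}|^{2} .
\end{align}
```

Whenever the Bogoliubov coefficients $\beta_{ji}$ are nonzero, the “in” vacuum contains “out” particles; for the Schwarzschild geometry the $|\beta|^2$ work out to a thermal spectrum at the Hawking temperature, with no reference to any particle/antiparticle cartoon.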

I’m also concerned about how people try to ‘localise’ states and information. The picture I have in my head is the following: You should choose a global (Cauchy) slicing of your spacetime, and a slice has a Hilbert space for the field content of the theory; then at some time, there is a density matrix describing the fields, and you can measure the entropy, which does not depend on choice of basis, since it is a trace. But drawing pictures with particles going along certain paths makes it seem like you are constructing wave packet states which are localized, and that sounds like you have a preferred basis for the states. Back to entropy localization, it’s not clear to me how entropy can be localized as being in the interior of the BH. A quantum state does not live *somewhere* in spacetime, it just lives in Hilbert space!
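That basis-independence is quick to check numerically; a generic sketch (mine, nothing black-hole specific: a random state and a random unitary):

```python
import numpy as np

rng = np.random.default_rng(0)

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), in nats."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

# A random density matrix: M M^dagger is positive; then normalize the trace.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# A random unitary change of basis (QR of a random complex matrix).
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

S_before = von_neumann_entropy(rho)
S_after  = von_neumann_entropy(Q @ rho @ Q.conj().T)
print(abs(S_before - S_after))   # ~0: the trace doesn't care about the basis
```

The entropy depends only on the spectrum of the density matrix, so any unitary change of basis (any re-slicing of the same Hilbert space) leaves it alone.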

I just refer to my comment on the mentioned guest post by Joe Polchinski: firewalls seem to be the logical consequence of thermodynamic effects caused by the enormous difference between the passage of time at the centres of black holes and on their outsides.

Shouldn’t there be a zone where electromagnetic energy “orbits” the center of the black hole? Wouldn’t this energy be coherent like laser light owing to constructive interference? Is that an information firewall?

Theoretical physicists are the new philosophers.


The ER=EPR connection is interesting. It reminded me of a blog comment I made a couple of years ago at FQXi, in which I proposed something similar:

http://fqxi.org/community/forum/topic/976#post_40460

I proposed it for theoretical purposes only, as an example of local explanation of EPR by using ER, which circumvents Bell’s theorem, but I did not consider it a big deal.

http://www.youtube.com/user/PhilosophyCosmology/videos

The videos are here!! Can’t wait to enjoy them… just wanted to let you know 🙂

I’ve always thought about it like Maldacena and Susskind, but I’m an undergrad with of course no tenure or credibility. I don’t think the firewall goes away even with this explanation, but I do think this is what is happening.

Slide #16 is pretty funny. Good slides that even an undergrad can follow.

Yes, further work is required. And it starts with the guys at NIST who have demonstrated that an optical clock* runs slower where gravitational potential is lower. You can idealise this to parallel-mirror light clocks, like this:

http://www.sciforums.com/attachment.php?attachmentid=6257&d=1367740440

Then you really appreciate that the coordinate speed of light varies with gravitational potential, and what Einstein was on about when he said, repeatedly from 1911 through 1916, that the speed of light varies with gravitational potential. The locally-measured speed of light is only constant because we use the local motion of light to define the second and the metre, which we then use… to measure the speed of light. So at the event horizon, where we say the coordinate speed of light is zero, the speed of light is zero, and thus the infalling observer doesn’t see anything at all. He doesn’t even get there, because as he falls in his speed increases, and there’s a crossover. He can’t go faster than the local speed of light. So he gets maybe electron-stripped and annihilated. Bad things happen way before he gets to firewalls and negative-energy particles and quantum fluctuations that take infinite coordinate time to happen. They haven’t happened yet, and never ever will.

* Clocks don’t literally measure “the flow of time”. Be it a grandfather clock or a quartz wristwatch or an atomic clock, they all feature some kind of regular cyclic local motion, which they “clock up” and display as a cumulative result that we call the time. Kruskal-Szekeres coordinates suffer a conceptual error wherein putting a stopped observer in front of a stopped clock somehow makes it start ticking again.
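Setting the interpretation aside, the size of the NIST effect mentioned above follows from the standard weak-field formula dν/ν ≈ gh/c²; a back-of-envelope sketch (the ~33 cm figure is roughly the height step used in the 2010 NIST optical-clock comparison):

```python
# Fractional frequency shift between two clocks at heights differing by h,
# near the Earth's surface: dnu/nu ~ g*h/c^2 (weak-field approximation).
g = 9.81       # m/s^2, surface gravity
c = 2.998e8    # m/s, speed of light
h = 0.33       # m, roughly the height step in the 2010 NIST comparison

shift = g * h / c**2
print(f"{shift:.1e}")   # a few parts in 10^17 -- measurable with optical clocks
```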

Sounds a lot like something that would have to do with quantum fluctuations near the event horizon. What applies then is pretty much anything: firewalls included. It works… in theory, of course. Thermodynamics is in this year, you know…


I have always heard the concept of one of the infalling particles having “negative energy”, which leads to the black hole losing mass, but that has honestly never made much sense to me. The concept of negative energy seems highly unphysical, and I can only remember utilizing negative potentials when you set the potential at infinity to 0 to help simplify a physics problem. One description I saw was this:

“…particle-antiparticle radiation is emitted from just beyond the event horizon. This radiation does not come directly from the black hole itself, but rather is a result of virtual particles being “boosted” by the black hole’s gravitation into becoming real particles. As the particle-antiparticle pair was produced by the black hole’s gravitational energy, the escape of one of the particles takes away some of the mass of the black hole.”

That description makes perfect sense to me (as does another model of quantum tunneling by particles already inside the event horizon), at least when compared to negative energy descriptions. Perhaps Sean or someone else can shed some light on the “negative energy” concept I have always heard dropped when speaking of a zero energy universe, black hole evaporation, and futuristic warp drives. Any intuitive idea of what it actually is (rather than saying “think of it as a repulsive force!” or something along those lines) would be greatly appreciated.

Since there are no other takers, I can say something about the “negative energy” concept when speaking of a zero-energy universe: it’s misguided, I’m afraid. See page 185 of the Doc 30 translation of The Foundation of the General Theory of Relativity and note the bit that says “the energy of the gravitational field shall act gravitatively in the same way as any other kind of energy”. A gravitational field is a positive-energy region. The negative-energy idea comes from the mass deficit associated with binding energy. Binding energy is said to be negative energy, but all it really is, is less positive energy.

For example imagine a 1kg brick in free space. It’s made of matter, and that matter is made of energy by virtue of E=mc². This brick comprises some amount of positive energy. A planet now rolls onto the scene, and you drop the brick. It falls towards the planet, where it impacts the ground. The kinetic energy is dissipated as heat and motion of water/dust etc, and you take a trip down to retrieve your brick, which has survived and cooled off. It still appears to weigh a kilogram. But actually it has lost mass. Potential energy within the brick was converted into kinetic energy then dissipated. The amount of lost energy is the binding energy. But that lost energy is positive energy, and the brick still comprises some amount of positive energy. Conservation of energy applies, and the total energy hasn’t reduced at all. See “mass in general relativity” on wiki and look at the questions and answers.
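To put a rough number on “less positive energy”: a back-of-envelope sketch of the brick example (mine; Newtonian potential, brick falling from far away to the Earth’s surface):

```python
# Binding energy of a 1 kg brick dropped from far away onto the Earth,
# and the corresponding mass deficit via E = m c^2. Rough figures only.
G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24    # kg, mass of the Earth
R = 6.371e6     # m, radius of the Earth
m = 1.0         # kg, the brick
c = 2.998e8     # m/s

E_binding = G * M * m / R   # energy dissipated as heat etc. on the way down
dm = E_binding / c**2       # mass now "missing" from the brick+Earth system

print(f"binding energy ~ {E_binding:.1e} J")   # ~6e7 J
print(f"mass deficit   ~ {dm:.1e} kg")         # well under a microgram
```

So the brick really is lighter than the sum of its parts far apart, but nothing in the ledger ever went negative; the total positive energy is just redistributed.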

what about dark energy? isn’t that negative energy?

@ Meh

I am afraid that it takes a lot of negative energy. 🙂

A firewall on the horizon as seen by the freely falling observer violates the equivalence principle in a very bad way, and I don’t believe in that.

The most obvious thing to give up from Polchinski’s list is actually AdS/CFT. Joe didn’t give any arguments against giving it up beyond “I still trust AdS/CFT”, which is not very convincing, so to speak. Besides, the real world is not AdS, unfortunately for holography, and a duality between realistic gravity and any kind of QFT (be it C or otherwise) has not yet been shown to even exist, let alone been constructed.

To paraphrase Polchinski — I still trust the equivalence principle. And I would ditch AdS/CFT any day if it helps any… 🙂

HTH, 🙂

Marko

Meh: IMHO dark energy isn’t negative energy. It’s like you have only the energy-pressure diagonal in the stress-energy tensor, and you’ve got homogeneous space as per the FLRW assumption. I think the problem is that people say dark energy is responsible for the increasing expansion rather than the expansion, and Einstein introduced Lambda to “balance” a dusty universe against gravitational collapse. Lose the dust and the universe expands like a stress-ball when you open your fist. The dimensionality of energy is pressure x volume, and neither is negative.

Marko: I think one can sketch out a rough duality between QFT and gravity, see comment 4 here: http://physicsworld.com/cws/article/news/2012/nov/06/highly-charged-ions-could-make-better-atomic-clock . NB: I’m a “relativity guy”, but I think the equivalence principle is of limited applicability. But not so limited that you start seeing things that just aren’t there.

I have not followed the details of AdS/CFT etc. Let me ask a very naive question. If I understand correctly, all tests of GR verify an Einstein universe with a small positive Lambda. How can you draw serious conclusions from a model based on a supposed anti-de Sitter universe (negative Lambda)? Is there any reason that the interior of a black hole could be anti-de Sitter?

I will appreciate answer from any knowledgeable person on this blog. Thanks.

A wormhole manifests itself via gravitational lensing; entanglement does not. Anyway, I perceive the idea that the process of observation manifests itself as some space-time thread connecting the observer and the observed object as surprisingly naive.

I’m somewhat out of my depth here, but I notice the following points:

There’s something funny with event horizons in classical GR, because of their “point of no return” character on the one hand, and the smoothness of spacetime there on the other. Indeed, the problem with an event horizon is of course that it is in principle impossible to get any experimental data from “behind” it. That makes event horizons limits-in-principle of the “observable universe”; and as such one could wonder whether it even makes sense to develop theories that say “what is going on behind it”. In other words, does it even make sense to talk about “what’s behind the event horizon”?

Classically, it does make sense in the following way: you could be the bold adventurer crossing the event horizon (silly you!) and you would still have some finite time to appreciate what’s happening before being exterminated in the singularity. That’s because spacetime is still smooth at the event horizon. For big black holes, it is even approximately Minkowskian. You might (classically) not even notice directly you’ve crossed it. So there’s no reason to “stop” the universe from existing at the event horizon.

But if there’s a firewall, you won’t cross it. Nothing will cross it without being annihilated by the intense radiation. So practically, you CAN’T cross it. You are just smoked out if you try.

So the question arises again whether in that case it makes sense to consider “the inside” of a black hole, as you won’t know anything from the outside about it, and you won’t even be able to cross it. Isn’t it then just not “a boundary” of spacetime ?

And if the inside doesn’t make sense physically, as it is impossible to get there, aren’t event horizons then finally what we were looking for as “objective projectors of quantum states” that violate unitarity, and as such resolve the measurement problem without having to go to many-worlds interpretations (which seem to me unavoidable if we take quantum theory ontologically, as strict unitarity is not compatible with the projection postulate)?

If event horizons can objectively transform pure states in mixed states, because unitarity gets lost, then this might eventually solve the measurement problem ?

@ Patrick:

“So the question arises again whether in that case it makes sense to consider “the inside” of a black hole, as you won’t know anything from the outside about it, and you won’t even be able to cross it. Isn’t it then just not “a boundary” of spacetime ?”

There is a difference between saying “a human cannot survive going through the firewall” and “nothing crosses the firewall”. An elementary particle can surely cross the firewall and enter the black hole, since otherwise a black hole could not form in the first place. In addition, an elementary particle carries “information” (its mass, spin, charges, entanglement, etc.) as it gets inside. So basically you need to have physics beyond the black hole horizon, since you need to have a model which says what happens to particles which do get inside.

“If event horizons can objectively transform pure states in mixed states, because unitarity gets lost, then this might eventually solve the measurement problem ?”

Um, no, I don’t think so. Giving up unitarity would arguably resolve the black hole information paradox (in a rather trivial way, by saying that gravitational interaction does not conserve pure states of QM). That is certainly a legitimate possibility, but people often try to avoid going that route, since it essentially tells us that some basic concepts of quantum mechanics are wrong — and then you have to rethink the whole QM *and* gravity from the ground up. That is arguably even harder than trying to resolve just the BH information paradox in some other way.

As for the measurement problem — no, giving up unitarity does not resolve the measurement problem (not so easily at least). The “measurement problem” manifests itself as a nonunitary evolution of a quantum system, even in cases where gravity is absent or unimportant (in everyday labs, so to speak). There is no black hole horizon in the lab to account for nonunitary evolution of a tabletop quantum system that I happen to measure.

There indeed are ways around this argument — saying that gravity collapses the wavefunction even without the BH horizon, or that the BH horizon is an ultimate “observer” which collapses the wavefunction of the whole universe, while the rest happens through decoherence, or something similar — but then you need to construct a very elaborate model of how exactly all this happens, and try not to violate any experimental results in the process. And this is again a very hard thing to do, if possible at all. People who research quantum gravity do have that idea “in the back of their heads”, but one can investigate whether or not gravity can perform objective collapse only *after* one already has a working model for a theory of quantum gravity. And unfortunately, so far we don’t have one, not yet. 🙂

HTH, 🙂

Marko

@Patrick re:

“Isn’t it then just not “a boundary” of spacetime?”

I think it is, but that the measurement problem is related to the optical Fourier transform rather than event horizons.

@Marko re:

“…An elementary particle can surely … enter the black hole, since otherwise a black hole could not form in the first place…”

That’s what people usually say, for example in this article where you can read this:

“In both of these interpretations we find that an object goes to future infinity (of coordinate time) as it approaches an event horizon, and its rate of proper time as a function of coordinate time goes to zero. The difference is that the field interpretation is content to truncate its description at the event horizon, while the geometric interpretation carries on with its description right through the event horizon and down to r = 0 (where it too finally gives up).”

Note that in the usual black hole description, the infalling body goes to the end of time and back. See this page from MTW and note the truncation of the vertical axis on the chart on the left. I’m sorry, but that just has to be wrong. Throw an object into a black hole, and ask yourself has it crossed the event horizon yet? The answer is no, and is always no. Hence IMHO black holes have to grow like hailstones, and, like the gravastar, feature a central region which is a “void in the fabric of space and time”.

@ John:

“Note that in the usual black hole description, the infalling body goes to the end of time and back.”

So what if it does? If you have a problem digesting an infinite value for a coordinate, then switch to some other coordinates which are not singular. The very same page of MTW that you quote has the very same physics drawn in Kruskal-Szekeres coordinates, which are regular everywhere except at the physical singularity at the black hole center.

“Throw an object into a black hole, and ask yourself has it crossed the event horizon yet? The answer is no, and is always no.”

It is a wrong question to ask (and consequently the answer does not make any substantial sense). The event horizon is not a space boundary, but a time boundary. There is no point in space where a particle could be *now* (i.e. at some particular moment of coordinate time), and be *inside* the event horizon.

To put it another way, it makes no sense to ask “where is the horizon now?” from the point of view of a far-away observer. The event horizon is in the future, and it does not exist “now” at any particular “place”. Just as it doesn’t make sense to ask “How far away in space is ‘tomorrow’?”. There is no space distance from ‘now’ to ‘tomorrow’, so that question does not make sense (for yet another example of a senseless question, remember the popular “north from the north pole” analogy for times before the Big Bang). The article you quoted explains this in detail down towards the end, and even has some nice diagrams to visualize it.

The pseudo-Riemannian geometry in 4D is not exactly the easiest thing in the world for human intuition, so I can understand that most laypeople have some trouble properly grasping the black hole geometry. 😉

As for the gravastars, they are a very interesting and nice solution of GR, but they rely on the gravitational phase transition to de Sitter space inside the star. Short of experimental data, the only thing that could convince me that gravastars can exist would be a model of quantum gravity that would predict this phase transition process. And as I said in my previous post, we don’t have a QG model yet.

HTH, 🙂

Marko

Marko: see my June 6th comment. I said I think Kruskal-Szekeres coordinates suffer a conceptual error wherein putting a stopped observer in front of a stopped clock somehow makes it start ticking again. And of course it’s a light clock. So the observer doesn’t see things normally “in his frame”, he sees nothing at all. Because yes, the event horizon is in the future. And it’s forever in the future, which is why I think the article I quoted from draws the wrong conclusion.

I feel I do understand something about the pseudo-Riemannian geometry. Let me give you a taste of it by stepping down a dimension: place gedanken light-clocks in an equatorial plane through and around the Earth. The light-clocks run at different rates, and when you plot your measurements on a chart, your plot is curved just like the wikipedia plot of gravitational potential, which is in turn like the bowling-ball analogy. The curvature you can see is the second derivative of potential which relates to tidal force and Weyl curvature, which relates to Ricci and Riemann curvature. There is no volume in this rubber-sheet chart of course, but Riemann curvature is the “defining feature” of a gravitational field because in essence you need curvature for your chart to get off the flat and level. Now note this: your light clocks do not run slower nearer the Earth because your chart of light-clock-rates is curved. And if some of your light clocks are stopped, you cannot make them tick by drawing a different chart. And those light clocks can’t go slower than stopped, so there is no more curvature, no more geometry, no more gravity, and no more space and time.

I’m not convinced by the gravastar either, but I do think it’s more like the original “frozen star” black hole concept, which I think is right – the Schwarzschild singularity is not some mere artefact that can be transformed away. See my post of June 8th and follow the link to comment 4 for an outline QG sketch. You end up not with a firewall, but an icewall.

@ John:

“I said I think Kruskal-Szekeres coordinates suffer a conceptual error wherein putting a stopped observer in front of a stopped clock somehow makes it start ticking again.”

Umm, no, the problem is with Schwarzschild coordinates having a non-physical singularity at the horizon. The clock on the horizon does not stop ticking, as can be easily verified by the comoving observer, as well as by using Kruskal coordinates. The issue is that in Schwarzschild coordinates the clock *appears* to have stopped — but not because it has “really” stopped, but because the Schwarzschild coordinates have a (mathematical) singularity there — the Jacobian matrix becomes singular at the horizon, and those coordinates stop being valid at those points in spacetime. Kruskal coordinates do not develop this problem, so they are better suited for describing the black hole geometry.
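For reference, the standard Kruskal–Szekeres construction in the exterior region (units G = c = 1; conventions vary slightly between textbooks):

```latex
% Exterior region r > 2M of the Schwarzschild geometry:
\begin{align}
  U &= \left(\frac{r}{2M} - 1\right)^{1/2} e^{\,r/4M} \cosh\frac{t}{4M}, &
  V &= \left(\frac{r}{2M} - 1\right)^{1/2} e^{\,r/4M} \sinh\frac{t}{4M},
\end{align}
% in which the line element becomes
\begin{equation}
  ds^{2} = \frac{32 M^{3}}{r}\, e^{-r/2M} \left( -dV^{2} + dU^{2} \right)
           + r^{2}\, d\Omega^{2},
\end{equation}
% with r(U, V) defined implicitly by (r/2M - 1) e^{r/2M} = U^2 - V^2.
```

The metric components are manifestly finite at r = 2M, so nothing physical stops there; the only remaining singularity is r = 0.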

“And if some of your light clocks are stopped, you cannot make them tick by drawing a different chart.”

Yes, but no clock is going to stop — not until it reaches the physical singularity at the black hole center. You should not rely too much on Schwarzschild coordinates to interpret physics. 🙂

“See my post of June 8th and follow the link to comment 4 for an outline QG sketch.”

I failed to see anything relevant there. But regardless of that, a sketch is not enough — in order to establish a mechanism for the phase transition of geometry, one needs a working model, not a sketch. I know several sketches of QG, but so far they all fall short of being well-defined mathematical formulations. A sketch is just a wishlist, and as it is often said — the devil is in the details! 🙂

HTH, 🙂

Marko