The firewall puzzle is the claim that, if information is ultimately conserved as black holes evaporate via Hawking radiation, then an infalling observer sees a ferocious wall of high-energy radiation as they fall through the event horizon. This is contrary to everything we’ve ever believed about black holes based on classical and semi-classical reasoning, so if it’s true it’s kind of a big deal.

The argument in favor of firewalls is based on everyone’s favorite spooky physical phenomenon, quantum entanglement. Think of a Hawking photon near the event horizon of a very old (mostly-evaporated) black hole, about to sneak out to the outside world. If there is no firewall, the quantum state near the horizon is (pretty close to) the vacuum, which is unique. Therefore, the outgoing photon will be completely entangled with a partner ingoing photon — the negative-energy guy who is ultimately responsible for the black hole losing mass. However, if information is conserved, that outgoing photon must also be entangled with the radiation that left the hole much earlier. This is a problem because quantum entanglement is “monogamous” — one photon can’t be maximally entangled with two other photons at the same time. (Awww.) The simplest way out, so the story goes, is to break the entanglement between the ingoing and outgoing photons, which means the state is not close to the vacuum. Poof: firewall.
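The monogamy statement can be checked concretely in a three-qubit toy model (a small numpy sketch of my own, purely illustrative): if qubit A is maximally entangled with B, the AB pair is in a pure state, so A can share no correlation at all, and in particular no entanglement, with a third system C.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy S(rho) in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]          # drop zero eigenvalues before taking the log
    return float(-np.sum(p * np.log2(p)))

# Three qubits A, B, C: A maximally entangled with B (a Bell pair),
# C in a product state |0>.  psi[a, b, c] holds the amplitudes.
psi = np.zeros((2, 2, 2))
psi[0, 0, 0] = 1 / np.sqrt(2)   # |000>
psi[1, 1, 0] = 1 / np.sqrt(2)   # |110>

# Reduced density matrices via partial trace (einsum sums the traced index).
rho_A  = np.einsum('abc,dbc->ad', psi, psi.conj())
rho_C  = np.einsum('abc,abd->cd', psi, psi.conj())
rho_AC = np.einsum('abc,dbe->acde', psi, psi.conj()).reshape(4, 4)

S_A, S_C, S_AC = entropy(rho_A), entropy(rho_C), entropy(rho_AC)
print(f"S(A)   = {S_A:.3f} bit")                 # 1.000: A maximally entangled with B
print(f"I(A:C) = {S_A + S_C - S_AC:.3f} bit")    # 0.000: nothing left over for C
```

The mutual information I(A:C) = S(A) + S(C) − S(AC) comes out exactly zero: being maximally entangled with B uses up all of A's capacity for correlation, which is the tension the AMPS argument exploits.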

You folks read about this some time ago in a guest post by Joe Polchinski, one of the authors (with Ahmed Almheiri, Don Marolf, and James Sully, thus “AMPS”) of the original paper. I’m just updating now to let you know: almost a year later, the controversy has not gone away.

You can read about some of the current state of play in An Apologia for Firewalls, by the above authors plus Douglas Stanford. (Those of us with good Catholic educations understand that “apologia” means “defense,” not “apology.”) We also had a physics colloquium by Joe at Caltech last week, where he masterfully explained the basics of the black hole information paradox as well as the recent firewall brouhaha. Caltech is not very good at technology (don’t let the name fool you), so we don’t record our talks, but Joe did agree to put his slides up on the web, which you can now all enjoy. Aimed at physics students, so there might be an equation or two in there.

**Fire.cit** from **Joe Polchinski**

Just to point out a couple of intriguing ideas that have come along in response to the AMPS proposal, one paper that has deservedly received a lot of attention is An Infalling Observer in AdS/CFT by Kyriakos Papadodimas and Suvrat Raju. They consider the AdS/CFT correspondence, which relates a theory of gravity in anti-de Sitter space to a non-gravitational field theory on its boundary. One can model black holes in such a theory, and see what the boundary field theory has to say about them. Papadodimas and Raju argue that they don’t see any evidence of firewalls. It’s suggestive, but like many AdS/CFT constructions, comes across as a bit of a black box; even if there aren’t any firewalls, it’s hard to pinpoint exactly what part of the original AMPS argument is at fault.

More radically, there was just a new paper by Juan Maldacena and Lenny Susskind, Cool Horizons for Entangled Black Holes. These guys have tenure, so they aren’t afraid of putting forward some crazy-sounding ideas, which is what they’ve done here. (Note the enormous difference between “crazy-sounding” and “actually crazy.”) They are proposing that, when two particles are entangled, there is actually a tiny wormhole connecting them through spacetime. This seems bizarre from a classical general-relativity standpoint, since such wormholes would instantly collapse upon themselves; but they point out that their wormholes are “highly quantum objects.” They claim there is evidence that such a conjecture makes sense, although they can’t confidently argue that it gets rid of the firewalls.

I suspect further work is required. Good times.

I’m about to speak out of (you know where), but …

It seems to me like a lot of the arguments rely on the picture of particle/antiparticle production at the horizon. While a nice picture, having these “particle states” depends on a choice of basis for the Hilbert space, which is slicing dependent … and in a curved spacetime, not trivial like it is in Minkowski. Should I believe these arguments? Can somebody tell me that there is actually some solidly grounded QFT behind it, and people are doing their Bogoliubov transformations?

I’m also concerned about how people try to ‘localise’ states and information. The picture I have in my head is the following: You should choose a global (Cauchy) slicing of your spacetime, and a slice has a Hilbert space for the field content of the theory; then at some time, there is a density matrix describing the fields, and you can measure the entropy, which does not depend on choice of basis, since it is a trace. But drawing pictures with particles going along certain paths makes it seem like you are constructing wave packet states which are localized, and that sounds like you have a preferred basis for the states. Back to entropy localization, it’s not clear to me how entropy can be localized as being in the interior of the BH. A quantum state does not live *somewhere* in spacetime, it just lives in Hilbert space!

I just refer to my comment on the mentioned guest post by Joe Polchinski: firewalls seem to be the logical consequence of thermodynamic effects caused by enormous differences between the course of time in the centres of black holes and on their outsides.

Shouldn’t there be a zone where electromagnetic energy “orbits” the center of the black hole? Wouldn’t this energy be coherent like laser light owing to constructive interference? Is that an information firewall?

Theoretical physicists are the new philosophers.


Interesting, the ER=EPR connection. It reminded me of a blog comment I made a couple of years ago at FQXi, in which I proposed something similar:

http://fqxi.org/community/forum/topic/976#post_40460

I proposed it for theoretical purposes only, as an example of local explanation of EPR by using ER, which circumvents Bell’s theorem, but I did not consider it a big deal.

http://www.youtube.com/user/PhilosophyCosmology/videos

The videos are here!! Can’t wait to enjoy them.. just wanted to let you know

I’ve always thought about it like Maldacena and Susskind, but I’m an undergrad with, of course, no tenure or credibility. I don’t think the firewall goes away even with this explanation, but I do think this is what is happening.

Slide #16 is pretty funny. Good slides that even an undergrad could follow.

Yes, further work is required. And it starts with the guys at NIST who have demonstrated that an optical clock* runs slower where gravitational potential is lower. You can idealise this to parallel-mirror light clocks, like this:

http://www.sciforums.com/attachment.php?attachmentid=6257&d=1367740440

Then you really appreciate that the coordinate speed of light varies with gravitational potential, and what Einstein was on about when he said, repeatedly from 1911 through 1916, that the speed of light varies with gravitational potential. The locally-measured speed of light is only constant because we use the local motion of light to define the second and the metre, which we then use… to measure the speed of light. So at the event horizon, where we say the coordinate speed of light is zero, the speed of light is zero, and thus the infalling observer doesn’t see anything at all. He doesn’t even get there, because as he falls in his speed increases, and there’s a crossover. He can’t go faster than the local speed of light. So he gets maybe electron-stripped and annihilated. Bad things happen way before he gets to firewalls and negative-energy particles and quantum fluctuations that take infinite coordinate time to happen. They haven’t happened yet, and never ever will.

* Clocks don’t literally measure “the flow of time”. Be it a grandfather clock or a quartz wristwatch or an atomic clock, they all feature some kind of regular cyclic local motion, which they “clock up” and display as a cumulative result that we call the time. Kruskal-Szekeres coordinates suffer a conceptual error wherein putting a stopped observer in front of a stopped clock somehow makes it start ticking again.
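For reference, the textbook Schwarzschild-coordinate quantities behind these claims are easy to tabulate (a Python sketch of the standard formulas, not an endorsement of any interpretation): the static-clock rate √(1 − r_s/r) and the radial coordinate speed of light c(1 − r_s/r) both go to zero as r → r_s, while a local observer always measures light speed c.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8         # SI units
M_sun = 1.989e30                  # kg
r_s = 2 * G * M_sun / c**2        # Schwarzschild radius of one solar mass, ~2.95 km

def clock_rate(r):
    """Schwarzschild time-dilation factor dtau/dt for a static clock at radius r."""
    return np.sqrt(1 - r_s / r)

def radial_coord_speed_of_light(r):
    """Coordinate speed dr/dt of a radial light ray in Schwarzschild coordinates."""
    return c * (1 - r_s / r)

for r in [10 * r_s, 2 * r_s, 1.001 * r_s]:
    print(f"r = {r/r_s:6.3f} r_s:  clock rate = {clock_rate(r):.4f},  "
          f"dr/dt = {radial_coord_speed_of_light(r)/c:.4f} c")
# Both factors -> 0 as r -> r_s in these coordinates; a local (freely falling
# or static) observer nevertheless measures the speed of light to be c.
```

Whether the vanishing of these coordinate quantities at the horizon is physically meaningful is exactly what the thread below goes on to argue about.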

Sounds a lot like something that would have to do with quantum fluctuations near the event horizon. What applys then is pretty much anything: Firewalls included. It works… in theory, of course. Thermodynamics is in this year, you know…

Yipes. *applies. *sadface*

I have always heard the concept of one of the infalling particles having “negative energy” which leads to the Black Hole losing mass, but that has honestly never made much sense to me. The concept of negative energy seems highly unphysical and I can only remember utilizing negative potentials when you set the potential at infinity to 0 to help simplify a physics problem. One description of this I saw was this:

“…particle-antiparticle radiation is emitted from just beyond the event horizon. This radiation does not come directly from the black hole itself, but rather is a result of virtual particles being “boosted” by the black hole’s gravitation into becoming real particles. As the particle-antiparticle pair was produced by the black hole’s gravitational energy, the escape of one of the particles takes away some of the mass of the black hole.”

That description makes perfect sense to me (as does another model of quantum tunneling by particles already inside the event horizon), at least when compared to negative energy descriptions. Perhaps Sean or someone else can shed some light on the “negative energy” concept I have always heard dropped when speaking of a zero energy universe, black hole evaporation, and futuristic warp drives. Any intuitive idea of what it actually is (rather than saying “think of it as a repulsive force!” or something along those lines) would be greatly appreciated.

Since there’s no other takers, I can say something about the “negative energy” concept when speaking of a zero energy universe: it’s misguided I’m afraid. See page 185 of the Doc 30 translation of The Foundation of the General Theory of Relativity and note the bit that says “the energy of the gravitational field shall act gravitatively in the same way as any other kind of energy”. A gravitational field is a positive energy region. The negative-energy idea comes from the mass deficit associated with binding energy. Binding energy is said to be negative energy, but all it really is, is less positive energy.

For example imagine a 1kg brick in free space. It’s made of matter, and that matter is made of energy by virtue of E=mc². This brick comprises some amount of positive energy. A planet now rolls onto the scene, and you drop the brick. It falls towards the planet, where it impacts the ground. The kinetic energy is dissipated as heat and motion of water/dust etc, and you take a trip down to retrieve your brick, which has survived and cooled off. It still appears to weigh a kilogram. But actually it has lost mass. Potential energy within the brick was converted into kinetic energy then dissipated. The amount of lost energy is the binding energy. But that lost energy is positive energy, and the brick still comprises some amount of positive energy. Conservation of energy applies, and the total energy hasn’t reduced at all. See “mass in general relativity” on wiki and look at the questions and answers.
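The size of that mass deficit is easy to estimate (a back-of-envelope Python sketch in the Newtonian approximation): dropping a 1 kg brick from far away to the Earth’s surface releases roughly GMm/R ≈ 6×10⁷ J, whose mass equivalent ΔE/c² is under a microgram.

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m
m_brick = 1.0        # kg

# Energy released falling from far away to the surface (Newtonian approximation):
binding_energy = G * M_earth * m_brick / R_earth     # ~6.3e7 J
mass_deficit = binding_energy / c**2                  # ~7e-10 kg

print(f"binding energy ~ {binding_energy:.3e} J")
print(f"mass deficit   ~ {mass_deficit:.3e} kg (~{mass_deficit*1e9:.2f} micrograms)")
```

So the brick really does come back a tiny bit lighter, even though every joule involved was positive energy that simply went somewhere else.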

what about dark energy? isn’t that negative energy?

@ Meh

I am afraid that it takes a lot of negative energy.

A firewall on the horizon as seen by the freely falling observer violates the equivalence principle in a very bad way, and I don’t believe in that.

The most obvious thing to give up from Polchinski’s list is actually AdS/CFT. Joe didn’t give any arguments against giving it up outside of “I still trust AdS/CFT”, which is not very convincing, so to say. Besides, the real world is not AdS, unfortunately for holography, and the duality between realistic gravity and any kind of QFT (be it C or otherwise) is not yet shown to even exist, let alone constructed.

To paraphrase Polchinski — I still trust the equivalence principle. And I would ditch AdS/CFT any day if it helps any…

HTH,

Marko

Meh: IMHO dark energy isn’t negative energy. It’s like you have only the energy-pressure diagonal in the stress-energy tensor, and you’ve got homogeneous space as per the FLRW assumption. I think the problem is that people say dark energy is responsible for the increasing expansion rather than the expansion, and Einstein introduced Lambda to “balance” a dusty universe against gravitational collapse. Lose the dust and the universe expands like a stress-ball when you open your fist. The dimensionality of energy is pressure x volume, and neither is negative.

Marko: I think one can sketch out a rough duality between QFT and gravity, see comment 4 here: http://physicsworld.com/cws/article/news/2012/nov/06/highly-charged-ions-could-make-better-atomic-clock . NB: I’m a “relativity guy”, but I think the equivalence principle is of limited applicability. But not so limited that you start seeing things that just aren’t there.

I have not followed the details of AdS/CFT etc. Let me ask a very naive question. If I understand correctly, all tests of GR verify an Einstein universe with a small positive Lambda. How can you draw serious conclusions from a model based on a supposed anti-de Sitter universe (negative Lambda)? Is there any reason that the interior of a black hole could be anti-de Sitter?

I will appreciate answer from any knowledgeable person on this blog. Thanks.

A wormhole manifests itself through gravitational lensing; entanglement does not. Anyway, I perceive the idea that the process of observation manifests itself as some space-time thread connecting the observer and the observed object as surprisingly naive.

I’m somewhat out of my depth here, but I notice the following points:

There’s something funny with event horizons in classical GR, because of their “points of no return” on the one hand, and the smoothness of spacetime there on the other. Indeed, the problem with an event horizon is of course that it is in principle impossible to get any experimental data from “behind” the event horizon. That makes event horizons in-principle limits of the “observable universe”; and as such one could wonder whether it even makes sense to develop theories that say “what is going on behind it”. In other words, does it even make sense to talk about “what’s behind the event horizon”?

Classically, it does make sense in the following way: you could be the bold adventurer crossing the event horizon (silly you!) and you would still have some finite time to appreciate what’s happening before being exterminated in the singularity. That’s because spacetime is still smooth at the event horizon. For big black holes, it is even approximately Minkowskian. You might (classically) not even notice directly you’ve crossed it. So there’s no reason to “stop” the universe from existing at the event horizon.
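That finite time can be made quantitative (a Python sketch of my own, using the standard result that for radial free fall from rest at infinity the proper time from horizon to singularity is τ = (2/3) r_s/c):

```python
G, c = 6.674e-11, 2.998e8   # SI units
M_sun = 1.989e30            # kg

def proper_time_inside(M):
    """Proper time (s) from the horizon to r = 0 for radial free fall
    from rest at infinity: tau = (2/3) * r_s / c, with r_s = 2GM/c^2."""
    r_s = 2 * G * M / c**2
    return (2 / 3) * r_s / c

print(f"1 solar mass:    {proper_time_inside(M_sun)*1e6:.1f} microseconds")
print(f"1e9 solar masses: {proper_time_inside(1e9 * M_sun)/3600:.1f} hours")
```

For a stellar-mass hole the adventure inside lasts microseconds; for a billion-solar-mass hole it is on the order of hours, which is why the “you might not even notice crossing it” remark applies to big black holes.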

But if there’s a firewall, you won’t cross it. Nothing will cross it without being annihilated, by the intense radiation. So practically, you CAN’T cross it. You are just smoked out if you try.

So the question arises again whether in that case it makes sense to consider “the inside” of a black hole, as you won’t know anything from the outside about it, and you won’t even be able to cross it. Isn’t it then just not “a boundary” of spacetime?

And if the inside doesn’t make sense physically, as it is impossible to get there, aren’t event horizons then finally what we were looking for as “objective projectors of quantum states” that violate unitarity, and as such resolve the measurement problem without having to go to many-worlds interpretations (which seem to me unavoidable if we take quantum theory ontologically, as strict unitarity is not compatible with the projection postulate)?

If event horizons can objectively transform pure states into mixed states, because unitarity gets lost, then might this eventually solve the measurement problem?

@ Patrick:

“So the question arises again whether in that case it makes sense to consider “the inside” of a black hole, as you won’t know anything from the outside about it, and you won’t even be able to cross it. Isn’t it then just not “a boundary” of spacetime?”

There is a difference between saying “a human cannot survive going through the firewall” and “nothing crosses the firewall”. An elementary particle can surely cross the firewall and enter the black hole, since otherwise a black hole could not form in the first place. In addition, an elementary particle carries “information” (its mass, spin, charges, entanglement, etc.) as it gets inside. So basically you need to have physics beyond the black hole horizon, since you need to have a model which says what happens to particles which do get inside.

“If event horizons can objectively transform pure states into mixed states, because unitarity gets lost, then might this eventually solve the measurement problem?”

Um, no, I don’t think so. Giving up unitarity would arguably resolve the black hole information paradox (in a rather trivial way, by saying that gravitational interaction does not conserve pure states of QM). That is certainly a legitimate possibility, but people often try to avoid going that route, since it essentially tells us that some basic concepts of quantum mechanics are wrong — and then you have to rethink the whole QM *and* gravity from the ground up. That is arguably even harder than trying to resolve just the BH information paradox in some other way.

As for the measurement problem — no, giving up unitarity does not resolve the measurement problem (not so easily at least). The “measurement problem” manifests itself as a nonunitary evolution of a quantum system, even in cases where gravity is absent or unimportant (in everyday labs, so to speak). There is no black hole horizon in the lab to account for nonunitary evolution of a tabletop quantum system that I happen to measure.

There indeed are ways around this argument — saying that gravity collapses the wavefunction even without the BH horizon, or that the BH horizon is an ultimate “observer” which collapses the wavefunction of the whole universe, while the rest happens through decoherence, or something similar — but then you need to construct a very elaborate model of how exactly all this happens, and try not to violate any experimental results in the process. And this is again a very hard thing to do, if possible at all. People who research quantum gravity do have that idea “in the back of their heads”, but one can investigate whether or not gravity can perform objective collapse only *after* one already has a working model for a theory of quantum gravity. And unfortunately, so far we don’t have one, not yet.

HTH,

Marko

@Patrick re:

“Isn’t it then just not “a boundary” of spacetime?”

I think it is, but that the measurement problem is related to the optical Fourier transform rather than event horizons.

@ Marko re:

“…An elementary particle can surely … enter the black hole, since otherwise a black hole could not form in the first place…”

That’s what people usually say, for example in this article where you can read this:

“In both of these interpretations we find that an object goes to future infinity (of coordinate time) as it approaches an event horizon, and its rate of proper time as a function of coordinate time goes to zero. The difference is that the field interpretation is content to truncate its description at the event horizon, while the geometric interpretation carries on with its description right through the event horizon and down to r = 0 (where it too finally gives up).”

Note that in the usual black hole description, the infalling body goes to the end of time and back. See this page from MTW and note the truncation of the vertical axis on the chart on the left. I’m sorry, but that just has to be wrong. Throw an object into a black hole, and ask yourself: *has it crossed the event horizon yet?* The answer is no, and is always no. Hence IMHO black holes have to grow like hailstones, and like the gravastar, feature a central region which is a “void in the fabric of space and time”.

@ John:

“Note that in the usual black hole description, the infalling body goes to the end of time and back.”

So what if it does? If you have a problem digesting an infinite value for a coordinate, then switch to some other coordinates which are not singular. The very same page of MTW that you quote has the very same physics drawn in Kruskal-Szekeres coordinates, which are regular everywhere except at the physical singularity at the black hole center.

“Throw an object into a black hole, and ask yourself has it crossed the event horizon yet? The answer is no, and is always no.”

It is a wrong question to ask (and consequently the answer does not make any substantial sense). The event horizon is not a space boundary, but a time boundary. There is no point in space where a particle could be *now* (i.e. at some particular moment of coordinate time), and be *inside* the event horizon.

To put it another way, it makes no sense to ask “where is the horizon now?” from the point of view of a far-away observer. The event horizon is in the future, and it does not exist “now” at any particular “place”. Just as it doesn’t make sense to ask “How far away in space is ‘tomorrow’?”. There is no space distance from ‘now’ to ‘tomorrow’, so that question does not make sense (for yet another example of a senseless question, remember the popular “north from the north pole” analogy for times before the Big Bang). The article you quoted explains this in detail down towards the end, and even has some nice diagrams to visualize it.

The pseudo-Riemannian geometry in 4D is not exactly the easiest thing in the world for human intuition, so I can understand that most laypeople have some trouble properly grasping the black hole geometry.

As for the gravastars, they are a very interesting and nice solution of GR, but they rely on the gravitational phase transition to de Sitter space inside the star. Short of experimental data, the only thing that could convince me that gravastars can exist would be a model of quantum gravity that would predict this phase transition process. And as I said in my previous post, we don’t have a QG model yet.

HTH,

Marko

Marko: see my June 6th comment. I said I think Kruskal-Szekeres coordinates suffer a conceptual error wherein putting a stopped observer in front of a stopped clock somehow makes it start ticking again. And of course it’s a light clock. So the observer doesn’t see things normally “in his frame”, he sees nothing at all. Because yes, the event horizon is in the future. And it’s forever in the future, which is why I think the article I quoted from draws the wrong conclusion.

I feel I do understand something about the pseudo-Riemannian geometry. Let me give you a taste of it by stepping down a dimension: place gedanken light-clocks in an equatorial plane through and around the Earth. The light-clocks run at different rates, and when you plot your measurements on a chart, your plot is curved just like the wikipedia plot of gravitational potential, which is in turn like the bowling-ball analogy. The curvature you can see is the second derivative of potential which relates to tidal force and Weyl curvature, which relates to Ricci and Riemann curvature. There is no volume in this rubber-sheet chart of course, but Riemann curvature is the “defining feature” of a gravitational field because in essence you need curvature for your chart to get off the flat and level. Now note this: your light clocks do not run slower nearer the Earth because your chart of light-clock-rates is curved. And if some of your light clocks are stopped, you cannot make them tick by drawing a different chart. And those light clocks can’t go slower than stopped, so there is no more curvature, no more geometry, no more gravity, and no more space and time.

I’m not convinced by the gravastar either, but I do think it’s more like the original “frozen star” black hole concept, which I think is right – the Schwarzschild singularity is not some mere artefact that can be transformed away. See my post of June 8th and follow the link to comment 4 for an outline QG sketch. You end up not with a firewall, but an icewall.

@ John:

“I said I think Kruskal-Szekeres coordinates suffer a conceptual error wherein putting a stopped observer in front of a stopped clock somehow makes it start ticking again.”

Umm, no, the problem is with Schwarzschild coordinates having a non-physical singularity at the horizon. The clock on the horizon does not stop ticking, as can be easily verified by the comoving observer, as well as by using Kruskal coordinates. The issue is that in Schwarzschild coordinates the clock *appears* to have stopped — but not because it has “really” stopped, but because the Schwarzschild coordinates have a (mathematical) singularity there — the Jacobian matrix becomes singular at the horizon, and those coordinates stop being valid at those points in spacetime. Kruskal coordinates do not develop this problem, so they are better suited for describing the black hole geometry.
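The difference between the coordinate singularity and the physical one can be made concrete (a Python sketch of the textbook quantities): the Schwarzschild metric component g_rr = 1/(1 − r_s/r) blows up at the horizon, but the coordinate-independent Kretschmann curvature invariant K = 12 r_s²/r⁶ barely changes there and only diverges as r → 0.

```python
G, c = 6.674e-11, 2.998e8     # SI units
M = 1.989e30                  # one solar mass, kg
r_s = 2 * G * M / c**2        # Schwarzschild radius

def g_rr(r):
    """Schwarzschild radial metric component: diverges at r = r_s."""
    return 1.0 / (1.0 - r_s / r)

def kretschmann(r):
    """Coordinate-independent curvature invariant K = 12 r_s^2 / r^6 (m^-4)."""
    return 12 * r_s**2 / r**6

for r in [2 * r_s, 1.000001 * r_s, 0.01 * r_s]:
    print(f"r = {r/r_s:>9.6f} r_s:  g_rr = {g_rr(r):.3e},  K = {kretschmann(r):.3e}")
# g_rr blows up at the horizon (a coordinate artifact, removable by switching
# to Kruskal coordinates); K is essentially unchanged there and only diverges
# as r -> 0, the physical singularity.
```

This is the quantitative version of the point above: the horizon singularity is a property of the Schwarzschild chart, not of the geometry, whereas nothing can cure the divergence at the center.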

“And if some of your light clocks are stopped, you cannot make them tick by drawing a different chart.”

Yes, but no clock is going to stop — not until it reaches the physical singularity at the black hole center. You should not rely too much on Schwarzschild coordinates to interpret physics.

“See my post of June 8th and follow the link to comment 4 for an outline QG sketch.”

I failed to see anything relevant there. But regardless of that, a sketch is not enough — in order to establish a mechanism for the phase transition of geometry, one needs a working model, not a sketch. I know several sketches of QG, but so far they all fall short of being well-defined mathematical formulations. A sketch is just a wishlist, and as it is often said — the devil is in the details!

HTH,

Marko

You’re still not seeing the big picture, Marko. Remember NIST have demonstrated optical clocks running at different rates at different elevations. Idealise that with parallel-mirror light clocks. The elephant isn’t “in two places at once when it goes to the end of time and back”, it’s in the room, and more and more people have spotted it. Look at those parallel-mirror light clocks. Once you learn to step out of your frame and look at all frames at once, everything changes. Go back to the equatorial light clocks. The speed of the infalling observer relates to clock rate differences, and his speed at some location cannot exceed the speed of light at that location. So if he makes it to the event horizon he isn’t moving, his light isn’t moving, and he can’t verify anything. Go back to SR and try claiming that the gedanken observer moving at c with respect to us sees his clock ticking normally because all frames are equally valid.

Shame you didn’t like the QG sketch. IMHO a well-defined mathematical formulation needs something like this to get off the ground. LQG has rambled on for decades going nowhere because its adherents don’t have any concept of how electromagnetism and gravity fit together.

Nice to talk to you Marko. I suspect though we’ll have to agree to differ. Such is life: if we all agreed about everything life would be very dull.

I read “Space against Time” in New Scientist.

“This does point to the fact that we may be missing something in our conceptual description,” says Steve Giddings.

You betchya.

“This allowed him, for example, to relax the restriction that nothing can travel through space-time faster than light.”

Red flag. Nothing travels through space-time. It’s a static all-times-at-once mathematical model. It’s an artefact that doesn’t actually exist. We draw worldlines in it to represent motion through space. We “time” this motion using something that… moves through space. Motion is king*.

“A lot of people have this intuition that in some sense the existence of these null directions might be more fundamental than space or time.”

Those null directions don’t actually exist. Light moves. Only when it doesn’t, such as at the black hole event horizon, then there is no more space, and no more time.

* You can use repeated Compton scattering to turn a photon into the motion of electrons. Keep doing it, and in the limit you have no wave energy and so no photon left. It has been completely converted into the motion of electrons. But you could have put that photon through pair production instead, and made an electron (and a positron). The bottom line is that the electron is made of motion. Ergo motion is king.

John Duffield wrote:

“The locally-measured speed of light is only constant because we use the local motion of light to define the second and the metre, which we then use… to measure the speed of light.”

It’s not just a matter of circular definitions–by the equivalence principle, a free-falling observer in a small region of spacetime should be able to define length and time using any method that can be used in an inertial frame in flat SR spacetime (for example, multiples of cesium oscillations for time and multiples of interatomic spacing in some crystalline solid for distance), and the speed of light should still work out to c when measured by this method, a prediction that obviously isn’t just trivially true by definition.

The crystal solid doesn’t help because the metre doesn’t change, Jesse. It’s “the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second.” When the light goes slower, the slower light and the bigger second cancel each other out, and the metre stays the same.

The free-falling observer always measures c to be 299,792,458 m/s, but we know that the coordinate speed of light varies in a non-inertial reference frame. We know that one 299,792,458 m/s is not the same as another. And we also know about the wave nature of matter. The free-falling observer doesn’t measure any change because whatever the wave speed is, he uses it to calibrate the clock he uses to measure wave speed. He might convince himself that time is going slower and waves aren’t, but his clock is not literally measuring “the flow of time”. He cannot open up his clock and see time flowing within it. All he sees is regular cyclic motion, of a crystal, or cogs, or hyperfine spin flips creating microwaves. So when the clock goes slower, it’s because that motion goes slower, not because of anything else.

“The crystal solid doesn’t help because metre doesn’t change Jesse. It’s “the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second.””

This is merely the latest conventional definition, it wasn’t defined this way at the time Einstein formulated his theory, and the fundamental physics doesn’t say anything about which definition you “should” use. And even if you do choose to define the speed of light this way, then it is still a nontrivial prediction that the freefalling observer will not see any changes in any local physical observations in her local inertial frame; for example, the theory predicts she’ll always get the same answer if she measures the number of cesium atom oscillations between the time a light ray departs from the cesium clock and the time it traverses X number of atoms in the crystal lattice, hits a mirror, and returns to the clock (assuming the time and distance are small so that the observations can be considered “local”).

Agreed, Jesse.

The problem comes when you take it to the limit. Imagine you have a gedanken telescope panning to keep the infalling observer in view. You see her and her light and her measurements going slower and slower, much as you would if she was an SR observer going faster and faster. In the limit she and her light and her measurements grind to a halt. She will not see any changes because she sees nothing.

Apply that to your MTW page and hopefully you’ll appreciate that going to the end of time and back is a fantasy that leads to an elephant in two places at once. And that a new time coordinate is another fantasy. When a clock is stopped, you cannot make it start ticking again by changing your coordinate system. Like I said, it’s like putting a stopped observer in front of a stopped clock and pretending that for her, everything carries on as normal. It doesn’t. She’s stopped, her clock is stopped, and that’s the way it’s going to stay, forever.

“Imagine you have a gedanken telescope panning to keep the infalling observer in view. You see her and her light and her measurements going slower and slower, much as you would if she was an SR observer going faster and faster. In the limit she and her light and her measurements grind to a halt. She will not see any changes because she sees nothing.”

OK, now imagine you are carrying a telescope in flat SR spacetime and using a rocket to experience constant proper acceleration, so that you have a Rindler horizon. As you watch an inertial observer “falling” towards the Rindler horizon, it is likewise true that “You see her and her light and her measurements going slower and slower”, so that she never actually appears to reach it no matter how long you wait, but presumably you don’t therefore conclude that in the limit as her distance from your Rindler horizon approaches zero, her measurements “grind to a halt” in some objective sense, or that she no longer continues to perceive things after she crosses it (but the only way you or anyone else can see what happened to her after she crossed the horizon is to cross it yourself, just like with a black hole event horizon). Your pet theories aside, do you have any actual physical or logical argument for why others should agree with you that the slowdown is any more objective in the case of a black hole event horizon than it is in the case of a Rindler horizon? (assuming classical GR, leaving aside quantum arguments for firewalls)

The Rindler horizon is a mere artefact. I’m not the accelerating observer, I’m watching her through my gedanken telescope, and there is no cone of darkness following her. It just isn’t in the same league as a black hole.

Re:

“Your pet theories aside, do you have any actual physical or logical argument for why others should agree with you that the slowdown is any more objective in the case of a black hole event horizon than it is in the case of a Rindler horizon?”

This is not some pet theory, this is general relativity. The slowdown is objective. Check out the NIST optical clock, see this and this where you can read this:

“if one clock in one lab is 30 centimeters higher than the clock in the other lab, we can see the difference in the rates they run at”. All observers agree that the lower clock goes slower. Oh, and it’s an optical clock. Parallel-mirror light-clocks will do the same. This gif is idealised and exaggerated, but it is not misleading:

[gif: parallel-mirror light-clocks]
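For what it’s worth, the size of the NIST effect is easy to estimate: to first order, the fractional frequency shift between two clocks separated by height h near Earth’s surface is gh/c². A quick numerical sketch (g = 9.81 m/s² is an assumed round value; illustration only):

```python
# First-order gravitational frequency shift between two clocks separated
# by height h near Earth's surface: df/f ≈ g*h/c^2 (the NIST 30 cm case).
g = 9.81            # m/s^2, assumed local gravitational acceleration
c = 299_792_458.0   # m/s, speed of light
h = 0.30            # m, height difference between the two clocks

shift = g * h / c**2
print(f"fractional frequency shift: {shift:.2e}")  # ~3.3e-17
```

A part in 10¹⁷ is tiny, but it is exactly the level of precision the NIST optical clocks reach, which is why they could see the 30 cm difference at all.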

Look at the gif, Jesse. Do not fool yourself that those two light pulses are moving at the same speed, because they’re not. There is no time flowing through this or any other clock. Ellis was wrong, Einstein was right. The speed of light varies with gravitational potential. A light clock can’t go slower than stopped. And you cannot make a stopped clock tick by adopting a fantasy coordinate system that does a hop skippity jump over the end of time.

“The Rindler horizon is a mere artefact. I’m not the accelerating observer, I’m watching her through my gedanken telescope, and there is no cone of darkness following her.”

My gedanken was that you (and the telescope) were the accelerating observer, not the inertial one. Remember, in this analogy the accelerating observer is analogous to the observer hovering above the event horizon (so neither one crosses the relevant horizon, and both must experience continual proper acceleration to avoid falling in), and the inertial observer is analogous to the observer in freefall approaching it (so both experience zero proper acceleration, and both should experience crossing the horizon according to SR/GR). So this is a non-response to my scenario, like if I had responded to your black hole question by saying “I’m not the observer hovering above the horizon, I’m the falling observer watching the hovering one through the telescope, there is no cone of darkness following her”.

“This is not some pet theory, this is general relativity. The slowdown is objective. Check out the NIST optical clock, see this and this where you can read this: ‘if one clock in one lab is 30 centimeters higher than the clock in the other lab, we can see the difference in the rates they run at’. All observers agree that the lower clock goes slower.”

The ratio between the rates the clocks are ticking at any given moment is not objective, since that depends on the simultaneity convention. In general relativity, the only objective comparisons are local ones like “what does clock A read at the moment it receives the signal from clock B saying that clock B has elapsed 200 nanoseconds”. And by the standard of local comparisons like this, it’s equally “objective” that clocks that maintain different constant Rindler distances from the Rindler horizon (which require that they have different proper accelerations) tick differently, with the clock closer to the horizon elapsing less time than the clock farther from the horizon over any given interval (for example, if the closer one sends one signal when it reads 0 seconds and another when it reads 10, the farther one might elapse 20 seconds between receiving those two signals; then if the closer one sent a third signal 10 seconds after the second, the farther one would have to wait another 20 seconds to receive it, etc.).

So again, you have failed to explain why your argument couldn’t equally well be applied to make the case that time objectively stops at the Rindler horizon – saying the Rindler horizon is a “mere artefact” is not a physical argument, it’s just a denigrating rhetorical phrase, unless you can provide a precise physical definition of when a horizon is an “artefact” and when it isn’t.
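The signal-counting example can be put in numbers. A clock held at fixed Rindler coordinate x (horizon at x = 0, metric ds² = −x²dη² + dx² in suitable units) accumulates proper time dτ = x dη, so over the same interval of Rindler time η the elapsed proper times of two hovering clocks compare as the ratio of their x values. A toy sketch with illustrative numbers:

```python
# Clocks held at fixed Rindler coordinate x (horizon at x = 0) tick at a
# rate proportional to x: with metric ds^2 = -x^2 dη^2 + dx^2, a hovering
# clock accumulates proper time dτ = x dη.
def elapsed_ratio(x_near, x_far):
    """Proper time on the nearer clock per unit proper time on the
    farther clock, over the same interval of Rindler time η."""
    return x_near / x_far

# Illustrative numbers for the signal example: if the nearer clock elapses
# 10 s while the farther one elapses 20 s between receptions, the ratio is
# 1/2, i.e. the farther clock sits at twice the Rindler distance.
print(elapsed_ratio(1.0, 2.0))  # 0.5
```

The point of the analogy is that this ratio goes to zero as x_near → 0, exactly mirroring the gravitational case, without any mass being present.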

“Ellis was wrong, Einstein was right. The speed of light varies with gravitational potential.”

Ellis and Einstein would have no real physical disagreement, this is only a matter of the convention of how you choose to define the phrase “speed of light”. I’m sure both would agree that if you define it in terms of the coordinate speed of a light beam in some non-inertial coordinate system (in GR or SR), then the speed of light can vary (and the way it varies would depend entirely on the choice of coordinate system; nowhere in Einstein’s theory will you find any mathematics that gives you an objective coordinate-independent answer to the ratio of light’s speed at different distances from a black hole). I’m equally sure both would agree that if you define it in terms of local measurements in an infinitesimally small freefalling inertial system, the speed of light doesn’t vary from one location to another.

It’s not a non-response, Jesse. It’s an attempt to make you distinguish artefact from objective reality. Your gedanken scenario features an accelerating observer, but there is nothing following behind him. He merely can’t see some things behind him. This is very different to the black hole. It’s there, it’s massive, it pulls the observer in. And the inertial observer is not analogous to the observer in freefall, because the former is subject to constant SR time dilation while the latter is subject to decreasing gravitational potential and increasing GR time dilation. The objective fact is that optical clocks go slower when they’re lower. No observer sees them going faster. And defining the speed of light to be constant in a gravitational field is not something Einstein would agree with. He said on repeated occasions that the SR postulate did not apply to GR:

1911: If we call the velocity of light at the origin of co-ordinates c₀, then the velocity of light c at a place with the gravitation potential Φ will be given by the relation c = c₀(1 + Φ/c²)

1912: On the other hand I am of the view that the principle of the constancy of the velocity of light can be maintained only insofar as one restricts oneself to spatio-temporal regions of constant gravitational potential.

1913: I arrived at the result that the velocity of light is not to be regarded as independent of the gravitational potential. Thus the principle of the constancy of the velocity of light is incompatible with the equivalence hypothesis.

1915: the writer of these lines is of the opinion that the theory of relativity is still in need of generalization, in the sense that the principle of the constancy of the velocity of light is to be abandoned.

1916: In the second place our result shows that, according to the general theory of relativity, the law of the constancy of the velocity of light in vacuo, which constitutes one of the two fundamental assumptions in the special theory of relativity and to which we have already frequently referred, cannot claim any unlimited validity. A curvature of rays of light can only take place when the velocity of propagation of light varies with position.
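For scale, the 1911 relation can be evaluated with Earth-surface numbers (assumed round values for G, M, and R; note also that Einstein’s 1911 result was a first approximation, later revised in the full 1915 theory):

```python
# Einstein's 1911 relation c = c0*(1 + Phi/c0^2): coordinate speed of light
# at Newtonian potential Phi, relative to its value c0 where Phi = 0.
# Assumed round-number values for G, M, R; illustration only.
G  = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
M  = 5.972e24        # kg, Earth's mass
R  = 6.371e6         # m, Earth's radius
c0 = 299_792_458.0   # m/s

phi = -G * M / R                      # Newtonian potential at the surface
c_surface = c0 * (1 + phi / c0**2)    # slightly less than c0
print(f"fractional change in c: {phi / c0**2:.2e}")  # ~ -7e-10
```

A part in 10⁹ or so, which is why nothing of the sort shows up in everyday optics, whatever one’s preferred reading of these quotes.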

Unfortunately the English translations feature the word “velocity”, and many assume this to be a vector-quantity velocity rather than the common usage, as per “high velocity bullet”. The German word Einstein used was Geschwindigkeit. That’s speed. It’s clear he was referring to speed because the SR postulate concerns speed, not vector-quantity velocity. And if he was referring to velocity, he would have been saying “A curvature of rays of light can only take place when light curves”. That’s a tautology that simply doesn’t make sense. I’m sorry Jesse, but there’s no other way for me to say this: the GR you’ve been taught is not in line with Einstein, and in some important respects, it is wrong.

“Your gedanken scenario features an accelerating observer, but there is nothing following behind him. He merely can’t see some things behind him. This is very different to the black hole. It’s there, it’s massive, it pulls the observer in.”

The event horizon isn’t massive though, it’s just a boundary he can’t see past because light from beyond it won’t ever make it to him, just like light beyond the Rindler horizon won’t ever make it to the accelerating observer (or any observer who accelerates in any way that prevents them from crossing the Rindler horizon). Sure, the event horizon is related to a massive object and the Rindler horizon isn’t, but it’s a complete non-sequitur to say “therefore, time really stops at the event horizon but doesn’t really stop at the Rindler horizon” – you haven’t presented any argument as to why the presence of mass should be relevant to our conclusions about whether time “really stops”; this is just vague handwaving.

“And the inertial observer is not analogous to the observer in freefall, because the former is subject to constant SR time dilation while the latter is subject to decreasing gravitational potential and increasing GR time dilation.”

“Time dilation” entirely depends on your coordinate system – it’s always measured in terms of a ratio of clock time to coordinate time. The inertial observer experiences constant time dilation in an inertial frame in SR, but not in a non-inertial one like Rindler coordinates (which are constructed in such a way as to ensure that different accelerating Rindler observers all have fixed position coordinates that don’t change with coordinate time). In GR all large-scale coordinate systems are non-inertial, and all are equally valid – you could construct a coordinate system where the falling observer’s time dilation was constant, or was decreasing as she fell in, if you wished. Again, the only objective claims you can make about times in either theory are ones based on local events, like a signal from the first clock reaching the location of the second clock, or the two clocks being brought together to compare their readings at the same location (as in the “twin paradox”).
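The coordinate-dependence of “time dilation” can be made concrete: the familiar √(1 − rs/r) factor for a hovering clock refers to the standard Schwarzschild time coordinate t, and merely rescaling that coordinate (an equally valid choice) changes the ratio, while every local measurement stays put. A minimal sketch:

```python
import math

# dτ/dt' for a clock hovering at Schwarzschild radial coordinate r, where
# t' = t / k is a rescaled (but equally valid, non-inertial) time
# coordinate. The "time dilation" ratio depends on the choice of k, even
# though all local observables are unchanged.
def dilation(r, rs, k=1.0):
    """Proper-time-to-coordinate-time ratio dτ/dt' with t' = t / k."""
    return math.sqrt(1.0 - rs / r) * k

rs = 1.0  # Schwarzschild radius in arbitrary units
print(dilation(4.0, rs))         # ≈ 0.866 using the standard Schwarzschild t
print(dilation(4.0, rs, k=2.0))  # ≈ 1.732 for the rescaled coordinate, same clock
```

Nothing physical changes between the two calls; only the bookkeeping does, which is the sense in which the ratio is convention-dependent.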

“The German word Einstein used was Geschwindigkeit. That’s speed. It’s clear he was referring to speed because the SR postulate concerns speed, not vector-quantity velocity.”

How is this supposed to be relevant to my comment about Ellis and Einstein being in agreement about all physical questions? Speed is just as much a coordinate-dependent quantity as velocity (it’s just the magnitude of the velocity vector, ignoring the direction), and I even used the word “speed” rather than “velocity” in my comment. Again, when Einstein talked about the speed of light varying he was talking about speed in global non-inertial coordinate systems in GR. It’s obvious he would agree that A) there is no objective coordinate-independent way to define the ratio of light speeds at different locations in a gravitational field, it depends entirely on what coordinate system you choose, and B) if you choose to restrict your attention to local inertial frames as opposed to global non-inertial ones, then relative to these specific coordinate systems, the speed of a light ray will be the same at any point on its path (if you think Einstein would disagree on either of these points, please provide writings of his where he specifically addresses either the issue of coordinate-invariance for A, or the issue of speed in local inertial frames for B). Likewise, it’s obvious Ellis would agree that the coordinate speed of light does vary in non-inertial coordinate systems (if you think Ellis would disagree, please provide a quote where he specifically discusses coordinate speed in non-inertial coordinate systems). So again, there would be no disagreement between them about any real physical question involving the speed of light; it’s just a matter of different possible ways of defining what you mean by that phrase.