Thank you for the explanation and taking the time to write it. I appreciate it.

The minute and a half of chirping inspiral encodes a certain combination of the masses of the two neutron stars (the “chirp mass,” so-called because it controls the rate at which the binary’s frequency chirps) to high precision — GW170817 has a chirp mass of 1.188 solar masses, with roughly 0.3% error bars. There’s a moderately wide range of individual neutron star masses that are consistent with this chirp mass, which is why the reported individual neutron star masses span a somewhat wide range. The results also depend upon what kind of binary models one uses. When you examine the papers you’ll see different numbers depending on whether the models allow “high spins” or assume “low spins.” Low spins are consistent with binary neutron stars that are observed as radio pulsars in the galaxy, and so there’s a strong plausibility case that nature likes this. However, the laws of physics don’t forbid high spins, so it’s worth examining that case too and seeing how things change.
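A quick numeric sketch of that degeneracy, using the standard chirp-mass formula. The component-mass pairs below are made up for illustration, not the published GW170817 values:

```python
def chirp_mass(m1, m2):
    """Chirp mass in the same units as the inputs (solar masses here):
    M_chirp = (m1*m2)**(3/5) / (m1+m2)**(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Quite different component-mass pairs give nearly the same chirp mass,
# which is why the individual masses are far less well constrained than
# the chirp mass itself. These pairs are illustrative assumptions.
for m1, m2 in [(1.365, 1.365), (1.50, 1.25), (1.60, 1.17)]:
    print(f"m1={m1}, m2={m2} -> chirp mass {chirp_mass(m1, m2):.3f} Msun")
```

All three pairs land within a few thousandths of 1.188 solar masses, so the waveform's chirp rate alone cannot distinguish them; breaking the degeneracy requires higher-order waveform effects (and assumptions such as the spin priors mentioned above).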

In addressing your question of why things appear deterministic in classical physics while remaining probabilistic in QM, measurement was seen to mediate between those two worlds. Let’s see whether measurement can shed light on the nature of their difference.

Classical physics says:

A. Things physically exist whether one measures them or not.

B. Nothing can travel faster than light.

A standard QM experiment is the double-slit. The observed interference pattern behind the slit screen corresponds to the probabilities of where one can find a particle at measurement.
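That correspondence can be sketched numerically. This is a minimal sketch assuming ideal point slits in the far field (no single-slit envelope), with illustrative numbers for wavelength, slit separation, and screen distance that are not taken from the discussion above:

```python
import math

def two_slit_probability(x, wavelength, slit_sep, screen_dist):
    """Relative probability density of detecting a photon at screen
    position x, for an ideal far-field two-slit setup. Maxima occur
    where the path difference between slits is a whole wavelength."""
    phase = math.pi * slit_sep * x / (wavelength * screen_dist)
    return math.cos(phase) ** 2

# Illustrative parameters (assumptions): 500 nm light, 10 micron slit
# separation, screen 1 m away.
lam, d, L = 500e-9, 10e-6, 1.0
bright = two_slit_probability(0.0, lam, d, L)              # central maximum
dark = two_slit_probability(lam * L / (2 * d), lam, d, L)  # first dark fringe
```

The point is simply that the pattern is a probability density: any single photon can turn up at any position where this function is nonzero, which is what the thought experiment below scales up.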

Imagine now the double-slit screen extends across the observable universe, fully matching its diameter. When the experiment is repeated using this modified double-slit screen, a photon described above in “A” will violate “B”.

How so?

Per QM observation, a photon is capable of appearing, at the instant of measurement, at either end of the slit screen (and anywhere in between). Why? Because that slit screen, modified or not, corresponds to the interference pattern of the photon.

But in classical physics, for a detector located at the center of the observable universe, it takes two photons, one from each edge, at any given instant, to cover the width of the universe, now also equaling that of the modified slit screen.

One photon, if asked to do the job of those two photons, would violate “B”.

Conclusion: Local realism, which underpins our understanding of classical reality, when asked to explain the QM world, breaks down due to the internal conflict just noted.

Implication: QM reality is likely not the physical reality of classical physics. Familiar as this may sound, now we know the reason lies at the heart of GR local realism. What QM reality is remains a mystery. With this thought experiment, at least we could sense one thing it is not.

What has made this revelation possible is the scaling of a standard QM experiment to GR size, whereupon the existing physics of both GR and QM are brought to bear, in the same experiment, on the same particle, at the same instant of measurement.

Can the mass of each NS, or the total combined mass of the binary system, be derived from the GW chirp, perhaps in conjunction with other EM data, in a way that it couldn’t be derived with only EM data? Or can only the rate of change of the mass quadrupole be derived?

The idea that the electron is a smooth closed wave function everywhere inside the atom makes so much sense.

The “empty space” idea I have read so often always seemed somewhere between meaningless and confused to me.

To be fair, the actual claim was 69 +13/−19 km/s/Mpc at 95 per cent confidence. Later work showed that at least the error estimate, and perhaps the value itself, was wrong, or at least should be modified in a better analysis (which, of course, has been done). We nevertheless got the right answer, since various errors in different directions cancelled. However, in the words of Henry Norris Russell, “A hundred years hence all this work of mine will be utterly superseded: but I am getting the fun of it now.”

Yes. I helped write a paper on this, and at this conference also talked about some of the uncertainties due to other cosmological parameters, as well as the fact that the universe is not completely homogeneous.

One can, of course, use gravitational lensing to determine other cosmological parameters as well. I’m an author or co-author of several such papers; the JVAS analysis probably gives the best flavour.

Why do so few talk about this anymore? Basically because the systematic uncertainties are much larger than those of more modern tests such as the *m*–*z* relation from type-Ia supernovae (where it seems that we don’t have to worry about small-scale inhomogeneities and can even say something about how dark matter is distributed), the CMB, and BAO.

Of course, back when this work (determining cosmological parameters via gravitational lensing) was done, none of the more precise tests had produced any interesting results, and for a while gravitational lensing actually had the lead. This is probably now of interest mostly to historians of science, though. I was even on a paper using lensing to set a lower limit on the cosmological constant. OK, the limit was negative, but it might be the first paper to set such a limit which didn’t turn out to be based on data which were later shown to be too imprecise or just wrong. What was the first paper to actually set limits on the cosmological constant from observational data? Does anyone know of an older one than the one by Solheim?

Knowing that it will be corrected if too simplified or plain wrong, allow me to venture one explanation of why “we observe a fairly deterministic world while the true nature of reality is probabilistic”.

Because almost everything we observe is “post-measurement” already.

Right or wrong, the implication is that unmeasured quantum properties need not exist. But once measured, such properties are no longer described by probability. Rather, those properties have been “determined” (and are thus part of the deterministic reality).

Other common ways to describe the phenomenon of measurement (above) are to say that the wave function has collapsed, the Everettian worlds have branched, or that decoherence of the superposition has occurred.
