The Big Questions

The other day I mused on Twitter about three big origin questions: the origin of the universe, the origin of life, and the origin of consciousness. Which isn’t to say they are related, just that they’re all interesting and important (and currently nowhere near solved). Physicists have taken stabs at the life question, but (with a few dramatic exceptions) they’ve mostly stayed away from consciousness. Probably for the best.

Here’s Ed Witten giving his own personal — and characteristically sensible — opinion, which is that consciousness is a really knotty problem, although not so difficult that we should start contemplating changing the laws of physics in order to solve it. Though I am more optimistic than he is that we’ll understand it on a reasonable timescale. (Hat tip to Ash Jogalekar.)

Anyone seriously interested in tackling these big questions would be well-served by acknowledging that much (most? almost all?) progress in science is incremental, sneaking up on major discoveries by a series of small steps rather than leaping right to a dramatic new paradigm. Even if you want to understand the origin of the universe, it might behoove you to think about some more specific and tractable problems, like the nature of quantum fluctuations in inflation, or the emergence of spacetime in string theory. If you want to understand the origin of consciousness, it’s a good strategy to think about something like our perception of color, with the idea of working your way up to the more challenging issues.

Conversely, it’s these big questions that attract crackpots like honey attracts flies. I get a lot of emails (and physical letters) from cranks, but they never have a new theory of the branching ratio of the Higgs boson into four leptons; it’s always about the nature of space and time and everything. It’s too easy for anyone to have an opinion about these big questions, whether or not those opinions are worth paying attention to.

All of which leads up to saying: it’s still worth tackling the big questions! Start small, but think big. Because they are so hard, it’s too easy to make fun of attempts to solve the biggest questions, or to imagine that they are irreducibly mysterious and will never be solved. I wouldn’t be at all surprised if we had quite compelling pictures of the origin of the universe, life, and consciousness within the next hundred years. But only if we’re willing to tackle the big problems seriously.

Posted in Science | 71 Comments

Guest Post: An Interview with Jamie Bock of BICEP2

If you’re reading this you probably know about the BICEP2 experiment, a radio telescope at the South Pole that measured a particular polarization signal known as “B-modes” in the cosmic microwave background radiation. Cosmologists were very excited at the prospect that the B-modes were the imprint of gravitational waves originating from a period of inflation in the primordial universe; now, with more data from the Planck satellite, it seems plausible that the signal is mostly due to dust in our own galaxy. The measurements that the team reported were completely on-target, but our interpretation of them has changed — we’re still looking for direct evidence for or against inflation.

Here I’m very happy to publish an interview that was carried out with Jamie Bock, a professor of physics at Caltech and a senior research scientist at JPL, who is one of the leaders of the BICEP2 collaboration. It’s a unique look inside the workings of an incredibly challenging scientific effort.


New Results from BICEP2: An Interview with Jamie Bock

What does the new data from Planck tell you? What do you know now?

A scientific race has been under way for more than a decade among a dozen or so experiments trying to measure B-mode polarization, a telltale signature of gravitational waves produced from the time of inflation. Last March, BICEP2 reported a B-mode polarization signal, a twisty polarization pattern measured in a small patch of sky. The amplitude of the signal we measured was surprisingly large, exceeding what we expected for galactic emission. This implied we were seeing a large gravitational wave signal from inflation.

We ruled out galactic synchrotron emission, which comes from electrons spiraling in the magnetic field of the galaxy, using low-frequency data from the WMAP [Wilkinson Microwave Anisotropy Probe] satellite. But there were no data available on polarized galactic dust emission, and we had to use models. These models weren’t starting from zero; they were built on well-known maps of unpolarized dust emission, and, by and large, they predicted that polarized dust emission was a minor constituent of the total signal.

Obviously, the answer here is of great importance for cosmology, and we have always wanted a direct test of galactic emission using data in the same piece of sky so that we can test how much of the BICEP2 signal is cosmological, representing gravitational waves from inflation, and how much is from galactic dust. We did exactly that with galactic synchrotron emission from WMAP because the data were public. But with galactic dust emission, we were stuck, so we initiated a collaboration with the Planck satellite team to estimate and subtract polarized dust emission. Planck has the world’s best data on polarized emission from galactic dust, measured over the entire sky in multiple spectral bands. However, the polarized dust maps were only recently released.

On the other side, BICEP2 gives us the highest-sensitivity data available at 150 GHz to measure the CMB. Interestingly, the two measurements are stronger in combination. We get a big boost in sensitivity by putting them together. Also, the detectors for both projects were designed, built, and tested at Caltech and JPL, so I had a personal interest in seeing that these projects worked together. I’m glad to say the teams worked efficiently and harmoniously together.

What we found is that when we subtract the galaxy, we just see noise; no signal from the CMB is detectable. Formally we can say at least 40 percent of the total BICEP2 signal is dust and less than 60 percent is from inflation.

How do these new data shape your next steps in exploring the earliest moments of the universe?

It is the best we can do right now, but unfortunately the result with Planck is not a very strong test of a possible gravitational wave signal. This is because the process of subtracting galactic emission effectively adds more noise into the analysis, and that noise limits our conclusions. While the inflationary signal is less than 60 percent of the total, that is not terribly informative, leaving many open questions. For example, it is quite possible that the noise prevents us from seeing part of the signal that is cosmological. It is also possible that all of the BICEP2 signal comes from the galaxy. Unfortunately, we cannot say more because the data are simply not precise enough. Our ability to measure polarized galactic dust emission in particular is frustratingly limited.

Figure 1: Maps of CMB polarization produced by BICEP2 and Keck Array. The maps show the ‘E-mode’ polarization pattern, a signal from density variations in the CMB, not gravitational waves. The polarization is given by the length and direction of the lines, with a coloring to better show the sign and amplitude of the E-mode signal. The tapering toward the edges of the map is a result of how the instruments observed this region of sky. While the E-mode pattern is about 6 times brighter than the B-mode signal, it is still quite faint. Tiny variations of only 1 millionth of a degree kelvin are faithfully reproduced across these multiple measurements at 150 GHz, and in new Keck data at 95 GHz still under analysis. The very slight color shift visible between 150 and 95 GHz is due to the change in the beam size.

However, there is good news to report. In this analysis, we added new data obtained in 2012–13 from the Keck Array, an instrument with five telescopes and the successor to BICEP2 (see Fig. 1). These data are at the same frequency band as BICEP2—150 GHz—so while they don’t help subtract the galaxy, they do increase the total sensitivity. The Keck Array clearly detects the same signal detected by BICEP2. In fact, every test we can do shows the two are quite consistent, which demonstrates that we are doing these difficult measurements correctly (see Fig. 2). The BICEP2/Keck maps are also the best ever made, with enough sensitivity to detect signals that are a tiny fraction of the total.

Figure 2: A power spectrum of the B-mode polarization signal that plots the strength of the signal as a function of angular frequency. The data show a signal significantly above what is expected for a universe without gravitational waves, given by the red line. The excess peaks at angular scales of about 2 degrees. The independent measurements of BICEP2 and Keck Array shown in red and blue are consistent within the errors, and their combination is shown in black. Note the sets of points are slightly shifted along the x-axis to avoid overlaps.

In addition, Planck’s measurements over the whole sky show the polarized dust is fairly well behaved. For example, the polarized dust has nearly the same spectrum across the sky, so there is every reason to expect we can measure and remove dust cleanly.

To better subtract the galaxy, we need better data. We aren’t going to get more data from Planck because the mission has finished. The best way is to measure the dust ourselves by adding new spectral bands to our own instruments. We are well along in this process already. We added a second band to the Keck Array last year at 95 GHz and a third band this year at 220 GHz. We just installed the new BICEP3 instrument at 95 GHz at the South Pole (see Fig. 3). BICEP3 is a single telescope that will soon be as powerful as all five Keck Array telescopes put together. At 95 GHz, Keck and BICEP3 should surpass BICEP2’s 150 GHz sensitivity by the end of this year, and the two will be a very powerful combination indeed. If we switch the Keck Array entirely over to 220 GHz starting next year, we can get a third band to a similar depth.

Figure 3: BICEP3 installed and carrying out calibration measurements off a reflective mirror placed above the receiver. The instrument is housed within a conical reflective ground shield to minimize the brightness contrast between the warm earth and cold space. This picture was taken at the beginning of the winter season, with no physical access to the station for the next 8 months, when BICEP3 will conduct astronomical observations (Credit: Sam Harrison)

Finally, this January the SPIDER balloon experiment, which is also searching the CMB for evidence of inflation, completed its first flight, outfitted with comparable sensitivity at 95 and 150 GHz. Because SPIDER floats above the atmosphere (see Fig. 4), we can also measure the sky on larger spatial scales. This all adds up to make the coming years very exciting.

Figure 4: View of the earth and the edge of space, taken from an optical camera on the SPIDER gondola at float altitude shortly after launch. Clearly visible below is Ross Island, with volcanos Mt. Erebus and Mt. Terror and the McMurdo Antarctic base, the Royal Society mountain range to the left, and the edge of the Ross permanent ice shelf. (Credit: SPIDER team).

Why did you make the decision last March to release results? In retrospect, do you regret it?

We knew at the time that any news of a B-mode signal would cause a great stir. We started working on the BICEP2 data in 2010, and our standard for putting out the paper was that we were certain the measurements themselves were correct. It is important to point out that, throughout this episode, our measurements basically have not changed. As I said earlier, the initial BICEP2 measurement agrees with new data from the Keck Array, and both show the same signal. For all we know, the B-mode polarization signal measured by BICEP2 may contain a significant cosmological component—that’s what we need to find out.

The question really is, should we have waited until better data were available on galactic dust? Personally, I think we did the right thing. The field needed to be able to react to our data and test the results independently, as we did in our collaboration with Planck. This process hasn’t ended; it will continue with new data. Also, the searches for inflationary gravitational waves are influenced by these findings, and it is clear that all of the experiments in the field need to focus more resources on measuring the galaxy.

How confident are you that you will ultimately find conclusive evidence for primordial gravitational waves and the signature of cosmic inflation?

I don’t have an opinion about whether or not we will find a gravitational wave signal—that is why we are doing the measurement! But any result is so significant for cosmology that it has to be thoroughly tested by multiple groups. I am confident that the measurements we have made to date are robust, and the new data we need to subtract the galaxy more accurately are starting to pour forth. The immediate path forward is clear: we know how to make these measurements at 150 GHz, and we are already applying the same process to the new frequencies. Doing the measurements ourselves also means they are uniform so we understand all of the errors, which, in the end, are just as important.

What will it mean for our understanding of the universe if you don’t find the signal?

The goal of this program is to learn how inflation happened. Inflation requires matter-energy with an unusual repulsive property in order to rapidly expand the universe. The physics are almost certainly new and exotic, at energies too high to be accessed with terrestrial particle accelerators. CMB measurements are one of the few ways to get at the inflationary physics, and we need to squeeze them for all they are worth. A gravitational wave signal is very interesting because it tells us about the physical process behind inflation. A detection of the polarization signal at a high level means that certain models of inflation, perhaps along the lines of the models first developed, are a good explanation.

But here again is the real point: we also learn more about inflation if we can rule out polarization from gravitational waves. No detection at 5 percent or less of the total BICEP2 signal means that inflation is likely more complicated, perhaps involving multiple fields, although there are certainly other possibilities. Either way is a win, and we’ll find out more about what caused the birth of the universe 13.8 billion years ago.

Our team dedicated itself to the pursuit of inflationary polarization 15 years ago fully expecting a long and difficult journey. It is exciting, after all this work, to be at this stage where the polarization data are breaking into new ground, providing more information about gravitational waves than we learned before. The BICEP2 signal was a surprise, and its ultimate resolution is still a work in progress. The data we need to address these questions about inflation are within sight, and whatever the answers are, they are going to be interesting, so stay tuned.

Posted in Guest Post, Science | 10 Comments

I Wanna Live Forever

If you’re one of those people who look the universe in the eyeball without flinching, choosing to accept uncomfortable truths when they are supported by the implacable judgment of Science, then you’ve probably acknowledged that sitting is bad for you. Like, really bad. If you’re not convinced, the conclusions are available in helpful infographic form; here’s an excerpt.

And, you know, I sit down an awful lot. Doing science, writing, eating, playing poker — my favorite activities are remarkably sitting-based.

So I’ve finally broken down and done something about it. On the good advice of Carl Zimmer, I’ve augmented my desk at work with a Varidesk on top. The desk itself was formerly used by Richard Feynman, so I wasn’t exactly going to give that up and replace it with a standing desk. But this little gizmo lets me spend most of my time at work on my feet instead of sitting on my butt, while preserving the previous furniture.

It’s a pretty nifty device, actually. Room enough for my laptop, monitor, keyboard, mouse pad, and the requisite few cups for coffee. Most importantly for a lazybones like me, it doesn’t force you to stand up absolutely all the time; pull some handles and the whole thing gently settles down to desktop level, ready for your normal chair-bound routine.

We’ll see how the whole thing goes. It’s one thing to buy something that allows you to stand while working; it’s another to actually do it. But at least I feel like I’m trying to be healthier. I should go have a sundae to celebrate.

Posted in Health, Personal | 34 Comments

The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics

Longtime readers know that I’ve made a bit of an effort to help people understand, and perhaps even grow to respect, the Everett or Many-Worlds Interpretation of Quantum Mechanics (MWI). I’ve even written papers about it. It’s a controversial idea and far from firmly established, but it’s a serious one, and deserves serious discussion.

Which is why I become sad when people continue to misunderstand it. And even sadder when they misunderstand it for what are — let’s face it — obviously wrong reasons. The particular objection I’m thinking of is:

MWI is not a good theory because it’s not testable.

It has appeared recently in this article by Philip Ball — an essay whose snidely aggressive tone is matched only by the consistency with which it is off-base. Worst of all, the piece actually quotes me, explaining why the objection is wrong. So clearly I am either being too obscure, or too polite.

I suspect that almost everyone who makes this objection doesn’t understand MWI at all. This is me trying to be generous, because that’s the only reason I can think of why one would make it. In particular, if you were under the impression that MWI postulated a huge number of unobservable worlds, then you would be perfectly in your rights to make that objection. So I have to think that the objectors actually are under that impression.

An impression that is completely incorrect. The MWI does not postulate a huge number of unobservable worlds, misleading name notwithstanding. (One reason many of us like to call it “Everettian Quantum Mechanics” instead of “Many-Worlds.”)

Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate. And the actual postulates of the theory are quite simple indeed:

  1. The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
  2. The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.

That is, as they say, it. Notice you don’t see anything about worlds in there. The worlds are there whether you like it or not, sitting in Hilbert space, waiting to see whether they become actualized in the course of the evolution. Notice, also, that these postulates are eminently testable — indeed, even falsifiable! And once you make them (and you accept an appropriate “past hypothesis,” just as in statistical mechanics, and are considering a sufficiently richly-interacting system), the worlds happen automatically.
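To see how worlds fall out of those two postulates, here is a minimal numerical toy (my own illustration; the two-qubit setup, the CNOT shortcut for the interaction, and the variable names are all assumptions of the sketch, not anything from the formal literature). A system qubit in superposition interacts unitarily with an apparatus qubit, and tracing out the apparatus reveals two decohered branches.

```python
import numpy as np

# System qubit in an equal superposition; apparatus qubit "ready" in |0>.
system = np.array([1.0, 1.0]) / np.sqrt(2)
apparatus = np.array([1.0, 0.0])
state = np.kron(system, apparatus)   # one joint state in a 4-dim Hilbert space

# Measurement-like interaction: flip the apparatus iff the system is |1>
# (a CNOT gate).  This is ordinary unitary Schrodinger evolution in
# integrated form -- no collapse postulate anywhere.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = cnot @ state

# What does the system alone look like?  Trace out the apparatus.
rho = np.outer(state, state).reshape(2, 2, 2, 2)
rho_system = np.einsum('ajbj->ab', rho)
print(rho_system)   # diag(0.5, 0.5): two equal-weight branches, no coherence
```

The off-diagonal elements that encoded the superposition are gone from the system’s reduced state; the two branches emerged from nothing but a vector in Hilbert space and a unitary evolution, which is the whole point.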

Given that, you can see why the objection is dispiritingly wrong-headed. You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away. This distinction between what is postulated (which should be testable) and everything that is derived (which clearly need not be) seems pretty straightforward to me, but is a favorite thing for people to get confused about.

Ah, but the MWI-naysayers say (as Ball actually does) that every version of quantum mechanics has those two postulates or something like them, so testing them doesn’t really test MWI. So what? If you have a different version of QM (perhaps what Ted Bunn has called a “disappearing-world” interpretation), it must somehow differ from MWI, presumably by either changing the above postulates or adding to them. And in that case, if your theory is well-posed, we can very readily test those proposed changes. In a dynamical-collapse theory, for example, the wave function does not simply evolve according to the Schrödinger equation; it occasionally collapses (duh) in a nonlinear and possibly stochastic fashion. And we can absolutely look for experimental signatures of that deviation, thereby testing the relative adequacy of MWI vs. your collapse theory. Likewise in hidden-variable theories, one could actually experimentally determine the existence of the new variables. Now, it’s true, any such competitor to MWI probably has a limit in which the deviations are very hard to discern — it had better, because so far every experiment is completely compatible with the above two axioms. But that’s hardly the MWI’s fault; just the opposite.
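As a schematic of what such a deviation looks like (this particular position-localization form is a generic stand-in for the family of collapse models, not any specific proposal discussed above), a dynamical-collapse theory modifies the evolution of the density matrix along these lines:

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\left[H, \rho\right]
                   - \lambda \left[\hat{x}, \left[\hat{x}, \rho\right]\right]
```

The first term is ordinary unitary evolution; the second suppresses superpositions of distinct positions at a rate set by the new parameter $\lambda$, which experiments can bound from above. Taking $\lambda \to 0$ recovers the two Everettian postulates exactly, which is the sense in which it is the competitors, not MWI, that carry the extra testable structure.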

The people who object to MWI because of all those unobservable worlds aren’t really objecting to MWI at all; they just don’t like and/or understand quantum mechanics. Hilbert space is big, regardless of one’s personal feelings on the matter.

Which saddens me, as an MWI proponent, because I am very quick to admit that there are potentially quite good objections to MWI, and I would much rather spend my time discussing those, rather than the silly ones. Despite my efforts and those of others, it’s certainly possible that we don’t have the right understanding of probability in the theory, or why it’s a theory of probability at all. Similarly, despite the efforts of Zurek and others, we don’t have an absolutely airtight understanding of why we see apparent collapses into certain states and not others. Heck, you might be unconvinced that the above postulates really do lead to the existence of distinct worlds, despite the standard decoherence analysis; that would be great, I’d love to see the argument, it might lead to a productive scientific conversation. Should we be worried that decoherence is only an approximate process? How do we pick out quasi-classical realms and histories? Do we, in fact, need a bit more structure than the bare-bones axioms listed above, perhaps something that picks out a preferred set of observables?

All good questions to talk about! Maybe someday the public discourse about MWI will catch up with the discussion that experts have among themselves, evolve past self-congratulatory sneering about all those unobservable worlds, and share in the real pleasure of talking about the issues that matter.

Posted in Science | 115 Comments

Problem Book in Relativity and Gravitation: Free Online!

If I were ever to publish a second edition of Spacetime and Geometry — unlikely, but check back in another ten years — one thing I would like to do would be to increase the number of problems at the end of each chapter. I like the problems that are there, but they certainly could be greater in number. And there is no solutions manual, to the chagrin of numerous professors over the last decade.

What I usually do, when people ask for solutions and/or more problems, is suggest that they dig up a copy of the Problem Book in Relativity and Gravitation by Lightman, Press, Price, and Teukolsky. It’s a wonderful resource, with twenty chapters chock-full of problems, all with complete solutions in the back. A great thing to have for self-study. The book is a bit venerable, dating from 1975, and the typesetting isn’t the most modern; but the basics of GR haven’t changed in that time, and the notation and level are a perfect fit for my book.

And now everyone can have it for free! Where by “now” I mean “for the last five years,” although somehow I never heard of this. Princeton University Press, the publisher, gave permission to put the book online, for which students everywhere should be grateful.

Problem Book in Relativity and Gravitation

If you’re learning (or teaching) general relativity, you owe it to yourself to check it out.

Posted in Science | 9 Comments

New Course: The Higgs Boson and Beyond

Happy to announce that I have a new course out with The Great Courses (produced by The Teaching Company). This one is called The Higgs Boson and Beyond, and consists of twelve half-hour lectures. I previously have done two other courses for them: Dark Matter and Dark Energy, and Mysteries of Modern Physics: Time. Both of those were 24 lectures each, so this time we’re getting to the good stuff more quickly.

The inspiration for the course was, naturally, the 2012 discovery of the Higgs, and you’ll be unsurprised to learn that there is some overlap with my book The Particle at the End of the Universe. It’s certainly not just me reading the book, though; the lecture format is very different from the written word, and I’ve adjusted the topics and order appropriately. Here’s the lineup:

  1. The Importance of the Higgs Boson
  2. Quantum Field Theory
  3. Atoms to Particles
  4. The Power of Symmetry
  5. The Higgs Field
  6. Mass and Energy
  7. Colliding Particles
  8. Particle Accelerators and Detectors
  9. The Large Hadron Collider
  10. Capturing the Higgs Boson
  11. Beyond the Standard Model
  12. Frontiers: Higgs in Space

Because it is a course, the presentation here is in a more strictly logical order than it is in the book, starting from quantum field theory and working our way up. It’s still aimed at a completely non-expert audience, though a bit of enthusiasm for physics will be helpful for grappling with the more challenging material. And it’s available in both audio-only or video — but I have to say they did a really nice job with the graphics this time around, so the video is worth having.

And it’s on sale! Don’t know how long that will last, but there’s a big difference between regular prices at The Great Courses and the sale prices. A bargain either way!

Posted in Higgs, Personal | 27 Comments

The State of the Early Universe

Well hello, blog. It’s been too long! Feels good to be back.

The big cosmological excitement this week was the announcement of new cosmic microwave background measurements. These include a big release of new papers from the Planck satellite, as well as a joint polarization analysis combining data from BICEP2, the Keck array, and Planck.

Polarization measurements from Planck superimposed on CMB temperature anisotropies. From the Planckoscope, h/t Bob McNees and Raquel Ribeiro.

The good news is: we understand the current universe pretty darn well! So much so, in fact, that even an amazingly high-precision instrument such as Planck has a hard time discovering truly new and surprising things about cosmology. Hence, the Planck press releases chose to highlight the finding that the earliest stars formed about 0.1 billion years later than had previously been thought. Which is an awesome piece of science, but doesn’t quite rise to the level of excitement that other possible discoveries might have reached.

Power spectrum of CMB temperature fluctuations, from Planck. Now that is some agreement between theory and experiment!

For example, the possibility that we had seen primordial gravitational waves from inflation, as the original announcement of the BICEP2 results suggested back in March. If you’ll remember, the polarization of the CMB can be mathematically decomposed into “E-modes,” which look like gradients and arise naturally from the perturbations in density that we all know and love, and “B-modes,” which look like curls and are not produced (in substantial amounts) from density perturbations. They could be produced by gravitational waves, which in turn could be generated during cosmic inflation — so finding them is a very big deal, indeed.

A big deal that apparently hasn’t happened. As has been suspected for a while now, while BICEP2 did detect B-modes, they seem to have been generated by dust in our galaxy, rather than by gravitational waves during inflation. That is the pretty definitive conclusion from the new Planck/BICEP2/Keck joint analysis.

And therefore, what we had hoped was a detection of primordial gravitational waves now turns into a less-thrilling (but equally scientifically crucial) upper limit. Here’s one way of looking at the situation now. On the horizontal axis we have ns, the “tilt” in the power spectrum of perturbations, i.e. the variation in the amplitude of those perturbations on different distances across space. And on the vertical axis we have r, the ratio of the gravitational waves to the ordinary density perturbations. The original BICEP2 interpretation was that we had discovered r = 0.2; now we see that r is less than 0.15, probably less than 0.10, depending on which pieces of information you combine to get your constraint. No sign that it’s anything other than zero.
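In the standard parametrization (conventions assumed here; the pivot scale $k_*$ differs between analyses), the two axes of that plot are defined by the scalar and tensor power spectra:

```latex
\mathcal{P}_s(k) = A_s \left(\frac{k}{k_*}\right)^{n_s - 1},
\qquad
\mathcal{P}_t(k) = A_t \left(\frac{k}{k_*}\right)^{n_t},
\qquad
r \equiv \frac{A_t}{A_s}
```

so $n_s = 1$ corresponds to perfectly scale-invariant density perturbations, and $r$ measures gravitational-wave (tensor) power relative to density-perturbation (scalar) power.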

Current constraints on the “tilt” of the primordial perturbations (horizontal axis) and the contribution from gravitational waves (vertical axis).

So what have we learned? Here are some take-away messages.

Posted in Science | 26 Comments

We Are All Machines That Think

My answer to this year’s Edge Question, “What Do You Think About Machines That Think?”


Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.

As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.

We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.

Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.

From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.

What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.

Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.

Posted in Technology | 141 Comments

Dark Matter, Explained

If you’ve ever wondered about dark matter, or been asked puzzled questions about it by your friends, now you have something to point to: this charming video by 11-year-old Lucas Belz-Koeling. (Hat tip Sir Harry Kroto.)

The title references “Draw My Life style,” which is (the internet informs me) a label given to this kind of fast-motion photography of someone drawing on a white board.

You go, Lucas. I doubt I would have been doing anything quite this good at that age.

Posted in Science | 38 Comments

A Simple Form of Poker “Essentially” Solved

You know it’s a good day when there are refereed articles in Science about poker. (Enthusiasm slightly dampened by the article being behind a paywall, but some details here.)

Poker, of course, is a game of incomplete information. You don’t know your opponent’s cards, they don’t know yours. Part of your goal should be to keep it that way: you don’t want to give away information that would let your opponent figure out what you have.

As a result, the best way to play poker (against a competent opponent) is to use a mixed strategy: in any given situation, you want to have different probabilities for taking various actions, rather than a deterministic assignment of the best thing to do. If, for example, you always raise with certain starting hands, and always call with others, an attentive player will figure that out, and thereby gain a great deal of information about your hand. It’s much better to sometimes play weak hands as if they are strong (bluffing) and strong hands as if they are weak (slow-playing). The question is: how often should you be doing that?
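For a single bet on the river, the indifference arithmetic behind “how often” can be made concrete. This is a toy calculation of my own (not from the Science paper): the bettor either holds the best hand or a pure bluff, and each side randomizes just enough to make the other indifferent.

```python
def indifference_frequencies(pot, bet):
    """Toy river-bet model with a polarized bettor (nuts or air).

    bluff_fraction: share of the bettor's betting range that should be
    bluffs, chosen so the caller's expected value of calling is zero:
        bluff * (pot + bet) - (1 - bluff) * bet = 0
    call_frequency: how often the caller should call, chosen so the
    bluffer is indifferent between bluffing and giving up:
        (1 - call) * pot - call * bet = 0
    """
    bluff_fraction = bet / (pot + 2 * bet)
    call_frequency = pot / (pot + bet)
    return bluff_fraction, call_frequency

# With a pot-sized bet, one bet in three should be a bluff,
# and the caller should call half the time.
bluff, call = indifference_frequencies(pot=1.0, bet=1.0)
print(bluff, call)  # 0.3333333333333333 0.5
```

Note the structure: neither frequency depends on reading your opponent, only on the bet size relative to the pot — that’s what makes a mixed strategy unexploitable.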

Now researchers in a University of Alberta group that studies computerized poker have offered an “essentially” perfect strategy for a very simple form of poker: Heads-Up Limit Hold’em. In Hold’em, each player has two “hole” cards face down, and there are five “board” cards face-up in the middle of the table; your hand is the best five-card combination you can form from your hole cards and the board. “Heads-up” means that only two players are playing (much simpler than a multi-player game), and “limit” means that any bet comes in a single pre-specified amount (much simpler than “no-limit,” where you can bet anything from a fixed minimum up to the size of your stack or your opponent’s, whichever is smaller).

A simple game, but not very simple. Bets occur after each player gets their hole cards, and again after three cards (the “flop”) are put on the board, again after a fourth card (the “turn”), and finally after the last board card (the “river”) is revealed. If one player bets, the other can raise, and then the initial bettor can re-raise, up to a number of bets (typically four) that “caps” the betting.

So a finite number of things can possibly happen, which makes the game amenable to computer analysis. But it’s still a large number. There are about 3×10^17 “states” that one can reach in the game, where a “state” is defined by a certain number of bets having been made as well as the configuration of cards that have already been dealt. Not easy to analyze! Fortunately (or not), as a player with incomplete information you won’t be able to distinguish between all of those states — i.e. you don’t know your opponent’s hole cards. So it turns out that there are about 3×10^14 distinct “decision points” from which a player might end up having to act.
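The computation behind the Alberta result used CFR+, a refinement of counterfactual regret minimization, which iteratively drives average regret toward zero at every decision point so that the time-averaged strategy approaches equilibrium. As a stripped-down sketch of the core idea, here is plain regret matching on a tiny made-up two-action zero-sum game — my own illustration, nothing like the full solver:

```python
def regret_matching(payoff, iters=100_000):
    """Self-play regret matching on a two-player zero-sum matrix game.

    payoff[i][j] is the row player's payoff; the column player receives
    its negative. Returns each player's time-averaged mixed strategy,
    which converges to a Nash equilibrium in zero-sum games.
    """
    n, m = len(payoff), len(payoff[0])
    regret_row, regret_col = [0.0] * n, [0.0] * m
    avg_row, avg_col = [0.0] * n, [0.0] * m

    def current(regrets):
        # Play in proportion to positive accumulated regret; uniform if none.
        pos = [max(r, 0.0) for r in regrets]
        total = sum(pos)
        k = len(regrets)
        return [p / total for p in pos] if total > 0 else [1.0 / k] * k

    for _ in range(iters):
        s_row, s_col = current(regret_row), current(regret_col)
        for i in range(n):
            avg_row[i] += s_row[i]
        for j in range(m):
            avg_col[j] += s_col[j]
        # Expected payoff of the current strategy profile ...
        value = sum(s_row[i] * s_col[j] * payoff[i][j]
                    for i in range(n) for j in range(m))
        # ... compared against each pure action, accumulating regret.
        for i in range(n):
            regret_row[i] += sum(s_col[j] * payoff[i][j] for j in range(m)) - value
        for j in range(m):
            regret_col[j] += value - sum(s_row[i] * payoff[i][j] for i in range(n))

    return [x / iters for x in avg_row], [x / iters for x in avg_col]

# A toy game whose equilibrium is mixed: row plays (1/3, 2/3), column (2/3, 1/3).
game = [[0.0, 2.0],
        [1.0, 0.0]]
row_strategy, col_strategy = regret_matching(game)
```

The full solver has to run this kind of update over all 3×10^14 decision points at once, which is why the result is a landmark of engineering as much as of game theory.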

So all you need to do is: for each of those 300 trillion possibilities, assign the best possible mixed strategy — your probability to bet/check if there hasn’t already been a bet, fold/call/raise if there has — and act accordingly. Hey, nobody ever said being a professional poker player would be easy. (As you might know, human beings are very bad at randomness, so many professionals use the second hand on a wristwatch to generate pseudo-random numbers and guide their actions.)
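Acting on a mixed strategy just means sampling an action according to its probabilities; the watch’s second hand is standing in for a uniform random number. A minimal sketch (the particular probabilities here are made up for illustration):

```python
import random

# A hypothetical mixed strategy at one decision point facing a bet.
strategy = {"fold": 0.10, "call": 0.55, "raise": 0.35}

def act(strategy, roll=None):
    """Choose an action by inverting the cumulative distribution.

    `roll` is a uniform number in [0, 1); a human player can substitute
    the watch's second hand (seconds / 60) to the same effect.
    """
    if roll is None:
        roll = random.random()
    cumulative = 0.0
    for action, prob in strategy.items():
        cumulative += prob
        if roll < cumulative:
            return action
    return action  # guard against floating-point round-off at roll near 1

# Second hand at 36 seconds -> roll = 0.6, which lands in the "call" band
# (fold occupies [0, 0.10), call occupies [0.10, 0.65)).
print(act(strategy, roll=36 / 60))  # call
```

The point of randomizing this way is that even an opponent who knows your exact probabilities gains nothing from that knowledge — which is precisely what “unexploitable” means.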

Nobody is going to do that, of course.

Posted in Miscellany | 20 Comments