Memory-Driven Computing and The Machine

Back in November I received an unusual request: to take part in a conversation at the Discover expo in London, an event put on by Hewlett Packard Enterprise (HPE) to showcase their new technologies. The occasion was a project called simply The Machine — a step forward in what’s known as “memory-driven computing.” On the one hand, I am not in any sense an expert in high-performance computing technologies. On the other hand (full disclosure alert), they offered to pay me, which is always nice. What they were looking for was simply someone who could speak to the types of scientific research that would be aided by this kind of approach to large-scale computation. After looking into it, I thought that I could sensibly talk about some research projects that were relevant to the program, and the technology itself seemed very interesting, so I agreed to stop by London on the way from Los Angeles to a conference in Rome in honor of Georges Lemaître (who, coincidentally, was a pioneer in scientific computing).

Everyone knows about Moore’s Law: computer processing power doubles about every eighteen months. It’s that progress that has enabled the massive technological changes witnessed over the past few decades, from supercomputers to handheld devices. The problem is, exponential growth can’t go on forever, and indeed Moore’s Law seems to be ending. It’s a pretty fundamental problem — you can only make components so small, since atoms themselves have a fixed size. The best current technologies sport numbers like 30 atoms per gate and 6 atoms per insulator; we can’t squeeze things much smaller than that.

So how do we push computers to faster processing, in the face of such fundamental limits? HPE’s idea with The Machine (okay, the name could have been more descriptive) is memory-driven computing — change the focus from the processors themselves to the stored data they are manipulating. As I understand it (remember, not an expert), in practice this involves three aspects:

  1. Use “non-volatile” memory — a way to store data without actively using power.
  2. Wherever possible, use photonics rather than ordinary electronics. Photons move faster than electrons, and it takes less energy to get them moving.
  3. Switch the fundamental architecture, so that input/output and individual processors access the memory as directly as possible (a loose software analogy is sketched below).
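
As a very loose software analogy for that third point (my own sketch, nothing to do with HPE’s actual system or API): instead of each processor chewing on its own copy of the data and shipping results back and forth, everything operates in place on one shared pool of memory. On a single machine, Python’s shared_memory module gives a small taste of the idea:

```python
# Hypothetical illustration only: two "views" (standing in for processors)
# work directly on one shared block of memory, with no copying of the data.
from multiprocessing import shared_memory
import numpy as np

# One large dataset lives in a single shared block of bytes.
shm = shared_memory.SharedMemory(create=True, size=8 * 1_000_000)
data = np.ndarray((1_000_000,), dtype=np.float64, buffer=shm.buf)
data[:] = 0.0

# A second view attaches to the same bytes and reads/writes them in place.
view = np.ndarray((1_000_000,), dtype=np.float64, buffer=shm.buf)
view[0] = 42.0
assert data[0] == 42.0   # the change is immediately visible through the first view

del data, view           # drop the views before tearing down the shared block
shm.close()
shm.unlink()
```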

Here’s a promotional video, made by people who actually are experts.

The project is still in the development stage; you can’t buy The Machine at your local Best Buy. But the developers have imagined a number of ways that the memory-driven approach might change how we do large-scale computational tasks. Back in the early days of electronic computers, processing speed was so slow that it was simplest to store large tables of special functions — sines, cosines, logarithms, etc. — and just look them up as needed. With the huge capacities and swift access of memory-driven computing, that kind of “pre-computation” strategy becomes effective for a wide variety of complex problems, from facial recognition to planning airline routes.
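
Here’s a minimal sketch of what that pre-computation trade looks like in practice (my own toy example, not HPE code): spend memory once on a big table of an expensive function, then answer later queries by lookup and interpolation.

```python
import numpy as np

# Precompute sin(x) on a fine grid covering one full period.
N = 1_000_000
xs = np.linspace(0.0, 2 * np.pi, N)
table = np.sin(xs)

def sin_lookup(x):
    """Approximate sin(x) by linear interpolation into the precomputed table."""
    return np.interp(x % (2 * np.pi), xs, table)

# Memory is spent up front; each query is now just an array lookup.
print(sin_lookup(1.2345), np.sin(1.2345))
```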

It’s not hard to imagine how physicists would find this useful, so that’s what I briefly talked about in London. Two aspects in particular are pretty obvious. One is searching for anomalies in data, especially in real time. We’re in a data-intensive era in modern science, where very often we have so much data that we can only find signals we know how to look for. Memory-driven computing could offer the prospect of greatly enhanced searches for generic “anomalies” — patterns in the data that nobody had anticipated. You can imagine how that might be useful for something like LIGO’s search for gravitational waves, or the real-time sweeps of the night sky we anticipate from the Large Synoptic Survey Telescope.
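
To make the distinction a bit more concrete, here is a toy sketch (in no way LIGO’s or LSST’s actual analysis) of a generic anomaly scan: flag any stretch of a noisy data stream whose power is far above typical, without assuming a particular signal shape in advance.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)            # simulated detector noise
data[6000:6050] += 4 * np.hanning(50)     # an unanticipated blip of unknown shape

# Score each 50-sample window by its total power, and flag outliers.
window = 50
energy = np.convolve(data**2, np.ones(window), mode="valid")
threshold = energy.mean() + 5 * energy.std()

hits = np.where(energy > threshold)[0]
print("anomalous windows begin near sample:", hits.min() if hits.size else None)
```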

The other obvious application, of course, is on the theory side, to large-scale simulations. In my own bailiwick of cosmology, we’re doing better and better at including realistic physics (star formation, supernovae) in simulations of galaxy and large-scale structure formation. But there’s a long way to go, and improved simulations are crucial if we want to understand the interplay of dark matter and ordinary baryonic physics in accounting for the dynamics of galaxies. So if a dramatic new technology comes along that allows us to manipulate and access huge amounts of data (e.g. the current state of a cosmological simulation) rapidly, that would be extremely useful.

Like I said, HPE compensated me for my involvement. But I wouldn’t have gone along if I didn’t think the technology was intriguing. We take improvements in our computers for granted; keeping up with expectations is going to require some clever thinking on the part of engineers and computer scientists.

Quantum Is Calling

Hollywood celebrities are, in many important ways, different from the rest of us. But we are united by one crucial similarity: we are all fascinated by quantum mechanics.

This was demonstrated to great effect last year, when Paul Rudd and some of his friends starred with Stephen Hawking in the video Anyone Can Quantum, a very funny vignette put together by Spiros Michalakis and others at Caltech’s Institute for Quantum Information and Matter (and directed by Alex Winter, who was Bill in Bill & Ted’s Excellent Adventure). You might remember Spiros from our adventures in emerging space from quantum mechanics; when he’s not working as a mathematical physicist, he brings incredible energy to Caltech’s outreach programs.

Now the team is back with a new video, titled Quantum Is Calling. It stars the amazing Zoe Saldana, with an appearance by John Cho and the voices of Simon Pegg and Keanu Reeves, and of course Stephen Hawking once again. (One thing about Caltech: we do not mess around with our celebrity cameos.)

If you’re interested in the behind-the-scenes story, Zoe and Spiros and others give it to you here:

If on the other hand you want all the quantum-mechanical jokes explained, that’s where I come in:

Jokes should never be explained, of course. But quantum mechanics always should be, so this time we made an exception.

Thanksgiving

This year we give thanks for a feature of the physical world that many people grumble about rather than celebrate, but which is undeniably central to how Nature works at a deep level: the speed of light. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, and Riemannian Geometry.)

The speed of light in vacuum, traditionally denoted by c, is 299,792,458 meters per second. It’s exactly that, not just approximately; it turns out to be easier to measure intervals of time to very high precision than it is to measure distances in space, so we pin down the second experimentally (with atomic clocks), then define the meter to be 1/299,792,458 of the distance that light travels in one second. Personally I prefer to characterize c as “one light-year per year”; that’s equally exact, and it’s easier to remember all the significant figures that way.
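
A quick back-of-the-envelope check on the “one light-year per year” phrasing, using the Julian year of 365.25 days (the convention behind the light-year):

$$
1~\text{light-year} \equiv c \times (1~\text{yr}) = 299{,}792{,}458~\tfrac{\text{m}}{\text{s}} \times 31{,}557{,}600~\text{s} \approx 9.46 \times 10^{15}~\text{m},
$$

so c = 1 light-year per year holds exactly, by construction: the light-year is defined as the distance light covers in a year.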

There are two especially great things about the speed of light. One is that it’s a fixed, universal constant, as measured by inertial (unaccelerating) observers, in vacuum (empty space). Of course light can slow down if it propagates through a medium, but that’s hardly surprising. The other great thing is that it’s an upper limit; physical particles, as far as we know in the real world, always move at speeds less than or equal to c.

That first fact, the universal constancy of c, is the startling feature that set Einstein on the road to figuring out relativity. It’s a crazy claim at first glance: if two people are moving relative to each other (maybe because one is in a moving car and one is standing on the sidewalk) and they measure the speed of a third object (like a plane passing overhead) relative to themselves, of course they will get different answers. But not with light. I can be zipping past you at 99% of c, headed straight toward an oncoming light beam, and both you and I will measure it to be moving at the same speed. That’s only sensible if something is wonky about our conventional pre-relativity notions of space and time, which is what Einstein eventually figured out. It was his former teacher Minkowski who realized that the real implication is that we should think of the world as a single four-dimensional spacetime; Einstein initially scoffed at the idea as typically useless mathematical puffery, but of course it turned out to be central in his eventual development of general relativity (which explains gravity by allowing spacetime to be curved).
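
One way to see how this can possibly be consistent is the standard special-relativistic velocity-addition rule, which replaces the naive Galilean sum $u' = u + v$:

$$
u' = \frac{u + v}{1 + uv/c^2}, \qquad \text{so for a light beam } (u = c): \qquad u' = \frac{c + v}{1 + v/c} = c,
$$

no matter the relative speed $v$, whether 99% of c or anything else.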

Because the speed of light is universal, when we draw pictures of spacetime we can indicate the possible paths light can take through any point, in a way that will be agreed upon by all observers. Orienting time vertically and space horizontally, the result is the set of light cones — the pictorial way of indicating the universal speed-of-light limit on our motion through the universe. Moving slower than light means moving “upward through your light cones,” and that’s what all massive objects are constrained to do. (When you’ve really internalized the lessons of relativity, deep in your bones, you understand that spacetime diagrams should only indicate light cones, not subjective human constructs like “space” and “time.”)

Light Cones
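
In equations: taking an event at the origin, its light cone is the set of events whose separation satisfies

$$
c^2\,\Delta t^2 = \Delta x^2 + \Delta y^2 + \Delta z^2,
$$

while the worldline of any massive object stays strictly inside the cone, with $c^2\,\Delta t^2 > \Delta x^2 + \Delta y^2 + \Delta z^2$ between any two of its points. Because c is the same for every inertial observer, everyone agrees on which events lie inside, on, or outside a given cone, even when they disagree about “space” and “time” separately.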

The fact that the speed of light is such an insuperable barrier to the speed of travel is something that really bugs people. On everyday-life scales, c is incredibly fast; but once we start contemplating astrophysical distances, suddenly it seems maddeningly slow. It takes just over a second for light to travel from the Earth to the Moon; eight minutes to get to the Sun; over five hours to get to Pluto; four years to get to the nearest star; twenty-six thousand years to get to the galactic center; and two and a half million years to get to the Andromeda galaxy. That’s why almost all good space-opera science fiction takes the easy way out and imagines faster-than-light travel. (In the real world, we won’t ever travel faster than light, but that won’t stop us from reaching the stars; it’s much more feasible to imagine extending human lifespans by many orders of magnitude, or making long-term cryogenic storage practical. Not easy — but not against the laws of physics, either.)
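
For the curious, here is a little script reproducing those rough numbers (round-number distances, not precise ephemerides):

```python
C = 299_792_458.0              # speed of light, m/s
LY = 9.4607e15                 # meters in one light-year (approximate)
YEAR = 365.25 * 86400          # seconds in a Julian year

distances_m = {
    "Moon":            3.84e8,
    "Sun":             1.496e11,
    "Pluto (average)": 5.9e12,
    "nearest star":    4.24 * LY,
    "galactic center": 2.6e4 * LY,
    "Andromeda":       2.5e6 * LY,
}

for name, d in distances_m.items():
    t = d / C                  # light travel time in seconds
    print(f"{name:16s} {t:12.3g} s  =  {t / YEAR:10.3g} years")
```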

It’s understandable, therefore, that we sometimes get excited by breathless news reports about faster-than-light signals, even though such claims always eventually evaporate. But I think we should do better than just be grumpy about the finite speed of light. Like it or not, it’s an absolutely crucial part of the nature of reality. It didn’t have to be that way, in the space of all possible worlds; the Newtonian universe is a relatively sensible set of laws of physics, in which there is no speed-of-light barrier at all.

That would be a very different world indeed.

Gifford Lectures on Natural Theology

In October I had the honor of visiting the University of Glasgow to give the Gifford Lectures on Natural Theology. These are a series of lectures that date back to 1888, held at different Scottish universities: Glasgow, Aberdeen, Edinburgh, and St. Andrews. “Natural theology” is traditionally the discipline that attempts to learn about the nature of God via our experience of the world (in contrast to revelation or contemplation). The Gifford Lectures have always interpreted this remit rather broadly; many theologians have given the talks, but also people like Niels Bohr, Arthur Eddington, Hannah Arendt, Noam Chomsky, Carl Sagan, Richard Dawkins, and Steven Pinker.

Sometimes the speakers turn their lectures into short published books; in my case, I had just written a book that fit well into the topic, so I spoke about the ideas in The Big Picture. Unfortunately the first of the five lectures was not recorded, but the subsequent four were. Here are those recordings, along with a copy of my slides for the first talk. It’s not a huge loss, as many of the ideas in the first lecture can be found in previous talks I’ve given on the arrow of time; it’s about the evolution of our universe, how that leads to an arrow of time, and how that helps explain things like memory and cause/effect relations. The second lecture was on the Core Theory and why we think it will remain accurate in the face of new discoveries. The third lecture was on emergence and how different ways of talking about the world fit together, including discussions of effective field theory and why the universe itself exists. Lecture four dealt with the evolution of complexity, the origin of life, and the nature of consciousness. (I might have had to skip some details during that one.) And the final lecture was on what it all means, why we are here, and how to live in a universe that doesn’t come with any instructions. Enjoy!

(Looking at my YouTube channel makes me realize that I’ve been in a lot of videos.)

Lecture One: Cosmos, Time, Memory (slides only, no video)
Slideshare

Lecture Two: The Stuff of Which We Are Made

Lecture Three: Layers of Reality

Lecture Four: Simplicity, Complexity, Thought

Lecture Five: Our Place in the Universe

Talking About Dark Matter and Dark Energy

Trying to keep these occasional Facebook Live videos going. (I’ve looked briefly into other venues such as Periscope, but FB is really easy and anyone can view without logging in if they like.)

So here is one I did this morning, about why cosmologists think dark matter and dark energy are things that really exist. I talk in particular about a recent paper by Nielsen, Guffanti, and Sarkar that questioned the evidence for cosmic acceleration (I think the evidence is still very good), and one by Erik Verlinde suggesting that emergent gravity can modify Einstein’s general relativity on large scales to explain away dark matter (I think it’s an intriguing idea, but am skeptical it can ever fit the data from the cosmic microwave background).

Feel free to propose topics for future conversations, or make suggestions about the format.