December 2016

Memory-Driven Computing and The Machine

Back in November I received an unusual request: to take part in a conversation at the Discover expo in London, an event put on by Hewlett Packard Enterprise (HPE) to showcase their new technologies. The occasion was a project called simply The Machine — a step forward in what’s known as “memory-driven computing.” On the one hand, I am not in any sense an expert in high-performance computing technologies. On the other hand (full disclosure alert), they offered to pay me, which is always nice. What they were looking for was simply someone who could speak to the types of scientific research that would be aided by this kind of approach to large-scale computation. After looking into it, I thought that I could sensibly talk about some research projects that were relevant to the program, and the technology itself seemed very interesting, so I agreed to stop by London on the way from Los Angeles to a conference in Rome in honor of Georges Lemaître (who, coincidentally, was a pioneer in scientific computing).

Everyone knows about Moore’s Law: computer processing power doubles about every eighteen months. It’s that progress that has enabled the massive technological changes witnessed over the past few decades, from supercomputers to handheld devices. The problem is, exponential growth can’t go on forever, and indeed Moore’s Law seems to be ending. It’s a pretty fundamental problem — you can only make components so small, since atoms themselves have a fixed size. The best current technologies sport numbers like 30 atoms per gate and 6 atoms per insulator; we can’t squeeze things much smaller than that.

So how do we push computers to faster processing, in the face of such fundamental limits? HPE’s idea with The Machine (okay, the name could have been more descriptive) is memory-driven computing — change the focus from the processors themselves to the stored data they are manipulating. As I understand it (remember, not an expert), in practice this involves three aspects:

  1. Use “non-volatile” memory — a way to store data without actively using power.
  2. Wherever possible, use photonics rather than ordinary electronics. Photons move faster than electrons, and cost less energy to get moving.
  3. Switch the fundamental architecture, so that input/output and individual processors access the memory as directly as possible. (A rough code analogy follows this list.)
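
To make the third point a bit more concrete, here is a loose analogy in Python, emphatically not HPE's actual design: memory-mapping lets several workers read one shared pool of bytes in place, instead of shipping a private copy of the data to each processor. The file name and sizes are invented for illustration.

```python
import mmap

POOL_FILE = "shared_pool.bin"    # hypothetical file standing in for the pool
POOL_SIZE = 1024 * 1024          # 1 MB of pretend "fabric-attached" memory

# Create the shared pool once, filled with zeros.
with open(POOL_FILE, "wb") as f:
    f.write(b"\x00" * POOL_SIZE)

def worker(offset, length):
    """Map the same pool and read a slice in place; no private copy of
    the whole dataset gets shuttled to this 'processor'."""
    with open(POOL_FILE, "r+b") as f:
        pool = mmap.mmap(f.fileno(), POOL_SIZE)
        chunk = pool[offset:offset + length]  # direct access into shared bytes
        pool.close()
        return chunk

# Two "processors" reading different slices of the same memory pool.
print(len(worker(0, 16)), len(worker(512, 64)))
```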

Here’s a promotional video, made by people who actually are experts.

The project is still in the development stage; you can’t buy The Machine at your local Best Buy. But the developers have imagined a number of ways that the memory-driven approach might change how we do large-scale computational tasks. Back in the early days of electronic computers, processing speed was so slow that it was simplest to store large tables of special functions — sines, cosines, logarithms, etc. — and just look them up as needed. With the huge capacities and swift access of memory-driven computing, that kind of “pre-computation” strategy becomes effective for a wide variety of complex problems, from facial recognition to planning airline routes.
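
The lookup-table trade is easy to see in miniature. Here is a minimal Python sketch (grid size chosen arbitrarily): pay once to tabulate a function, then answer every later query with a cheap read and interpolation instead of a fresh computation.

```python
import math

# Pay once: tabulate sin(x) on a fine grid covering [0, 2*pi].
N = 100_000                      # grid resolution, chosen arbitrarily
STEP = 2 * math.pi / N
TABLE = [math.sin(i * STEP) for i in range(N + 1)]

def sin_lookup(x):
    """Approximate sin(x) by linear interpolation in the precomputed table."""
    x %= 2 * math.pi
    i = int(x / STEP)
    frac = x / STEP - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

print(sin_lookup(1.0), math.sin(1.0))  # agree to about nine decimal places
```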

It’s not hard to imagine how physicists would find this useful, so that’s what I briefly talked about in London. Two aspects in particular are pretty obvious. One is searching for anomalies in data, especially in real time. We’re in a data-intensive era in modern science, where very often we have so much data that we can only find signals we know how to look for. Memory-driven computing could offer the prospect of greatly enhanced searches for generic “anomalies” — patterns in the data that nobody had anticipated. You can imagine how that might be useful for something like LIGO’s search for gravitational waves, or the real-time sweeps of the night sky we anticipate from the Large Synoptic Survey Telescope.
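
To give a flavor of what a template-free search might look like, here is a toy Python sketch, far cruder than anything LIGO or LSST actually runs: flag any sample that strays several standard deviations from a rolling baseline, without assuming anything about the signal's shape. The window and threshold values are invented.

```python
import math
import random
from collections import deque

def anomaly_stream(samples, window=500, threshold=5.0):
    """Yield (index, value) for samples more than `threshold` standard
    deviations from the rolling mean: a crude, template-free 'anomaly'."""
    recent = deque(maxlen=window)
    for t, x in enumerate(samples):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((v - mean) ** 2 for v in recent) / window
            if var > 0 and abs(x - mean) > threshold * math.sqrt(var):
                yield t, x
        recent.append(x)

# Gaussian noise with one injected glitch at t = 5000.
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]
data[5000] += 8.0
print(list(anomaly_stream(data)))  # should report the glitch at index 5000
```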

The other obvious application, of course, is on the theory side, to large-scale simulations. In my own bailiwick of cosmology, we’re doing better and better at including realistic physics (star formation, supernovae) in simulations of galaxy and large-scale structure formation. But there’s a long way to go, and improved simulations are crucial if we want to understand the interplay of dark matter and ordinary baryonic physics in accounting for the dynamics of galaxies. So if a dramatic new technology comes along that allows us to manipulate and access huge amounts of data (e.g. the current state of a cosmological simulation) rapidly, that would be extremely useful.
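
For a hint of why memory matters here, consider a deliberately tiny direct-summation N-body step in Python. Production cosmology codes are enormously more sophisticated, but even this toy touches the entire particle state on every step; multiply the particle count by millions and moving that state in and out of memory becomes the bottleneck. Units and parameters are toy choices.

```python
import random

G, DT = 1.0, 0.01  # toy gravitational constant and timestep

def step(pos, vel, mass, eps=0.05):
    """One step of direct-summation gravity in 2D (toy version).
    Every particle's update reads every other particle's position,
    so the whole simulation state is touched each step."""
    n = len(pos)
    for i in range(n):
        ax = ay = 0.0
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + eps * eps) ** 1.5  # softened |r|^3
            ax += G * mass[j] * dx / r3
            ay += G * mass[j] * dy / r3
        vel[i][0] += ax * DT
        vel[i][1] += ay * DT
    for p, v in zip(pos, vel):  # drift positions after all velocity kicks
        p[0] += v[0] * DT
        p[1] += v[1] * DT

n = 100
pos = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
mass = [1.0 / n] * n
for _ in range(10):
    step(pos, vel, mass)
```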

Like I said, HPE compensated me for my involvement. But I wouldn’t have gone along if I didn’t think the technology was intriguing. We take improvements in our computers for granted; keeping up with expectations is going to require some clever thinking on the part of engineers and computer scientists.

Quantum Is Calling

Hollywood celebrities are, in many important ways, different from the rest of us. But we are united by one crucial similarity: we are all fascinated by quantum mechanics.

This was demonstrated to great effect last year, when Paul Rudd and some of his friends starred with Stephen Hawking in the video Anyone Can Quantum, a very funny vignette put together by Spiros Michalakis and others at Caltech’s Institute for Quantum Information and Matter (and directed by Alex Winter, who was Bill in Bill & Ted’s Excellent Adventure). You might remember Spiros from our adventures in emerging space from quantum mechanics, but when he’s not working as a mathematical physicist he brings incredible energy to Caltech’s outreach programs.

Now the team is back with a new video titled Quantum Is Calling. This one stars the amazing Zoe Saldana, with an appearance by John Cho and the voices of Simon Pegg and Keanu Reeves, and of course Stephen Hawking once again. (One thing about Caltech: we do not mess around with our celebrity cameos.)

Stephen Hawking + Zoe Saldana: Quantum is Calling ft. Keanu Reeves, Simon Pegg, John Cho, Paul Rudd

If you’re interested in the behind-the-scenes story, Zoe and Spiros and others give it to you here:

Behind the Scenes: Stephen Hawking + Zoe Saldana: Quantum is Calling

If on the other hand you want all the quantum-mechanical jokes explained, that’s where I come in:

The Science Behind Quantum Is Calling

Jokes should never be explained, of course. But quantum mechanics always should be, so this time we made an exception.
