Using Information to Extract Energy

There was some excitement last week about a Maxwell’s-Demon-type experiment conducted by Shoichi Toyabe and collaborators in Japan. (Costly Nature Physics article here; free arxiv version here.) It’s a great result, worth making a fuss about. But some commentators spun it as “converting information into energy.” That’s not quite right — it’s more like “using information to extract energy from a heat bath.”

Say you have a box of gas at some temperature, in a state of maximum entropy — thermodynamic equilibrium. That is, the gas is spread smoothly throughout the box. (We can safely ignore gravity.) There’s certainly energy in there, but it’s not very useful. Indeed, one way of thinking about entropy is as a measure of how useless a certain amount of energy is. If we have a low-entropy configuration, we can extract useful work from the energy inside, such as pushing a piston. If we have a high-entropy configuration, the energy is useless; there’s nothing we can do to consistently extract it.

Here’s an example from my book. Consider two pistons with the same number of gas particles inside, with the same total energy. But the top container is in a low-entropy state with all the gas on one side of the piston; the bottom container is in a high-entropy state with the gas equally spread out.

[Figure: extracting energy from a piston]

You see the difference — from the top configuration we can extract useful work by simply letting the gas expand and push the piston outward. In the process, the total energy of the gas goes down (it cools off). But in the bottom piston, nothing’s going to happen. There’s just as much energy inside, but we can’t get it out because it’s in a high-entropy state.
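To attach a formula to that difference (the standard ideal-gas result, assuming the expansion is done slowly while a bath holds the temperature at T; if the gas is isolated instead, it cools as it pays for the work, but the moral is the same): letting the gas go from volume V_i to V_f delivers an amount of work

 W = N kT \log(V_f/V_i) .

The top piston, whose gas starts at half the available volume, is good for N kT \log 2 of work; the bottom one, with no pressure difference across the piston, is good for none.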

In 1929, Leó Szilárd used a similar setup to establish an amazing result: the connection between energy and information. The connection is not that “information carries energy”; if I tell you some information about gas particles in a box, that doesn’t change their total energy. But it does help you extract that energy. Effectively, learning more information lowers the entropy of the gas. That’s a loosey-goosey statement, because there is more than one way to define “entropy”; but one reasonable definition is that the entropy is a measure of the information you don’t have about a system. (In the piston above, we know more about the gas in the low-entropy setup, since we have a better idea of where it is localized.)
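In Boltzmann’s formulation, that statement is nearly literal. The entropy is

 S = k \log \Omega ,

where \Omega counts the microstates consistent with everything you know about the system. Learn one bit (say, which half of the box a particle is in) and \Omega drops by a factor of 2, so the entropy drops by k \log 2.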

This being physics, there are equations. Szilárd showed that, if you have one bit of information about the system, you can use that to extract an amount of energy given by

 E = (\log 2) kT .

Here, k is Boltzmann’s constant, and T is the temperature of the system. That temperature is crucial; that’s where the energy actually comes from, not from the “information” (and the temperature will go down when you extract the energy).
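To get a sense of scale, here is a quick numerical check (a sketch in Python; room temperature is my choice of example, not the paper’s):

    import math

    k = 1.380649e-23   # Boltzmann's constant, in J/K
    T = 300.0          # room temperature, in K

    # Szilard's bound: the maximum energy extractable per bit of information
    E = math.log(2) * k * T
    print(f"kT log 2 at {T:.0f} K = {E:.3e} J")      # about 2.9e-21 J
    print(f"(in eV: {E / 1.602176634e-19:.4f})")     # about 0.018 eV

About three zeptojoules per bit, which is why nobody will be running their house off a Szilárd engine.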

The idea is called Szilárd’s Engine, and is beautiful in its simplicity. As a physicist, one always looks for the spherical cow. In this case, it’s a box of gas with just one particle, moving in one dimension. (In the real experiment, they used knowledge of a particle’s position to make it hop up a staircase.)

[Figure: Szilárd’s engine — a single particle in a box]

The equivalent of “maximum entropy” here is “we have no idea where the particle is.” There is energy in the box, equal to kT/2, but we can’t get it out.
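Where does that kT/2 come from? Equipartition: in thermal equilibrium, each quadratic degree of freedom carries an average energy

 \langle E \rangle = \frac{1}{2} kT ,

and our particle has just one, its motion along the single dimension of the box.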

But now imagine that someone gives us one bit of information: they tell us which side of the box the particle is on. Now we can get some energy out! All we have to do is wait until the particle is on the left-hand side of the box, and quickly slip in our piston, with its rod coming out to the right:

[Figure: Szilárd’s engine with the piston inserted and the particle on the left]

The particle will now bump into the piston, pushing it to the right, allowing us to do useful work, like lifting a very tiny bucket or something. In the process some of the particle’s energy is transferred to the piston, so we’ve extracted some energy from the box. Note that we could not have done this if we hadn’t been given the information — without knowing where the particle is, our piston would have lost energy just as often as it gained it, so on average we couldn’t have done any useful work.
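If you want to see those averages come out, here is a toy Monte Carlo of the argument (my own sketch of the idealized, quasi-static version, not the experiment’s protocol). With the bit, we always orient the piston correctly and collect kT \log 2 per cycle; guessing blindly, a wrong orientation costs us the same amount instead.

    import math
    import random

    LN2 = math.log(2)

    def cycle(kT, know_side):
        """One idealized Szilard cycle; returns the work we extract."""
        side = random.choice(("L", "R"))    # where the particle actually is
        guess = side if know_side else "L"  # no information: just always guess left
        # Correct orientation: the expanding gas raises our weight (+kT log 2).
        # Wrong orientation: the gas lowers the weight instead (-kT log 2).
        return kT * LN2 if guess == side else -kT * LN2

    n = 100_000
    informed = sum(cycle(1.0, True) for _ in range(n)) / n
    blind = sum(cycle(1.0, False) for _ in range(n)) / n
    print(f"average work per cycle, knowing the side: {informed:+.3f} kT")  # ~ +0.693
    print(f"average work per cycle, guessing blindly: {blind:+.3f} kT")     # ~  0.000

Knowing the side nets \log 2 \approx 0.693 kT every cycle; guessing averages to zero, just as claimed above.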

This clever thought experiment launched an entire field of research, connecting information to energy. “Information is physical,” the motto now goes. It’s fun to see (a version of) Szilárd’s engine actually be created in the lab.


26 thoughts on “Using Information to Extract Energy”

  1. But what about the energy wasted for monitoring the particle and slipping the piston into position? Without that being taken into account, the whole experiment is misleading at best.

  2. That is the most complicated refrigerator I have ever seen.

    A classical strategy for tackling Maxwell’s demon is to realize that the gate must open and close with some finite speed, and must have some finite mass. It has to move faster than the transit time of the slow particles that you wish to exclude. And in analogy to gambler’s ruin, there is no better strategy than to have the gate periodically open and close faster than the slow transit time; but to do this would require Work, and hence the generation of Heat and Entropy.

    A quantum strategy is to appeal to the uncertainty principle. For the gate to filter, we must know the position in space and time with great certainty. But this requires a greater uncertainty in momentum and energy, and thus a larger number of dimensions of our quantum state space in momentum and energy. And the logarithm of the dimension of the quantum state space is proportional to the entropy.

  3. I missed the simplest attack on the paradox: Ignore the gate and ask how does the demon know what is a fast particle and what is a slow particle?

    Determining speed requires interaction with the system, and hence generates heat in the mechanism that monitors the state of the system.

  4. Information is the formless, non-local, and timeless equator (zero unit of formation) of energy and mass.

  5. Wait a moment….

    So this ‘engine’ produces less energy than it consumes, right? If you put everything into a closed box, the 2nd law still holds?

    I can’t help but think of a toddler randomly throwing a ball. A computer analyses the trajectory and moves a staircase to exactly the right position so that the ball will jump upwards. Doesn’t sound so terribly efficient to me, but I’m probably missing what this is all about.

  6. Why not have piston rods extend out both sides of the box? When the particle bumps into the piston, it will move left or right: in either case, the piston rod that moves out can be geared to raise a small bucket. Now you don’t have to know which side of the piston the particle is located at to extract some energy, though you will know which side after the energy has been extracted. What am I missing here?

  7. Thanks, Sean, for another great post. I really appreciate the link to the free version of the article.

    I first learned about Maxwell’s Demon almost 25 years ago, in the context of learning about the second law. How elegant that now an Information Age demon is called on, not to cheat thermodynamics, but to extract free energy from the environment and apply it where we could not do so directly.

  8. There’s no free energy, Ted. Yes, you can “use information to extract energy”. What’s not discussed in the original post is how, to obtain said “information”, you must first spend quite a lot of energy, and thus increase the entropy of the whole “observer – experiment” system.

    So, no, this won’t create a perpetual motion machine. I hope that Carroll didn’t mean it that way; I hope he only meant to conclude with the simpler “Information is energy” motto…. (as if this was all news… I mean, what?!?)

  9. Aaron Sheldon: So use heavier particles, from which you can extract much more energy with the same amount of information and information gathering. There must be some deeper reason why this doesn’t work?

  10. Hey Sean!

    Another good name to mention in this context is Rolf Landauer, who worked out the “computational” aspects of Maxwell’s demon (and, in the process, put a lower bound on the heat dissipation of a computer.) To do this, it is not necessary to invoke quantum uncertainty, piston friction, particle monitoring costs, or anything else in particular; the information theoretic properties of the problem are sufficient.

    Put another way, if you look hard enough inside the demon, you find either a hidden low-temperature sink (the demon’s memory, e.g.) or unanalysed irreversible processes “going in the wrong direction” (erasing previously stored locations in the demon’s memory.)

    http://libros.unm.edu/search~S21?/tmaxwell%27s+demon/tmaxwells+demon/1%2C5%2C7%2CE/frameset&FF=tmaxwells+demon+entropy+information+computing&2%2C%2C2 has his original paper, as well as lots of follow-ups.
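(For reference, the bound Landauer’s analysis leads to: erasing one bit of memory in an environment at temperature T must dissipate at least

 Q \geq (\log 2) kT

of heat, exactly what the engine gained from its one bit, so the demon breaks even over a full cycle.)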

  11. Don’t believe Luis’ comment that there is no “free energy.” Yeah, there is such a thing when using “free energy” as a term of art in thermodynamics. RTFA people (where the A stands for article).

    Sean, you have my sincerest admiration for putting up with the comments section.

  12. Ted,

    What do you mean “term of art”? I read the article and I still don’t think there is such a thing as free energy.

    Ted probably means “free energy” as in the classical thermo state property, F = E - TS (or “A” if you’re into Arbeit). He doesn’t mean “free energy” as in getting something from nothing. The change in “free energy” of a (bio)physical system can be related to the limiting amount of useful work it does on its surroundings. In fact, following Ted’s RTFA advice, it seems the authors explain “free energy” in eq. 1 of the article.

    Of course, it’s possible I just didn’t catch the humor from Dave & Luis.

    Thanks for the post, Sean. Students learning biophysics and pchem will be very interested to read this (and the previous article in “Entropy” that somehow produced a lot of heat).
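(To make that bookkeeping concrete for the Szilárd expansion in the post, under the usual assumption that the expansion is isothermal, with the heat bath supplying the energy: the one-particle gas doubles its volume at constant T, so \Delta E = 0 while \Delta S = k \log 2, and

 \Delta F = \Delta E - T \Delta S = -(\log 2) kT ,

whose magnitude is exactly the maximum useful work the engine can deliver.)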

  14. Two simple questions:

    a. Shouldn’t that be a natural logarithm, i.e., `ln’ instead of `log’, since Boltzmann’s formula uses the former?
    b. How does Szilard get a 2 from one bit of information?

  15. a) yes, it’s an ln

    b) one (useful) bit of information reduces the number of microstates by a factor of 2.

  16. Using Sean’s model, Mike’s question reduces to putting the piston on the right side when we are told the particle is on the left, and putting the piston on the left when we are told the particle is on the right.

    This will double the amount of energy extracted, but it does not preclude needing to know where the particle is.

  17. olderwithmoreinsurance

    Guess I also don’t see what’s wrong with Mike’s argument: the particle IS somewhere. No matter where you put the piston and regardless of which way the rods go, it WILL move and therefore be capable of doing work.

  18. I agree that Mike’s question at #8 deserves a more thoughtful answer, and I had the same problem when reading Chapter 9 of Sean’s book.

    Sean presents in the book a well-above-average discussion of the Maxwell demon problem, which meshes nicely with his theme of information as a fundamental quantity. Note also the excellent pedigree of this current post: Feynman discusses exactly this case of being able to extract useful energy when an initial state of a particle in a box is known in _The Feynman Lectures on Computation_ ( http://www.amazon.com/Feynman-Lectures-Computation-Richard-P/dp/0738202967 ); he describes being able to use the zeroed/initialized state of a Turing-type machine as a kind of “fuel”, meaning useful or low-entropy energy. (Sean, I’m surprised not to see the Lectures on Computation in the bibliography of your book; surely you’ve read it?)

    Not daring to doubt the combined wisdom of Feynman and Carroll, I will take the blame on myself and say that, like Mike at #8, I don’t see that the single-particle example really illustrates what it’s supposed to here. Certainly it’s possible to connect the piston rod to a “mechanical rectifier” of some kind, such that useful work will be extracted whether the piston moves left or right — I’m sure any blog reader can come up with three different designs before breakfast — and so we can extract useful energy from the piston’s motion _without_ any foreknowledge of the particle’s position. So I don’t see how this example is supposed to work, directly on its own terms.

    Further, note that once the mechanical setup is in place one can then re-charge the particle’s velocity from some heat source, then re-deploy the piston in the middle of the box (imagine opening it like an umbrella) and repeat the process over again. Voilà! Infinite useful energy extracted from a single-temperature heat source. Within the single-particle-in-a-box example we don’t need information to violate the Second Law, so information cannot be the crux of what this example shows. So, I’m sorry, Sean, I just don’t see it.

    The real answer of how to preserve the Second Law, in this case, I _think_ will be found only when one considers what happens when the piston and whatever it’s connected to all come into thermal equilibrium with the particle and its heat source; as touched on in comment #21 above, will the mechanical rectifier still work when the whole apparatus is vibrating at some finite temperature? I’m not going to get into any long extractions here, but I believe a good starting point is to read the chapter “Ratchet and Pawl” in the Feynman Lectures on Physics, where he illustrates why ratchets can’t be used to extract useful work from mechanical thermal fluctuations. I think a similar logic applies to what can and cannot be accomplished with rectifiers, i.e., why you can’t get a DC EMF by electrically rectifying the Johnson noise from a hot resistor.

    So, in the end I agree with Mike at #8; I don’t see that this single-particle-in-a-box example, at the simplified level it’s employed here (i.e., with no thermal motion in the piston), makes the point about information at all.

