Okay, sticking to my desire to blog rather than just tweet (we’ll see how it goes): here’s a great post by John Baez with the forbidding title “Information Geometry, Part 11.” But if you can stomach a few equations, there’s a great idea being explicated, which connects evolutionary biology to entropy and information theory.

There are really two points. The first is a bit of technical background you can ignore if you like, and skip to the next paragraph. It’s the idea of “relative entropy” and its equivalent “information” formulation. Information can be thought of as “minus the entropy,” or even better “the maximum entropy possible minus the actual entropy.” If you know that a system is in a low-entropy state, it’s in one of just a few possible microstates, so you know a lot about it. If it’s high-entropy, there are many states that look that way, so you don’t have much information about it. (Aside to experts: I’m kind of shamelessly mixing Boltzmann entropy and Gibbs entropy, but in this case it’s okay, and if you’re an expert you understand this anyway.) John explains that the information (and therefore also the entropy) of some probability distribution is always *relative* to some other probability distribution, even if we often hide that fact by taking the fiducial probability to be uniform (… in some variable). The relative information between two distributions can be thought of as how much you *don’t* know about one distribution if you know the other one; the relative information between a distribution and itself is zero.
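To make "relative information" concrete, here's a minimal sketch in Python (my own toy example, not from John's post), using the standard Kullback–Leibler formula D(p‖q) = Σᵢ pᵢ log(pᵢ/qᵢ); the two distributions are made up for illustration:

```python
import math

def relative_information(p, q):
    # Relative information (Kullback-Leibler divergence) of p relative to q:
    # D(p || q) = sum_i p_i * log(p_i / q_i).
    # It is zero when p equals q, and positive otherwise.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]  # the (often hidden) fiducial distribution
peaked = [0.7, 0.1, 0.1, 0.1]       # a lower-entropy distribution

print(relative_information(peaked, peaked))   # 0.0: a distribution relative to itself
print(relative_information(peaked, uniform))  # positive: knowing "uniform" leaves you ignorant of "peaked"
```

Note that the uniform distribution plays exactly the role of the fiducial distribution mentioned above: measuring the peaked distribution against it gives a positive number, while measuring any distribution against itself gives zero.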

The second point has to do with the evolution of populations in biology (or in analogous fields where we study the evolution of populations), following some ideas of John Maynard Smith. Make the natural assumption that the rate of change of a population is proportional to the number of organisms in that population, where the “constant” of proportionality is a function of all the other populations. That is: imagine that every member of the population breeds at some rate that depends on circumstances. Then there is something called an *evolutionarily stable state*, one in which the relative populations (the fraction of the total number of organisms in each species) are constant. An equilibrium configuration, we might say.

Then the take-home synthesis is this: if you are *not* in an evolutionarily stable state, then as your population evolves, the relative information between the actual state and the stable one *decreases* with time. Since information is minus entropy, this is a Second-Law-like behavior. But the interpretation is that the population is “learning” more and more about the stable state, until it achieves that state and knows all there is to know!
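To check that the arrows really do point in the right direction, here's a toy simulation (again my own sketch, not from John's post; the fitness function is a hypothetical choice, designed so the stable mix is easy to specify). Each population obeys dPᵢ/dt = fᵢ(P)·Pᵢ, and we watch the relative information between the stable state and the actual population fractions:

```python
import math

def relative_information(p, q):
    # D(p || q) = sum_i p_i * log(p_i / q_i)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def fractions(populations):
    # Relative populations: the fraction of all organisms in each species.
    total = sum(populations)
    return [p / total for p in populations]

def replicator_step(populations, fitness, dt=0.01):
    # One Euler step of dP_i/dt = f_i(P) * P_i, where the per-capita
    # growth rate f_i depends on the whole population vector.
    return [p + dt * fitness(i, populations) * p
            for i, p in enumerate(populations)]

# Hypothetical target mix; the fitness below is a toy choice that makes
# this mix an evolutionarily stable state.
stable = [0.5, 0.3, 0.2]

def fitness(i, pops):
    return stable[i] - fractions(pops)[i]

pops = [1.0, 1.0, 1.0]
d_start = relative_information(stable, fractions(pops))
for _ in range(5000):
    pops = replicator_step(pops, fitness)
d_end = relative_information(stable, fractions(pops))
print(d_start, d_end)  # d_end is much smaller: the population has "learned" the stable state
```

Running this, the relative information between the stable state and the actual state drops monotonically toward zero, which is the Second-Law-like behavior described above (with my toy fitness, one can even check by hand that its time derivative is −Σᵢ(qᵢ − xᵢ)², manifestly non-positive).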

Okay, you can see why tweeting is seductive. Without the 140-character limit, it’s hard to stop typing, even if I try to just link and give a very terse explanation. Hopefully I managed to get all the various increasing/decreasing pointing in the right direction…

Pingback: Evolution, Entropy, and Information – ScienceNewsX – Science News Aggregator

Ok, I’ll confess to being a non-expert (despite certain textbook-writing credentials). What’s the difference between Boltzmann entropy and Gibbs entropy?

@Dan Schroeder…

http://en.wikipedia.org/wiki/Boltzmann_entropy

http://en.wikipedia.org/wiki/Gibbs_entropy#Gibbs_Entropy_Formula

If the actual entropy is increasing, does that mean information is decreasing, being lost?

Dan– Take the space of all possible states of a system. Two ways to define a notion of entropy. One is to say that you have a probability distribution over the space, and measure how spread out that is — that’s the Gibbs entropy. The other is to perform a coarse-graining into macrostates, by saying that states are in the same macrostate if they are indistinguishable under some specified set of measurements. For any particular state, we can associate to it the number of other states in the same macrostate. The Boltzmann entropy is the log of that number, as you find engraved on Boltzmann’s tombstone.

So the Gibbs entropy doesn’t need coarse-graining, but it is subjective, referring completely to our knowledge of the system; the Boltzmann entropy does rely on a coarse-graining, but it is objective, and can be large even if we know exactly what the state is.
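A tiny numerical illustration of the distinction (my own sketch, with a made-up macrostate of 8 microstates, in units where Boltzmann’s constant is 1):

```python
import math

def gibbs_entropy(probs):
    # Gibbs entropy of a probability distribution over microstates:
    # S = -sum_i p_i * log(p_i). Measures how spread out the distribution is.
    return -sum(p * math.log(p) for p in probs if p > 0)

def boltzmann_entropy(num_microstates):
    # Boltzmann entropy of a macrostate containing W microstates:
    # S = log W, as engraved on the tombstone.
    return math.log(num_microstates)

# If all we know is the macrostate (W = 8 microstates), the uniform
# distribution over it has Gibbs entropy equal to the Boltzmann entropy:
print(boltzmann_entropy(8))      # log 8
print(gibbs_entropy([1/8] * 8))  # same value

# But if we know the exact microstate, the Gibbs entropy drops to zero,
# while the Boltzmann entropy of that microstate's macrostate is unchanged:
print(gibbs_entropy([1.0]))      # 0.0
```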

I also posted this as a comment on FB, just in case.

A question about information entropy. As said in the article, entropy is a lack of information. I understood that more information indeed means less entropy (as in −log W?). But Maxwell’s demon confuses me. Isn’t it in a low-entropy state before keeping track of the atoms? In other words, low entropy with no information, and high entropy later, when it has gathered more information. It sounds like the opposite, but I’m surely missing something here.

@marten: As far as I understand this: more entropy means less information. It’s more surprising to find something in a low-entropy state, thus more informative.

@Yannick

Thank you. I find it difficult to understand that entropy always increases, but (according to the Big Bang hypothesis) the CMBR, a product of entropy increase, is cooling down and a lot of information is lost.

You’re welcome. But I’m no expert hence my own question above.

Pingback: Science, Entropy (relative) and Education « blueollie

Pingback: What’s cooler than information and entropy? | The Finch and Pea

In France, “Information Geometry” is included in a larger mathematical domain, “Geometric Science of Information,” which is discussed in the Brillouin Seminar, launched in 2009:

http://repmus.ircam.fr/brillouin/past-events

http://repmus.ircam.fr/brillouin/home

You can register for Brillouin Seminar news:

http://listes.ircam.fr/wws/info/informationgeometry

Recently, the Brillouin Seminar has organized:

– In 2011, a French-Indian workshop on “Matrix Information Geometries” at École Polytechnique. Proceedings will be published by Springer in 2012. Slides and abstracts are available on the website:

http://www.informationgeometry.org/MIG/

A book on “Matrix Information Geometry” has been published by Springer:

http://www.springer.com/engineering/signals/book/978-3-642-30231-2

– In 2012, a symposium on “Information Geometry and Optimal Transport Theory” at the Institut Henri Poincaré in Paris, with GDR CNRS MSPC. All slides are available on the website:

http://www.ceremade.dauphine.fr/~peyre/mspc/mspc-thales-12/

You can find a very recent French PhD dissertation in English on this subject, written by Yang Le and supervised by Marc Arnaudon:

Medians of probability measures in Riemannian manifolds

http://tel.archives-ouvertes.fr/tel-00664188