Perceiving Randomness

The kind way to say it is: “Humans are really good at detecting patterns.” The less kind way is: “Humans are really good at detecting patterns, even when they don’t exist.”

I’m going to blatantly swipe these two pictures from Peter Coles, but you should read his post for more information. The question is: which of these images represents a collection of points selected randomly from a distribution with uniform probability, and which has correlations between the points? (The relevance of this exercise to cosmologists studying distributions of galaxies should be obvious.)

[Image: randompoints.gif, two panels of scattered points, one random and one correlated]

The points on the right, as you’ve probably guessed from the setup, are distributed completely randomly. On the left, there are important correlations between them.
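
If you’d like to generate pictures like these yourself, here is a minimal sketch in Python. It is my own illustration, not the program that produced the figures above, and the function names are just for this example: one set of points is drawn independently and uniformly in the unit square, the other is a correlated set built by rejecting any candidate that lands too close to a point already placed.

    import random

    def uniform_points(n, seed=0):
        """n points drawn independently and uniformly in the unit square."""
        rng = random.Random(seed)
        return [(rng.random(), rng.random()) for _ in range(n)]

    def repelling_points(n, min_sep=0.03, seed=0):
        """n points in the unit square, rejecting any candidate closer than
        min_sep to an existing point. A simple way to build in correlations
        (not necessarily how the original figure was made)."""
        rng = random.Random(seed)
        pts = []
        while len(pts) < n:
            x, y = rng.random(), rng.random()
            if all((x - px) ** 2 + (y - py) ** 2 >= min_sep ** 2 for px, py in pts):
                pts.append((x, y))
        return pts

    random_scatter = uniform_points(200)
    correlated_scatter = repelling_points(200)

The minimum-separation rule spreads the second set out more evenly, and that evenly spread set is the one people tend to point to as the “random” one.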

Humans are not very good at generating random sequences; when asked to come up with a “random” sequence of coin flips from their heads, they inevitably include too few long strings of the same outcome. In other words, they think that randomness looks a lot more uniform and structureless than it really does. The flip side is that, when things really are random, they see patterns that aren’t really there. It might be in coin flips or distributions of points, or it might involve the Virgin Mary on a grilled cheese sandwich, or the insistence on assigning blame for random unfortunate events.
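
One quick way to convince yourself of the first point: simulate honest coin flips and look at the longest run of identical outcomes. In 200 fair flips the longest run is usually around seven or eight, far longer than the runs most people allow themselves when writing a “random” sequence by hand. A small sketch, again just an illustration:

    import random
    from itertools import groupby

    def longest_run(n_flips, seed=None):
        """Longest streak of identical outcomes in n_flips fair coin tosses."""
        rng = random.Random(seed)
        flips = [rng.choice("HT") for _ in range(n_flips)]
        return max(len(list(g)) for _, g in groupby(flips))

    print([longest_run(200) for _ in range(5)])  # typically values around 6 to 9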

Bonus link uncovered while doing our characteristic in-depth research for this post: flip ancient coins online!

40 thoughts on “Perceiving Randomness”

  1. Well, you know, I was told back in high school that when asked to produce a random string, humans won’t include enough repetitions. Now guess what I do when I’m asked to write down a sequence of random numbers. I wouldn’t be surprised if one day somebody repeats this exercise and finds that humans actually produce too many runs of the same number, just to make sure.

  2. For those who enjoy memorizing irrational numbers, you can cheat on things like this.

    E.g.: “random” coin flips (corresponding to digits of pi modulo 2)

    HHTHHHTTHHHTHHHHTHTTTTTTHHTHT…

    Of course, I’m assuming that the digits of pi are statistically random. Is this proven?
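
    (If anyone wants to check the sequence above, here is a quick sketch of the trick; it simply hard-codes the first digits of pi rather than computing them.)

        PI_DIGITS = "31415926535897932384626433832"  # first 29 decimal digits of pi

        # Odd digit -> H, even digit -> T, reproducing the sequence quoted above.
        flips = "".join("H" if int(d) % 2 else "T" for d in PI_DIGITS)
        print(flips)  # HHTHHHTTHHHTHHHHTHTTTTTTHHTHT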

  3. I feel I must add to that list ‘Technical Analysis’ (an exercise in identifying patterns in charts of financial instruments in an attempt to predict future direction). As Mandelbrot noted on a number of occasions, it is frightening how much money changes hands on this faulty thinking.

  4. Sean

    Thanks for adding the link to my page. It’s nice to get a few hits on items other than the doom and gloom about physics funding in the UK!

    Peter

  5. The main assumption in Technical Analysis is that the behavior of stock prices is NOT random. Technical Analysis is based on the idea that if you dig into historical data you can identify patterns which are repeated over the lifetime of the instrument (which is not totally unreasonable given the swing of mood and psychology in the market); so in this way, in Technical Analysis you exploit your knowledge of the identified patterns in order to make money betting on the fact that the historical patterns will be repeated. Paul Wilmott says ‘Technical Analysis is bunk!’, but that’s perhaps out of his academic prejudice. Who gives a damn whether you made money with technical analysis or with the sophisticated stochastic quantitative finance models? Indeed we have now seen a few times how stochastic models can be efficient in hiding and camouflaging the big events of the market such as the formation of bubbles.

  6. Sean-

    I’ve noticed from your recent work on cosmology and the arrow of time that you seem to use the term entropy somewhat more broadly than its strict definition, to include the concept of algorithmic (or Kolmogorov) complexity, which characterizes the disorder of a single state, rather than an ensemble of states with a probability distribution. See, for instance, the corresponding Wikipedia entries:

    http://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics)
    http://en.wikipedia.org/wiki/Kolmogorov_complexity

    See also, for example, Zurek’s paper (Phys. Rev. A 40, 4731–4751 (1989)) carefully distinguishing between the two concepts:

    http://prola.aps.org/abstract/PRA/v40/i8/p4731_1

    Since you’re writing a book that will feature entropy and disorder in a big way, perhaps you could take the opportunity to enlighten the public on this subtle but important distinction…

  7. (One important distinction being that the algorithmic complexity of a pure state can increase, even under unitary time evolution.)

  8. Algorithmic complexity, in particular, is what distinguishes the two quantum states:

    |1111111111>

    and:

    |1011101011>

    despite both states being pure. The algorithmic complexity is what distinguishes between a pure state in which all the gas molecules are in one tiny corner of a box from a pure state in which they are spread around throughout the box randomly, despite the fact that both states are pure and hence have vanishing entropy.
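
    (Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable upper bound, and the effect only becomes visible for strings much longer than ten bits. A rough sketch of the idea, scaling the two states up:)

        import random
        import zlib

        # Compressed length as a crude upper bound on algorithmic complexity.
        ordered = "1" * 10_000                       # like |111...1>, scaled up
        rng = random.Random(0)
        disordered = "".join(rng.choice("01") for _ in range(10_000))  # like |1011...>

        print(len(zlib.compress(ordered.encode())))     # a few dozen bytes
        print(len(zlib.compress(disordered.encode())))  # far larger: random bits resist compression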

  9. TimG:

    Of course, I’m assuming that the digits of pi are statistically random. Is this proven?

    No, it’s not proven. Pretty much everybody believes that they are, but it’s a very difficult thing to actually prove.

    Search on “is pi normal” for more info.

  10. The question over the randomness of the digits of pi is a perfect example of algorithmic complexity, as opposed to entropy.

  11. Here’s another example of the difference, with implications for the 2nd law of thermodynamics:

    Consider a box filled with 100 rubber superballs, which all start out crowded together in one of the top corners. The initial disorder is obviously very low. Release the superballs, and keep track of how they all move. (For 100 superballs, this task is eminently reasonable, with negligible impact on their trajectories.) After a few seconds, they will have spread out to occupy the whole box in a very disordered configuration.

    But the entropy the whole time has been precisely zero, since we have always known the system’s exact state! What’s increasing is the complexity, not the entropy. This scenario represents an important version of the 2nd law of thermodynamics, in which entropy plays no role.

  12. Low Math, Meekly Interacting

    Apophenia is one of my favorite words.

    (A seemingly random comment…or IS IT?)

  13. The issue of perceiving randomness should be connected to the issue of talking about it. There are big problems in talking about randomness, and in how to rate the truthfulness of statements about probability. I have long wondered how to treat the truth value of a statement like “70% chance of rain today.” How can we rate the truth value of such statements? Neither raining nor not raining can show the statement to be either true or false! Do such statements need a “collective” truth value? Can we say that if we gather 1,000 such predictions from a given forecaster, and it rained on only 40% of those occasions, the statements are collectively “not very true”, etc.? But of course, what rightly defines the “collection” of note?

    BTW, there seems to be a limit on how many characters per post, but I don’t see that advertised. Is there, what is it, and it would be good CS to post that info. tx

  14. Not that this is the sort of thing that really requires an attribution, but Peter Coles seems to have gotten his two visual examples from an illustration in Stephen Jay Gould’s book “Bully For Brontosaurus” (where Gould says the illustrations came from a computer program whipped up by his colleague, physicist Ed Purcell); see pages 266 and 267 here (the non-random example is rotated 180 degrees, the random example is oriented the same way). I mention this mainly because it’s a great Gould essay and worth checking out. It obviously made an impression on me if the illustrations in this post immediately reminded me of it, although I didn’t notice they were actual reproductions until I compared them; my memory for random dots isn’t that good!

  15. And, as a final example, Peter Coles’ two pictures: both pictures have zero entropy, since we know the precise state in each case. But one picture is more random than the other, and hence has a larger algorithmic complexity. Less information would be required by a second party to reproduce the first picture than the second.

  16. Oh, yes. But with only 100 macroscopic superballs, that information can easily be tracked (or simulated) by a computer.

    Of course, the concepts of complexity and entropy converge in the thermodynamic limit (say, of a box with zillions of molecules, even in the classical case) when our memory device is limited to a finite information storage capacity and therefore simply cannot store the full details of the exact state of the subject system.

    In that case, our memory device can only manage to record enough information to define the macrostate of the subject system—that is, information that defines a probability distribution. If the subject system is truly in a state of low (high) complexity, then our memory device can (must) employ a probability distribution exhibiting low (high) entropy, where the entropy -Σ ρ log ρ of the probability distribution is of order the complexity of the state.

    That’s why we often use the terms “complexity” and “entropy” interchangeably. But for Peter Coles’ example pictures in the post, the two terms are not equivalent; the two pictures both have zero entropy, since they are both exact states and not represented by probability distributions, but one is more random than the other.

  17. That is not the Virgin Mary. I’m pretty sure I dated that girl in high school….

    e.

  18. Brian, thanks for the answer regarding normal numbers.

    Matt, I’m with you on macroscopic disorder being distinct from entropy, although it’s not so clear to me how one quantifies macroscopic disorder. Even if 100 balls all start out clustered close together, it seems to me that I’d still need 100 x, y, and z coordinates to tell you precisely where they are. If on the other hand the balls follow some pattern like “One ball exactly every ten centimeters” then it’s more clear to me how this allows an abbreviated description.

    Regarding your example of two quantum states |1111111111> and |1011101011>, isn’t the greater complexity of the second case merely a consequence of our choice of basis, rather than some inherent property of the system itself?

    Then again, I suppose the entropy of a system is likewise a function of our description of the system, in that it depends on how we partition microstates into macrostates. To be honest it’s never been completely clear to me why we have to group the states by the particular quantities we use (pressure, volume, temperature, etc.) — other than that these happen to be the things we’re good at measuring in the macroscopic system.

  19. This comes up pretty quickly when you’re writing video games – nobody likes it when the same random effect or sound bite keeps being played again and again.
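
    One common workaround is a “shuffle bag”: shuffle the list of effects and deal them out one at a time, reshuffling only when the bag is empty, so every effect plays once per cycle and long repeats are rare. A rough sketch (names are just for illustration):

        import random

        def shuffle_bag(options, rng=random):
            """Yield the options in a random order, reshuffling only when the
            bag empties, so long runs of the same item cannot occur."""
            bag = []
            while True:
                if not bag:
                    bag = list(options)
                    rng.shuffle(bag)
                yield bag.pop()

        effects = shuffle_bag(["grunt", "ouch", "argh", "thud"])
        print([next(effects) for _ in range(8)])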

  20. Hey Matt, it’s the 19th Century on the line. They say they’re looking for their definition of entropy and they’re wondering if you’ve seen it.

  21. The most interesting thing is that we can compare stochastic datasets.
    The sequence of n = 15 two-digit numbers

    03, 09, 27, 81, 43, 29, 87, 61, 83, 49, 47, 41, 23, 69, 07 (A)

    looks as random as the sequence

    37, 74, 11, 48, 85, 22, 59, 96, 33, 70, 07, 44, 81, 18, 55 (B)

    But the degrees of their stochasticity can be measured more objectively by the Kolmogorov parameter. It can be shown that the stochasticity probability is approximately 4,700 times higher for sequence (A) than for sequence (B).
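
    (A guess at what “Kolmogorov parameter” refers to here: the Kolmogorov-Smirnov statistic lambda = sqrt(n) * sup|F_n(x) - F(x)| of the empirical distribution against a uniform one on [0, 100). A rough sketch computes it for both sequences; reproducing the 4,700 figure would require details not given above.)

        A = [3, 9, 27, 81, 43, 29, 87, 61, 83, 49, 47, 41, 23, 69, 7]
        B = [37, 74, 11, 48, 85, 22, 59, 96, 33, 70, 7, 44, 81, 18, 55]

        def ks_lambda(xs, lo=0.0, hi=100.0):
            # lambda = sqrt(n) * sup_x |F_n(x) - F(x)|, comparing the empirical
            # CDF of xs with a uniform distribution on [lo, hi).
            n = len(xs)
            d = 0.0
            for i, x in enumerate(sorted(xs)):
                f = (x - lo) / (hi - lo)
                d = max(d, abs((i + 1) / n - f), abs(i / n - f))
            return n ** 0.5 * d

        print(ks_lambda(A), ks_lambda(B))  # lambda comes out noticeably larger for A than for B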

  22. Interested Bystander

    Umm… all the superballs would end up sitting on the bottom of the box.

  23. Jesse M,

    You are right, I got the pictures from Stephen Jay Gould’s book and used them, with appropriate credit, in my book From Cosmos to Chaos.

    You will also find the same pair of images in various places around the web.

    Peter

Comments are closed.
