Category: Science

  • Rules for Time Travelers

    With the new Star Trek out, it’s long past time (as it were) that we laid out the rules for would-be fictional time-travelers. (Spoiler: Spock travels to the past and gets a sex change and becomes Kirk’s grandfather’s lover.*) Not that we expect these rules to be obeyed; the dramatic demands of a work of fiction will always trump the desire to get things scientifically accurate, and Star Trek all by itself has foisted half a dozen mutually-inconsistent theories of time travel on us. But time travel isn’t magic; it may or may not be allowed by the laws of physics — we don’t know them well enough to be sure — but we do know enough to say that if time travel were possible, certain rules would have to be obeyed. And sometimes it’s more interesting to play by the rules. So if you wanted to create a fictional world involving travel through time, here are 10+1 rules by which you should try to play.

    0. There are no paradoxes.

    This is the overarching rule, to which all other rules are subservient. It’s not a statement about physics; it’s simply a statement about logic. In the actual world, true paradoxes — events requiring decidable propositions to be simultaneously true and false — do not occur. Anything that looks like it would be a paradox if it happened indicates either that it won’t happen, or that our understanding of the laws of nature is incomplete. Whatever laws of nature the builder of fictional worlds decides to abide by, they must not allow for true paradoxes.

    1. Traveling into the future is easy.

    We travel into the future all the time, at a fixed rate: one second per second. Stick around, you’ll be in the future soon enough. You can even get there faster than usual, by decreasing the amount of time you experience elapsing with respect to the rest of the world — either by low-tech ways like freezing yourself, or by taking advantage of the laws of special relativity and zipping around near the speed of light. (Remember we’re talking about what is possible according to the laws of physics here, not what is plausible or technologically feasible.) It’s coming back that’s hard.
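    The special-relativity route into the future is easy to quantify: a traveler moving at speed v experiences time slowed by the Lorentz factor. A minimal sketch (the function name is my own, not standard):

    ```python
    import math

    def proper_time(coordinate_time, v_over_c):
        """Time elapsed for the traveler (proper time) when the rest of
        the world experiences `coordinate_time`, moving at v = v_over_c * c."""
        gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)
        return coordinate_time / gamma

    # A round trip at 99% of light speed: ten years pass on Earth,
    # but the traveler ages only about a year and a half.
    earth_years = 10.0
    traveler_years = proper_time(earth_years, 0.99)
    print(f"{traveler_years:.2f} years experienced by the traveler")
    ```

    Crank v closer to c and the traveler’s elapsed time shrinks without limit — a perfectly legal one-way trip to the future.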

    2. Traveling into the past is hard — but maybe not impossible.

    If Isaac Newton’s absolute space and time had been the correct picture of nature, we could simply say that traveling backwards in time was impossible, and that would be the end of it. But in Einstein’s curved-spacetime universe, things are more flexible. From your own personal, subjective point of view, you always move forward in time — more technically, you move on a timelike curve through spacetime. But the large-scale curvature of spacetime caused by gravity could, conceivably, cause timelike curves to loop back on themselves — that is to say, become closed timelike curves — such that anyone traveling on such a path would meet themselves in the past. That’s what respectable, Einstein-approved time travel would really be like. Of course, there’s still the little difficulty of warping spacetime so severely that you actually create closed timelike curves; nobody knows a foolproof way of doing that, or even whether it’s possible, although ideas involving wormholes and cosmic strings and spinning universes have been bandied about.

    3. Traveling through time is like traveling through space.

    I’m only going to say this once: there would be no flashing lights. At least, there would only be flashing lights if you brought along some strobes, and decided to start them flashing as you traveled along your closed timelike curve. Likewise, there is no disappearance in a puff of smoke and re-appearing at some other time. Traveling through time is just like traveling through space: you move along a certain path, which (we are presuming) the universe has helpfully arranged so that your travels bring you to an earlier moment in time. But a time machine wouldn’t look like a booth with spinning wheels that dematerializes now and rematerializes some other time; it would look like a rocket ship. Or possibly a DeLorean, in the unlikely event that your closed timelike curve started right here on Earth and never left the road.

    Think of it this way: imagine there were a race of super-intelligent trees, who could communicate with each other using abstract concepts but didn’t have the ability to walk. They might fantasize about moving through space, and in their fantasies “space travel” would resemble teleportation, with the adventurous tree disappearing in a puff of smoke and reappearing across the forest. But we know better; real travel from one point to another through space is a continuous process. Time travel would be like that.

    4. Things that travel together, age together.

    If you travel through time, and you bring along with you some clocks or other objects, all those things experience time in exactly the same way that you do. In particular, both you and the clocks march resolutely forward in time, from your own perspective. You don’t see clocks spinning wildly backwards, nor do you yourself “age” backwards, and you certainly don’t end up wearing the clothes you favored back in high school. Your personal experience of time is governed by clocks in your brain and body — the predictable beating of rhythmic pulses of chemical and biological processes. Whatever flow of time is being experienced by those processes — and thus by your conscious perception — is also being experienced by whatever accompanies you on your journey.

    5. Black holes are not time machines.

    Sadly, if you fell into a black hole, it would not spit you out at some other time. It wouldn’t spit you out at all — it would gobble you up and grow slightly more corpulent in the process. If the black hole were big enough, you might not even notice when you crossed the point of no return defined by the event horizon. But once you got close to the center of the hole, tidal forces would tug at you — gently at first, but eventually tearing you apart. The technical term is spaghettification. Not a recommended strategy for would-be time adventurers.

    Wormholes — tunnels through spacetime, which in principle can connect widely-separated events — are a more promising alternative. Wormholes are to black holes as elevators are to deep wells filled with snakes and poisoned spikes. The problem is, unlike black holes, we don’t know whether wormholes exist, or even whether they can exist, or how to make them, or how to preserve them once they are made. Wormholes want to collapse and disappear, and keeping them open requires a form of negative energy. Nobody knows how to make negative energy, although physicists occasionally slap the name “exotic matter” on the concept and pretend it might exist.


  • arxiv Find: Atom interferometry tests of local Lorentz invariance

    What is the Secretary of Energy doing submitting papers to the arxiv when he’s supposed to be solving the world’s energy problems? I have enough trouble getting papers written when it’s my actual job.

    Atom interferometry tests of local Lorentz invariance in gravity and electrodynamics
    Authors: Keng-Yeow Chung, Sheng-wey Chiow, Sven Herrmann, Steven Chu, Holger Mueller

    Abstract: We present atom-interferometer tests of the local Lorentz invariance of post-Newtonian gravity. An experiment probing for anomalous vertical gravity on Earth, which has already been performed by us, uses the highest-resolution atomic gravimeter so far. The influence of Lorentz violation in electrodynamics is also taken into account, resulting in combined bounds on Lorentz violation in gravity and electrodynamics. Expressed within the standard model extension or Nordtvedt’s anisotropic universe model, we limit twelve linear combinations of seven coefficients for Lorentz violation at the part per billion level, from which we derive limits on six coefficients (and seven when taking into account additional data from lunar laser ranging). We also discuss the use of horizontal interferometers, including atom-chip or guided-atom devices, which potentially allow the use of longer coherence times in order to achieve higher sensitivity.

    We kid the Energy Secretary, but this is a very cool experiment. (I presume this is the interferometer?) Basically, you throw an atom up in the air, and catch it as it comes down. But you actually split the wave function of the atom into two different beams, depending on when it absorbs and emits a pulse of laser light. The beams leave the same place and are collected at the same place, but travel on slightly different paths; you can use interferometry to see whether these different paths have evolved differently.
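    The sensitivity of this kind of gravimeter comes from the phase difference accumulated between the two paths, which for a standard light-pulse interferometer goes as Δφ = k_eff · g · T², with k_eff the effective two-photon wavevector and T the time between laser pulses. A rough sketch — the numerical values below are illustrative round numbers, not the parameters of the actual experiment:

    ```python
    import math

    # Phase shift of a light-pulse atom gravimeter: delta_phi = k_eff * g * T^2.
    # Numbers are illustrative, not those of the Chung et al. apparatus.
    wavelength = 852e-9                       # cesium D2 line, meters
    k_eff = 2 * (2 * math.pi / wavelength)    # two-photon transition doubles k
    g = 9.81                                  # local gravity, m/s^2
    T = 0.3                                   # time between pulses, seconds

    delta_phi = k_eff * g * T**2
    print(f"phase shift: {delta_phi:.3e} radians")
    ```

    Tens of millions of radians of phase, which is why even part-per-billion anomalies in g are within reach.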

    Which lets you test all kinds of things, from measuring the fine structure constant to looking for new forces to testing Lorentz invariance, as is happening here. But if it helps free us from dependence on foreign oil sources, I’d be surprised.

  • Evolution and the Second Law

    Since no one is blogging around here, and I’m still working on my book, I will cheat and just post an excerpt from the manuscript. Not an especially original one, either; in this section I steal shamelessly from the nice paper that Ted Bunn wrote last year about evolution and entropy (inspired by a previous paper by Daniel Styer).

    ————————————

    Without even addressing the question of how “life” should be defined, we can ask what sounds like a subsequent question: does life make thermodynamic sense? The answer, before you get too excited, is “yes.” But the opposite has been claimed – not by any respectable scientists, but by creationists looking to discredit Darwinian natural selection as the correct explanation for the evolution of life on Earth. One of their arguments relies on a misunderstanding of the Second Law, which they read as “entropy always increases,” and then interpret as a universal tendency toward decay and disorder in all natural processes. Whatever life is, it’s pretty clear that life is complicated and orderly – how, then, can it be reconciled with the natural tendency toward disorder?

    There is, of course, no contradiction whatsoever. The creationist argument would equally well imply that refrigerators are impossible, so it’s clearly not correct. The Second Law doesn’t say that entropy always increases. It says that entropy always increases (or stays constant) in a closed system, one that doesn’t interact noticeably with the external world. But it’s pretty obvious that life is not like that; living organisms interact very strongly with the external world. They are the quintessential examples of open systems. And that is pretty much that; we can wash our hands of the issue and get on with our lives.

    But there’s a more sophisticated version of the argument, which you could imagine being true – although it still isn’t – and it’s illuminating (and fun) to see exactly how it fails. The more sophisticated argument is quantitative: sure, living beings are open systems, so in principle they can decrease entropy somewhere as long as it increases somewhere else. How do you know that the increase in entropy in the outside world is really enough to account for the low entropy of living beings?

    As we mentioned way back in Chapter Two, the Earth and its biosphere are systems that are very far away from thermal equilibrium. In equilibrium, the temperature is the same everywhere, whereas when we look up we see a very hot Sun in an otherwise very cold sky. There is plenty of room for entropy to increase, and that’s exactly what’s happening. But it’s instructive to run the numbers.

    The energy budget of the Earth, considered as a single system, is pretty simple. We get energy from the Sun, via radiation; we lose the same amount of energy to empty space, also via radiation. (Not exactly the same; processes such as nuclear decays also heat up the Earth and leak energy into space, and the rate at which energy is radiated is not strictly constant. Still, it’s an excellent approximation.) But while the amount is the same, there is a big difference in the quality of the energy we get and the energy we give back. Remember back in the pre-Boltzmann days, entropy was understood as a measurement of the uselessness of a certain amount of energy; low-entropy forms of energy could be put to useful work, such as powering an engine or grinding flour, while high-entropy forms of energy just sat there.

    Sun-Earth-entropy

    The energy we get from the Sun is of a low-entropy, useful form, while the energy we radiate back out into space has a much higher entropy. The temperature of the Sun is about twenty times the average temperature of the Earth. The temperature of radiation is just the average energy of the photons of which it is made, so the Earth needs to radiate twenty low-energy (long-wavelength, infrared) photons for every one high-energy (short-wavelength, visible) photon it receives. It turns out, after a bit of math, that twenty times as many photons directly translates into twenty times the entropy. The Earth emits the same amount of energy as it receives, but with twenty times higher entropy.

    The hard part is figuring out just what we mean when we say that the life forms here on Earth are “low-entropy.” How exactly do we do the coarse-graining? It is possible to come up with reasonable answers to that question, but it’s complicated. Fortunately, there is a dramatic shortcut we can take. Consider the entire biomass of the Earth – all of the molecules that are found in living organisms of any type. We can easily calculate the maximum entropy that collection of molecules could have, if it were in thermal equilibrium; plugging in the numbers (the biomass is 10^15 kilograms, the temperature of the Earth is 255 Kelvin), we find that its maximum entropy is 10^44. And we can compare that to the absolute minimum entropy it could have – if it were in an exactly unique state, the entropy would be precisely zero.

    So the largest conceivable change in entropy that would be required to take a completely disordered collection of molecules the size of our biomass and turn them into absolutely any configuration at all – including the actual ecosystem we currently have – is 10^44. If the evolution of life is consistent with the Second Law, it must be the case that the Earth has generated more entropy over the course of life’s evolution by converting high-energy photons into low-energy ones than it has decreased entropy by creating life. The number 10^44 is certainly an overly generous estimate – we don’t have to generate nearly that much entropy, but if we can generate that much, the Second Law is in good shape.

    How long does it take to generate that much entropy by converting useful solar energy into useless radiated heat? The answer, once again plugging in the temperature of the Sun and so forth, is: about one year. Every year, if we were really efficient, we could take an undifferentiated mass as large as the entire biosphere and arrange it in a configuration with as small an entropy as we can imagine. In reality, life has evolved over billions of years, and the total entropy of the “Sun + Earth (including life) + escaping radiation” system has increased by quite a bit. So the Second Law is perfectly consistent with life as we know it; not that you were ever in doubt.
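    The arithmetic behind that “about one year” can be checked in a few lines. The temperatures are from the text; the solar power absorbed by the Earth is a standard round figure, not a number from the post:

    ```python
    # Back-of-envelope check, in units of Boltzmann's constant k_B.
    k_B = 1.38e-23        # J/K
    solar_power = 1.7e17  # W absorbed by the Earth (standard round figure)
    T_sun, T_earth = 5800.0, 255.0   # K; the ratio is roughly 20

    # Entropy produced per second: the same energy arrives at T_sun
    # and leaves at T_earth.
    dS_dt = solar_power * (1.0 / T_earth - 1.0 / T_sun) / k_B
    per_year = dS_dt * 3.15e7   # seconds in a year

    print(f"entropy generated per year: {per_year:.1e} k_B")
    # Comfortably above the 10^44 needed to assemble the entire biosphere.
    ```

    So one year of sunlight, processed inefficiently into infrared heat, pays the entire entropy bill of four billion years of evolution, with room to spare.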

  • Fermi Waffles on Dark Matter

    For the last few months there’s been some excitement among particle-astrophysicists about intriguing results from the PAMELA satellite experiment and the ATIC balloon experiment. (We also blogged about it here and here.) PAMELA claimed to see an excess in the number of high-energy cosmic positrons (anti-electrons) over what you would expect from conventional astrophysical sources, while ATIC (which can’t distinguish between positrons and electrons) saw an overall rise in the number of positrons and electrons combined, more or less consistent with what PAMELA saw. One dramatic but plausible explanation for this result is that the positrons are produced when dark matter particles and antiparticles annihilate with each other, which would certainly be exciting. But it wasn’t quite a home run, because there was no evidence for the corresponding excess of anti-protons you would probably also expect. (Although that is not a deal-breaker; with a little ingenuity, particle physicists are able to come up with models that produce positrons but not anti-protons.) There was also some controversy when theorists wrote papers trying to fit the data before the data were even published, by snapping pictures of plots shown at conferences with their cell-phone cameras. More than enough drama for a TV movie, I would say.

    A tinfoil-hat conspiracy theorist might imagine that all the excitement was intentionally manufactured, just so people would pay more attention to the first measurement from the new Fermi (formerly GLAST) gamma-ray telescope. And now those results are in! (Other Fermi results have already appeared, but not about this particular question.)

    Sadly, the results are “in” in the sense of being published in Physical Review Letters, which helpfully charges $25 if you’re not a subscriber. (Presumably it will be on arxiv soon, probably tonight.) The best summary of the results, although somewhat technical, is by Bruce Winstein and Kathryn Zurek at Physics, the American Physical Society’s in-house journal that highlights interesting results.

    And here are those results.

    Fermi electron/positron spectrum

    Fermi is more like ATIC than like PAMELA, in that it also cannot distinguish between electrons and positrons, so this graph shows both. The blue line is a simple model that you might expect in the absence of any dark-matter annihilations, and the red points are the Fermi results. If you look very closely, you can see the grey squares representing the ATIC data, which peak way up there between 100 GeV and 1000 GeV of energy.

    So: hmm. Sadly it’s not a completely definitive result, either way. (This is reflected in the coverage in the popular press, where, unlike the physicists, they need to come to a conclusion: Ron Cowen at Science News says “Another Clue in the Case for Dark Matter,” while Adrian Cho at ScienceNow says “Lights Out for Dark Matter Claim?” Both do a good job in the body of their articles.) The Fermi data are clearly lower than the ATIC data were — but they’re not quite as low as the simple model would predict. The energy resolution of Fermi also isn’t quite as good — it’s harder for them to pinpoint the energy of each particle — so it’s conceivable that there is a sharp peak that simply gets smeared out by their instrument. But I completely agree with Winstein and Zurek’s take:

    These results, as precise as they are, do not definitively confirm or rule out a DM source. Although the large ATIC excess, which had been consistent with PAMELA, is ruled out, because of uncertainties from charge-dependent modulation in the flux from the solar wind, the Fermi and PAMELA data do remain consistent as having the same source. Since several natural astrophysical explanations can generate the Fermi and PAMELA spectra, the likely course is that one will be found there. It may simply be, as the Fermi paper points out, that the primary electron spectrum in the cosmic-ray source, predicted to fall as ~E^-3.3 (where E is the particle energy), does not fall as steeply as thought in the energy range observed by Fermi.

    In other words, it’s not too hard to imagine an astrophysical explanation that doesn’t require new physics beyond the Standard Model (which would still be interesting). But more data would be nice. We’ll keep looking.

  • Are Cities Just Very Large Organisms?

    A couple of years ago I got to hear Geoffrey West, one of Time magazine’s 100 Most Influential People, give a talk on his research at a meeting of the American Association for the Advancement of Science. It was a fantastic talk, and I immediately had the idea to ask him to come to Caltech at some point and give it as a colloquium. So tomorrow he’ll be here, and anyone in the neighborhood interested in a semi-technical account of complex systems from physics to biology is welcome to stop by. He might be angling for the record for the longest talk title ever:

    The Complexity, Simplicity, and Unity of Living Systems from Cells to Cities: A Physicist’s Search for Quantitative, Unified Theories of Biological and Social Structure and Organization

    Although Life is very likely the most complex phenomenon in the Universe, many of its most fundamental and complex phenomena scale with size in a surprisingly simple fashion. For example, metabolic rate scales approximately as the 3/4-power of mass over 27 orders of magnitude from complex molecules up to the largest multicellular organisms. Similarly, time-scales (such as lifespans and growth-rates) and sizes (such as genome lengths, RNA densities, and tree heights) scale as power laws with exponents which are typically simple multiples of 1/4. The universality and simplicity of these relationships, together with emergent “universal” invariants, suggest that fundamental constraints underlie much of the coarse-grained generic structure and organisation of living systems. It will be shown how these 1/4 power scaling laws follow from underlying principles embedded in the dynamical and geometrical structure of space-filling, fractal-like, hierarchical branching networks, presumed optimised by natural selection. These ideas lead to a general quantitative, predictive theory that potentially captures the essential features of many diverse biological systems. Examples will include vascular systems, growth, cancer, aging and mortality, sleep, cell size, genome lengths, and DNA nucleotide substitution rates. These ideas will be extended to social organisations: to what extent are cities or corporations an extension of biology? Are they “just” very large organisms? Analogous scaling laws reflecting underlying social network structure point to general principles of organization common to all cities, but, counter to biological systems, the pace of social life systematically increases with size. This has dramatic implications for growth, development and particularly for sustainability: innovation and wealth creation that fuel social systems, if left unchecked, potentially sow the seeds for their inevitable collapse.

    We’ve talked before about the difficulty in defining “life,” although one safe criterion is that a living organism is going to be pretty complex. What about the other way — when you have an undeniably complex system like a city or a university or a galaxy, at what point does it become useful to think of it as a “living organism”? Those are hard questions, but one angle is to investigate the similarities that complex systems demonstrate as they are manifested at different sizes. That’s the idea of “scaling laws” — measuring a feature common to a set of complicated systems (number of parts, speed of motion, etc.) and seeing how it changes as a function of scale.

    You might have imagined that complexity comes in a variety of completely different forms, and there would be no simple relationship that included viruses, house cats, and sprawling urban centers. But the data reveal a remarkable degree of regularity — many complex systems share certain basic features, just scaled up or down in ways appropriate to their size.

    Here is one startling example: every living being on Earth gets about a billion heartbeats’ worth of lifespan. Larger organisms live longer, but their hearts (or other analogous rhythmic processes) beat more slowly. Use those heartbeats wisely!
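    The claim follows from the quarter-power laws: heart rate falls roughly as M^(-1/4) while lifespan grows roughly as M^(1/4), so their product is roughly mass-independent. A quick sanity check with illustrative round numbers (mine, not data from the talk):

    ```python
    # Total heartbeats per lifetime, for rough textbook-style figures.
    animals = {
        # name: (heart rate in beats/min, lifespan in years)
        "mouse":    (600, 2),
        "human":    (70, 70),   # humans overshoot the trend somewhat
        "elephant": (30, 60),
    }

    minutes_per_year = 60 * 24 * 365
    for name, (rate, years) in animals.items():
        beats = rate * minutes_per_year * years
        print(f"{name:>8}: {beats:.1e} heartbeats")
    ```

    Despite a ten-thousand-fold spread in body mass, every total lands within a factor of a few of a billion beats.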

    The next challenge, of course, is to understand why. A few stabs have been taken in that direction using ideas about hierarchical networks of smaller systems — about which I shouldn’t say much, at least until I’ve heard the talk.

    Those of you who can’t make it to LA on short notice can enjoy this video, or check out Blake Stacey’s live-blog of a previous talk.

  • Making Extra Dimensions Disappear

    One of the big questions for people who believe in extra dimensions is: Why don’t we see them? Sure, we have methods for hiding them, usually by making them really tiny, but then we need to ask: Why are they tiny?

    Matt Johnson, Lisa Randall and I just came out with a paper that takes a partial stab at this question: Dynamical Compactification from de Sitter Space. (And a similar-sounding paper came out the same day from Jose Blanco-Pillado, Delia Schwartz-Perlov, and Alex Vilenkin.) It’s an intriguing idea, if I do say so myself: starting with nothing more complicated than a higher-dimensional spacetime with a positive vacuum energy and an electromagnetic field (or a higher-dimensional generalization thereof), you will automatically get quantum fluctuations into lower-dimensional spacetimes! If we really believe in extra dimensions, we need to understand how regions with different effective dimensionalities are cosmologically related, and this is a step in that direction.

    Matt Johnson

    Normally I’d blog all about it, but on this occasion we’re outsourcing to a guest blogger. My collaborator Matt Johnson is a postdoc at Caltech, and before that was a grad student at UC Santa Cruz, where he worked with Anthony Aguirre — a previous guest-blogger of ours! We like to keep things in the family.

    —————————————————

    Extra dimensions. Sounds preposterous at first. Well, perhaps more accurately, it sounds preposterous to most people who don’t do high-energy theory. But really, I assure you, there are many well-motivated reasons why we wacky theorists like to ponder the existence of extra dimensions.

    For one, as shown long ago by Kaluza and Klein, it is possible to get Maxwell’s equations of electromagnetism in four dimensions by taking 5-dimensional General Relativity and wrapping one of the spatial dimensions up in a circle too small to see. The smaller the circle is, the harder it is to move in this “other direction,” and so there is no danger in getting lost on the way home. In this way, Maxwell’s equations have an elegant geometrical origin, and gravity and electricity & magnetism are combined into one force (5-dimensional gravity).
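    “Too small to see” can be made quantitative: momentum around a circle of radius R is quantized, so excitations that move in the hidden direction have energies E_n ≈ n·ħc/R, and a tiny R pushes them far beyond experimental reach. A sketch with an illustrative radius (not a value from the paper):

    ```python
    # Kaluza-Klein mass scale for a circle of radius R: E_n ~ n * hbar * c / R.
    hbar_c_eV_m = 1.97327e-7   # hbar * c in eV·meters

    def kk_mass_scale_eV(radius_m, n=1):
        """Energy of the n-th Kaluza-Klein mode, in eV (schematic estimate)."""
        return n * hbar_c_eV_m / radius_m

    # A circle of 10^-18 m, roughly the distance scale probed by colliders:
    E = kk_mass_scale_eV(1e-18)
    print(f"first KK mode: {E / 1e9:.0f} GeV")
    ```

    Shrink the circle further and the KK modes get heavier still, which is why a sufficiently small extra dimension is invisible to every experiment yet performed.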

    Another strong motivation comes from string theory, which is only a consistent quantum theory of gravity if there are 10 or 11 dimensions in total. Again, since we don’t see them, it is necessary to hide the existence of the extra dimensions. Inspired by the fact that it was possible to hide one extra dimension by wrapping it up in a circle, generally the extra 6 or 7 dimensions are thought to be “compactified” into a very small compact geometry like a sphere or a torus.

    At this point, the five-year-old in the audience is insistently asking, “If you have all these extra dimensions, and you are telling me that they are wrapped up into this tiny ball, how did they get wrapped up in the first place? Why are the four dimensions we see so large, and the others so small?”

    After nearly a century of thinking about the existence of extra dimensions, there are surprisingly few plausible answers to this very simple question. One of the few answers was proposed by Brandenberger and Vafa. They studied the thermodynamics of strings in a hot, torus-shaped early universe, and found that, miraculously, it is favorable for only four of the dimensions to become large. Pretty nice, if the universe is a torus and all the dimensions started out small and compact. But, it would be nice to have some alternatives in case this turns out not to be viable.

    Sean Carroll, Lisa Randall, and I recently wrote a paper that revisits the five-year-old’s question. We wanted to start with the very simplest model that has extra dimensions and solutions in which some of them can be compactified. A minimal set of ingredients needed to accomplish this includes 1) D-dimensional gravity, 2) a positive D-dimensional cosmological constant, and 3) a (D-4)-form gauge field (think E&M, but with more indices). This theory has long been known to have solutions where 4 of the dimensions are non-compact and (D-4) of them correspond to directions on a sphere, whose size is stabilized by the energetics of curvature and a background electric or magnetic field.
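    A schematic way to see that stabilization (my sketch, suppressing the numerical coefficients and Weyl-rescaling factors a careful dimensional reduction would include): the four-dimensional effective potential for the radius b of the (D-4)-sphere receives three competing contributions,

    ```latex
    % Schematic radion potential for compactification on a (D-4)-sphere:
    % a constant positive vacuum energy, a negative curvature term, and a
    % positive flux term. c_1, c_2 > 0 are set by the sphere curvature and
    % the conserved flux; numerical factors are suppressed.
    V(b) \sim \Lambda_D \;-\; \frac{c_1}{b^2} \;+\; \frac{c_2}{b^{2(D-4)}}
    ```

    The flux term blows up fastest at small b and resists collapse, while the curvature term pulls the potential down at larger b, so the two can balance at a minimum: a stabilized radius for the extra dimensions.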

    More interestingly, we showed that some of the spacetimes that are solutions to this theory contain a four-dimensional universe that lives behind the event horizon of an extended object, a “p-brane” or “black brane,” that is embedded in a background D-dimensional spacetime. Moreover, there are mechanisms that dynamically give rise to such objects, thanks to the magic of quantum mechanics, and this leads to an explanation for why some number of extra dimensions became compact!

    Sounds complicated, but you can actually go a long way towards understanding what we did by considering plain-old four dimensional black holes.

  • Remembering the Past is Like Imagining the Future

    Because of the growth of entropy, we have a very different epistemic access to the past than to the future. In retrodicting the past, we have recourse to “memories” and “records,” which we can take as mostly-reliable indicators of events that actually happened. But when it comes to the future, the best we can do is extrapolate, without nearly the reliability that we have in reconstructing the past.

    However — the human brain, as most readers of this blog probably know, was not intelligently designed. It doesn’t have the high-level structure of a computer program, where all the processes are carefully planned to achieve some goal. (The lower-level structures share the mechanical features of any other physical system, but that’s of little help here.) Evolution nudges the genome in useful directions, but it can only work with the raw materials it’s given; it doesn’t have the luxury of starting from scratch. So over and over in biological organisms, we find features that were originally developed for one purpose being re-engineered for something else.

    As it turns out, the way that the human brain goes about the task of “remembering the past” is actually very similar to how it goes about “imagining the future.” Deep down, these are activities with very different functions and outcomes — predicting the future is a lot less reliable, for one thing. But in both cases, the brain goes through more or less the same routine.

    mri-schacter.jpg

    That’s what Daniel Schacter at Harvard and his friends have discovered, by doing functional MRI studies of brains subjected to different kinds of cues. (Science News report, Nature review article, Charlie Rose interview.) Subjects are inserted gently into the giant magnetic field, then asked to either conjure up a memory or imagine a future scenario about some particular cue-word. What you see is that the same sites in the brain light up in both cases. The brain on the left in this image is remembering the past — on the right, it’s concocting an imaginary scenario about the future.

    doing_double_duty.jpg

    Further confirmation comes from studies of amnesiacs, who famously can’t remember the past. But if you ask the right questions, you find that they also have significant problems imagining their own future.

    We tend to assume that the brain must be like a computer — when we want to access a memory, we simply pull up a “file” stored somewhere on the brain’s hard drive, and take a look at its contents. But that’s not it at all. Schacter believes that pieces of data relevant to any particular memory — times, images, sounds — are stored piecemeal in different parts of the brain. When we want to “remember” something, another part of the brain assembles these pieces into a (hopefully) coherent picture. It’s like running a new simulation every time you need a memory, and it’s the same thing we do when we try to imagine some event in the future.

    Everyone has heard that memories can be unreliable, but many of us don’t appreciate the extent to which that is true. It’s not the case that “real” memories are stored once and for all deep in the darkest recesses of the brain, and it’s just a matter of digging them up. False memories — conjured from any number of sources, from gradual embellishment to direct suggestion by others — seem precisely as vivid and real to us as accurate memories do. For a good reason: the brain uses the same tools to construct the memory from the available raw materials. A novel and a history book look the same on the printed page.

  • String Wars: The Aftermath

    An interesting short interview with Ed Witten in this week's New Scientist. Mostly straightforward stuff, but it's always good to hear what smart people are thinking. Witten is spending the year on sabbatical at CERN; like many people, he was sort of hoping to be there when the first physics results from the LHC appeared, but reality intervened and that's looking increasingly unlikely. Happily, CERN has developed electronic means of communication whereby interesting findings may be promulgated to researchers who are not within close physical proximity to the lab.

    Longtime CV readers may be interested in Witten’s take on the String Wars:

    The 1980s and 90s were dotted with euphoric claims from string theorists. Then in 2006 Peter Woit of Columbia University in New York and Lee Smolin of the Perimeter Institute for Theoretical Physics in Waterloo, Canada, published popular books taking string theory to task for its lack of testability and its dominance of the job market for physicists. Witten hasn't read either book, and compares the "string wars" surrounding their publication – which played out largely in the media and on blogs – to the fuss caused by the 1995 book The End of Science, which argued that the era of revolutionary scientific discoveries was over. "Neither the publicity surrounding that book nor the fact that people lost interest in talking about it after a while reflected any change in the underlying intellectual climate."

    That sounds about right. For the most part, actual string theorists simply went about their business, trying to figure out what this fascinating but difficult theory really is. The irony is that a major point of the anti-string books was that the public hype concerning string theory didn’t paint an accurate picture of its more problematic features — which was true. But the backlash books gave the public a misleading impression in the other direction, leading to the somewhat amusing appearance of my own piece in New Scientist explaining that the theory was for the most part chugging along as before. Hype cuts in every direction, and it feeds on drama, not on accuracy.

    There is certainly some feeling that the near-term growth area in high-energy theory is not string theory, but phenomenology (or arguably particle astrophysics). Certainly those are the people who seem to be getting the jobs these days. The explanation there is pretty straightforward: data! Or at least the promise thereof. It's hard to do physics with little more than thought experiments to go on, but one makes do when real experiments are scarce. Increasingly, that's no longer the case.

    But it’s been a long time since we’ve had a good string-wars thread, so here you go. For old time’s sake.

  • The Earth’s Elder

    The largest organism on Earth, and probably the oldest multicellular organism, is named Pando. Kind of a cutesy name for such an impressive specimen, don’t you think?

    [Image: 800px-aspenoverview0172.JPG — the Pando aspen grove]

    If you were to meet Pando — which you could easily do, if you paid a visit to Fishlake National Forest in Utah — it would look like a forest of Quaking Aspen trees. But if you happened to be equipped to do DNA testing on plant specimens, you would realize that all of the trees were genetically identical. That's because they're all part of the same tree, sharing a common root system. One tree sprang from a seed long ago and spread out roots; more trees then erupted from those roots, and the process simply continues. Individual "trees" might die, but that's like you or me losing a toenail; Pando lives on. It weighs in at over six million kilograms, and is likely more than 80,000 years old (although it might be much older).

    I have nothing especially profound to say about Pando, I just think it's cool. But when you have arrow-of-time on the brain, everything resonates. Unlike most other multicellular organisms, there's no reason why Pando should ever die, absent dramatic external factors. As long as its environment remains hospitable, Pando could live forever. Single-celled organisms, of course, do this all the time; they split into "children" which are genetically identical (up to mutations), so it's legitimate to say that any given bacterium has lived for many millions of years. The birth/growth/death cycle is not absolutely necessary to the existence of life — it's just useful if a species wants to hedge against the very real possibility that the environment changes dramatically for the worse. Giving birth to children with slightly different genetic makeups — and then getting out of their way, by dying — gives the species a fighting chance to adapt and survive in the face of dramatic changes around it. (Update: some termites have a different strategy.)

    Meanwhile, Pando abides. Good for it.

  • Perceiving Randomness

    The kind way to say it is: “Humans are really good at detecting patterns.” The less kind way is: “Humans are really good at detecting patterns, even when they don’t exist.”

    I’m going to blatantly swipe these two pictures from Peter Coles, but you should read his post for more information. The question is: which of these images represents a collection of points selected randomly from a distribution with uniform probability, and which has correlations between the points? (The relevance of this exercise to cosmologists studying distributions of galaxies should be obvious.)

    [Image: randompoints.gif — two point distributions, left and right panels]

    The points on the right, as you've probably guessed from the setup, are distributed completely randomly. On the left, there are important correlations between them.
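    If you'd like to generate your own version of this exercise, a few lines of Python will do it. (A sketch; the function names and parameter values are mine, not from Coles' post — the correlated set here is built by clustering "children" around random parent centers, one simple way to introduce correlations.)

    ```python
    import random

    def uniform_points(n):
        """Points drawn independently and uniformly -- genuinely random."""
        return [(random.random(), random.random()) for _ in range(n)]

    def clustered_points(n_parents, children_per_parent, spread=0.03):
        """Correlated points: children scattered around random parent centers."""
        pts = []
        for _ in range(n_parents):
            cx, cy = random.random(), random.random()
            for _ in range(children_per_parent):
                pts.append((cx + random.gauss(0, spread),
                            cy + random.gauss(0, spread)))
        return pts
    ```

    Plot the two sets side by side and the clustered one looks "structured" — but the independent one will also show apparent clumps and voids, which is exactly what trips up our pattern detectors.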

    Humans are not very good at generating random sequences; when asked to come up with a “random” sequence of coin flips from their heads, they inevitably include too few long strings of the same outcome. In other words, they think that randomness looks a lot more uniform and structureless than it really does. The flip side is that, when things really are random, they see patterns that aren’t really there. It might be in coin flips or distributions of points, or it might involve the Virgin Mary on a grilled cheese sandwich, or the insistence on assigning blame for random unfortunate events.
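    You can check the claim about long strings with a quick simulation (a sketch; the names are mine). Genuinely random sequences of 100 coin flips typically contain a streak of six or seven identical outcomes in a row — far longer than most people allow when inventing a "random" sequence by hand.

    ```python
    import random

    def longest_run(flips):
        """Length of the longest streak of identical consecutive outcomes."""
        best = cur = 1
        for prev, nxt in zip(flips, flips[1:]):
            cur = cur + 1 if nxt == prev else 1
            best = max(best, cur)
        return best

    def average_longest_run(n_flips=100, trials=2000, seed=0):
        """Monte Carlo estimate of the typical longest streak in n fair flips."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            flips = [rng.random() < 0.5 for _ in range(n_flips)]
            total += longest_run(flips)
        return total / trials
    ```

    Run it and the average longest streak in 100 flips comes out close to seven; a hand-invented "random" sequence rarely has a streak longer than three or four.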

    Bonus link uncovered while doing our characteristic in-depth research for this post: flip ancient coins online!