Category: Science

  • Brilliant!

    Brilliant! New Scientist has asked over 70 of the world’s most brilliant and charismatic and modest scientists to forecast what might be the big breakthroughs in their fields over the next 50 years. Some of the many examples that might be of interest to CV readers:

    • Alex Vilenkin thinks we might find cosmic strings.
    • Gerard ‘t Hooft imagines a deterministic theory that would supersede quantum mechanics.
    • Lisa Randall hopes that the LHC will tell us something about the fundamental nature of spacetime.
    • Edward Witten thinks that string theory will be fertile, and is excited about extra-solar planets.
    • Steven Weinberg would like to see a theory of everything.
    • Max Tegmark will be printing T-shirts emblazoned with the aforementioned TOE.
    • David Deutsch looks forward to working quantum computers.
    • Rocky Kolb and Kip Thorne both predict that we’ll find gravitational waves from inflation.
    • Martin Rees wants to know if there was one Big Bang, or many.
    • Richard Gott imagines a colony on Mars.
    • Lawrence Krauss prevaricates about dark energy.
    • Frank Wilczek actually steps up to the plate, predicting superintelligent computers and abundant solar power.
    • Steven Pinker thinks it’s all just a trick to make him look foolish.

    Hey, wait a minute — even I’m in there! Who knew? Here’s my prognostication:

    The most significant breakthrough in cosmology in the next 50 years will be that we finally understand the big bang.

    In recent years, the big bang model – the idea that our universe has expanded and cooled over billions of years from an initially hot, dense state – has been confirmed and elaborated in spectacular detail. But the big bang itself, the moment of purportedly infinite temperature and density at the very beginning, remains a mystery. On the basis of observational data, we can say with confidence what the universe was doing 1 second later, but our best theories all break down at the actual moment of the bang.

    There is good reason to hope that this will change. The inflationary universe scenario takes us back to a tiny fraction of a second after the bang. To go back further we need to understand quantum gravity, and ideas from string theory are giving us hope that this goal is obtainable. New ways of collecting data about dark matter, dark energy and primordial perturbations allow us to test models of the earliest times. The decades to come might very well be when the human race finally figures out where it all came from.

    [Here you can imagine some suitably aw-shucks paragraph in which I appear to be vaguely embarrassed at all this talk of “brilliance,” which might be appropriate in describing Weinberg and Witten and ‘t Hooft but certainly doesn’t apply to little old me, who would never have made the cut if it weren’t for my blogging hobby, although I’m not quite sure how Max got in there either, and hey, if anyone wants to protest that I certainly do belong, that’s what comment sections are for. Don’t have time to construct it just now, but you know how it would go.]

    Anyone else want to predict what the biggest breakthrough in the next 50 years will be?

  • Dark Energy Has Long Been Dark-Energy-Like

    Thursday (“today,” for most of you) at 1:00 p.m. Eastern, there will be a NASA Media Teleconference to discuss some new observations relevant to the behavior of dark energy at high redshifts (z > 1). Participants will be actual astronomers Adam Riess and Lou Strolger, as well as theorist poseurs Mario Livio and myself. If the press release is to be believed, the whole thing will be available in live audio stream, and some pictures and descriptions will be made public once the telecon starts.

    I’m not supposed to give away what’s going on, and might not have a chance to do an immediate post, but at some point I’ll update this post to explain it. If you read the press release, it says the point is “to announce the discovery that dark energy has been an ever-present constituent of space for most of the universe’s history.” Which means that the dark energy was acting dark-energy-like (a negative equation of state, or very slow evolution of the energy density) even back when the universe was matter-dominated.

    Update: The short version is that Adam Riess and collaborators have used Hubble Space Telescope observations to discover 21 new supernovae, 13 of which are spectroscopically confirmed as Type Ia (the standardizable-candle kind) with redshifts z > 1. Using these, they place new constraints on the evolution of the dark energy density, in particular on the behavior of dark energy during the epoch when the universe was matter-dominated. The result is that the dark energy component seems to have been negative-pressure even back then; more specifically, w(z > 1) = -0.8 (+0.6, -1.0), and w(z > 1) < 0 at 98% confidence.

    [Image: supernovae]

    Longer version: Dark energy, which is apparently about 70% of the energy of the universe (with about 25% dark matter and 5% ordinary matter), is characterized by two features — it’s distributed smoothly throughout space, and maintains nearly-constant density as the universe expands. This latter quality, persistence of the energy density, is sometimes translated as “negative pressure,” since the law of energy conservation relates the rate of change of the energy density to (ρ + p), where ρ is the energy density and p is the pressure. Thus, if p = -ρ, the density is strictly constant; that’s vacuum energy, or the cosmological constant. But it could evolve just a little bit, and we wouldn’t have noticed yet. So we invent an “equation-of-state parameter” w = p/ρ. Then w = -1 implies that the dark energy density is constant; w > -1 implies that the density is decreasing, while w < -1 means that it’s increasing.
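    To make the scaling concrete, here is a toy sketch (my own, not from the post): for constant w, energy conservation, dρ/dt = -3H(ρ + p) with p = wρ, integrates to ρ(a) ∝ a^-3(1+w), so w = -1 gives a strictly constant density while matter (w = 0) dilutes with the volume.

```python
# Toy illustration (not from the post): for constant w, the continuity
# equation d(rho)/dt = -3 H (rho + p) with p = w * rho integrates to
# rho(a) = rho_0 * a**(-3 * (1 + w)).

def rho(a, w, rho0=1.0):
    """Energy density as a function of scale factor, for constant w."""
    return rho0 * a ** (-3.0 * (1.0 + w))

# Double the size of the universe and see what happens to each component.
for label, w in [("matter", 0.0), ("radiation", 1 / 3), ("vacuum energy", -1.0)]:
    print(f"{label:14s} w = {w:+.2f}   rho(a=2)/rho(a=1) = {rho(2.0, w):.4f}")
```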

    In the recent universe, supernova observations convince us that w = -1 ± 0.1; so the density is close to constant. But there are puzzles in the dark-energy game; why is the vacuum energy so small, and why are the densities of matter and dark energy comparable, even though matter evolves noticeably while dark energy is close to constant? So it’s certainly conceivable that the behavior of the dark energy was different in the past — in particular, that the density of what we now know as dark energy used to behave similarly to that of matter, fading away as the universe expanded, and only recently switched over to an appreciably negative value of w.

    These new observations speak against that possibility. They include measurements of supernovae at high redshifts, back when the density of matter was higher than that of dark energy. They then constrain the value of w as it was back then, at redshifts greater than one (when the universe was less than half its current size). And the answer is … the dark energy was still dark-energy-like! That is, it had a negative pressure, and its energy density wasn’t evolving very much. It was in the process of catching up to the matter density, not “tracking” it in some sneaky way.
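    As a rough consistency check (my own numbers, assuming w = -1 and today’s mix of roughly 30% matter and 70% dark energy), matter catches up going back in time as (1+z)^3, so it overtakes dark energy at quite a modest redshift, comfortably below the z > 1 supernovae in the new sample.

```python
# Rough sketch: with today's mix (~30% matter, ~70% dark energy) and a
# constant dark-energy density (w = -1), the matter density grows into the
# past as (1+z)**3.  Find the redshift where the two densities were equal.
omega_m, omega_de = 0.3, 0.7
z_eq = (omega_de / omega_m) ** (1.0 / 3.0) - 1.0
print(f"matter/dark-energy equality at z ~ {z_eq:.2f}")

# At the z > 1 supernovae in the new sample, matter already dominated:
z = 1.0
ratio = omega_m * (1 + z) ** 3 / omega_de
print(f"rho_matter / rho_de at z = 1: {ratio:.1f}")
```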

    Of course, to get such a result requires some assumptions. Riess et al. consider three different “priors” — assumed behaviors for the dark energy. The “weak” prior makes no assumptions at all about what the dark energy was doing at redshifts greater than 1.8, and draws correspondingly weak conclusions. The “strong” prior uses data from the microwave background, along with the assumption (which is really not that strong) that the dark energy wasn’t actually dominating at those very high redshifts. That’s the prior under which the above results were obtained. The “strongest” prior imagines that we can extrapolate the behavior of the equation-of-state parameter linearly back in time — that’s a very strong prior indeed, and probably not realistic.

    So everything is consistent with a perfectly constant vacuum energy. No big surprise, right? But everything about dark energy is a surprise, and we need to constantly be questioning all of our assumptions. The coincidence scandal is a real puzzle, and the idea that dark energy used to behave differently and has changed its nature recently is a perfectly reasonable one. We don’t yet know what the dark energy is or why it has the density it does, but every new piece of information nudges us a bit further down the road to really understanding it.

    Update: The Riess et al. paper is now available as astro-ph/0611572. The link to the data is broken, but I think it means to go here.

  • Toward a Unified Epistemology of the Natural Sciences

    Dr. Free-Ride reminds us of the celebrated free-verse philosophizing of Donald Rumsfeld, from a 2002 Department of Defense news briefing.

    As we know,
    There are known knowns.
    There are things we know we know.

    We also know
    There are known unknowns.
    That is to say
    We know there are some things
    We do not know.

    But there are also unknown unknowns,
    The ones we don’t know
    We don’t know.

    We tease our erstwhile Defense Secretary, but beneath the whimsical parallelisms, the quote actually makes perfect sense. In fact, I’ll be using it in my talk later today on the nature of science. One of the distinguishing features of science, I will argue, is that we pretty much know which knowns are known. That is to say, it’s obviously true that there are plenty of questions to which science does not know the answer, as well as some to which it does. But the nice thing is that we have a pretty good idea of where the boundary is. Where people often go wrong — and I’ll use examples of astrology, Intelligent Design, perpetual-motion machines, and What the Bleep Do We Know? — is in attempting to squeeze remarkable and wholly implausible wonders into the tightly-delimited regimes where science doesn’t yet have it all figured out, or hasn’t done some explicit experiment. (For example, it may be true that we haven’t taken apart and understood your specific perpetual-motion device, but it pretty obviously violates not only conservation of energy, but also Maxwell’s equations and Newton’s laws of motion. We don’t need to spend time worrying about your particular gizmo; we already know it can’t work.)

    Rumsfeld’s comprehensive classification system did, admittedly, leave out the crucial category of unknown knowns — the things you think you know, that aren’t true. Those had something to do with his ultimate downfall.

  • Out-Einsteining Einstein

    Among my recent peregrinations was a jaunt up to Santa Barbara, where I gave two talks in a row (although in different buildings, and to somewhat different audiences). Both were about attempts to weasel out of the need for dark stuff in the universe by trying to modify gravity.

    The first talk, a high-energy theory seminar, was on trying to do away with dark energy by modifying gravity. I used an antiquated technology called “overhead transparencies” to give the talk itself, so there is no electronic record. If I get a chance sometime soon, I’ll post a summary of the different models I talked about.

    The subsequent talk was over at the Kavli Institute for Theoretical Physics. There was a program on gravitational lensing going on, and they had asked Jim Hartle to give an overview of attempts to replace dark matter with modified gravity. Jim decided that he would be happier if I gave the talk, so it was all arranged to happen on a day I’d be visiting SB anyway. (Don’t feel bad for me; it was fun to give the talks, and they took me to a nice dinner afterwards.) I’m not really an expert on theories of gravity that do away with dark matter, but I’ve dabbled here and there, so I was able to put together a respectable colloquium-level talk.

    [Image: MOND slide]

    And here it is. You can see the slides from the talk, as well as hear what I’m saying. I started somewhat lethargically, as it’s hard to switch gears quickly from one talk to another, but we built up some momentum by the end. I started quite broadly with the idea of different “gravitational degrees of freedom,” and worked my way up to Bekenstein’s TeVeS model (a relativistic version of Milgrom’s MOND), explaining the empirical difficulties with clusters of galaxies, the cosmic microwave background, and most recently the Bullet Cluster. We can’t say that the idea is ruled out, but the evidence that dark matter of some sort exists is overwhelming, which removes much of the motivation for modifying gravity.

    The KITP is firmly in the vanguard of putting talks online, both audio/video and reproductions of the slides. By now they have quite the extensive collection of past talks, from technical seminars to informal discussions to public lectures.

    On Friday I’ll be at Villanova, my alma mater, giving a general talk to undergraduates on what science is all about. I’m not sure if it will be recorded, but if the yet-to-be-written slides turn out okay, I’ll put them online.

  • Humankind’s Basic Picture of the Universe

    Scott Aaronson has thrown down a gauntlet by claiming that theoretical computer science, “by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe.” Obviously the truth-value of such a statement will depend on what counts as our “basic picture of the universe,” but Scott was good enough to provide an explanation of the most important things that TCS has taught us, which is quite fascinating. (More here.) Apparently, if super-intelligent aliens landed and were able to pack boxes in our car trunks very efficiently, they could also prove the Riemann hypothesis. Although the car-packing might be more useful.

    There are important issues of empiricism vs. idealism here. The kinds of questions addressed by “theoretical computer science” are in fact logical questions, addressable on the basis of pure mathematics. They are true of any conceivable world, not just the actual world in which we happen to live. What physics teaches us about, on the other hand, are empirical features of the contingent world in which we find ourselves — features that didn’t have to be true a priori. Spacetime didn’t have to be curved, after all; for that matter, the Earth didn’t have to go around the Sun (to the extent that it does). Those are just things that appear to be true of our universe, at least locally.

    But let’s grant the hypothesis that our “picture of the universe” consists both of logical truths and empirical ones. Can we defend the honor of particle physics and cosmology here? What have we really contributed over the last 30 years to our basic picture of the universe? It’s not fair to include great insights that are part of some specific theory, but not yet established as true things about reality — so I wouldn’t include, for example, anomalies canceling in string theory, or the Strominger-Vafa explanation for microstates in black holes, or inflationary cosmology. And I wouldn’t include experimental findings that are important but not quite foundation-shaking — so neutrino masses don’t qualify.

    With these very tough standards, I think there are two achievements that I would put up against anything in terms of contributions to our basic picture of the universe:

    1. An inventory of what the universe is made of. That’s pretty important, no? In units of energy density, it’s about 5% ordinary matter, 25% dark matter, 70% dark energy. We didn’t know that 30 years ago, and now we do. We can’t claim to fully understand it, but the evidence in favor of the basic picture is extremely strong. I’m including within this item things like “it’s been 14 billion years since the Big Bang,” which is pretty important in its own right. I thought of a separate item referring to the need for primordial scale-free perturbations and the growth of structure via gravitational instability — I think that one is arguably at the proper level of importance, but it’s a close call.
    2. The holographic principle. I’m using this as a catch-all for a number of insights, some of which are in the context of string theory, but they are robust enough to be pretty much guaranteed to be part of the final picture whether it involves string theory or not. The germ of the holographic principle is the idea that the number of degrees of freedom inside some region is not proportional to the volume of the region, but rather to the area of its boundary — an insight originally suggested by the behavior of Hawking radiation from black holes. But it goes way beyond that; for example, there can be dualities that establish the equivalence of two different theories defined in different numbers of dimensions (à la AdS/CFT). This establishes once and for all that spacetime is emergent — the underlying notion of a spacetime manifold is not a fundamental feature of reality, but just a good approximation in a certain part of parameter space. People have speculated about this for years, but now it’s actually been established in certain well-defined circumstances.

    A short list, but we have every reason to be proud of it. These are insights, I would wager, that will still be part of our basic picture of reality two hundred years from now. Any other suggestions?

  • After Reading a Child’s Guide to Modern Physics

    Abbas at 3 Quarks reminds us that next year is W.H. Auden’s centenary (and that Britain is curiously unenthusiastic about celebrating the event). The BBC allows you to hear Auden read this poem at a 1965 festival; his father was a physician.

    If all a top physicist knows
    About the Truth be true,
    Then, for all the so-and-so’s,
    Futility and grime,
    Our common world contains,
    We have a better time
    Than the Greater Nebulae do,
    Or the atoms in our brains.

    Marriage is rarely bliss
    But, surely it would be worse
    As particles to pelt
    At thousands of miles per sec
    About a universe
    Wherein a lover’s kiss
    Would either not be felt
    Or break the loved one’s neck.

    Though the face at which I stare
    While shaving it be cruel
    For, year after year, it repels
    An ageing suitor, it has,
    Thank God, sufficient mass
    To be altogether there,
    Not an indeterminate gruel
    Which is partly somewhere else.

    Our eyes prefer to suppose
    That a habitable place
    Has a geocentric view,
    That architects enclose
    A quiet Euclidian space:
    Exploded myths – but who
    Could feel at home astraddle
    An ever expanding saddle?

    This passion of our kind
    For the process of finding out
    Is a fact one can hardly doubt,
    But I would rejoice in it more
    If I knew more clearly what
    We wanted the knowledge for,
    Felt certain still that the mind
    Is free to know or not.

    It has chosen once, it seems,
    And whether our concern
    For magnitude’s extremes
    Really become a creature
    Who comes in a median size,
    Or politicizing Nature
    Be altogether wise,
    Is something we shall learn.

    Ol’ Wystan is right; we do have a better time than most of the universe. It would be no fun to constantly worry that “a lover’s kiss / Would either not be felt / Or break the loved one’s neck.” And in a sense, it’s surprising (one might almost say unnatural) that our local conditions allow for the build-up of the delicate complexity necessary to nurture passion and poetry among us creatures of median size.

    In most physical systems, we can get a pretty good idea of the relevant scales of length and time just by using dimensional analysis. If you have some fundamental timescale governing the behavior of a system, you naturally expect most processes characteristic of that system to happen on approximately that timescale, give or take an order of magnitude here or there. But our universe doesn’t work that way at all — there are dramatic balancing acts that stretch the relevant timescales far past their natural values. In the absence of any fine-tunings, the relevant timescale for the universe would be the Planck time, 10^-44 seconds, whereas the actual age of the universe is more like 10^18 seconds. This is actually two problems in one: why doesn’t the vacuum energy rapidly dominate over the energy density in matter and radiation — the cosmological constant problem — and, imagining that we’ve solved that one, why doesn’t spatial curvature dominate over all the energy density — the flatness problem. It would be much more “natural,” in other words, to live in either a cold and empty universe, or one that recollapsed in a jiffy.

    But given that the universe does linger around, it’s still a surprise that the matter within it exhibits interesting dynamics on timescales much longer than the Planck time. A human lifespan, for example, is about 10^9 seconds. The human/Planck hierarchy actually owes its existence to a multi-layered series of hierarchies. First, the characteristic energy scale of particle physics is set by electroweak symmetry breaking to be about 10^11 electron volts, far below the Planck energy at 10^27 electron volts. (That’s known to particle physicists as “the” hierarchy problem.) And then the mass of the electron (m_e ~ 5 × 10^5 electron volts) is smaller than it really should be, as it is suppressed with respect to the electroweak scale by a Yukawa coupling of about 10^-6. But then the weakness of the electromagnetic interaction, as manifested in the small value of the fine-structure constant α = 1/137, implies that the Rydberg (which sets the scales for atomic physics) is even lower than that:

    Ry ~ α^2 m_e ~ 10 electron volts.

    This energy corresponds to timescales (by inserting appropriate factors of Planck’s constant and the speed of light) of a few times 10^-17 seconds; much longer than the Planck time, but still much shorter than a human lifetime. The cascade of hierarchies continues; molecular binding energies are typically much smaller than a Rydberg, the timescales characteristic of mesoscopic collections of slowly-moving molecules are correspondingly longer still, etc.
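    As a back-of-the-envelope sketch (mine, not the post’s), each energy scale E in this cascade corresponds to a timescale t ~ ħ/E:

```python
# Back-of-the-envelope: convert each energy scale in the cascade to its
# characteristic timescale via t ~ hbar / E.  Order-of-magnitude only.
HBAR_EV_S = 6.582e-16  # hbar in eV·s

scales_ev = {
    "Planck scale":      1e27,   # ~10^18 GeV, the value used in the text
    "electroweak scale": 1e11,   # ~100 GeV
    "electron mass":     5e5,
    "Rydberg":           10.0,
}

for name, e in scales_ev.items():
    print(f"{name:18s} E ~ {e:.0e} eV  ->  t ~ {HBAR_EV_S / e:.1e} s")
```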

    Because we don’t yet fully understand the origin of these fantastic hierarchies, we can conclude that God exists. Okay, no we can’t. Really we can conclude that we live in a multiverse in which all of the constants of nature take on different values in different places. Okay, we can’t actually conclude that either. What we can do is keep thinking about it, not jumping to too many conclusions while we try to fill one of those pesky “gaps” in our understanding that people like to insist must be evidence for their personal favorite story of reality.

    But “politicizing Nature,” now that’s just bad. Not altogether wise at all.

  • Near and Far

    Confession time: I didn’t successfully figure out what these two crescents were, until I read the explanation at Astronomy Picture of the Day. Can you?

    [Image: Crescents]


  • Reconstructing Inflation

    All sorts of responsibilities have been sadly neglected, as I’ve been zooming around the continent — stops in Illinois, Arizona, New York, Ontario, New York again, and next Tennessee, all within a matter of two weeks. How is one to blog under such trying conditions? (Airplanes and laptops are involved, if you must know.)

    But the good news is that I’ve been listening to some very interesting physics talks, the kind that actually put ideas into your head and set off long and convoluted bouts of thinking. Possibly conducive to blogging, but only if one pauses for a moment to stop thinking and actually write something. Which is probably a good idea in its own right.

    One of the talks was a tag-team performance by Dick Bond and Lev Kofman, both cosmologists at the Canadian Institute for Theoretical Astrophysics at the University of Toronto. The talk was part of a brief workshop at the Perimeter Institute on “Strings, Inflation, and Cosmology.” It was just the right kind of meeting, with only about twenty people, fairly narrowly focused on an area of common interest (although the talks themselves spanned quite a range, from a typically imaginative proposal by Gia Dvali about quantum hair on black holes to a detailed discussion of density fluctuations in inflation by Alan Guth).

    Dick and Lev were interested in what we should expect inflationary models to predict, and what data might ultimately teach us about the inflationary era. The primary observables connected with inflation are primordial perturbations — the tiny deviations from a perfectly smooth universe that were imprinted at early times. These deviations come in two forms: “scalar” perturbations, which are fluctuations in the energy density from place to place, and which eventually grow via gravitational instability into galaxies and clusters; and the “tensor” perturbations in the curvature of spacetime itself, which are just long-wavelength gravitational waves. Both arise from the zero-point vacuum fluctuations of quantum fields in the very early universe — for scalar fluctuations, the relevant field is the “inflaton” φ that actually drives inflation, while for tensor fluctuations it’s the spacetime metric itself.

    The same basic mechanism works in both cases — quantum fluctuations (due ultimately to Heisenberg’s uncertainty principle) at very small wavelengths are amplified by the process of inflation to macroscopic scales, where they are temporarily frozen-in until the expansion of the universe relaxes sufficiently to allow them to dynamically evolve. But there is a crucial distinction when it comes to the amount of such fluctuations that we would ultimately see. In the case of gravity waves, the field we hope to observe is precisely the one that was doing the fluctuating early on; the amplitude of such fluctuation is related directly to the rate of inflation when they were created, which is in turn related to the energy density, which is given simply by the potential energy V(φ) of the scalar field. But scalar perturbations arise from quantum fluctuations in φ, and we aren’t going to be observing φ directly; instead, we observe perturbations in the energy density ρ. A fluctuation in φ leads to a different value of the potential V(φ), and consequently the energy density; the perturbation in ρ therefore depends on the slope of the potential, V’ = dV/dφ, as well as the potential itself. Once one cranks through the calculation, we find (somewhat counterintuitively) that a smaller slope yields a larger density perturbation. Long story short, the amplitude of tensor perturbations looks like

    T^2 ~ V,

    while that of the scalar perturbations looks like

    S^2 ~ V^3 / (V’)^2.

    Of course, such fluctuations are generated at every scale; for any particular wavelength, you are supposed to evaluate these quantities at the moment when the mode is stretched to be larger than the Hubble radius during inflation.

    To date, we are quite sure that we have detected the influence of scalar perturbations; they are responsible for most, if not all, of the temperature fluctuations we observe in the Cosmic Microwave Background. We’re still looking for the gravity-wave/tensor perturbations. It may someday be possible to detect them directly as gravitational waves, with an ultra-sensitive dedicated satellite; at the moment, though, that’s still pie-in-the-sky (as it were). More optimistically, the stretching caused by the gravity waves can leave a distinctive imprint on the polarization of the CMB — in particular, in the type of polarization known as the B-modes. These haven’t been detected yet, but we’re trying.

    Problem is, even if the tensor modes are there, they are probably quite tiny. Whether or not they are substantial enough to produce observable B-mode polarization in the CMB is a huge question, and one that theorists are presently unable to answer with any confidence. (See papers by Lyth and Knox and Song on some of the difficulties.) It’s important to get our theoretical expectations straight, if we’re going to encourage observers to spend millions of dollars and years of their time building satellites to go look for the tensor modes. (Which we are.)

    So Dick and Lev have been trying to figure out what we should expect in a fairly model-independent way, given our meager knowledge of what was going on during inflation. They’ve come up with a class of models and possible behaviors for the scalar and tensor modes as a function of wavelength, and asked which of them could fit the data as we presently understand it, and then what they would predict for future experiments. And they hit upon something interesting. There is a well-known puzzle in the anisotropies of the CMB: on very large angular scales (small l, in the graph below), the observed anisotropy is much smaller than we expect. The red line is the prediction of the standard cosmology, and the data come from the WMAP satellite. (The gray error bars arise from the fact that there are only a finite number of observations of each mode at large scales, while the predictions are purely statistical — a phenomenon known as “cosmic variance.”)
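    For the curious, the quoted cosmic variance is a standard result (not specific to this post): each multipole l offers only 2l + 1 independent modes on our sky, so the fractional uncertainty on the power C_l is √(2/(2l+1)), which is largest precisely at the low-l end of the plot.

```python
# Standard cosmic-variance estimate: with only 2l+1 independent modes per
# multipole l, the fractional uncertainty on C_l is sqrt(2 / (2l + 1)).
import math

def cosmic_variance(l):
    return math.sqrt(2.0 / (2 * l + 1))

for l in (2, 10, 100, 1000):
    print(f"l = {l:4d}: fractional cosmic variance ~ {cosmic_variance(l):.3f}")
```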

    [Image: WMAP CMB power spectrum]

    It’s hard to tell how seriously we should take that little glitch, especially since it is at one end of what we can observe. But the computers don’t care, so when Dick and Lev fit models to the data, the models like to do their best to fit that point. If you have a perfectly flat primordial spectrum, or even one that is tilted but still a straight line, there’s not much you can do to fit it. But if you allow some more interesting behavior for the inflaton field, you have a chance.

    Let’s ask ourselves, what would it take for the inflaton to be generating smaller perturbations at earlier times? (Larger wavelengths are produced earlier, as they are the first to get stretched outside the Hubble radius during inflation.) We expect the value of the inflaton potential V to monotonically decrease during inflation, as the scalar field rolls down. So, from the second equation above, the only way to get a smaller scalar amplitude S at early times is to have a substantially larger value of the slope V’. So the inflaton potential might look something like this.

    [Image: Inflaton potential]

    Maybe it’s a little contrived, but it seems to fit the data, and that’s always nice. And the good news is that a large slope at early times implies that the actual value of the potential V was also large at early times (because the field was higher up a steep slope). Which means, from the equation for T above, that we expect (relatively) large tensor modes at large scales! Which in turn is exactly where we have some hope to look for them.
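    A toy version of that argument (invented numbers, purely illustrative): plug a steep early segment and a flat late segment into T^2 ~ V and S^2 ~ V^3/(V’)^2, and the early modes come out with suppressed scalar power and enhanced tensor power.

```python
# Toy numbers only: compare the amplitudes T^2 ~ V and S^2 ~ V^3 / V'^2 at
# an "early" field value (high on a steep part of the potential) and a
# "late" one (lower down, on a flatter part).  Units are arbitrary.
def amplitudes(V, Vprime):
    """Tensor and scalar amplitudes in the slow-roll estimates above."""
    return V, V ** 3 / Vprime ** 2

T2_early, S2_early = amplitudes(V=10.0, Vprime=20.0)  # high and steep
T2_late,  S2_late  = amplitudes(V=5.0,  Vprime=0.5)   # lower and flat

print(f"early: T^2 ~ {T2_early}, S^2 ~ {S2_early}")
print(f"late:  T^2 ~ {T2_late},  S^2 ~ {S2_late}")
```

So a steeper slope at early times does double duty: it suppresses the scalar amplitude (helping with the low-l glitch) while the larger V boosts the tensors.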

    This is all a hand-waving reconstruction of the talk that Dick and Lev gave, which involved a lot more equations and Monte Carlo simulations. The real lesson, to me, is that we are still a long way from having a solid handle on what to expect in terms of the inflationary perturbations, and shouldn’t fool ourselves into thinking that our momentary theoretical prejudices are accurate reflections of the true space of possibilities. If it’s true that we have a decent shot at detecting the tensor modes at large scales, it would represent an incredible triumph of our ability to extend our thinking about the universe back to its earliest moments.

  • Physics Antiques Roadshow

    Liveblogging here from the Fall Meeting of the Illinois and Iowa Sections of the American Association of Physics Teachers. The attendees are mostly high-school physics teachers, some from local colleges. Later tonight I’ll be giving a talk, but I can’t resist telling you about the delightful session we just had — WITHIT, or “What in the Heck is This?”

    What is this? High-school science teachers live in a very different world than professional researchers. Typically a “department” is only one person, and when it comes to resources one has to be a little creative. So it’s quite common (I’ve just learned) for a newly hired teacher to be presented with a storeroom full of stuff that their predecessors had acquired one way or another. And this stuff doesn’t always come nicely packaged with detailed instructions and lesson plans.

    Sometimes, indeed, it’s hard to figure out what the stuff is! So here at the FM of the IIS of the AAPT, people have been bringing in pieces of apparatus that have been lying around for decades and have become unmoored from their original purposes. They then show the wayward equipment to their assembled colleagues, and ask for help figuring out what the heck this thing is supposed to be. So far we’ve had experiments to measure kinetic energy, X-ray tubes, and an inverse-square-law apparatus.

    I see great TV-show possibilities here. (After only one month of living in LA!) Could you imagine the tension as a bedraggled but hopeful physics teacher is told that their gizmo is an original Leonardo?

  • Is That a Particle Accelerator in Your Pocket, Or Are You Just Happy to See Me?

    The Large Hadron Collider accelerates protons to an energy of 7000 GeV, which is pretty impressive. (A GeV is a billion electron volts; the energy in a single proton at rest, using E = mc^2, is about 1 GeV.) But it requires a 27-kilometer ring, and the cost is measured in billions of dollars. The next planned accelerator is the International Linear Collider (ILC), which will be similarly grand in size and cost. People have worried, not without reason, that the end is in sight for experimental particle physics at the energy frontier, as it becomes prohibitively expensive to build new machines.

    That’s why it’s great news that scientists from Lawrence Berkeley Labs and Oxford have managed to accelerate electrons to 1 GeV (via Entropy Bound). What’s that you say? 1 GeV seems tiny compared to 7000 GeV? Yes, but these electrons were accelerated over a distance of just 3.3 centimeters, using laser wakefield technology. You can do the math: if you could simply scale things up (in reality it’s not so easy, of course), you could reach 10,000 GeV in a distance of a few hundred meters.
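    Spelling out that scaling estimate (same numbers as in the paragraph; chaining real wakefield stages is far harder than this arithmetic suggests):

```python
# Naive scaling of the wakefield result: 1 GeV gained over 3.3 cm, then
# extrapolated linearly to a 10 TeV machine.
energy_gain_gev = 1.0
length_m = 0.033                         # 3.3 cm
gradient = energy_gain_gev / length_m    # ~30 GeV per meter

target_gev = 10_000.0
needed_m = target_gev / gradient
print(f"gradient ~ {gradient:.0f} GeV/m; 10,000 GeV would need ~ {needed_m:.0f} m")
```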

    The LHC and the ILC won’t be the end of particle physics. Even the Planck scale, 10^18 GeV, isn’t all that big. In terms of mass-energy, it’s only one millionth of a gram. The kinetic energy of a fast car is of order 10^16 GeV, close to the traditional grand-unification scale. (Why? Kinetic energy is mv^2/2, but let’s ignore factors of order unity. The speed of light is c = 200,000 miles/sec = 7×10^8 miles/hour. So a car going 70 miles/hour is moving at 10^-7 the speed of light. The mass of a car is about one metric ton, which is 1000 kg, which is 10^6 grams, and one gram is 10^24 GeV. So a car is 10^30 GeV. [Or you could just happen to know how many nucleons/car.] So the kinetic energy is that mass times the velocity squared, which is 10^30 × (10^-7)^2 GeV = 10^16 GeV.)
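    The same estimate in code, keeping just the powers of ten from the paragraph above:

```python
# Redoing the car arithmetic from the post, powers of ten only:
mass_car_g = 1e6            # one metric ton in grams
gev_per_gram = 1e24         # rest energy of one gram, via E = m c^2
mass_car_gev = mass_car_g * gev_per_gram          # ~1e30 GeV

v_over_c = 1e-7             # ~70 mph as a fraction of the speed of light
kinetic_gev = mass_car_gev * v_over_c ** 2        # dropping the factor of 1/2
print(f"kinetic energy of a fast car ~ {kinetic_gev:.0e} GeV")
```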

    The trick, of course, is getting all this energy into a single particle, but that’s a technology problem. We’ll get there.