You Can’t Derive Ought from Is

(Cross-posted at NPR’s 13.7: Cosmos and Culture.)

Remember when, inspired by Sam Harris’s TED talk, we debated whether you could derive “ought” (morality) from “is” (science)? That was fun. But both my original post and the followup were more or less dashed off, and I never did give a careful explanation of why I didn’t think it was possible. So once more into the breach, what do you say? (See also Harris’s response, and his FAQ. On the other side, see Fionn’s comment at Project Reason, Jim at Apple Eaters, and Joshua Rosenau.)

I’m going to give the basic argument first, then litter the bottom of the post with various disclaimers and elaborations. And I want to start with a hopefully non-controversial statement about what science is. Namely: science deals with empirical reality — with what happens in the world. (I.e. what “is.”) Two scientific theories may disagree in some way — “the observable universe began in a hot, dense state about 14 billion years ago” vs. “the universe has always existed at more or less the present temperature and density.” Whenever that happens, we can always imagine some sort of experiment or observation that would let us decide which one is right. The observation might be difficult or even impossible to carry out, but we can always imagine what it would entail. (Statements about the contents of the Great Library of Alexandria are perfectly empirical, even if we can’t actually go back in time to look at them.) If you have a dispute that cannot in principle be decided by recourse to observable facts about the world, your dispute is not one of science.

With that in mind, let’s think about morality. What would it mean to have a science of morality? I think it would have to look something like this:

Human beings seek to maximize something we choose to call “well-being” (although it might be called “utility” or “happiness” or “flourishing” or something else). The amount of well-being in a single person is a function of what is happening in that person’s brain, or at least in their body as a whole. That function can in principle be empirically measured. The total amount of well-being is a function of what happens in all of the human brains in the world, which again can in principle be measured. The job of morality is to specify what that function is, measure it, and derive conditions in the world under which it is maximized.
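
To make the structure of this program explicit, here is one way to write it down. The notation is mine, offered purely as shorthand; nothing here is Harris’s own formalism, and every symbol quietly encodes an assumption.

```latex
% Let b_i(s) be the physical (brain/body) state of person i when the
% world is in achievable state s. The program posits:
\[
  W_i(s) = f\bigl(b_i(s)\bigr), \qquad
  W_{\mathrm{tot}}(s) = F\bigl(W_1(s), \dots, W_N(s)\bigr), \qquad
  s^{*} = \operatorname*{arg\,max}_{s}\, W_{\mathrm{tot}}(s),
\]
% where f measures an individual's well-being, F aggregates it over all
% N individuals, and s^* is the morally best achievable world.
```

The three arguments below amount to saying that there is no empirical way to fix f (argument 1), no empirical reason the problem must take this maximizing form at all (argument 2), and no empirical way to fix F (argument 3).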

All this talk of maximizing functions isn’t meant to lampoon the project of grounding morality on science; it’s simply taking it seriously. Casting morality as a maximization problem might seem overly restrictive at first glance, but the procedure can potentially account for a wide variety of approaches. A libertarian might want to maximize a feeling of personal freedom, while a traditional utilitarian might want to maximize some version of happiness. The point is simply that the goal of morality should be to create certain conditions that are, in principle, directly measurable by empirical means. (If that’s not the point, it’s not science.)

Nevertheless, I want to argue that this program is simply not possible. I’m not saying it would be difficult — I’m saying it’s impossible in principle. Morality is not part of science, however much we would like it to be. There are a large number of arguments one could advance in support of this claim, but I’ll stick to three.

1. There’s no single definition of well-being.

People disagree about what really constitutes “well-being” (or whatever it is you think they should be maximizing). This is so perfectly obvious that it’s hard to know how to argue for it. Anyone who wants to argue that we can ground morality on a scientific basis has to jump through some hoops to deal with that disagreement.

First, there are people who aren’t that interested in universal well-being at all. There are serial killers, and sociopaths, and racial supremacists. We don’t need to go to extremes, but the extremes certainly exist. The natural response is simply to separate out such people; “we need not worry about them,” in Harris’s formulation. Surely all right-thinking people agree on the primacy of well-being. But how do we draw the line between right-thinkers and the rest? Where precisely does it fall, in terms of measurable quantities, and why there? On which side do we place people who believe that it’s right to torture prisoners for the greater good, or who cherish the rituals of fraternity hazing? Most importantly, what experiment can we imagine doing that tells us where to draw the line?

More importantly, it’s equally obvious that even right-thinking people don’t really agree about well-being, or how to maximize it. Here, the response is apparently that most people are simply confused (which is on the face of it perfectly plausible). Deep down they all want the same thing, but they misunderstand how to get there; hippies who believe in giving peace a chance and stern parents who believe in corporal punishment for their kids all want to maximize human flourishing, they simply haven’t been given the proper scientific resources for attaining that goal.

While I’m happy to admit that people are morally confused, I see no evidence whatsoever that they all ultimately want the same thing. The position doesn’t even seem coherent. Is it a priori necessary that people ultimately have the same idea about human well-being, or is it a contingent truth about actual human beings? Can we not even imagine people with fundamentally incompatible views of the good? (I think I can.) And if we can, what is the reason for the cosmic accident that we all happen to agree? And if that happy cosmic accident exists, it’s still merely an empirical fact; by itself, the existence of universal agreement on what is good doesn’t necessarily imply that it is good. We could all be mistaken, after all.

In the real world, right-thinking people have a lot of overlap in how they think of well-being. But the overlap isn’t exact, nor is the lack of agreement wholly a matter of misunderstanding. When two people have different views about what constitutes real well-being, there is no experiment we can imagine doing that would prove one of them to be wrong. It doesn’t mean that moral conversation is impossible, just that it’s not science.

2. It’s not self-evident that maximizing well-being, however defined, is the proper goal of morality.

Maximizing a hypothetical well-being function is an effective way of thinking about many possible approaches to morality. But not every possible approach. In particular, it’s a manifestly consequentialist idea — what matters is the outcome, in terms of particular mental states of conscious beings. There are certainly non-consequentialist ways of approaching morality; in deontological theories, the moral good inheres in actions themselves, not in their ultimate consequences. Now, you may think that you have good arguments in favor of consequentialism. But are those truly empirical arguments? You’re going to get bored of me asking this, but: what is the experiment I could do that would distinguish which was true, consequentialism or deontological ethics?

The emphasis on the mental states of conscious beings, while seemingly natural, opens up many cans of worms that moral philosophers have tussled with for centuries. Imagine that we are able to quantify precisely some particular mental state that corresponds to a high level of well-being; the exact configuration of neuronal activity in which someone is healthy, in love, and enjoying a hot-fudge sundae. Clearly achieving such a state is a moral good. Now imagine that we achieve it by drugging a person so that they are unconscious, and then manipulating their central nervous system at a neuron-by-neuron level, until they share exactly the mental state of the conscious person in those conditions. Is that an equal moral good to the conditions in which they actually are healthy and in love etc.? If we make everyone happy by means of drugs or hypnosis or direct electronic stimulation of their pleasure centers, have we achieved moral perfection? If not, then clearly our definition of “well-being” is not simply a function of conscious mental states. And in that case, what is it?

3. There’s no simple way to aggregate well-being over different individuals.

The big problems of morality, to state the obvious, come about because the interests of different individuals come into conflict. Even if we somehow agreed perfectly on what constituted the well-being of a single individual — or, more properly, even if we somehow “objectively measured” well-being, whatever that is supposed to mean — it would generically be the case that no achievable configuration of the world provided perfect happiness for everyone. People will typically have to sacrifice for the good of others; by paying taxes, if nothing else.

So how are we to decide how to balance one person’s well-being against another’s? To do this scientifically, we need to be able to make sense of statements like “this person’s well-being is precisely 0.762 times the well-being of that person.” What is that supposed to mean? Do we measure well-being on a linear scale, or is it logarithmic? Do we simply add up the well-beings of every individual person, or do we take the average? And would that be the arithmetic mean, or the geometric mean? Do more individuals with equal well-being each mean greater well-being overall? Who counts as an individual? Do embryos? What about dolphins? Artificially intelligent robots?

These may sound like silly questions, but they’re necessary ones if we’re supposed to take morality-as-science seriously. The easy questions of morality are easy, at least among groups of people who start from similar moral grounds; but it’s the hard ones that matter. This isn’t a matter of principle vs. practice; these questions don’t have single correct answers, even in principle. If there is no way in principle to calculate precisely how much well-being one person should be expected to sacrifice for the greater well-being of the community, then what you’re doing isn’t science. And if you do come up with an algorithm, and I come up with a slightly different one, what’s the experiment we’re going to do to decide which of our aggregate well-being functions correctly describes the world? That’s the real question for attempts to found morality on science, but it’s an utterly rhetorical one; there are no such experiments.
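
To make vivid how much hangs on the choice of aggregation rule, here is a toy sketch. The well-being scores are invented, and the premise that well-being is a single measurable number is granted purely for the sake of argument:

```python
# Different aggregation rules disagree about which world is "better."
# Scores are invented numbers on an arbitrary scale.
from statistics import mean, geometric_mean

RULES = [("total", sum), ("arithmetic mean", mean), ("geometric mean", geometric_mean)]

def compare(world_a, world_b):
    for name, rule in RULES:
        a, b = rule(world_a), rule(world_b)
        print(f"  {name:>15}: {a:6.2f} vs {b:6.2f} -> prefers {'A' if a > b else 'B'}")

# A: three people thrive while one suffers; B: everyone is middling.
print("Equal populations:")
compare([9, 9, 9, 1], [6.5, 6.5, 6.5, 6.5])
# total and arithmetic mean prefer A; the geometric mean prefers B.

# A: a small, very happy population; B: a larger, moderately happy one.
print("Unequal populations:")
compare([8, 8], [6, 6, 6, 6])
# total prefers B; both means prefer A.
```

Every rule in that list is internally consistent, and no observation of the world tells us which one is the “correct” measure of aggregate well-being. That is exactly the problem.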

Those are my personal reasons for thinking that you can’t derive ought from is. The perceptive reader will notice that it’s really just one reason over and over again — there is no way to answer moral questions by doing experiments, even in principle.

Now to the disclaimers. They’re especially necessary because I suspect there’s no practical difference between the way that people on either side of this debate actually think about morality. The disagreement is all about deep philosophical foundations. Indeed, as I said in my first post, the whole debate is somewhat distressing, as we could be engaged in an interesting and fruitful discussion about how scientific methods could help us with our moral judgments, if we hadn’t been distracted by the misguided attempt to found moral judgments on science. It’s a subtle distinction, but this is a subtle game.

First: it would be wonderful if it were true. I’m not opposed to founding morality on science as a matter of personal preference; I mean, how awesome would that be? Opening up an entirely new area of scientific endeavor in the cause of making the world a better place. I’d be all for that. Of course, that’s one reason to be especially skeptical of the idea; we should always subject those claims that we want to be true to the highest standards of scrutiny. In this case, I think it falls far short.

Second: science will play a crucial role in understanding morality. The reality is that many of us do share some broad-brush ideas about what constitutes the good, and how to go about achieving it. The idea that we need to think hard about what that means, and in particular how it relates to the extraordinarily promising field of neuroscience, is absolutely correct. But it’s a role, not a foundation. Those of us who deny that you can derive “ought” from “is” aren’t anti-science; we just want to take science seriously, and not bend its definition beyond all recognition.

Third: morality is still possible. Some of the motivation for trying to ground morality on science seems to be the old canard about moral relativism: “If moral judgments aren’t objective, you can’t condemn Hitler or the Taliban!” Ironically, this is something of a holdover from a pre-scientific worldview, when religion was typically used as a basis for morality. The idea is that a moral judgment simply doesn’t exist unless it’s somehow grounded in something out there, either in the natural world or a supernatural world. But that’s simply not right. In the real world, we have moral feelings, and we try to make sense of them. They might not be “true” or “false” in the sense that scientific theories are true or false, but we have them. If there’s someone who doesn’t share them (and there is!), we can’t convince them that they are wrong by doing an experiment. But we can talk to them and try to find points of agreement and consensus, and act accordingly. Moral relativism doesn’t imply moral quietism. And even if it did (it doesn’t), that wouldn’t affect whether or not it was true.

And finally: pointing out that people disagree about morality is not analogous to the fact that some people are radical epistemic skeptics who don’t agree with ordinary science. That’s mixing levels of description. It is true that the tools of science cannot be used to change the mind of a committed solipsist who believes they are a brain in a vat, manipulated by an evil demon; yet, those of us who accept the presuppositions of empirical science are able to make progress. But here we are concerned only with people who have agreed to buy into all the epistemic assumptions of reality-based science — they still disagree about morality. That’s the problem. If the project of deriving ought from is were realistic, disagreements about morality would be precisely analogous to disagreements about the state of the universe fourteen billion years ago. There would be things we could imagine observing about the universe that would enable us to decide which position was right. But as far as morality is concerned, there aren’t.

All this debate is going to seem enormously boring to many people, especially as the ultimate pragmatic difference seems to be found entirely in people’s internal justifications for the moral stances they end up defending, rather than what those stances actually are. Hopefully those people haven’t read nearly this far. To the rest of us, it’s a crucially important issue; justifications matter! But at least we can agree that the discussion is well worth having. And it’s sure to continue.

83 thoughts on “You Can’t Derive Ought from Is”

  1. I actually disagree with the original definition of morality, that it is *maximizing* the sum of well-being of everyone. Surely there are other functions we could consider, such as *maximin*: maximizing the minimum well-being of anyone. Otherwise I think you can always construct examples where, for example, murder is justified to benefit some group (see the sketch below).
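
    A quick sketch of the kind of construction I have in mind; the scores are invented and purely illustrative:

    ```python
    # Sum-maximization can endorse destroying one person to enrich the
    # rest; maximin cannot. Well-being scores are hypothetical.
    status_quo = [5, 5, 5, 5]   # everyone at a modest level
    sacrifice  = [9, 9, 9, 0]   # one person sacrificed, the rest enriched

    print(sum(sacrifice) > sum(status_quo))   # True: the sum rule prefers the sacrifice
    print(min(sacrifice) > min(status_quo))   # False: maximin prefers the status quo
    ```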

  2. MedallionOfFerret

    Good post, Sean. I came to CV for science; I get a bonus like this post every so often. Keep it up.

  3. This is nice; that first “Is/Ought” post got me thinking on the subject (the first time I had ever heard of the is/ought conundrum, in fact), and now…back we come again. I’d argue that deriving the Ought from the Is isn’t necessary, because the ought already is; or rather, there are a whole bevy of oughts running around, in the form of everyone’s individual ideas of what should be. I don’t quite see why one should have to worry about whether or not they’re “true” (whatever that means, in this context), because regardless of that they exist. It seems sensible to follow along with those oughts, and to do one’s best to see that what oughts one encounters, or can deduce to exist in other people, are followed through with, because, again, trying to figure out what context they should be true in seems difficult, impossible, or nonsensical. They exist, and resisting them or ignoring them is even more pointless (from a purely nihilistic point of view, mind) than following them, so…hey, why not?

  4. William Sidell

    What is existence?

    Existence is what is and is defined by the individual.

    What is science?

    Science is the attempt by an ‘exister’ to understand the existence that he believes he occupies, using certain tools.

    What is a tool?

    A tool is an instrument that controls the way an ‘exister’ perceives his existence (whether or not what he perceives is actually true.)

    Why is this of consequence?

    An ‘exister’ defines his existence.
    Existence is defined by science.
    Science is defined by tools.
    Perception is the resultant of tools.
    Tools define what is for an ‘exister’.
    Ought is a subset of is.

  5. Much of the “is/ought” debate can be simplified by admitting (or submitting) that “morality” is not a form of knowledge. There may be empirical things we can know or learn about morality, which might tell us what morality “is,” but that exercise does not inform actual judgments; knowing what a judgment is does not crank out judgments themselves. Hume’s separation of is from ought was brilliant but sometimes we don’t get the upshot: judging is a process informed by something other than facts (there are no moral facts). This doesn’t render judgment (or ethics or morality) impossible. It just makes judgment a product of something other than science.

  6. Human Flourishing sounds like some nasty process in an episode of Dr Who :-D.
    On the other hand, Flourishing could be a stimulating form of BDSM.

  7. Sean, you have made a few mistakes.

    “There’s no single definition of well-being”.

    How is this a problem *in principle*? As Sam has pointed out, there is no single definition of ‘health’, but that hasn’t been a problem for medical science. And even if a single definition were needed for Sam’s case, the non-existence of this definition is a problem *in practice*, not in principle. We can easily imagine that the day may come when we will settle on a definition of ‘well-being’.

    “…what is the experiment I could do that would distinguish which was true, consequentialism or deontological ethics?”

    This is a strawman. Sam has repeatedly said that when he uses the word ‘science’, he uses it in a broad sense, encompassing all fields of rational inquiry (e.g., philosophy, history, mathematics, and science). There may not be an *empirical* test to distinguish among rival moral theories, but there may be *conceptual* or *philosophical* arguments that could do so.

    “There’s no simple way to aggregate well-being over different individuals.”

    Again, this may be a problem in practice, but how is it a problem in principle? Sam’s argument does not hinge on whether we could currently aggregate well-being across individuals. Even if different people experience well-being in different ways, that’s not a problem. Different people enjoy different types of music. To place all individuals in a state of musical enjoyment would not be to play one type of music to all of them. Rather, it would be to play to each individual the type of music that he or she favors. Similarly, we could in principle arrange our societies so as to cater to the different well-being requirements of different people.

    “…it would generically be the case that no achievable configuration of the world provided perfect happiness for everyone”

    This is another strawman. Sam has never claimed that his argument hinges on the possibility of providing perfect happiness for everyone. Sam has explicitly said that he conceives of a moral landscape with different peaks and troughs. The peaks do not represent *perfect* happiness for *everyone*. Rather, the peaks represent the maximum *possible* happiness.

    “…pointing out that people disagree about morality is not analogous to the fact that some people are radical epistemic skeptics who don’t agree with ordinary science”

    Ordinary science is based on axioms (e.g., we must be logically coherent in our descriptions, we may base predictions on past observations). Sam is arguing that moral reasoning is also based on axioms (namely, that we should pursue well-being and avoid suffering). Those who reject the axioms of ordinary science have no way to construct a rational body of knowledge about the world. Sam is arguing that those who reject his axioms of moral reasoning have no way to construct a rational set of moral imperatives. Indeed, what would it mean to construct a set of supposedly ‘moral’ imperatives that would, for instance, assign rights and duties to inanimate objects, or advocate the greatest possible misery for all conscious creatures?

  8. Par la Grâce de Dieu, NIKOLAI III, EMPEREUR et Autocrate de toutes les

    One of the most frightening things about the Nazis is not that they had different moral standards from those of the rest of us: it is that they didn’t. They knew that what they were doing was wrong — and in fact that’s a major reason why they went ahead and did it. See, e.g., Himmler’s “secret speech”.

  9. @Phil Plait

    I think Sean has misstated the problem slightly. Perhaps we could invent an aggregation scheme, if we chose objectively measurable criteria for well-being. But there is no objective basis for choosing one such aggregation scheme over another. It may seem obvious to choose a scheme of universal equality, i.e. one where each individual’s well-being counts equally. But why should we? We can’t even ask that question because it presupposes a prior moral standard (a prior should/ought). Many of us do support such a scheme–at least when it comes to basic human rights–but the ultimate reasons for us to do so are our subjective preferences.

    Moreover, while we may support equal basic rights for all humans, few of us actually treat all humans as having equal moral value. We naturally give preference to the well-being of our own children over children in a distant country, and we don’t generally feel that we are being immoral in doing so. I don’t think we would accept a moral standard that told us that we (as individuals) had a moral duty to consider the well-being of all children equally.

  10. For me, Cartesian morality is based on standard medicine and psychology/psychiatry for a lot of things, so it is based on science.

  11. @Jim Lippard (36) – thanks for the reference and the critique of Greene’s work. For anyone who is interested, there is a working draft of the paper Jim mentioned available here. Final version is behind a journal paywall.

  12. P.S. If we focus too much on how to measure well-being, or how to aggregate across individuals, there’s a danger of losing sight of the logically prior question: why do we want such a formula in the first place? What’s it for?

    When we come up with scientific laws, there are two fairly obvious answers to “what’s it for?” We want scientific laws because we (a) want to understand how the world works, and/or (b) want to control the world.

    But there is no equivalent answer to the question of what this formula for maximising well-being is for. To say “it’s for maximising well-being” is tautologous. The only reasonable answer is that maximising well-being is something that people want to do. In other words, it’s a subjective matter about what people want. And different people are likely to have different preferences for what constitutes maximum well-being, whose well-being is most important and even whether well-being is the only thing that matters.

  13. Sean, I’m with you on the debate but you shot yourself in the foot with

    The job of morality is to specify what that [global well-being] function is, measure it, and derive conditions in the world under which it is maximized.

    IF you define morality this way*, half your questions go away. For example, while you “see no evidence whatsoever that they all ultimately want the same thing”, that’s saying that (psychological and neuroscientific) evidence CAN convince you otherwise – so the theory is scientific.

    Another example: “Now, you may think that you have good arguments in favor of consequentialism. But are those truly empirical arguments?” Well, they are analytic – morality AS DEFINED ABOVE simply is consequentialist. I don’t need empirical evidence that morality is consequentialist any more than I need empirical evidence to establish that the Schroedinger equation is linear.

    You let Harris define morality to be what he wants it to be. And if you do, it’s an empirical science. The real question is what morality is – once you settle on some naturalistic metaethics, ethics indeed becomes a science.

    * I’m assuming here that you have an implied “we all want and should maximize global well-being” as part of the moral theory.

  14. Sam is arguing that moral reasoning is also based on axioms (namely, that we should pursue well-being and avoid suffering)

    Avoid suffering? Are you sure? Some attention might be paid to the insights of our friend Nietzsche. Avoid causing suffering? That would keep some of your pieces on the board, at least, but such presumptions are fraught with problems, as Sean has already highlighted (as he has for most of the comments made).

    I also feel like mentioning that I really hate IKEA.

  15. Pingback: Can Science Answer Moral Questions? (Pt. 2) - Science and Religion Today

  16. Pingback: Bruin Alliance of Skeptics and Secularists » BASS Meeting VI

  17. @ DaveH (comment 66)

    Yes, I’m sure. Avoiding suffering lies at the heart of sensible moral reasoning. If we get into the nitty-gritty details, then of course we can identify cases where we ought to endure a degree of suffering to obtain greater overall well-being (e.g., lifting weights at the gym is painful, but ultimately is good for your health and releases pleasurable endorphins). No one’s saying that to reach a peak of well-being we must never suffer. But clearly we ought to avoid needless suffering. There is a conversation to be had about how to maximize our well-being with minimal concomitant suffering, and Sam Harris is trying to lay the foundations for this conversation to take place.

  18. I think it probably is possible to answer moral questions based on the grounds of solid quantitative logical reasoning. However, for one thing, it is necessary to make assumptions in order to get such models to work – nothing new here, of course, just an instance of the incompleteness theorem. Also, I think, even with very simple assumptions (such as that players in some form of game are Bayesian decision makers) the level of difficulty involved in building a convincing logical foundation is quite high. But I don’t think that this should stop people from trying, even if the answers at the end of the day are limited rather than absolute in extent.

    My current thoughts on the matter are that in order to get a proper description of such dynamics one needs to look at double categories, or triple categories at the very least (like 2-categories or 3-categories, but with a bit more structure). Then one needs to build on top of this a tensor theory, and then somehow use this to find appropriate Nash-type equilibria; a toy version of the equilibrium step is sketched below.
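
    To be concrete about the equilibrium part only (the payoffs are invented, and a serious treatment would need Bayesian players and far richer structure):

    ```python
    # Brute-force search for pure-strategy Nash equilibria of a two-player
    # game in normal form. payoffs[r][c] = (payoff to P1, payoff to P2).
    import itertools

    # An invented "cooperate/defect" game with prisoner's-dilemma payoffs:
    payoffs = [[(3, 3), (0, 5)],
               [(5, 0), (1, 1)]]

    def is_nash(r, c):
        p1, p2 = payoffs[r][c]
        # Nash: no player gains by deviating unilaterally.
        return (all(payoffs[rr][c][0] <= p1 for rr in range(2)) and
                all(payoffs[r][cc][1] <= p2 for cc in range(2)))

    print([rc for rc in itertools.product(range(2), range(2)) if is_nash(*rc)])
    # [(1, 1)]: mutual defection, the familiar prisoner's-dilemma outcome
    ```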

  19. Vincent,

    Clearly, needless suffering is not needed. By definition. But our relationship with suffering is far more complex and intrinsic to experience than you indicate. Various forms of suffering we even call entertainment. I question what you call an axiom.

    The discipline of suffering, of GREAT suffering–know ye not that it is only THIS discipline that has produced all the elevations of humanity hitherto? The tension of soul in misfortune which communicates to it its energy, its shuddering in view of rack and ruin, its inventiveness and bravery in undergoing, enduring, interpreting, and exploiting misfortune, and whatever depth, mystery, disguise, spirit, artifice, or greatness has been bestowed upon the soul–has it not been bestowed through suffering, through the discipline of great suffering?

    Is that not sensible? Is he not talking about the spirit of adventure, fortitude, endurance, Tragedy, Horror, The Blues… ?

  20. Yes, morality won’t ever be a strictly scientific discipline, but it can and should benefit from rigorous studies of the consequences of various moral frameworks. Science won’t supplant morality, but it can and should inform our moral choices.

  21. Vincent said:

    Avoiding suffering lies at the heart of sensible moral reasoning

    Avoiding suffering is at the heart of one kind of moral reasoning, but not of all of them.

    Look at “Troy”, the recent movie. For Achilles and the Myrmidons, achieving glory is the heart of their moral code, and suffering, their own and anyone else’s, is incidental.

    Look at the recent incidents of “honor killing”. They happen because, for those men, honor is the heart of morality, and it trumps suffering, both their daughters’ and their own.

  22. Ronan said:

    I’d argue that deriving the Ought from the Is isn’t necessary…

    This is almost brilliant. Given the lack of a (nearly) universal connector between the two domains (Sam Harris says: you Ought to do what promotes the well-being of everyone, but a billion Muslims say: you Ought to do what Allah through Mohammed has commanded, etc.), we should give it up as a lost cause.

    The reason everyone wants to link Ought to Is, is that nearly everyone (except schizophrenics and creationists) broadly agrees on Is, but there are dozens (hundreds?) of competing Oughts with millions and billions of adherents each.

    The analogy for Is would be if everyone in Europe thought the world was flat, and thousands of them claimed to have actually been to the edge, and seen the turtle underneath.

