You Can’t Derive Ought from Is

(Cross-posted at NPR’s 13.7: Cosmos and Culture.)

Remember when, inspired by Sam Harris’s TED talk, we debated whether you could derive “ought” (morality) from “is” (science)? That was fun. But both my original post and the followup were more or less dashed off, and I never did give a careful explanation of why I didn’t think it was possible. So once more into the breach, what do you say? (See also Harris’s response, and his FAQ. On the other side, see Fionn’s comment at Project Reason, Jim at Apple Eaters, and Joshua Rosenau.)

I’m going to give the basic argument first, then litter the bottom of the post with various disclaimers and elaborations. And I want to start with a hopefully non-controversial statement about what science is. Namely: science deals with empirical reality — with what happens in the world. (I.e. what “is.”) Two scientific theories may disagree in some way — “the observable universe began in a hot, dense state about 14 billion years ago” vs. “the universe has always existed at more or less the present temperature and density.” Whenever that happens, we can always imagine some sort of experiment or observation that would let us decide which one is right. The observation might be difficult or even impossible to carry out, but we can always imagine what it would entail. (Statements about the contents of the Great Library of Alexandria are perfectly empirical, even if we can’t actually go back in time to look at them.) If you have a dispute that cannot in principle be decided by recourse to observable facts about the world, your dispute is not one of science.

With that in mind, let’s think about morality. What would it mean to have a science of morality? I think it would have to look something like this:

Human beings seek to maximize something we choose to call “well-being” (although it might be called “utility” or “happiness” or “flourishing” or something else). The amount of well-being in a single person is a function of what is happening in that person’s brain, or at least in their body as a whole. That function can in principle be empirically measured. The total amount of well-being is a function of what happens in all of the human brains in the world, which again can in principle be measured. The job of morality is to specify what that function is, measure it, and derive conditions in the world under which it is maximized.
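
To make the shape of this program concrete, here is a minimal sketch in Python. Everything in it is a hypothetical placeholder: what counts as a measurable brain state, what the well-being function is, and how individual scores combine are precisely the things a science of morality would have to supply.

```python
# A minimal sketch of the "science of morality" program described above.
# All quantities are hypothetical placeholders, not real measurements.

def well_being(brain_state: dict) -> float:
    """Hypothetical, empirically measurable well-being of one person."""
    # Placeholder: pretend well-being is a weighted sum of measurable
    # quantities (stand-ins for neural correlates of health, love, etc.).
    weights = {"health": 0.5, "love": 0.3, "pleasure": 0.2}
    return sum(w * brain_state.get(k, 0.0) for k, w in weights.items())

def total_well_being(world: list) -> float:
    """Aggregate well-being over every person in a candidate world."""
    return sum(well_being(person) for person in world)

def best_world(candidate_worlds: list) -> list:
    """The job of morality, on this program: maximize the total."""
    return max(candidate_worlds, key=total_well_being)
```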

All this talk of maximizing functions isn’t meant to lampoon the project of grounding morality on science; it’s simply taking it seriously. Casting morality as a maximization problem might seem overly restrictive at first glance, but the procedure can potentially account for a wide variety of approaches. A libertarian might want to maximize a feeling of personal freedom, while a traditional utilitarian might want to maximize some version of happiness. The point is simply that the goal of morality should be to create certain conditions that are, in principle, directly measurable by empirical means. (If that’s not the point, it’s not science.)

Nevertheless, I want to argue that this program is simply not possible. I’m not saying it would be difficult — I’m saying it’s impossible in principle. Morality is not part of science, however much we would like it to be. There are a number of arguments one could advance in support of this claim, but I’ll stick to three.

1. There’s no single definition of well-being.

People disagree about what really constitutes “well-being” (or whatever it is you think they should be maximizing). This is so perfectly obvious that it’s hard to know how to argue for it. Anyone who wants to argue that we can ground morality on a scientific basis has to jump through some hoops.

First, there are people who aren’t that interested in universal well-being at all. There are serial killers, and sociopaths, and racial supremacists. We don’t need to go to extremes, but the extremes certainly exist. The natural response is to simply separate out such people; “we need not worry about them,” in Harris’s formulation. Surely all right-thinking people agree on the primacy of well-being. But how do we draw the line between right-thinkers and the rest? Where precisely do we draw the line, in terms of measurable quantities? And why there? On which side of the line do we place people who believe that it’s right to torture prisoners for the greater good, or who cherish the rituals of fraternity hazing? Most particularly, what experiment can we imagine doing that tells us where to draw the line?

More importantly, it’s equally obvious that even right-thinking people don’t really agree about well-being, or how to maximize it. Here, the response is apparently that most people are simply confused (which is on the face of it perfectly plausible). Deep down they all want the same thing, but they misunderstand how to get there; hippies who believe in giving peace a chance and stern parents who believe in corporal punishment for their kids all want to maximize human flourishing, they simply haven’t been given the proper scientific resources for attaining that goal.

While I’m happy to admit that people are morally confused, I see no evidence whatsoever that they all ultimately want the same thing. The position doesn’t even seem coherent. Is it a priori necessary that people ultimately have the same idea about human well-being, or is it a contingent truth about actual human beings? Can we not even imagine people with fundamentally incompatible views of the good? (I think I can.) And if we can, what is the reason for the cosmic accident that we all happen to agree? And if that happy cosmic accident exists, it’s still merely an empirical fact; by itself, the existence of universal agreement on what is good doesn’t necessarily imply that it is good. We could all be mistaken, after all.

In the real world, right-thinking people have a lot of overlap in how they think of well-being. But the overlap isn’t exact, nor is the lack of agreement wholly a matter of misunderstanding. When two people have different views about what constitutes real well-being, there is no experiment we can imagine doing that would prove one of them to be wrong. It doesn’t mean that moral conversation is impossible, just that it’s not science.

2. It’s not self-evident that maximizing well-being, however defined, is the proper goal of morality.

Maximizing a hypothetical well-being function is an effective way of thinking about many possible approaches to morality. But not every possible approach. In particular, it’s a manifestly consequentialist idea — what matters is the outcome, in terms of particular mental states of conscious beings. There are certainly non-consequentialist ways of approaching morality; in deontological theories, the moral good inheres in actions themselves, not in their ultimate consequences. Now, you may think that you have good arguments in favor of consequentialism. But are those truly empirical arguments? You’re going to get bored of me asking this, but: what is the experiment I could do that would distinguish which is true, consequentialism or deontological ethics?

The emphasis on the mental states of conscious beings, while seemingly natural, opens up many cans of worms that moral philosophers have tussled with for centuries. Imagine that we are able to quantify precisely some particular mental state that corresponds to a high level of well-being; the exact configuration of neuronal activity in which someone is healthy, in love, and enjoying a hot-fudge sundae. Clearly achieving such a state is a moral good. Now imagine that we achieve it by drugging a person so that they are unconscious, and then manipulating their central nervous system at a neuron-by-neuron level, until they share exactly the mental state of the conscious person in those conditions. Is that an equal moral good to the conditions in which they actually are healthy and in love etc.? If we make everyone happy by means of drugs or hypnosis or direct electronic stimulation of their pleasure centers, have we achieved moral perfection? If not, then clearly our definition of “well-being” is not simply a function of conscious mental states. And if not, what is it?

3. There’s no simple way to aggregate well-being over different individuals.

The big problems of morality, to state the obvious, come about because the interests of different individuals come into conflict. Even if we somehow agreed perfectly on what constituted the well-being of a single individual — or, more properly, even if we somehow “objectively measured” well-being, whatever that is supposed to mean — it would generically be the case that no achievable configuration of the world provided perfect happiness for everyone. People will typically have to sacrifice for the good of others; by paying taxes, if nothing else.

So how are we to decide how to balance one person’s well-being against another’s? To do this scientifically, we need to be able to make sense of statements like “this person’s well-being is precisely 0.762 times the well-being of that person.” What is that supposed to mean? Do we measure well-being on a linear scale, or is it logarithmic? Do we simply add up the well-beings of every individual person, or do we take the average? And would that be the arithmetic mean, or the geometric mean? Do more individuals with equal well-being each mean greater well-being overall? Who counts as an individual? Do embryos? What about dolphins? Artificially intelligent robots?

These may sound like silly questions, but they’re necessary ones if we’re supposed to take morality-as-science seriously. The easy questions of morality are easy, at least among groups of people who start from similar moral grounds; but it’s the hard ones that matter. This isn’t a matter of principle vs. practice; these questions don’t have single correct answers, even in principle. If there is no way in principle to calculate precisely how much well-being one person should be expected to sacrifice for the greater well-being of the community, then what you’re doing isn’t science. And if you do come up with an algorithm, and I come up with a slightly different one, what’s the experiment we’re going to do to decide which of our aggregate well-being functions correctly describes the world? That’s the real question for attempts to found morality on science, but it’s an utterly rhetorical one; there are no such experiments.
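
To see how quickly this bites, here is a small sketch; all numbers are invented purely for illustration. Three perfectly reasonable aggregation rules, applied to the same two hypothetical worlds, disagree about which world is better, and no experiment adjudicates among the rules.

```python
import math

# Two hypothetical worlds, each a list of individual well-being scores.
# All numbers are invented purely for illustration.
world_a = [5.0, 5.0, 5.0, 5.0]            # smaller population, uniformly well off
world_b = [6.0, 6.0, 6.0, 1.0, 1.0, 1.0]  # larger population, unequal

def total(w):      # total utilitarianism: just add everyone up
    return sum(w)

def average(w):    # average utilitarianism: arithmetic mean
    return sum(w) / len(w)

def geometric(w):  # geometric mean: heavily penalizes inequality
    return math.prod(w) ** (1.0 / len(w))

for rule in (total, average, geometric):
    better = "A" if rule(world_a) > rule(world_b) else "B"
    print(f"{rule.__name__:9s}: world {better} wins "
          f"({rule(world_a):.2f} vs {rule(world_b):.2f})")

# The 'total' rule prefers world B; 'average' and 'geometric' prefer
# world A. Same empirical facts, different aggregate "moralities".
```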

Those are my personal reasons for thinking that you can’t derive ought from is. The perceptive reader will notice that it’s really just one reason over and over again — there is no way to answer moral questions by doing experiments, even in principle.

Now to the disclaimers. They’re especially necessary because I suspect there’s no practical difference between the way that people on either side of this debate actually think about morality. The disagreement is all about deep philosophical foundations. Indeed, as I said in my first post, the whole debate is somewhat distressing, as we could be engaged in an interesting and fruitful discussion about how scientific methods could help us with our moral judgments, if we hadn’t been distracted by the misguided attempt to found moral judgments on science. It’s a subtle distinction, but this is a subtle game.

First: it would be wonderful if it were true. I’m not opposed to founding morality on science as a matter of personal preference; I mean, how awesome would that be? Opening up an entirely new area of scientific endeavor in the cause of making the world a better place. I’d be all for that. Of course, that’s one reason to be especially skeptical of the idea; we should always subject those claims that we want to be true to the highest standards of scrutiny. In this case, I think it falls far short.

Second: science will play a crucial role in understanding morality. The reality is that many of us do share some broad-brush ideas about what constitutes the good, and how to go about achieving it. The idea that we need to think hard about what that means, and in particular how it relates to the extraordinarily promising field of neuroscience, is absolutely correct. But it’s a role, not a foundation. Those of us who deny that you can derive “ought” from “is” aren’t anti-science; we just want to take science seriously, and not bend its definition beyond all recognition.

Third: morality is still possible. Some of the motivation for trying to ground morality on science seems to be the old canard about moral relativism: “If moral judgments aren’t objective, you can’t condemn Hitler or the Taliban!” Ironically, this is something of a holdover from a pre-scientific worldview, when religion was typically used as a basis for morality. The idea is that a moral judgment simply doesn’t exist unless it’s somehow grounded in something out there, either in the natural world or a supernatural world. But that’s simply not right. In the real world, we have moral feelings, and we try to make sense of them. They might not be “true” or “false” in the sense that scientific theories are true or false, but we have them. If there’s someone who doesn’t share them (and there is!), we can’t convince them that they are wrong by doing an experiment. But we can talk to them and try to find points of agreement and consensus, and act accordingly. Moral relativism doesn’t imply moral quietism. And even if it did (it doesn’t), that wouldn’t affect whether or not it was true.

And finally: pointing out that people disagree about morality is not analogous to the fact that some people are radical epistemic skeptics who don’t agree with ordinary science. That’s mixing levels of description. It is true that the tools of science cannot be used to change the mind of a committed solipsist who believes they are a brain in a vat, manipulated by an evil demon; yet, those of us who accept the presuppositions of empirical science are able to make progress. But here we are concerned only with people who have agreed to buy into all the epistemic assumptions of reality-based science — they still disagree about morality. That’s the problem. If the project of deriving ought from is were realistic, disagreements about morality would be precisely analogous to disagreements about the state of the universe fourteen billion years ago. There would be things we could imagine observing about the universe that would enable us to decide which position was right. But as far as morality is concerned, there aren’t.

All this debate is going to seem enormously boring to many people, especially as the ultimate pragmatic difference seems to be found entirely in people’s internal justifications for the moral stances they end up defending, rather than what those stances actually are. Hopefully those people haven’t read nearly this far. To the rest of us, it’s a crucially important issue; justifications matter! But at least we can agree that the discussion is well worth having. And it’s sure to continue.


83 thoughts on “You Can’t Derive Ought from Is”

  1. @lix:

    I disagree utterly. First of all, it’s not clear that deciding whether or not one “ought” to believe a proposition is not a moral decision. I’m rather sure that it is at least sometimes a moral decision; my first inclination is to argue that it is pretty much always such, though often without much consequence.

    Second of all, it’s not clear that the “moral sense of ‘ought'” is entirely distinct from other uses, which should be clear from my assertion above. For example, you bring in a “self-interested ought.” I’m pretty sure this is just the moral ought, perhaps expressed by someone with a more relaxed set of moral values.

    Unlike the vaccination question, there is no definite right or wrong here. You don’t know — or get to decide — what ought means any more than I do. So please stop condescending and just consider this as a different approach to the problem than yours. Maybe you can learn something from it.

  2. @Matt Tarditti: The empirical data gathered in psychological research is abundant: behavioural data, such as reaction times or accuracy measures, as well as data on psychological constructs, such as personality measures. Both classes of data certainly meet the criteria for being measured and independently verifiable.

    I think the ought/is argument is an important one, but I think it’s equally important (or perhaps more so, at least for any consequentialists out there) to acknowledge that the scientific work Harris and others are calling for is already well underway. It’s just not presented as the science of morality – it’s presented as the thing we’re all actually talking about – the science of well-being, Positive Psychology. While it is sometimes presented in a feel-good unscientific way, in principle it can answer most of the silly-sounding-but-necessary questions, given appropriate assumptions or answers to the remaining questions.

    Here’s an example of some of that work: http://arjournals.annualreviews.org/doi/abs/10.1146/annurev.psych.52.1.141

  3. “is” and “ought” are linguistic conventions and have no normative power over the universe

    Then, all distinctions are synthetic, since everything is spoken of using a linguistic convention. Thank you, Wittgenstein – I feel if you’re going to say stuff like that you might as well not say it!

  4. @Dan L.: Sorry my answer appeared condescending, it was not intentionally so. It still seems to me there’s a clear distinction between moral and self-interested use of “ought”.

    Morality is about what one should do at a broad level, encompassing society’s interests and potentially non-utilitarian values and goals as well. Self-interest, in contrast, is simply about how to satisfy one’s personal preferences, and is therefore relatively easy to evaluate.

    Here’s an example of a self-interested ought:
    “I want an apple. But the apples are not here; they’re in the kitchen. So I ought to go to the kitchen and get an apple.”
    Here’s an example of a self-interested ought in the context of belief:
    “I’d like to believe in unicorns. But, I can’t find any evidence of unicorns, and I really don’t want to believe falsehoods. So I ought not believe in unicorns.”

    In both cases, “ought” is used simply to derive logical consequences of existing knowledge in the context of known preferences.

    Here’s an example of a moral ought:
    “I ought to love my enemy.” In this context, the “ought” is not based on logic or any utilitarian goal; it’s just something some people feel is important, regardless of the context or consequences. The problem is, different people and cultures have very different feelings about these things, so there is no simple or universal preference function. As well as having different basic values, different people place different weights on other people’s interests, and some groups believe in absolute moral systems that don’t even allow for preference functions.

    It’s true, I have heard some people say things like “I ought to believe in God”, which sounds like a moral ought applied to a belief. But in my experience, those were always people who had grown up in a strong religious context and found atheism logically compelling but feared the social consequences they would face if they admitted it. So that comes back to self-interest again – just an unusual case where believing a lie is a self-interested act. Similarly, when someone says: “I ought to trust her” – what is really meant is “I don’t entirely trust her, but I fear the consequences of my mistrust and therefore think it’s in my interests to suppress it”.

  5. Well, maximizing well-being is simply John Stuart Mill’s utilitarianism… Also, saying you cannot quantify “well-being” basically also says that what neuroscientists are doing with fMRI and PET scans is not science. I think that the problem between Sean and Sam is that Sam is a neuroscientist and Sean is a physicist, and the cultures and definitions are somewhat different. Sure, morality is a concept, not a particle. It is related to a mind-state. That does not mean that it isn’t part of objective reality so long as human cultures exist, in the same way that, for example, hunger and thirst exist. Sean is just being a positivist reductionist, which, by the way, is OK.

  6. The methods of science are fully capable of telling us what the common underlying principle or principles are (if any exist) of all cultural moral standards and common moral intuitions of mentally normal people.

    For instance, is there a hypothesis about what this underlying principle might be that meets normal criteria for scientific utility better than any other hypothesis? These criteria might include explanatory power for cultural moral standards and moral puzzles and predictive power for common moral intuitions, universality, no contradictions with known facts, and so forth.

    If such new scientific knowledge is found, it could be examined to see if it, like any scientific knowledge, could be exploited for our benefit.

    The evolution of morality literature shows the leading candidate for the underlying principle of moral cultural standards and moral intuitions is that they are heuristics and strategies for exploiting the benefits of cooperation for the cooperating group. If all moral behaviors are expected to, on average, produce benefits for the group, then it might be a rational choice to accept the burdens of such a definition of morality. No magic oughts (no sources of justificatory force beyond reason) would be required.

    So there may be no necessity to derive ‘ought’ from ‘is’.

    It may also be a good idea to drop the idea that somehow the well-being of conscious beings is a definition of the goal of moral behavior that is justifiable as based in science. It isn’t. It is not even competitive.

  7. @Matt Tarditti: DaveH was using “in principle” for string theory, not psychology. The latter already has lots of “in practice” examples. They abound, but I would recommend Patricia Churchland’s Brain-Wise: Studies in Neurophilosophy. It has many examples of psychological experiments giving evidence about hard problems like the nature of consciousness, our perception of color, our mental representations of the world, etc.

    Psychology isn’t physics. There’s no simple place to start, like pendulums or planetary orbits, and people are much more complicated than electrons. So there’s been a lot of guessing in the past. Still, even the most cursory examination of modern psychology reveals a strong empirical component.

  8. Yes, you can.
    You can derive ought from is, as long as you are clear on the meaning of morality. In order for there to be a morality (ought), there has to be an action taken that is either moral or not. And a human being will make the choice of whether or not to perform that action. That’s ethics. All this aggregated stuff is Politics. So let’s lay the politics aside for the moment.

    So what IS? What is, is that humans are living creatures that must use their physical and mental abilities to survive and thrive in the world. A guy on an island does what he must to survive. He picks fruit, kills fish and land creatures, and he is moral, because he is doing what he needs to do to survive. Now, we must agree that no human has more intrinsic right to survive than any other. If we cannot agree on this, there is no morality, for we can choose to do whatever we want to whomever we choose. Thus, when he comes into society, he has to deal with other people, with the same survival rights as he, some of whom may have something he needs, some of whom want what he has. Now, he has a choice. He can take what he wants by force, or he can trade. There is only one MORAL choice, based on what IS.

    Let’s look at the morally correct choice. He trades. This unique (yet common) action enables him to give up something for something he wants more. Both parties have different values, but both get what they want in the end. The trade takes this differing value into account, and it is just the differing values between people that make this trade possible.

    Your paper confuses individual morality with societal choices. So now let’s talk politics. But now you are talking about someone making choices for someone else, without knowing their values. This is where you get hung up. You cannot imagine a system that encompasses everyone’s values, or even makes them all feel best. Well, you need not look for a system that accounts for people’s differing values. The process by which people openly trade goods or services or money for the things they need (that maximize their well-being) already exists. It is called capitalism. No need for “right thinking people” to decide what would be best for everyone else. And this Ought is actually derived from the IS. The system of trade and money was not invented, or decreed. It grew from the behavior of humans acting rationally to each other’s benefit, and has thrived because it has provided a way for humans to survive and thrive to become the dominant species on the planet.

  9. Sean says “In the real world, we have moral feelings, and we try to make sense of them. Moral relativism doesn’t imply moral quietism.” But Sean seems to be saying that moral relativism is unavoidable, which I agree is distressing. Clearly it’s impossible to come up with a universal definition of collective well-being, but an interesting question is how exactly, in the final analysis, do people arrive at their own “internal moral judgments”? People are making moral judgments all the time. There must be underlying reasons for a particular “moral choice” that can in principle be understood scientifically, if we understood this person’s complete history and biology. A person believes certain moral choices are optimal for him/her (self-interest), and of course people will differ on self-interest. But exactly what is the input (biological and environmental) that determines our “moral” choices? In other words, there are scientific reasons why people make the moral judgments that they do. Perhaps someone believes that the “greatest good” is the taste of a hot fudge sundae. His brain is wired to like hot fudge sundaes (is), so he eats them (ought). A definition of collective self-interest may never be possible, but an individual’s “ought” perhaps can be objectively understood.

  10. This is not intended at all to be snarky, but there are very skilled scientists working on these problems who need to be heard. Jim Lippard’s point in comment 3 is based, in part, on the developments of scientific studies of consciousness, particularly AI efforts. Researchers at Stanford, for example, are trying to encode human traits such as charity and humor into AI machines. There are advanced efforts at University of Texas on tracking and training AI towards understanding metaphor. These sorts of studies are going on all over the planet, and it is only a matter of time before we have the formulae for ought and is, and a whole lot more.

  11. Isn’t this discussion like a century old or so? What’s the meaning of the utility function, and can it be aggregated? Didn’t economists settle on the conclusion that it’s not happiness, and that it can’t be aggregated, but that you don’t have to?

    This just reminds me I wrote a semi-finished paper on this issue which is still waiting to be finished. Why did you have to remind me of this??? To make a long story short, it supports your point of view. There are actually people who think that one should measure brain activity to determine somebody’s happiness as an objective measure. Just imagine that: if your neurons happen to fire less, your opinion counts less. If you’re interested in a draft of my paper, please send me a note, hossi at nordita dot com; I’d be interested in your opinion. And maybe I’ll finish it at some point…

  12. Well, I think I would take a very different tack in order to attempt to approach morality in as objective a manner as possible. Here are the steps I would take:

    1. Assume that people have their own views on morality, on which behaviors are desirable or undesirable, on what outcomes are desirable or undesirable.
    2. Assume that the “best” morality for an individual stems from minimizing contradictions between the various desired behaviors and outcomes.

    Note that if it so happens that we all share a basic “moral grammar”, as some have put it, such that there are broad areas of agreement, then merely reducing the contradictions in our own moral attitudes (a process to which science is uniquely suited) automatically reduces the areas of disagreement between individuals. Science can also tell us whether or not this common moral grammar exists, and anthropological studies have, as near as I can tell, borne this out.

    Also note that simply maximizing individual utility, though it will not always maximize global utility, does tend to go largely in this direction. This has been borne out in various ethical games: in games where people have to interact over and over again, the strategies that do best are those that make other people in the game most likely to deal fairly with them. Even if the game is such that cheating offers potentially huge benefits, other members of the game remember the cheating, preventing such cheating from working properly.
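
    To illustrate that last point, here is a minimal iterated prisoner’s dilemma in Python (the payoff values and the two strategies are standard textbook choices, included purely for illustration): head-to-head a defector beats a reciprocator, but over a round-robin tournament the reciprocator comes out ahead, because cooperators do so well against each other.

```python
# Minimal iterated prisoner's dilemma. 'C' = cooperate, 'D' = defect.
# (my move, their move) -> my payoff; standard textbook values.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_moves):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Total payoffs for two strategies over repeated rounds."""
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
totals = {name: 0 for name in strategies}
names = list(strategies)
for i, name_a in enumerate(names):  # round-robin, including self-play
    for name_b in names[i:]:
        a, b = play(strategies[name_a], strategies[name_b])
        totals[name_a] += a
        if name_a != name_b:
            totals[name_b] += b
print(totals)  # tit_for_tat finishes ahead despite losing its head-to-head
```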

  13. Hello Sean

    If I understand it correctly, your argument is basically: don’t call it science if you can’t construct an experiment to test the theory. It’s not that the pursuit of scientifically proving morality is good or bad, just that any response that doesn’t answer this question is basically wrong. I’d like to add that I would love it to be true, in order to make the world a better place, but I have a few difficulties accepting it.

    I think it’s impossible to know, because: let’s say I lead a perfect life that maximises my potential for well-being, ingenuity, and creativity, in a world where everyone does the same. What about all that ingenuity, beauty, and creativity coming from hardship? How can you measure which one is more beneficial (or greater)?

    I’ve come to the conclusion that this is the exact reason why it can’t be called science, but the topic is a very interesting philosophical one, and as you said, everyone derives his own conclusions.

    But finally, I believe the furthest we can go in our pursuit to understand morality, or what is best scientifically, is the realisation that diversity is necessary and no one road is ideal. This knowledge creates a new feeling, attitude, and perspective, and once all people experience this, maybe then we could reconsider the question.

    Thank you for convincing me of this.

  14. Interesting post. I agree with your conclusions, but I would state the argument in a slightly different way.

    Science is the domain of what is, while morality is the domain of what should be (or ought to be). These are simply two very different things. “What the world should be” is a value judgment, a preference, not some basic fact about the universe.

    Let’s look at it this way: the observable universe is something like 10^88 particles in a particular configuration. Now, morality is basically about saying “the universe should be in configuration A instead of in configuration B”. Wait… what? Why should it be? Because some people – perhaps most people, perhaps even every single person in the world – think so. Ok, but that’s only a fact about people, not about the universe. I mean… that A is better than B is not written in the stars. The universe simply is, and it doesn’t give a damn what configuration it’s in. In other words, reality has no moral dimension.

    Or to put it differently: science only describes what is, and that includes no judgment whatsoever. Therefore, science can tell you “everybody wants A” or “everybody would be happier under A” or “A would maximize human well-being”, but it stops there. The judgment, the “ought”, the “therefore we should do this” must be added for it to become morality, but such an act is outside the scope of science. Science doesn’t make any judgment; it only describes the world.

    Which is sort of why I’m not crazy about your arguments. Imagine that:

    – Everybody could agree on the same definition of well being.
    – Everybody agreed that maximizing utility was the goal of morality.
    – We could agree on a way to aggregate well-being.

    So what? That would only mean that we agree. “What should be” would still remain a human perspective that goes beyond a description of what is.

    And don’t get me wrong: it’s obvious that science has enormous potential for morality. Science can help us understand the origin of our moral preferences. For instance, here is a fascinating piece by Steven Pinker on the moral instinct.

    http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html

    And once we agree that some outcome is preferable to some other one, science is by far the best tool we have to determine the best way to reach that outcome. But once all that is said and done, the fact remains that the basic premise of morality – that some things should be preferred to others, that there is such a thing as “ought” – is not a part of an empirical description of reality.

  15. Matt Tarditti

    I wish I could have responded earlier, but so be it…
    True that both physics and psychology have well-documented, empirical foundations. But what troubles me is Carroll’s implied definition of “empirical” as applied to this argument:
    “Two scientific theories may disagree in some way … Whenever that happens, we can always imagine some sort of experiment or observation that would let us decide which one is right.”
    When I read this, it almost seems like Carroll is limiting science to knowledge about the universe that can be obtained in an algorithmic sense. If A is true, then C must be true. But if B is true, then D must be true. But in psychology, if A is true, then C is probably true, but D could also be true. It obviously comes down to statistics. But if psychology and physics can be scientifically described in terms of statistics, what prevents morality (as Carroll has defined it) from falling into the same realm?

    If no one wants to engage that argument, I understand… it’s a dead horse. But Sebastien’s post at #41 is great. I hope that there is a coming rebuttal to his argument, not because I disagree, but I’m just wondering how you construct a rebuttal to it in the first place.

  16. FWIW, a simple rebuttal on this “is-ought” business:

    #41:”that some things are to be preferred to others…is not part of an empirical description of reality.” I think this is wrong. Our make-up includes a huge bundle of various preferences – they are natural facts about us.

    Our moral impulses are natural facts, too. There is good work underway fleshing out the evolutionary origins of our moral sense. Our development of a science which seeks in part to develop and fine-tune these impulses doesn’t introduce anything supernatural.

    The “ought” here is the coupling of the facts about experiential well-being (and its causes) with our moral desires – but these latter are also natural facts!

    (And yes, this is all very speculative and the difficulties are immense, etc., but pointing those things out is not an argument that it is impossible in principle).

  17. Objections

    First of all, founding morality in the new science may be horrible – we answer questions about all other life forms scientifically, and that justifies, and makes possible and efficient, our behavior toward them, largely called the conquest of nature. What if we did that to ourselves? What if you could say a child will probably not be happy or just, because of his genes or something like that – you would have the justification to kill him.

    Second of all, the important matters could neatly be summarized thusly: what are the alternative views of happiness? Which are compatible with the new science and which are not? In fact, he simply says: let’s use science and forget about everything else. What if science is not the source of – or compatible with – the greatest possible happiness? I am not declaring it is, but it seems like the question is worth asking…

    Third of all, the problem of moral relativism is political: if there is no justification for moral judgment, laws are baseless, and therefore justice is nothing but the advantage of the stronger: there is no truth to moral problems – everything that matters to us – therefore whoever can impose his will is justified or at least is not susceptible to blame. If moral relativism, by some liberal pipe-dream, implied moral quietism, we would be safe perhaps. But it does not – and it justifies what we may call alternative lifestyles, but used to be called horrors. Consider that the problem with quantifying well-being is that it may feel good to love a beautiful woman – but also to see others tremble with envy or jealousy. It might feel good to eat fine dishes or contemplate your worthy children; but also to fight wars and oppress people. In fact, maybe tyranny just feels best…

  18. This is a recitation of a traditionalist view, which cannot manage to see human beings for what they are, complex systems of biomachinery. Because of all the complexity and the opaqueness of human motivation, it seems we cannot even arrive at a consistent set of principles for deriving morality.

    But this is false. We simply haven’t gotten there yet. Once the human brain is reverse engineered and understood, once commonalities are established between people with *seemingly* different moral viewpoints, we will begin to unravel this mystery.

    Bottom line, our moral instincts are shorthand about what our minds have analyzed to be the best methods for human flourishing. We can rightly exclude brain pathology from this discussion. Just as we would exclude any piece of broken machinery from the analysis of functioning models.

    As Sam Harris correctly pointed out, morality is concerned with the well-being of conscious creatures. It is a departure from one set of goals of our biological machinery–that of amoral reproduction and domination of genetic competitors–to another finely tuned set of goals. Civilization and prosperity have finally given our altruistic and cooperative natures a means of expression. Empathy provides some measure of understanding of those who are suffering. Mirror neurons tell us we should care about them.

    We are fortunate enough to have available to us a wealth of information about conscious systems, and we are soon to get a lot more. To imply that no consistent pattern or theory exists in this data is laughably short-sighted.

    I’m willing to concede this is a human-centric viewpoint. But since human flourishing is tied in with the flourishing of ecosystems which include other species, science-based morality would inform a broader-based view of ecosystem and social sustainability. This is the new science of morality, and it is in its infancy. I fully expect the naysayers to continue until such time as the discipline becomes better established. It will be a cooperative effort between neurologists, sociologists, anthropologists, psychologists, zoologists and environmental scientists, to name a few. This is the human equivalent of a “theory of everything.”

    The fact that a “theory of everything” is elusive hasn’t stopped physicists from looking for it, nor should it slow, even in the smallest degree, our progress toward a scientific understanding of morality. Like many other objections to science, this seems to be largely about other disciplines not wanting to cede power to a new objective regime they cannot control. That is where the study of morality is headed–toward the realm of evidence which may challenge all of us to redefine and abandon long-cherished but outworn beliefs.

  19. Pingback: Darwiniana » Carroll vs Harris

  20. Pingback: Black Sun Journal » In Support of a Scientific Morality

  21. I am with Sam on this one. First off, others have dragged Hume and the is/ought into the discussion; Sam never claimed to have overturned Hume. Part of this is because the “ought” in Hume is the provably optimum ought, which, again, Sam does not claim. It is trivial to show that you can get an “ought” from an “is” if it does not have to be correct (you could roll dice). Sam is asking us to use the knowledge we have obtained from the scientific method to engineer a better moral system than what so much of the world has inherited from bronze age scripture.

    I also want to stress that this is not science, it is engineering. Science did not tell us to get rid of smallpox, we decided that was a “good” thing to do, and used the knowledge about what smallpox was, gathered through the scientific method, as a basis to engineer a method to get rid of it. We are currently engineering methods to reduce malaria. Can we engineer methods to improve the lives of the people of the world by making changes to the moral codes handed down from the past? Almost certainly. Can we prove an optimum? No, not even in principle. Well, if we can use scientific knowledge of the world to do better, shall we let the (provably unobtainable) perfect be the enemy of the good (or at least better)?

  22. Hume’s skepticism is irrational mainly because it can be applied to everything we know, including the scientific method. According to Hume, when we claim that an object falls due to gravitational force, we are making an assumption, because we do not know for a fact that it will happen. Similarly, morality cannot be epistemically objective because it is based on assumption. But I believe this is a bad argument, for two reasons.
    First, skepticism has its limits: if we do not accept some axioms, progress and improvement of thinking are not possible.
    Second, just as there is a law of gravity, there are laws that can be derived from human nature. For example, freedom of speech is a moral right that every human being can enjoy regardless of their culture, because language is an innate human capacity.

