Wednesday, June 19, 2013


This may seem off-topic for this blog, but maybe I'm just tired of responding to each new manufactroversy du jour and want to branch out a bit. Disclaimer: I'm not a professional philosopher, and it's very likely that I'm reinventing the wheel here and that there is a better name for the concept I want to discuss. But let's plunge in regardless!

Given that God doesn't exist, where does morality come from? Actually, I would argue that even if we knew for a fact that God existed, that would not automatically make him the one and only possible source of morality - but that's another story.

Anyway, philosophers have explored many possible answers to the above question. One well-known approach is utilitarianism: "The greatest good of the greatest number." Jeremy Bentham, the father of utilitarianism, called his method for weighing pleasures and pains the "felicific calculus."

Utilitarianism is superficially appealing, but we soon see problems with it. First of all, how can we quantify good and harm in order to balance them against each other? Won't the trade-off be different for each individual, and even for the same person at different times?

And even supposing we could agree on a universal scale - a human life is worth a million dollars, say - we still have problems.

Scenario 1: We kill an innocent person, resulting in someone else getting $1,000,001.

Scenario 2: We kill an innocent person, resulting in a million and one people each getting $1.

The first scenario would probably strike most people as problematic, and the second one even more so. And yet both are equally okay according to strict utilitarianism, because they result in the same net increase in benefit to those affected: one dollar, once the stipulated million-dollar value of a life is subtracted.
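If you like seeing the bookkeeping spelled out, here is a minimal sketch of the "strict utilitarian" arithmetic for the two scenarios. The million-dollar life value and the function name are just the stipulations of the example above, not anyone's actual metric:

```python
# Assumed scale from the example above: one human life = $1,000,000.
LIFE_VALUE = 1_000_000

def net_benefit(lives_lost, dollars_gained):
    """Net change in aggregate 'utility', measured in dollars."""
    return dollars_gained - lives_lost * LIFE_VALUE

# Scenario 1: one person killed, one person gains $1,000,001.
scenario_1 = net_benefit(lives_lost=1, dollars_gained=1_000_001)

# Scenario 2: one person killed, 1,000,001 people each gain $1.
scenario_2 = net_benefit(lives_lost=1, dollars_gained=1_000_001 * 1)

print(scenario_1, scenario_2)  # both come out to +1
```

The point is that the calculus is blind to how the benefit is distributed and to what was destroyed to obtain it: both scenarios score an identical +$1.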

I think these examples show that striving for the greatest good of the greatest number has to be balanced with treating each individual with respect and dignity. Scenarios like the ones described above would lead to devaluing human life. Perhaps you could in theory factor this back into a utilitarian calculation by arguing that such scenarios would ultimately reduce the benefits accruing to everyone, by cheapening each individual life. I want to argue in a different direction, however.

I propose the principle of anti-utilitarianism: "The least harm for the greatest number." You should avoid doing harm to anyone, and as far as practical, help them avoid harm. You should only cause harm if it is the only way to prevent a greater harm.

But isn't the least harm the same as the greatest good? Not necessarily. As I mentioned above, what causes happiness depends on the individual. For one person, it might mean having a lot of money; for another, a lot of sex, and so on. And the person who gets his or her wish for a lot of sex might find it repetitive after a while, and move on to other sources of pleasure.

On the other hand, the things that cause harm - hunger, physical pain, lack of shelter, disease - are pretty universal. And when you think about it, what we consider morally praiseworthy acts usually focus on mitigating harm rather than increasing the happiness of someone who is already doing okay. We have charities helping the homeless, the disabled and so on, rather than sending Joe Blow on a vacation to Tahiti.

Of course it's unlikely that any principle can give us an unambiguous answer on how to behave in every situation. How far must we go in saving others from harm, as opposed to simply refraining from causing harm? Many of us have encountered someone who seems determined to ruin his or her life by acting in destructive ways. There's only so much you can do to help such a person without taking away that person's autonomy.

However, the impulse to do good must be viewed with suspicion in the light of history. Think of the Puritans trying to create a utopia, and ending up executing women as witches. Try to mitigate harm instead - there is a better chance you'll actually do something useful, since your idea of what should make other people happy won't always correspond to their ideas!

"Anti-utilitarianism: the least harm for the greatest number" - what do you think?


  1. Does this qualify as minimising pain, though, which is a tenet of utilitarianism? I understand that harm is causing pain to others, but...

    as in:

    Having claimed that people do, in fact, desire happiness Mill now has to show that it is the only thing they desire. Mill anticipates the objection that people desire other things such as virtue. He argues that whilst people might start desiring virtue as a means to happiness, eventually, it becomes part of someone’s happiness and is then desired as an end in itself.
    "The principle of utility does not mean that any given pleasure, as music, for instance, or any given exemption from pain, as for example health, are to be looked upon as means to a collective something termed happiness, and to be desired on that account. They are desired and desirable in and for themselves; besides being means, they are a part of the end. Virtue, according to the utilitarian doctrine, is not naturally and originally part of the end, but it is capable of becoming so; and in those who love it disinterestedly it has become so, and is desired and cherished, not as a means to happiness, but as a part of their happiness." (wiki)


    Negative utilitarianism
    In The Open Society and its Enemies (1945), Karl Popper argued that the principle 'maximize pleasure' should be replaced by 'minimize pain'. He thought "it is not only impossible but very dangerous to attempt to maximize the pleasure or the happiness of the people, since such an attempt must lead to totalitarianism."[61] He claimed that,
    there is, from the ethical point of view, no symmetry between suffering and happiness, or between pain and pleasure… In my opinion human suffering makes a direct moral appeal, namely, the appeal for help, while there is no similar call to increase the happiness of a man who is doing well anyway. A further criticism of the Utilitarian formula ‘Maximize pleasure’ is that it assumes a continuous pleasure-pain scale which allows us to treat degrees of pain as negative degrees of pleasure. But, from the moral point of view, pain cannot be outweighed by pleasure, and especially not one man’s pain by another man’s pleasure. Instead of the greatest happiness for the greatest number, one should demand, more modestly, the least amount of avoidable suffering for all...[62]
    The actual term negative utilitarianism was introduced by R. N. Smart as the title to his 1958 reply to Popper[63] in which he argued that the principle would entail seeking the quickest and least painful method of killing the entirety of humanity.
    Suppose that a ruler controls a weapon capable of instantly and painlessly destroying the human race. Now it is empirically certain that there would be some suffering before all those alive on any proposed destruction day were to die in the natural course of events. Consequently the use of the weapon is bound to diminish suffering, and would be the ruler's duty on NU grounds.[64]
    Negative utilitarianism would seem to call for the destruction of the world even if only to avoid the pain of a pinprick.[65]
    It has been claimed[66] that negative preference utilitarianism avoids the problem of moral killing, but still demands a justification for the creation of new lives. Others see negative utilitarianism as a branch within classical utilitarianism, which assigns a higher weight to the avoidance of suffering than to the promotion of happiness.[67] The moral weight of suffering can be increased by using a "compassionate" utilitarian metric, so that the result is the same as in prioritarianism.[68] (wiki)