Chapter 3: The Mandate to Maximize

On the website utilitarianism.net, Will MacAskill and Darius Meissner write that “advocates of utilitarianism have argued that the theory has attractive theoretical virtues such as simplicity.” But I find this misleading: utilitarianism is actually a complex bundle of many different ideas.[4] In this short chapter, I will do my best to outline the features and properties of utilitarianism that are most relevant to the critique below.

The first thing to note is that the question “What is right and wrong?” is different from the question “What is good and bad?” You may have heard the old saying that “whereas deontology puts the right before the good, consequentialism puts the good before the right.” What this means is that for someone like Immanuel Kant, a deontologist par excellence, whether the consequences of an act are good or bad has nothing to do with whether the act is right or wrong. This is why he argued that it would be immoral to lie to a murderer at your door asking for the whereabouts of his next victim. Sure, the victim being murdered would be bad — Kant would agree — but what does that have to do with morality? So long as you follow unbendable moral rules like “Never lie,” you’re doing what’s necessary to keep your moral house in order.

Utilitarians disagree. For them, you can’t answer the first question above without already having answered the second one. The reason is that morally right actions are defined as precisely those that increase the amount of good in the world. But what is “the good”? There are several possible answers, but for the sake of our discussion, let’s accept a hedonistic theory according to which the one and only intrinsically good thing in the universe is that subjective state called pleasure or happiness. This leads to the following definition: an act is morally right if and only if it maximizes the total amount of pleasure in the universe (compared to all the other acts available at the time). This is the heart and soul of “total” utilitarianism, where “total” refers to the fact that what matters is the absolute quantity of pleasure rather than the average.[5] So, whereas Kant would claim that lying is wrong because it violates the Moral Law, utilitarians would say that lying is wrong when and only when doing so would fail to maximize intrinsic value — that is, pleasure, if one’s a hedonist.

But how should we assess whether an act has in fact maximized intrinsic value? Consider this somewhat silly example: you hack into the bank accounts of 1,000 people and steal $100 from each. You then take the resulting $100,000 and spend it on a relaxing vacation on the sunny tropical beaches of a Caribbean island. Did stealing in this case maximize the good? Clearly, the total amount of pleasure has increased for you. But ethics is supposed to be impartial and objective, not biased toward particular individuals. It strives to assess our moral choices and actions from a neutral perspective called the moral point of view.[6] Ethicists have proposed many different accounts of what the moral point of view should be. For Kant, it was the Categorical Imperative. But for Henry Sidgwick, an influential early classical utilitarian, the moral point of view is nothing more or less than “the point of view of the universe.” Accordingly, we should assess the consequences of actions from a disembodied cosmic eye, which, looking down from above, can calculate the overall increase or decrease in pleasure objectively. In the case above, although sneaking away with $100,000 was good overall for you, it was not good overall for the universe, so to speak, which considers the effects of an action on everyone equally.
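The decision rule just described can be made concrete with a toy sketch. This is my own illustration, not anything from the chapter, and all the pleasure numbers are invented: it simply sums each act's effects over everyone affected, as the "point of view of the universe" would, and picks the act with the greatest total.

```python
# A toy sketch (invented numbers) of the total-utilitarian decision rule:
# an act is right iff no available act would produce a greater total of
# pleasure, summed impartially over everyone affected.

def total_pleasure(effects):
    """Sum an act's pleasure (pain counts as negative) across all affected."""
    return sum(effects.values())

# The hacking example: the thief gains a great deal of pleasure,
# but each of the 1,000 victims suffers a small loss.
steal = {"you": 50, **{f"victim_{i}": -1 for i in range(1000)}}
refrain = {"you": 0}

acts = {"steal": steal, "refrain": refrain}
best = max(acts, key=lambda name: total_pleasure(acts[name]))
print(best)  # "refrain": stealing was good for you, but not for the universe
```

On these made-up numbers, stealing nets −950 units overall while refraining nets 0, so the impartial calculation condemns the theft even though it benefits you.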

So far so good! There is one more aspect of this view, especially as Bostrom understands it, that we need to establish. One of the many criticisms of utilitarianism is that it’s insensitive to the distinction between persons. Consider a situation in which you’re told that if you get a slightly painful medical procedure next week, you can save yourself from an extremely painful procedure in 1 year. Many of us would opt for the procedure next week. In doing this, we inflict pain upon ourselves so that our future self can avoid it. For utilitarians, though, we should think about trade-offs between lives in the exact same way: if inflicting pain upon one person next week would spare a separate person from experiencing far worse pain in 1 year, then morality orders us to do it. This is what “the greater good” as seen from the cosmic vantage point is all about: making trade-offs here and there without thinking about persons as inviolable beings, or actions as being constrained by the sorts of rules that Kant proposed.[7]

On this view, persons — you and me, your grandma and your spouse — do not matter in and of ourselves.[8] We are mere means to an end, that end being maximal pleasure in the world. As John Rawls famously put it, persons are just the “containers” of intrinsic value. We are fungible (that is, interchangeable) receptacles that matter only because pleasure cannot exist without containers to contain it.[9] This leads to a startling conclusion: the death of someone you love dearly is no worse, morally speaking, than the non-birth of someone who could have existed but never will. To illustrate, think of a person in your life whom you love dearly, and imagine that person perishing. (Sorry for the dark thought!) Now imagine a merely possible person named “Diego.” He is what I would call a currently non-existent, possibly never-existent imaginary being. Let’s say that, in fact, he never comes into existence. Which of these two scenarios is worse? Bostrom would say that they’re equivalent, given that (a) Diego would have a happy life, and (b) we bracket the extra suffering that your loved one’s death would cause those who survive. In other words, the death itself is morally equivalent to the non-birth of Diego. Why? The answer should be obvious: your loved one and Diego are just containers, and in terms of the total amount of pleasure in the universe, there’s no fundamental difference between removing a container that exists and failing to create a container that could exist.
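The arithmetic behind this equivalence claim can be spelled out in a toy sketch. The numbers here are my own invention, not the author’s; the point is only that, if all that matters is the universe’s running total of pleasure, the two scenarios subtract the same amount from it.

```python
# Toy arithmetic (invented numbers) behind the "containers" claim: if only
# the universe's total pleasure matters, losing an existing container and
# never creating a possible one reduce that total by the same amount.

background = 1000   # pleasure held by everyone else, fixed across scenarios
happy_life = 80     # pleasure a full happy life would contain

world_with_loved_one = background + happy_life  # they live out a full life
world_after_death = background                  # the container is removed
world_with_diego = background + happy_life      # a new container is created
world_without_diego = background                # the container never exists

# The shortfall is identical either way:
print(world_with_loved_one - world_after_death)  # 80
print(world_with_diego - world_without_diego)    # 80
```

Both worlds come out 80 units short of their alternative, which is why, on this view, the death and the non-birth register as morally equivalent once survivors’ grief is bracketed.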

To tie these threads together, morality abhors a vacuum. The more value containers that exist, the more potential value. The more value, the better the world becomes. Hence, this utilitarian view commands us to maximize the total number of containers — meaning “people” — with net-positive amounts of value. The best possible outcome is one in which the largest number of happy people exist. Bigger equals better.[10]

[4] See, for example, section 1 of this Stanford Encyclopedia of Philosophy article. Note that utilitarianism is the paradigm case of consequentialism.

[5] Note that virtually no philosophers today are average utilitarians.

[6] For a short introduction to this idea, see https://plato.stanford.edu/entries/original-position/#HisBacMorPoiVie.

[7] As Will MacAskill put this very point in a 2018 podcast interview with 80000 Hours: “The third argument [for utilitarianism] is rejecting the idea of personhood, or at least rejecting the idea that who is a person, and the distinction between persons is morally irrelevant. The key thing that utilitarianism does is say that the trade-offs you make within a life are the same as the trade-offs that you ought to make across lives. I will go to the dentist in order to have a nicer set of teeth, inflicting a harm upon myself, because I don’t enjoy the dentist, let’s say, in order to have a milder benefit over the rest of my life. You wouldn’t say you should inflict the harm of going to the dentist on one person intuitively in order to provide the benefit of having a nicer set of teeth to some other person. That seems weird intuitively. … [O]nce you reject this idea that there’s any fundamental moral difference between persons, then the fact that it’s permissible for me to make a trade off where I inflict harm on myself now, or benefit myself now in order to perhaps harm Will age 70 … Let’s suppose that that’s actually good for me overall. Well, I should make just the same trade offs within my own life as I make across lives. It would be okay to harm one person to benefit others. If you grant that, then, you end up with something that’s starting to look pretty similar to utilitarianism” (italics added).

[8] In contrast, Kant argued that rational beings like us are ends in ourselves.

[9] For discussion, see the “Separateness of Persons and Distributive Justice” section of Derek Parfit’s book Reasons and Persons. Other scholars have attempted to avoid this aspect of total utilitarianism here.

[10] Note that, as of 2013, only 23 percent of professional philosophers surveyed preferred consequentialism to deontology, virtue ethics, or other ethical theories. For many philosophers, arguments like those articulated by Bostrom are reasons to reject utilitarian ethics because of their patent absurdity.

Author and scholar of existential threats to humanity and civilization. www.xriskology.com. @xriskology