Hacker News
Why I am not a utilitarian (fakenous.net)
44 points by georgewsinger on Jan 23, 2022 | 95 comments



The gist of the argument is that utilitarianism provides unintuitive responses in certain cases; therefore, we should default to deontology. The utilitarian edge cases are not unfamiliar to me; however, deontological systems suffer from failures too. Perhaps the author is not familiar with the nuts and bolts of deontology?

Deontological failures present themselves as conflicts of duty or moral dilemmas. It's not difficult to contrive these ourselves or to easily find a ready-made list. To that end, the author's conclusion of defaulting to deontology is logically inconsistent - it merely avoids these dilemmas by not bringing them up at all.


> The gist of the argument is that utilitarianism provides unintuitive responses in certain cases therefore, we should default to deontology.

I'm not sure I see the argument of the article that way (even though the author does end by stating that we should default to deontology). I would phrase the core of the argument this way: utilitarianism is a form of deontology! Utilitarianism ultimately rests on the ethical intuition that maximizing pleasure and minimizing pain is the only value that should be pursued. That's not an alternative to deontology: that is a deontological claim. And, once it's put baldly like that, it's a pretty questionable claim: is maximizing pleasure and minimizing pain really the only value that should be pursued? The article is simply posing scenarios in which some other deontological value conflicts with that value.


> utilitarianism is a form of deontology

Systems of morals can all be expressed as rules, outcomes, or goals, so each can be phrased to subsume the others.

> Utilitarianism ultimately rests on the ethical intuition that maximizing pleasure and minimizing pain is the only value that should be pursued.

That sounds more like hedonism. Utilitarians don't necessarily believe that pleasure is perfectly aligned with utility.


> Utilitarians don't necessarily believe that pleasure is perfectly aligned with utility.

Then say "maximizing utility" in place of "maximizing pleasure and minimizing pain". It doesn't change the fact that utilitarianism says there is just one value worth pursuing, whereas other deontological systems say there are multiple values worth pursuing. The latter seems to me to be far more in line with reality.


I'm happy to see these kinds of discussions.

Here's the thing: one of the biggest problems with deontological approaches, as the parent pointed out, is that you end up with conflicts between values. But it's worse than that, because there's no clear way to resolve these conflicts, and in my experience there's just a throwing-up-of-hands moment, along with a denial that the conflict existed, or an appeal to "maintaining values" without even recognizing that another value was abandoned.

So you could turn this on its head and argue that if deontological and utilitarian approaches are equivalent at some level, why not advocate for the approach that at least offers a framework for resolving conflicts in theory? If two values conflict, how do you decide? More importantly, how do you decide without appealing to utility at some level, including implicitly?

This gets into very high-level ethical reasoning issues that I'm not sure I would describe as purely utilitarian or deontological. But I think if you take the perspective of equivalence (and I think that's a reasonable perspective), there's not much ground for saying you're not deontological or not utilitarian.


> If two values conflict, how do you decide?

I'm not sure there is any general rule for this. That's why there are conflicts like this in the first place: because there isn't one consistent set of ethical rules that can be applied to every situation.

> More importantly, how do you decide without appealing to utility at some level, including implicitly?

"Utility" in this context is just another way of saying "what people value". But if there were one single scale on which we could measure everything that people value, there wouldn't be ethical conflicts. The whole point is that there isn't any such scale: people have multiple different, incommensurable values, and there is no general rule for how to resolve conflicts between them.


> Systems of morals can all be expressed as rules, outcomes, or goals

I have no problem with this, but utilitarians claim that utilitarianism is an alternative to deontology. If your statement here is true, then utilitarians are wrong in that claim.


These are just different levels of abstraction. You can write Bel in Blub, but that doesn't mean they aren't in some sense alternatives.


Why default to deontology then? If this framing doesn't escape moral conflicts then it's no better than utilitarianism!


> If this framing doesn't escape moral conflicts then it's no better than utilitarianism!

It is if it recognizes that there is no magic framing that escapes moral conflicts. Utilitarianism claims to be such a framing, so showing that it doesn't actually deliver those goods seems to me to be a contribution worth making.


The author is Michael Huemer, a professor of philosophy at the University of Colorado, so would be very familiar with "the nuts and bolts of deontology".


Surely he should be familiar with logical reasoning then as well. Yet the argument presented here is 'A is bad, therefore B' without any evidence that B doesn't also have its own problems. If he is an authority on deontology, he would do well to provide evidence by way of his writing.


People love to point out that utilitarianism is bad by using reductio ad absurdum. However, the point of utilitarianism, to me, is not to have some kind of mathematical function to calculate utility and to follow it slavishly. Rather, it is to be pragmatic and to not be anti-utilitarian.

There are many cases where the "ethical" course of action makes many people miserable and where a simple, pragmatic, utilitarian solution would be beneficial. You don't have to go to extreme examples like the trolley problem or shooting down planes with terrorists in them. Just look at organ donations, for example. Here in Germany, it is opt-in. Why can't we just make it opt-out, or do it like in the US, where you have to decide yes or no when you get a driver's license?

I would consider myself somewhat utilitarian, but I have no problem reconciling most of the mentioned problems with common sense. Sometimes it is by questioning the set-up:

> b. Framing the innocent: You're the sheriff in a town where people are upset about a recent crime. If no one is punished, there will be riots. You can't find the real criminal. Should you frame an innocent person, causing him to be unjustly punished, thus preventing the greater harm that would be caused by the riots?

The real problem here is that people are going to riot if their sense of justice is not satisfied. You're not just a sheriff but also a philosopher - the solution is to change society!

Most of the problems are variations of the classic "one person suffers greatly so many can have shallow pleasure". I don't know anybody who really proposes integrating over utility without any weighting like this. Instead, most people would weight the injustice much more heavily than the pleasure. I maintain, for example, that we could radically reduce CO2 emissions and redistribute wealth to reduce poverty if we were willing to accept a couple percent of GDP degrowth. That means on average you might have to wait 2-3 years to get a new phone or a new car, and you might have to take a less than ideal job for a couple of years. But your inconvenience would be more than compensated by the increased utility for all.


Bentham is always credited with utilitarianism, but Francis Hutcheson (1694-1746), the PhD advisor of Adam Smith, formulated "the greatest happiness for the greatest number" nearly 100 years before Bentham.

Smith referred to him lovingly as "the never-to-be-forgotten Hutcheson." Oops.

Hutcheson also devised a system of "moral computation" (complete with a series of equations—and applicable to AI systems) and specifically indicated how it should be used in conjunction with human intuitions.


Reminds me of how the "veil of ignorance" is attributed to John Rawls while the original was developed by John Harsanyi.

In the end, it doesn't much matter who did what, as long as good ideas got popularized. Bentham made an enormous contribution to developing and spreading utilitarian thought, and should be praised as such.

Thank you for sharing about Hutcheson - I've not heard of him before.

One of the best resources for philosophy reading is the Stanford Encyclopedia of Philosophy. Here it seems Hutcheson is credited with "proto-utilitarian" views / "anticipation of some strains of utilitarian thought".

https://plato.stanford.edu/entries/hutcheson/


For being the guy who cultivated Adam Smith, you'd think he'd get more attention. But the issue is the long-s (ſ) printing of the time. Hard to read!

His work on the inner sense of harmony and beauty is great psychology. Big influence on Smith, whose first book was on sympathy.

Oh, he also coined the term "inalienable rights" and wrote the text on moral philosophy that was used to educate over 60 percent of the signers of the Declaration of Independence.

Also had a big connection to secret societies and the whole western esoteric deal. Part of why Adam Smith burned all his papers (one may presume).


I think the examples are mere forms of sophistry. At best they’re straw men. Take the example of one healthy patient and five sick ones. First, this is not actually the “greatest good” at all, because you’re only considering six people from the viewpoint of one doctor. Second, why calculate one murder as being of less negative value than the positive value of “saving five lives?” Third, we don’t know anything about other alternatives that are considered instead.

I really don’t think these are examples of utilitarianism.


Exploring your second point a bit more: if it were five billion lives rather than five lives being saved, most people's intuition would be that regrettably it's necessary to murder the one patient. Our intuition about this thought experiment transitions from deontology to consequentialism as the magnitude of the utility impact increases. There's something unarticulated about the murder that seems worse than five natural deaths, but not worse than five billion natural deaths.

Maybe it's the fact that the act would have a major negative effect on the surgeon, or that admitting a justification for murder other than self-defense could open the door to abuse causing widespread harm. Ie, maybe our intuition is basically utilitarian but is constrained by rights and rules to protect against risks that are hard to quantify.


Yes, even from a utilitarian viewpoint, the prospect of living in a society that will murder you if there happen to be a sufficient number of people that require your organs is a dismal prospect. A utilitarian should also account for that very significant negative externality.

Many of these examples fail to take overall utility into account. The sadistic pleasure is another prime example. You must consider not just those who would enjoy the torture but also those that would be repulsed by it.

Same with the cookie. Rewarding a serial killer would make much of society unhappy. From a utilitarian point of view, giving the cookie to Mother Teresa (generally rewarding those who act kindly to others), is a very straightforward choice. The individual perception of cookie sweetness contributes very little to the overall utility.


utilitarianism is generally considered to only value outcomes and not any acts themselves.

So, one is comparing “a person dying” and “5 people not dying”, unless there’s something about the experience of the death-due-to-murder and the death-due-to-illness that differs and is taken into account as part of the outcome.

If you assign value to acts, then you don’t really have utilitarianism anymore, and you can probably describe nearly any ethical system in that language?


Most ‘don't be utilitarian’ arguments are really ‘don't be a stupid utilitarian’. If a course of action where you do some amount of harm to mitigate a larger amount of harm would clearly result in a worse world, then you are NOT actually summing up all the resulting negative utility in your so-called-utilitarian calculations.

It is in fact the case that we do often do bad things for more important goals. We kill people in wars. We support legal systems that we know sometimes imprison innocent people. The government takes money from people against their will for their own purposes, which isn't stealing mostly only because it's written in law.

What's the difference? Well, in those latter cases, it is actually considered better in sum for that action to be taken. For most of the examples in the post, there are simple and clear arguments that it is not better in sum, and the only reason the article is claiming otherwise is that the author is pretending to be dumber than they are by not mentioning the rest. So don't do that.

(And for some cases, though you wouldn't know it from the article, the full utilitarian calculus does actually say that uncomfortable things are the right thing to do, and in those cases you should just bite the damn bullet.)


And in the "don't be stupid" line of thought - the examples are fundamentally unrealistic because examples have to be.

For example, the murdering doctor example assumes that there are no dangerous second order consequences to randomly murdering people. That isn't a safe assumption. A doctor might reasonably conclude there is enough value in having trustworthy institutions that the locally optimum choice is different from the globally optimum choice after evaluating the risks.

A utilitarian could make either choice in any of those examples if they disagree with the assumptions implicit in the question - which isn't really allowed in this sort of thought experiment. The situation is artificial because the author is allowed to create certainty and constraints that don't map to the reality where these problems will be encountered. Any moral framework will give weird results if applied to artificial situations, so constructing artificial situations where the author objects to the utilitarian answer is useless as evidence.


No, those unrealistic scenarios make it easier to see the problems without having to deal with all the real-world complications. Many of the examples just try to point out that utilitarianism allows you to treat one group as badly as you want as long as you overcompensate for their suffering by making another group happy enough. It's a stupid idea in its pure form.


A utilitarian would absolutely say that causing one group severely negative utility is worth a corresponding positive utility in another group. However, they'd also say that these are useless examples to apply knee-jerk aesthetic analysis to, because they're practically impossible.

The aversions we have to this kind of inhumane utility juggling exist because we instinctually recognize the second-order and knock-on effects without needing to specify them. Dehumanization, inequality, precedent - these are all utility-real things which a responsible utilitarian incorporates into their utility calculus as major negative impactors.

When you strip the situation of any practical relevance, of course the conclusions are 'stupid' because the criteria you apply to determine that stupidity are based on a practical reality.


"Don't be stupid" unfortunately is not a practical rule for making judgements, as you have no way to determine what is stupid. It is just a hand-wavy dismissal of the argument.


> If a course of action where you do some amount of harm to mitigate a larger amount of harm would clearly result in a worse world, then you are NOT actually summing up all the resulting negative utility in your so-called-utilitarian calculations.

This is a really good point and one that doesn’t get raised enough.


> Most ‘don't be utilitarian’ arguments are really ‘don't be a stupid utilitarian’

Someone being a "stupid utilitarian" is a subjective judgement.

> then you are NOT actually summing up all the resulting negative utility in your so-called-utilitarian calculations

Well, that's the trick, isn't it? What's the correct calculation? There is no objective answer to this. You can change the calculation to suit your opinion.


The question of how one actually resolves the utility calculation (once weighing bases are determined) is a question of applied ethics, not normative ethics.

Consider: it could be entirely consistent for a Utilitarian to make the utility-justified decision that utility calculus is outside of their capability, and that a Deontological framework would serve as their best in-situ heuristic. They would still be a Utilitarian, even though their black-box criteria resemble a deontology.


What exactly is the point of utilitarianism then? What can you say about utilitarianism itself? Under this scheme, it seems to possess no qualities.


Well that depends on what you want to do - if you're trying to interpret an applied conundrum, it provides a framework for accepting or rejecting justifications towards that question. It doesn't necessarily prescribe a solution, but provides a metric for evaluation of a solution's justification.

Alternatively, on a normative level, you can still debate the merits of the framework when compared to other systems (deontological, virtue, care, etc.) by beginning with meta-ethical axioms and resolving them to normative positions. Consider an ontological framework which doesn't recognize the free will of others - this could dramatically inform a dilemma between a utilitarian system and an egoist one.


Sure, but that still contradicts with the GP saying "just don't be stupid and calculate the utility properly".


I don't see the contradiction. There is, after all, no one true utilitarianism.

What you're not allowed to do is say, as the article did, that X has the highest utility when strawmanning a utilitarian, only to turn around and say X sucks once the strawman is over.


Throwing out utilitarianism in ethics because you can't exactly calculate which actions precisely correspond to the brightest futures is a lot like throwing out mathematical modelling in engineering because you can't exactly calculate which arrangements of atoms precisely correspond to the most optimal bridges.

I'd be more open to questions about how a rational person might figure out what the best things to do are in the face of uncertainty if the article's examples didn't include things like "should we torture Jewish people for the enjoyment of Nazis?" The right intervention there is much more basic: actually stopping to think about what one is saying.


>If a course of action where you do some amount of harm to mitigate a larger amount of harm would clearly result in a worse world, then you are NOT actually summing up all the resulting negative utility in your so-called-utilitarian calculations.

This is kind of missing the point. Even if it were to result in a "better world", the intuition is that it is still wrong to murder the innocent man to save 5, just prima facie. This is a problem for the utilitarian.

Their best response might be to just bite the bullet, but it significantly weakens their argument overall.


> Their best response might be to just bite the bullet, but it significantly weakens their argument overall.

I disagree. To me it seems fairly obvious that it is morally correct to murder one innocent person to save 5 innocent people. Not only that, but we often as a society make this exact decision (e.g. send young people to fight in armies, send firefighters to rescue people despite potential harm to themselves, etc).

I don't know why this supposedly weakens utilitarianism - the whole point of the philosophy is exactly to point out that this is the morally correct choice.

The problem with this and the other examples in the article is that there's a big difference between the hypothetical examples often given, which assume no other effects, and the real world. In the real world, we often only allow these kinds of decisions in society-wide settings - e.g. the examples I gave above (armies, firefighters). We don't allow people to just choose on a case-by-case basis who should live or die, because it would cause far more problems (and thus, negative utility) to do so.


> It is in fact the case that we do often do bad things for more important goals. We kill people in wars.

I may not be a utilitarian, but I do think war is a bad thing, so I’m not sure how this is a great example.


Isn't the main problem with the utilitarian views presented here that they are deeply hedonistic?

Yes, if you only care about pleasure, you will make horrible moral choices.

Now if we include in our definition of goodness things that are actually important, like survival, stability of order, justice and fairness, scientific and personal progress and so on, we might get more practical. Of course, then these decisions become multi-dimensional and highly dependent on your personal priorities, but that is the real world for you.


Bentham or Mill did actually introduce a concept of "quality" to pleasure, where an intellectual pleasure was considered more desirable than basic "animalistic" ones.


Utilitarianism will not lead to counterintuitive answers if the broader consequences of each action are taken into account. For me, this is the true defence of utilitarianism. Each of the examples is completely stripped of societal consequences, which is wholly unrealistic. A utilitarian doctor will not kill their healthy patients in a real hospital, in a real society, where their actions will have major consequences on countless more than the 6 people presented in the text, and far into the future.

They are comparable to the question "But if you want to live a happy life, why not inject yourself with heroin and feel super happy for a few hours? It would be the best way to live a happy life today."

Upholding personal rights, predictability and principles will maximise human happiness in the long term, while optimising for short-term happiness will often cause suffering in the long term.

Some examples:

> Say you’re a surgeon. You have 5 patients who need organ transplants, plus 1 healthy patient who is compatible with the other 5. Should you murder the healthy patient so you can distribute his organs, thus saving 5 lives?

A society where people are deathly afraid of sending loved ones to the hospital, for fear they might be killed and used for organ harvesting, will lead to the collapse of healthcare and cause far more suffering than temporarily increasing the supply of organs will prevent.

> You’re the sheriff in a town where people are upset about a recent crime. If no one is punished, there will be riots. You can’t find the real criminal. Should you frame an innocent person, causing him to be unjustly punished, thus preventing the greater harm that would be caused by the riots?

If this framing of the innocent is discovered, which is not unlikely, it will lead to complete distrust in law enforcement, and over time the corruption of law enforcement will cause far more suffering than stopping this specific riot will prevent. And the unjust punishment might cause riots for a different reason, anyway.

This might even match 1-to-1 with your intuition, if your intuition is "This seems logical on paper, but I don't want to live in a society where this happens, so I'm against it."

TL;DR: We live in a society.


Well, I am not sure if I can see the counterintuitiveness(?), because

> pleasure/enjoyment (and the absence of pain)

> Say you’re a surgeon. You have 5 patients who need organ transplants, plus 1 healthy patient who is compatible with the other 5. Should you murder the healthy patient so you can distribute his organs, thus saving 5 lives?

How is murdering a guy not painful for the guy? So I guess it's clear how utilitarianism handles that.

I think if we consider the frustration and pain, there is nothing that is not clear.


A horrific piece. How can this person possibly be a professor of philosophy?

The examples are simply absurd. As many other commenters have pointed out, he simply doesn’t evaluate them reasonably.

You can certainly make the case that this is a real problem with utilitarianism - i.e. that there is no good function for evaluating scenarios.

But that's not what the author is arguing. He is arguing that utilitarianism leads to repugnant trade-offs.

He’s not even wrong. He just doesn’t seem to have thought deeply about it at all.


Another thing (usually) not discussed in such articles is that a group may benefit more from different types of thinking, which balance each other out naturally. I’ve lost a link to the video about it, but it shows how a group of behaviors A and B is unstable and leads to either all-A or all-B, which then heavily underperform because of their traits. But if you add enough C proportion into this group, it eventually stabilizes, or cycles more effectively (depending on a definition of it). So you may be or not be foo/bar-ian, but in a sense of a system you live in, it may produce different results than if everyone were acting like that.

This simulation was indeed very simplistic and cannot represent or predict reality, but neither can you. Until you test it, you won’t figure out if it’s beneficial or not to someone, everyone, anyone. Watching these dilemmas in isolation does nothing to your understanding, whichever philosophy you use.

We don't build our way to our goals; we explore the insanely complex network of possibilities and choose (and stick to) those which worked well enough. It's like the edge of a fractal: sometimes a little change converges to nothing different, sometimes it turns everything upside down.


I'm afraid the utilitarian vs deontological argument will never be resolved to everyone's satisfaction, because ultimately it depends on what axioms you choose to believe in. Personally, I stick by my conviction that utility cannot be assigned a cardinal, but only an ordinal number. You can't do math with ordinals apart from comparison operations, therefore dooming any attempt at a calculus of utility from the get-go.
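(To make the ordinal-vs-cardinal point concrete, here is a minimal Python sketch under that assumption: a purely ordinal utility supports comparison, but any attempt to sum it across people is undefined. The class and names are hypothetical, purely for illustration.)

    from functools import total_ordering

    @total_ordering
    class OrdinalUtility:
        # Utility as a rank only: comparable, but not addable or averageable.
        def __init__(self, rank):
            self.rank = rank
        def __eq__(self, other):
            return self.rank == other.rank
        def __lt__(self, other):
            return self.rank < other.rank
        def __add__(self, other):
            # The crux of the comment: there is no meaningful '+' on ordinal
            # preferences, so "summing utility across people" is undefined.
            raise TypeError("ordinal utilities cannot be summed")

    a, b = OrdinalUtility(2), OrdinalUtility(5)
    print(a < b)    # comparison is fine: True
    # print(a + b)  # raises TypeError: the aggregation step of the calculus is blocked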


The author tries to derive a dilemma for utilitarianism:

> If you don’t accept ethical intuition as a source of justified belief, then you have no reason for thinking that enjoyment is better than suffering, that satisfying desires is better than frustrating them, that we should produce more good rather than less, or that we should care about anyone other than ourselves.

We have to take the author's word for this, because he does not offer any support for his claim. I wonder whether he believes that his personal ethical intuition can/may/should be generalised. The structural similarity to "whosoever does not believe in (my) god has no base for ethics" should serve as a warning.

> If you do accept ethical intuition, then at least prima facie, you should accept each of the above examples as counter-examples to utilitarianism. Since there are so many counter-examples, and the intuitions about these examples are strong and widespread, it’s hard to see how utilitarianism could be justified overall.

The author is willfully ignorant about utilitarianism and should make amends.


I don't think either utilitarianism or deontology can claim precedence over the other. We are doomed to have the two fighting in the human brain for as long as there will be human brains. And different people will come up with different conclusions in the same circumstances. A third principle is "First, do no harm" - not just for the medical profession but for everybody in any circumstances. I think in many cases one should let that take precedence over both utilitarianism and deontology. Both utilitarianism and deontology assume we are smarter than we actually are. We cannot compute the pleasure of all human beings in the aggregate. Nor are we very far-sighted when prioritizing between principles in many cases. I am sure this conclusion is a disappointing one to the philosophers in the room, but I think it is true.


Most if not all of these supposed dilemmas are the author failing to account for the aftereffects.

Organ harvesting: How bad is the fear, in every healthy person, of risking being harvested? Healthy people far outnumber people in need of organ transplants. How would that fear condition their lives? Is rage generated? With what effects on those in need of organ transplants?

Framing the innocent: If the sheriff's action comes out and doesn't get punished, what's the effect on people who now fear being framed for no reason? What's the effect of people losing trust in justice? If you need just a temporary culprit, fake it only until you find the real criminal or improve the situation by other means. Obviously, situations abstracted from their context and from other lines of action lead to absurd solutions.

Deathbed promise: The aftereffects in this case are so complex that giving a clear response is difficult at best.

Sports match: Just save the person and let the other people know that they helped to save a person in need. The gratification will be much greater than the annoyance of a short interruption, and the memories of that extraordinary event will be good too. If the person was just experiencing a little distress for the last few minutes, just let him experience it. Utilitarianism lets you balance different situations.

Cookie: If we are talking about cookies, who cares? If we are talking about reward, we can balance two things: a gentle act toward a mad person can have much bigger effects, and letting people know that good people get rewarded can generate more good people and more good over time.

The Professor and the Serial Killer: The professor does good mainly with his work and with his personal life; charity is only a small part. How would being a lawyer affect this total outcome? The extra money he earns comes from somewhere. What's the total effect?

Excess altruism: An altruist gets joy from helping others, so by giving the cookie to Sue the total pleasure is maximized.


I think the fallacy in some of these examples comes from assuming a theoretical world where the future and the alternatives are known with certainty.

>Say you’re a surgeon. You have 5 patients who need organ transplants, plus 1 healthy patient who is compatible with the other 5. Should you murder the healthy patient so you can distribute his organs, thus saving 5 lives?

There is no mention about possible complications with surgery (all five patients could die shortly after recovering from surgery) or whether there are other humans in the world that could donate the organs without sacrificing a life.

In real life you would wait (i.e., do nothing, do no harm) to find alternatives, which would redefine the problem. In the theoretical world there's no gain in waiting because the problem is strictly defined and immutable.


I'm completely unmoved by all of these, except for (f), sadistic pleasure. In my opinion this one is poorly presented, because it doesn't highlight the real problem well enough, since it is so shocking. The issue is not so much that some concepts of pleasure are better or worse (this may be true, but it doesn't make me really worry for utilitarianism). The issue is that interpersonal hedonic comparisons are not possible. There just is no way to do this. If one person says they enjoy something more than another person does, how do you confirm or falsify that? My best answer is to do what you would if you knew that tomorrow you would wake up as a random person, but one can immediately find problems with this.


Utilitarianism can be a powerful philosophy in some situations. Without a fake contract, the experiments that led to Pong wouldn't have taken place [1].

Also, if you have been in the software industry long enough, you have probably been in a situation where you had to deliver buggy or incomplete software because your customer wanted it yesterday.

Personally, I'd rather let someone know the stuff they are getting is not as good as it should be (yet) than fail to meet a deadline and permanently damage our relationship.

This is of course heavily dependent on the problem domain. The failure of a seismometer and a hypercasual puzzle game have very different consequences.

[1] https://spectrum.ieee.org/pong


> It’s not good because utilitarianism, like all ethical theories, rests on ethical intuitions.

This is not the case; if it were, philosophy wouldn't exist. Instead we would just have people unnecessarily explicating their intuitions, like the article does.


The article omits what I always considered the best argument against utilitarianism. It's basically a cutting-off-the-branch-you're-sitting-on argument, and it's due to Bernard Williams.

1. Utilitarianism asks me to choose to save 10 strangers over my own kid, as this would maximize human happiness.

2. The bond a parent shares with their kid is among the most foundational examples of human happiness.

3. If human happiness is so flimsy that I should be willing to destroy the root of my own in the advancement of some abstract ethical theory, why is human happiness worth maximizing in the first place?


Utility isn't the same as happiness. And in any event, another significant problem is in quantifying happiness, even if you were to base your decision-making on it: how do you know that the decision-maker deciding to sacrifice their own kid would maximize overall happiness? How do you know those 10 strangers wouldn't be overcome with guilt, or that the pain of the decision-maker wouldn't equal whatever suffering might ensue in others from the alternate decision?

Also, if that decision were made, it's not in the service of advancing some theory, it's in sacrifice. Wouldn't to do otherwise be a selfish decision? And if not, what does it say about maximizing happiness? Whose happiness is relevant? The 12 people involved, or whoever might be privy to the decision?

Some of these things might argue against utilitarianism, but I'm not sure any other ethical framework would be more immune to criticisms.

Sometimes I think these kinds of decision scenarios are fundamentally intractable, and don't really speak to reasons for adopting or not adopting any particular ethical paradigm.


> Utility isn't the same as happiness.

In theory no, but in practice it is the concrete instantiation of utility most commonly proposed. Moreover, I think any definition of utility we invoke will be susceptible to a similar critique.

> another significant problem is in quantifying happiness

I agree with this, and I think the measurement problem is insurmountable. But I still think Williams' criticism is deeper. If we magically solved the measurement problem (which, again, I think is impossible - the question is ill-posed), I would still be opposed to utilitarianism, because whatever notion of goodness we invoke becomes trivialised when we boil it down to a liquid currency.

> Sometimes I think these kinds of decision scenarios are fundamentally intractable, and don't really speak to reasons for adopting or not adopting any particular ethical paradigm.

I think they can be useful, but analytic philosophers have a bad tendency to lean far too heavily on them. Often results in missing the forest for the trees.


This seems to assume that the 10 strangers are not someone else’s children.

My understanding of utilitarianism is that other factors than keeping someone alive should be considered. Is it utilitarian to save 10 elderly people over the life of a child? It might not be, depending on how long the elderly are expected to live, and if the remainder of their lives is as valuable as those of a child (eg, will their remaining years be unproductive and sickly due to age, while the child’s life may be full of productivity and health).


> This seems to assume that the 10 strangers are not someone else’s children.

It does not make that assumption.


I feel like the examples that were given were quite narrow-minded. Utilitarians aren't narrow-minded robots, I think most people will include the bigger picture when making decisions, including ethics.


I hate this type of article: "Let's condense a philosophical idea that people have spent centuries developing into one sentence and then write several paragraphs attacking that sentence."


It can't possibly be wrong because people thought about it for centuries? Lots of such theories are wrong.

I think it is good to point out the flaws in a short and concise way. An achievement, really. It is more difficult to be short and concise than to write long articles.


He said nothing of the sort; he claims that the author is oversimplifying utilitarianism.


It's entirely possible the idea is nonsense; attacking a straw man version of it just doesn't show that it is. An idea developed over that period can't be meaningfully summed up in a sentence.


It can of course be wrong. But it is hubris to think that these ideas are new and haven't themselves been discussed and challenged in the moral philosophy community.


Most, if not all, of these problems can be done away with if you don't assume that utility is linear, but e.g. logarithmic.

Then things like enjoying one cookie slightly more than somebody else aren't a problem, because the enjoyment of ten cookies is not much greater than the enjoyment of nine cookies, and you can have a spare one for somebody else.
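(A rough sketch of that weighting in Python, with made-up numbers: under a log utility the tenth cookie adds almost nothing, so the spare cookie is worth more to someone with none, even if the cookie-lover enjoys each cookie slightly more.)

    import math

    def cookie_utility(n):
        # Toy diminishing-returns utility: log(1 + n) for n cookies.
        return math.log1p(n)

    # Marginal value of a tenth cookie to someone who already has nine...
    tenth_cookie_gain = cookie_utility(10) - cookie_utility(9)
    # ...versus a first cookie to someone who has none.
    first_cookie_gain = cookie_utility(1) - cookie_utility(0)

    print(round(tenth_cookie_gain, 3))  # ~0.095
    print(round(first_cookie_gain, 3))  # ~0.693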


> Most, if not all, of these problems can be done away with if you don't assume that utility is linear, but e.g. logarithmic.

Really? You picked the easiest example in the article. Try a more interesting one, like, say, organ harvesting: how does logarithmic utility deal with that?


Second-order effects. If the past few decades have shown anything, it's that the second-order effects are much more important than any single effect.

What is the second-order effect in this case? "Humans might be killed for organ harvesting. This leads to widespread uncertainty over who would be killed next. And uncertainty leads to other issues."


Rawls’ Veil of Ignorance[0] is a fantastic tool for evaluating these things.

For the organ harvesting scenario, if you were behind the veil and had to choose what society you'd enter, would you choose to go to the one where a yearly physical might end up with you dead while you're parted out for others in the hospital—and also might mean you get your organ replacement lickety-split should you need one—or would you rather go to the one where this does not happen?

[0] https://en.wikipedia.org/wiki/Original_position


If it comes to idealised organ harvesting: I honestly don't know which side of the trolley problem (which this one is an instance of) is morally the right one to pick. Remember that in this case you're a few times more likely to receive an organ than to donate all of your organs.

When it comes to the "real" case: it causes distress to the whole population, and transplanted organs have consequences (e.g. immunosuppressants) that lower their value.
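(A back-of-envelope Python sketch of the "few times more likely" point above, with invented per-person utilities: behind the veil you are one of the six patients at random, so the naive expected value of the idealised case comes out positive, which is roughly why it feels unclear before the real-world costs are added back in.)

    # Behind the veil: you are one of the 6 patients at random.
    p_recipient = 5 / 6    # you need an organ and the policy saves you
    p_donor     = 1 / 6    # you are the healthy one who gets harvested

    # Hypothetical symmetric utilities, purely for illustration:
    u_saved  = +100        # your life is saved
    u_killed = -100        # you are killed for parts

    expected_utility = p_recipient * u_saved + p_donor * u_killed
    print(expected_utility)  # ~66.7: positive in the idealised case
    # The "real" case adds population-wide distress and poorer transplant
    # outcomes as negative terms, which is what flips the answer.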


I don't understand the difference between "idealized" and "real" organ harvesting. Unless "idealized" just means "ignoring all the actual hard questions involved".

In any case, your claim that I responded to was that "most, if not all, of these problems can be done away with" by adopting logarithmic instead of linear utility. You haven't said anything about how logarithmic utility "does away with" the problem of deciding whether organ harvesting is ethical. Or about any other of the scenarios in the article except the cookie one. So it looks to me like you are backing off from your original claim.


I am not a utilitarian either, but who in reality actually is? Seems to me that arguments either way are sterile if they’re devoid of questions of identity and strategy. I do what I do because of who I am and what I want to achieve as much as any action-by-action decision making, which would be paralysing anyway.


Pretty much any political or moral philosophy reduced to a single sentence or paragraph becomes silly – if not downright evil – if applied uncritically to any and all situations.

Many people are utilitarian to some degree, and most – including those who might self-identify as utilitarian – would say there are limits to it, too. There are extremists who push the utilitarian limits very far, but those are relatively rare.

The same applies to many other philosophies. Many people – on both the left and right – would agree that anyone "should have the freedom as they want, without too much interference of government or other organisations", and are libertarians to some degree. It's only when applied to the extremes without critically considering the trade-offs that it becomes silly, or even evil.

The author prefers deontology; that's okay, but it's not hard to come up with crazy examples for that, too. If a patient's life can be saved by investing all the hospital's financial resources, then deontology, pushed to its extreme, says we should do this. This would be silly, and no doubt the author would agree, because you don't have infinite money and need to think about the patients that come in tomorrow, too.

In short, I feel this article is rather myopic, as well as an example of "nut picking" (that is, pick the nuttiest/craziest views and then criticise the entire concept with those views).


But... but... the article presents false moral dilemmas... Why is framing an innocent better than a riot? Why is killing a healthy man better than letting 5 ill ones die? Why is charity --- oh my god --- why is charity better than the usual inheritance rule? Stop reading here.


How are these false? Framing an innocent person causes less overall harm than allowing a riot to happen. Killing a healthy man causes less overall harm than letting 5 men die, etc.

If you think killing the healthy man is wrong, you probably don't agree with utilitarianism.


Ahh, the author is a philosopher. That explains the idiocy.

The only thing this text points to is that it's better not to be a strict utilitarian if you have an IQ below 50 or the mentality of an evil djinni.


Shouldn't there be hard limits outside of one's system of moral philosophy? A limit like "don't kill people on purpose" seems to take care of some of the problems...


I've always felt this line of argumentation makes a lot of unimaginative assumptions about the way in which we weight pleasure vs. displeasure. Take the example from the article:

> There is a large number of Nazis who would enjoy seeing an innocent Jewish person tortured – so many that their total pleasure would be greater than the victim’s suffering. Should you torture an innocent Jewish person so you can give pleasure to all these Nazis?

This makes sense if the weight of pleasure is uncapped, but imagine a utility function where lots of pleasure still only has a bit of utility (say 10) whereas very extreme displeasure could have much more negative or nearly uncapped negative utility (say, -(10^20) for torture). Then no number of Nazis that could ever physically exist would be able to cause this situation to have negative utility.
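(A toy Python version of that asymmetric weighting, using the comment's made-up numbers: spectator pleasure is capped near +10 per person while torture carries a huge fixed negative weight, so no physically possible crowd flips the sign.)

    SPECTATOR_PLEASURE = 10        # capped positive utility per spectator
    TORTURE_UTILITY = -10**20      # extreme suffering, weighted (nearly) unboundedly negative

    def total_utility(num_spectators):
        return num_spectators * SPECTATOR_PLEASURE + TORTURE_UTILITY

    print(total_utility(8_000_000_000))  # everyone alive today: still hugely negative
    print(total_utility(10**15))         # an absurd crowd: still negative
    # Only around 10**19 spectators would flip the sign, far more people
    # than could ever physically exist.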

Thinking in those terms, I find the criticism weak.


That conundrum is rather easily solved by noting that the "pleasure" a person gets from watching someone be tortured is clearly not the kind of pleasure we're talking about. Celebrating the torture of another human being is likely to lead to yet more torture, so whatever little "pleasure" a sadist might get from watching torture is quickly offset by increasing their desire to continue to torture. Clearly the universe which maximizes total pleasure is one in which the Jewish person goes untortured and the sadistic Nazis receive psychological help.

Just like most ethical conundrums, these basically all rely on carefully controlling the bounds of the hypothetical to be both short-sighted and unrealistic to the point of farcical. How does the sheriff know for certain there will be riots if he doesn't hang an innocent person? He doesn't. How will all those sports fans feel if they learn that an innocent man needlessly died to avoid interrupting their feed for 10 seconds? They would probably feel terrible. Are we really just going to accept these extremely unrealistic hypotheticals where the author assures us his two options and consequences are genuinely the only possibilities? How is the surgeon going to feel if he murders someone as part of his job? It wouldn't take long for him to burn out and commit suicide, adding another needless death and depriving the world of a presumably talented surgeon.

In general most of the examples seem to imply that a utilitarian would do the very short-sighted option that has the biggest immediate increase in pleasure, while completely ignoring all the other negative long-term effects of their decision.


Why should the negative weight of displeasure be uncapped but not the positive weight of pleasure? Having the capping symmetrical (either both capped or neither) would seem to be preferred by Occam's Razor.


"Pleasure is never as pleasant as we expected it to be and pain is always more painful. The pain in the world always outweighs the pleasure. If you don't believe it, compare the respective feelings of two animals, one of which is eating the other." —Arthur Schopenhauer


This is comparing actual pleasure and pain with expected pleasure and pain. It says nothing about why actual pleasure should be capped but actual pain should not.


The first sentence of the quote does as you say. The second and third sentences make more general claims.


Even those sentences don't argue for a general cap on pleasure but not on pain. In fact, it's hard to come up with any general claim that they do amount to an argument for. The specific example given, one animal eating the other, is obviously asymmetrical, so of course we would expect the respective valences of pleasure and pain to be asymmetrical too. But that's a particular property of that specific example.


Well, I think you'd have to accept the Schopenhauerian worldview (or something similarly pessimistic) in toto, that sentient life involves a surfeit of suffering and conflicting desires that cannot possibly be reconciled, in order to accept the general asymmetry. A short quotation can, of course, only give a glimpse into that: it should be understood as rhetoric more than an analytical chain of reasoning. One animal eating another is absolutely central to the system of nature, since it's how the system sustains itself; the pain of one creature being hunted down and eaten is necessary for the other creature to survive, and yet the pain of dying is obviously greater than the satisfaction of eating.


This is played out in numerous sci-fi stories, where some AI/machine is given control and decides the most efficient thing is to murder everyone.


So, the gist is that this person doesn't like utilitarianism because they believe it renders unsatisfactory results.

The irony.


Try negative utilitarianism, it's much more consistent. Pleasure is overrated.


I had no idea utilitarianism was an ethical framework. TIL.


The purpose of moral intuition is to help you be a good ally. That's it. This is why those close by seem more morally salient - they're the ones who you can actually be an ally to. In Singer's thought experiment about the drowning child vs. the children starving in Africa, he's trying to overcome people's natural intuition to favor those close by. Utilitarianism in general is trying to overcome a lot of intuitions about who you can realistically be an ally to - treating every person as equally morally important regardless of their relation to you.

Many of the examples in the article are constructed to omit outside observers. In the real world, this is very unrealistic. Our intuitions about the importance of acting consistently reflect the risk that we'll be discovered doing something abhorrent. Reputation is important.

In fact, if we make the examples more abstract and impersonal, then it's much easier to see how utilitarianism makes sense. What if there was no actor making a choice; instead, the two possible outcomes happen by chance or natural causes. Say the healthy patient dies from a freak accident, allowing other patients to be saved. This seems like a better outcome than five people dying, all else equal. It's only the possible effect on the doctor's reputation that makes the original a moral dilemma.

Similarly, in the riot example, if the innocent person was implicated by accident, that would be a better outcome than many innocent people dying to the mob. It's only introducing the sheriff and his reputation that makes it a dilemma.


> Should you murder the healthy patient so you can distribute his organs, thus saving 5 lives?

This question is constantly trotted out, and it's a great example of the deeply unreal world ethicists insist on operating in. In the real world, you don't, because making people live in a world where that happens costs more than doing it would benefit you. Doing the murder is therefore not, I repeat, not the utilitarian answer.

They always refuse to pose the question in a way that might actually make students consider whether the murder is worth it. For example, suppose the seven people are the crew of a spacecraft sent to divert an asteroid from wiping out human civilization. Is murdering one to save billions justified?


> Is murdering one to save billions justified?

No.


That's an interesting response. I wonder if you're getting caught up on the word "murder"?

A nuclear missile is about to fire off and kill everybody in your country. You can only shoot it before it launches; missile defense can't catch it after launch.

You can preemptively destroy the missile, but an innocent child playing near the missile silo is going to be killed.

Do you stop the missile? The child's death will be called collateral damage. Nobody was "murdered".


The nuclear missile is a worse example because there is already an element of hostility and self-defense in the scenario. You might be able to justify killing five million people in another country in order to prevent them from launching a missile onto your country that would kill a million of your people. [1] I have no idea what utilitarians have to say here: will they be okay with getting a million of their people killed because that is a smaller loss of utility than the prevention of the attack, which costs five million lives in the hostile country?

[1] For simplicity, this ignores all the complications of who wants to attack (probably the government or the military) and who would be the victims of the prevention (probably military and civilians), and that you therefore might have to kill people who do not support or even oppose the attack.


Without the support of human civilization they are probably a bit boned anyhow. :)


Sure, but that doesn't really help to justify killing them. Can I walk around and kill random people? I mean sooner or later they will all die for one reason or another, I am not really changing the outcome for anyone...


No, but the people in this case are not random; their death somehow saves billions, which makes it very different from killing random people on the street.

In fact, we make far worse moral calculations than that frequently. E.g., if you've ever supported a war, you've agreed with killing completely innocent children, often in horrible ways, for the sake of whatever benefit the war might have had for the side you support. And there have been plenty of wars fought for reasons other than self-defense. Take Iraq and Afghanistan, for instance.


Why not?


This is a good article.



