How to Be Good: Derek Parfit's Moral Philosophy (newyorker.com)
52 points by fforflo on March 12, 2016 | 16 comments



Thanks, a great read! Still, the fact that so many brilliant thinkers cannot agree on definite answers to the ethical questions seems to me to be evidence not that they are all climbing toward the same summit from different sides (as it was for Parfit), but rather that the questions themselves are misstated.


> the questions themselves are misstated

This proposition accounts for about half of the history of western philosophy. The other half unsurprisingly concerns itself with the relationship between humans and '[g|G]od[s]?'. Vide Wittgenstein for a modern(ish) view of both.

> so many brilliant thinkers can not agree on the definite answers to the ethical questions

They can, and do. The really big (academic/philosophical) problem isn't ethical, it's meta-ethical: they agree that, e.g., murder, rape, discrimination and fraud are wrong, but the arguments they find sufficiently sound to undergird those conclusions are mutually exclusive. More worryingly, the premises and inferential rules these arguments rest on undermine each other: Murder is bad because if you generalized the actions of murderers, no humans would be left; Wrong, murder is bad because the Right to Life is given by a Natural Law; Wrong, murder is bad because it is moral to maximize happiness/fulfilment over a population and the dead form a local minimum; etc, etc...

Some of this stuff seriously reads like an OO-evangelist trying to convince a lisp-er of the benefits of encapsulation. The OO'er says "separate concerns to manage state" and the lisp-er replies "immutability", and neither realizes that actually, they're operating in different domains, because enough vocabulary is shared by them that they can each rationally argue that "that other guy is nuts". Their arguments are incommensurable, but neither one is outrageously wrong.
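
To make the analogy concrete, here's a rough Python sketch of my own (the Account/AccountState names are made up) of the two camps answering different questions with shared vocabulary:

    # The OO-evangelist's answer: hide mutable state behind an interface
    # ("separate concerns to manage state").
    class Account:
        def __init__(self, balance):
            self._balance = balance        # encapsulated, mutable state

        def deposit(self, amount):
            self._balance += amount        # mutation happens, but only behind the interface

        def balance(self):
            return self._balance

    # The lisper's answer: stop mutating altogether ("immutability").
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class AccountState:
        balance: int                       # an immutable value

    def deposit_immutable(state, amount):
        # returns a brand-new value; the old state is untouched
        return replace(state, balance=state.balance + amount)

    # Both avoid shared-state bugs, but one controls who may mutate,
    # while the other removes mutation entirely.
    acct = Account(100); acct.deposit(50)                    # balance() -> 150, via in-place mutation
    new_state = deposit_immutable(AccountState(100), 50)     # AccountState(balance=150), a new value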

That's kind of what modern meta-ethics is like, except with about 2,000 extra years of potential divergence, and a few dozen Turings, Knuths and Dijkstras on each different side of the debate.


Yes, everybody kind of agrees on what the right answers are in common situations (like "murder of a human who does not present a threat to others is bad"), but for every principle that aspires to be general, its detractors have been able to present an example where it leads to conclusions that cannot be universally accepted as ethical. For me that is strong evidence that no such principle exists.

An apt analogy with software engineering! I guess I believe that there are no universal principles there either. If the OO-evangelist and the lisper then each go on to develop useful and robust applications, then they are both right (and that puts me squarely into the consequentialist bucket with regard to software :)).


That moral truths are objective is high on my list of things I would like to be true, and not only because I would prefer any alien life we encounter to be more like Culture Minds than Kzin. One of the few really important things we lost through the rise of rationality was the conviction that things really matter, and I think this loss is behind problems ranging from corrupt politics to crappy software to the agonizingly overdone cynicism that runs through pop culture. I have little hope that Parfit's massive opus will be widely read, but perhaps its ideas will find their way down, by some means.


> Sometime around 1982 or ’83, the philosopher Janet Radcliffe Richards moved from London to Oxford, having ended her first marriage. She had become well known a few years earlier for writing “The Skeptical Feminist,” a fierce attack on anti-rational tendencies in the women’s movement, and was teaching philosophy of science at the Open University. She was very beautiful and very feminine. She attended a seminar that Parfit was teaching. She had never encountered anyone like him: he was obviously a strange person, but not in any of the usual ways. Afterward, Amartya Sen, a friend, who was co-teaching the seminar, greeted her, and, when she left, Parfit asked Sen who she was.

> D.P.: I read some of Sam Scheffler’s recent work and he’s arguing that people care about the future of humanity much more than they realize. And I think that’s right, actually.

> J.R.R.: The Future of Humanity Institute people keep talking about engineering humans to make them more moral.

https://en.wikipedia.org/wiki/Future_of_Humanity_Institute says it was founded in 2005. Does anyone know what she could have been referring to in the 80s?


I think the author might be quoting 2011-era conversations between Parfit and Richards (recorded while interviewing both of them), rather than conversations from back when they first met.


Parfit thinks we need an absolute proof of goodness because humans have only desires. But lots of science shows that human beings also have many inborn moral instincts, such as fairness, group loyalty, and concern for the well-being of others. See, for instance, Larry Arnhart's book, Darwinian Natural Right.

The real problem is how you make society such that inborn desires and moral senses lead to the best outcomes.


To a philosopher like Parfit, though, our inborn desires and instincts are beside the point. If we do something "altruistic" because it gives us a nice warm fuzzy feeling, it isn't really an altruistic act. One could ask: would you still do it if you didn't get the fuzzy feeling? If the answer is no, you were clearly just acting out of hidden self-interest. Only if someone else's suffering is, in and of itself, your reason to alleviate it is the act a bona fide altruistic one.

You also, of course, have the problem that you might find ways of giving yourself that warm fuzzy feeling without doing something good, and that's sort of the innate problem with desire-based morality. Parfit's absolute morality has the practical advantage that it's tied to the "real" concept of good by definition (we think), rather than by the happy accident of some genetic wiring.


I strongly disagree. As human beings, we have a moral obligation to make life as good as it can be, and the only way we can do that is to understand people's basic motives and try to use them to motivate good behavior. Do you really disagree?

Parfit's solution of a single absolute moral principle is simply not workable as a way of getting people to behave well. Again, do you really disagree? The reason he tries for it is that he has a greatly distorted view of human nature. As the article says, he thinks that the human moral evaluation that cruelty is wrong is "just a psychological fact—flimsy, contingent, apt to be forgotten." No, the prohibition against cruelty is a permanent part of human nature; the problem is that we have many other drives, and circumstances may favor them.

Instead of coming up with useless principles, what we need to do is understand in detail why people behave in good or bad ways. That is what, for instance, Aristotle did in his Ethics, and Smith in his Theory of Moral Sentiments. Let me add that Parfit is such a profoundly odd person that I am pretty sure his understanding of human nature is quite poor, and so he is simply the wrong sort of person for this job.


How about a simple, preliminary measure for morality:

How much suffering does someone inflict upon the rest of humanity?


Yep - that's another way of stating the consequentialist view, which is a class of normative ethical theories.

(Edited, as my first take could be seen as condescending.)


[2011]


A train has free will but still has to go where the tracks are laid down - that's my philosophy.


I don't think the question of free will was even touched upon in the actual article.


You're right, it wasn't. However, people who propose a single, absolute moral principle generally believe, I think, that one will use it to freely overrule anything determinate that might be opposed.


I know, but I read something like that and thought it might amuse. 'Cause, going by the number of downvotes I attract, there is a distinct lack of same on here. Anyways, it is getting tedious round here, so I'm off. Someone please do delete this account.



