
The rationalist idea of doing morality as math via utilitarian consequentialism has always seemed dangerous and a big mistake to me. It makes it easy to rationalize conclusions that are obviously awful or absurd by common sense, and not meaningfully consistent with ordinary human experience, brains, or motivations. SBF, for example, justified all of his crimes with rationalist logic.

I'm not going to walk past a drowning kid in a lake so I can rush to a nerd meeting about saving a quintillion imaginary sci-fi far-future kids - even if some made-up math says the expected value of the meeting is a thousand times higher.
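
To make the parody concrete, here is the kind of arithmetic such an argument runs on (every number below is invented for illustration):

    # Toy expected-value comparison; all inputs are made up.
    p_help = 1e-15          # invented chance the meeting helps at all
    future_kids = 1e18      # invented count of far-future kids
    ev_meeting = p_help * future_kids      # 1000.0 "expected lives"

    p_rescue = 0.99         # chance I actually save the drowning kid
    ev_rescue = p_rescue * 1.0             # ~1 expected life

    print(ev_meeting / ev_rescue)          # ~1010: the invented inputs
                                           # did all the work

The conclusion is entirely determined by two numbers nobody can measure.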

Fundamentally, my ethics are deontological: I think the ancient Stoics basically had morality/ethics right, and I admire people who take a Socrates-like stand on doing what is right on principle, even in the face of manipulative people trying to control them by creating bad consequences.



It’s not just dangerous, but plainly incorrect in most cases.

It’s the usual GIGO problem. These arguments almost always start with a bunch of completely made-up numbers. It doesn’t matter how good the math is, the results will be useless.
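
A quick sensitivity check (with stand-in numbers) makes the GIGO point obvious: sweep the one guessed input across a few orders of magnitude and watch the "conclusion" flip.

    # Same made-up payoff each time; only the guessed probability moves.
    payoff = 1e18
    for p in (1e-20, 1e-17, 1e-14):
        ev = p * payoff
        verdict = "act" if ev > 1 else "don't act"
        print(f"p={p:.0e} -> EV={ev:.0e} lives -> {verdict}")

When the answer swings between "don't act" and "act" across guesses that are all equally defensible, the math is doing nothing.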

It can work: when a government regulator decides whether to mandate some new safety equipment, and after rigorous technical analysis concludes that it would result in net lives lost and so doesn't require it, that's sensible. But that's not what happens here.

I occasionally see this problem acknowledged, but even then the stated error bars are way too small, and it's full steam ahead anyway.

It could be dangerous anyway, but this makes it even more so.


Yeah, I think it is literally provably 'optimal' if you can execute it correctly with informative data, don't forget or omit any important considerations, and aren't just making up BS - all of which are almost always impossible for regular humans in real life, no matter how much 'rationality training' they've had. It makes sense either for a hypothetical superintelligent AI realizing its own goals efficiently, or for something like a government weighing the pros and cons of a difficult regulatory choice with well-defined short-term consequences - neither of which is anything like the everyday moral decisions humans make.


I take an even more sour view of this thought process. I don't actually think that SBF did the math and concluded that rationality justified his crimes. I think that he wanted to do those crimes and then, consciously or unconsciously, spread the veneer of rationality over them as a form of self-justification.

I think a community that engages in brute math with unbounded priors to justify action would be worrisome; by choosing the right priors you can conclude almost anything. But I actually think it is just roughly the same decision-making the rest of the world does, with an unusual post-facto justification that also feeds one's ego.
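
You can even run the math backwards: decide what you want to do first, then solve for the prior that "justifies" it (all quantities here are hypothetical):

    # Pick the conclusion first, then back out the prior that supports it.
    cost_of_crime = 1e9         # harm I want to hand-wave away
    future_payoff = 1e18        # made-up value of the glorious outcome
    required_prior = cost_of_crime / future_payoff
    print(required_prior)       # 1e-09: just assert this and the EV math
                                # "proves" the harm was worth it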


It seems like that to me as well: the whole thing can be a manipulative way to make what you wanted to do anyway seem somehow objectively correct. That's basically the postmodernist criticism of any attempt to use logic or science for anything, and in some cases it's valid.



