I agree with regard to the universe's indifference! Holding on to that understanding played a big part in my conclusion about moral objectivity. The freedom I gained from it is what allowed me to discover what I know now.
My basic formula for moral truth is: starting from a non-maladaptive state of mind, would I prefer to have experience 'X' or recoil from it? With that in mind, could I reasonably conclude that the sentient others around me would arrive at the same conclusion? How about strangers? Other animals? An extraterrestrial? A synthetic being? Humans in a million years?
If my chain of reasoning doesn't break at any point as I expand the set of sentient beings to check against, then I likely have clear guidance on whether to engage in actions that produce experience 'X'. (A toy sketch of that expanding-circle check follows, if it helps to see the shape of it.)
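The sketch below is purely illustrative: the circle names and the yes/no judgments are hypothetical placeholders, and the point is only the structure of the procedure, i.e. widen the set of beings and stop generalizing the moment the chain breaks.

```python
# Toy sketch of the "expanding circle" check described above -- purely
# illustrative; the circles and their judgments are hypothetical stand-ins.
from typing import Callable, Iterable

# Each circle pairs a group of sentient beings with a guess at whether
# they would recoil from the experience in question.
Circle = tuple[str, Callable[[str], bool]]

def universalizes(experience: str, circles: Iterable[Circle]) -> bool:
    """True only if every widening circle shares the aversion."""
    for name, would_recoil in circles:
        if not would_recoil(experience):
            print(f"Chain of reasoning breaks at: {name}")
            return False
    return True

# Hypothetical judgments for "a limb broken against one's will".
circles = [
    ("me",                 lambda e: True),
    ("friends, strangers", lambda e: True),
    ("other animals",      lambda e: True),
    ("synthetic beings",   lambda e: True),
    ("far-future humans",  lambda e: True),
]

if universalizes("a limb broken against one's will", circles):
    print("Candidate rule: don't hurt sentient beings against their overriding preferences.")
```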
Here is an example: Should I break my friend's arm against his will? I would not want that to happen to me. Most people I know prefer not to have their arms broken. My dog would not want that. A cow would not want that. A fish would not want his tail snapped off. People in the future won't want their arms broken if it feels how I imagine it feels. I don't imagine that a sentient robot would like its gripping mechanism broken. Whatever sorts of appendages extraterrestrials have, they probably don't want to lose the ability to use them.
I can then generalize from "I shouldn't break my friend's arm against his will" to "I shouldn't hurt sentient beings when doing so runs counter to their overriding preferences."
It follows that if I were able to poll every sentient creature across all time (sentient according to some rubric), I would expect the majority to share the preference or aversion I've uncovered. I now have a positive statement about morality that is universalizable, apparently akin to Kantian philosophy, of which I have only surface-level knowledge.
The exercise seems pretty robust to me, insofar as conclusions derived from language and ideas hold up. For example, culturally dependent preferences, idiosyncrasies, aesthetic tastes, and the like fail the test easily.
Thanks, I thoroughly enjoyed your reasoned example!
> starting from a non-maladaptive state of mind
This stage-setting phrase seems to be doing a lot of heavy lifting, because of course "I should not break the arm of X" is situational: under certain circumstances, such as defending yourself or something you care about, you absolutely should break the arm of an adversary if necessary.
Fully exploring the space of possible situations to find a "truth" is difficult at best and maybe impossible. I speculate that the theoretical limit you could approach is the scientific approximation described by Karl Popper and explored by Thomas Kuhn in _The Structure of Scientific Revolutions_, which means you're never completely certain and your model is still subject to paradigm shifts. But because ethics is squishier than falsifiable scientific propositions, it will be hard even to asymptotically approach that limit.
This is why a lot of the discussion about octopus farming has left me cold: many ethical propositions are being presented as absolute truths, but are almost certainly situationally incomplete. (What is sentience? What is pain? To what degree can we practically spare sentient creatures from pain? Under what circumstances does minimizing pain backfire and increase it in the end? And so on.)