
I just popped back to delete my comment, but since it has a reply now I'll leave it and ask another question I've been wondering about regarding your Bayesian remark:

Does that mean all Bayesians are trying to be the same as each other? That Bayesianism considers human individuality silly?

Because if all Bayesians must agree given sufficient information, then the only things left to disagree on (potentially) are ideas which have no fixed grounding in reality, such as whether Wilkie Wilkinson's writings show colonial alienation - and maybe they would eventually agree if given enough information on each other's mental states and histories.

So, is it pushing towards the idea that there is only one possible rational superintelligent view?




Given enough information, rational thinkers should come to the same conclusions about matters of fact.

Bayes says nothing about goals, however. Rational Bayesians can have completely divergent goals, so becoming increasingly rational does not impinge on "human individuality."
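
To make the convergence claim concrete, here is a minimal sketch (my own illustration; every name and number in it is made up for the example): two Bayesian agents start from very different priors over a coin's bias, update on the same sequence of flips, and end up with nearly identical posteriors.

    import random

    random.seed(0)

    GRID = [i / 100 for i in range(1, 100)]  # candidate biases 0.01 .. 0.99

    def update(prior, heads):
        # One Bayesian update: weight each candidate bias by the
        # likelihood of the observed flip, then renormalize.
        likelihood = [p if heads else (1 - p) for p in GRID]
        posterior = [pr * lk for pr, lk in zip(prior, likelihood)]
        total = sum(posterior)
        return [p / total for p in posterior]

    def mean(dist):
        return sum(bias * weight for bias, weight in zip(GRID, dist))

    agent_a = [1.0] * len(GRID)             # uniform prior
    agent_b = [(1 - p) ** 4 for p in GRID]  # prior skewed toward tails-heavy coins

    true_bias = 0.7
    for _ in range(2000):
        flip = random.random() < true_bias  # both agents see the same evidence
        agent_a = update(agent_a, flip)
        agent_b = update(agent_b, flip)

    print(mean(agent_a), mean(agent_b))  # both estimates land near 0.7

The two priors disagree badly at the start, but the shared evidence washes that disagreement out, which is exactly the "given enough information" caveat doing the work.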


If your goals are picked based on facts, then you are subject to the idea that rational thinkers will come to the same conclusions about matters of fact, and so, when exposed to the same facts, you will come to the same goals.

If your goal was not picked rationally based on facts, you can't claim to be a completely rational thinker.

So the only room for human individuality is in the knowledge gap between us and omniscient beings, where we can pick goals arbitrarily and then pursue them as rationally as our limited mental abilities allow.

?


Fundamental goals are neither rational nor irrational. A rational thinker with no non-rational goals would have no reason to ever do anything, including thinking at all.

Two perfectly rational agents with the same fundamental goals would, of course, inevitably select the same sub-goals and actions to reach those goals.

But at some point if you keep asking a rational agent "but why do you want to do that?" the only answer will be "because it is my nature."


>Fundamental goals are neither rational nor irrational.

That's a rather controversial statement. Bear in mind that the very idea that there's a clean split between issues of fact and issues of "values" or "goals" is itself hotly contested. Hilary Putnam's book on this issue is a great read ("The Collapse of the Fact/Value Dichotomy").

Of course, there are also issues with the notion of "observation" used in Bayesian statistics. Since any observation which isn't actually just a direct report of a sense experience (e.g. "I see red in the top left corner of my visual field") involves the application of some theory, it's not clear what should count as an observation for a rational agent or group of rational agents. For example, did John observe that a car drove past? Or did he observe that something which looks like a car drove past? Can John observe that Bill was being petulant, or is that a value judgment? Did John observe that the current is 0.5A? Or did he observe that the ammeter reported that the current is 0.5A? Etc. etc.

The Bayesian framework is an idealization, resting on an ideal notion of "observation". I personally think that it's crazy to take this framework to be a model for rationality itself. We are not able to make observations in the Bayesian sense.


Does this mean some form of mystery about oneself must always remain? Assume complete self-referentiality of a being. Then the ultimate source of one's goals is no longer the black box called "my nature". What is it then? And would it be the same for two completely self-referential beings?

(By self-referential I mean one who understands oneself absolutely; one who possesses a perfectly accurate map of oneself. There is probably a better word.)


If the human brain functions more or less according to classical physics, with causality running forwards in time, then "one's nature" is just a very complicated function of the local state of the universe - the previous state of one's brain (including the encoding of the consciousness function itself) plus incoming sensory data.

More than once you've probably made a decision, at least in part, by psychoanalyzing yourself (and maybe psychoanalyzing the way that you're psychoanalyzing yourself) - so imagine being able to psychoanalyze yourself in real time, with (near-)perfect accuracy. (There might be information-theoretic problems with trying to simulate your whole brain inside of itself at speed.)

If one is the kind of person who has given serious thought to one's motivations in life (which must include coming to the realization that they are, in a certain fundamental sense, arbitrary), then I don't know that one would necessarily change all that much under these conditions.


Your goal is picked based on what makes you happiest (your utility function). You don't pick your utility function; all you pick is your strategy for maximizing it.
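
As a toy sketch of that split (my own illustration, with made-up names and numbers): the utility function below is a given, and the only thing the agent chooses is which option maximizes it.

    def utility(outcome):
        # Fixed by the agent's nature; the agent does not get to pick this.
        return {"tea": 2.0, "coffee": 5.0, "nothing": 0.0}[outcome]

    def best_strategy(options):
        # The only genuine choice: the option with the highest utility.
        return max(options, key=utility)

    print(best_strategy(["tea", "coffee", "nothing"]))  # -> coffee

Swap in a different utility table and the same maximization step picks differently; the table is where "individuality" lives.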

For example, smart Indian girls make me happy. Perhaps dumb blondes make you happy. Is it irrational for us to choose different goals based on these facts?

What rational argument or evidence do you think could change my emotional reaction to one type of girl (or one food, or one preferred temperature) over another?



