The Simple Truth (2008) (yudkowsky.net)
48 points by 001sky on Dec 23, 2012 | 37 comments



I normally would not make a comment that adds nothing to the conversation, but I feel I need to in this case. If you are one of those who reads the comments before the links, a warning:

This does not have much of a point. It goes on forever. It's really not that great, and doesn't give much of a take-away for its length. Even if you just want to satisfy your curiosity, you will probably walk away disappointed. The foreword contains the only real content.

Yes, there is somewhat of a point to the metaphor, but I think more horses died than sheep.


Also, it looks like the author is poking fun at certain trends in philosophy, such as intentionality[1], and if you don't know or care about this stuff, the article probably isn't intended for you. (I don't know or care about this stuff; I just got curious and looked it up.)

[1] https://en.wikipedia.org/wiki/Intentionality


I was going to post something similar earlier. I was put off by what I perceived as a very condescending tone in the essay. I got the impression that the author envisioned his audience as children, or unintelligent adults.


Well, to be fair, if you're the type who is sure you can simply will your beliefs into becoming reality (preferably without recognizing the concept of reality), then you almost certainly fit into one of those two groups.


The essay is not about willing beliefs into reality; it is about showing the relationship between our beliefs and reality, and how the former should be based on the latter. Or, to put it another way: a true statement is a statement that corresponds to what is going on in the real world. (For the record, I haven't re-read this essay in a long time; there might have been some revision done on it, but I doubt that much has changed.)


After reading a couple paragraphs, I came here to see this specific comment. The extremely generic title was another clue to the substance of the content. Thanks for the time saved.


I'm feeling positively stupid - I read through the whole thing, and the only point I got from it is that the author paints one of the characters as some kind of very dumb postmodernist philosopher and then kills him. But in service of what point, and how is that point being proven? And why, after declaring that it's "too simple", does the author proceed with a 6000+ word parable whose point is not exactly crystal clear? Could anybody give me a TL;DR version of the point of it? (I actually did read it, but that didn't help.)


You're right, I think, in complaining that this is too verbose. Yudkowsky has some very good writings, but this isn't among them. The key point, I think, is here:

"'I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’."

I don't think the essay as a whole communicates much more than that point, but it's not a bad one.


This is an interesting point, though it doesn't seem entirely correct to me - even if I had a complete understanding of all physical laws and could predict the results of any experiment I can imagine, that would not mean that reality does not exist separately from me. If I could cause any experiment to produce any result - including the same experiment producing different results in the same circumstances - then I would have a good claim to controlling "reality". How to determine whether I caused the result rather than merely predicted it is an interesting question, which probably requires some careful experimental design, but since we know non-deterministic processes exist, it should not be too hard to do, I suppose.

In other words, what is described here is a sufficient, but not necessary, condition for the existence of reality. Though, thinking more about it, it's not sufficient either. We know of people who are not able to predict or control their own thoughts and actions. They are usually considered "mentally ill", but that's beside the point - it is so only because such a mode of existence is very inconvenient for the person and hinders their interactions with society. But if there were no reality except for that person's mind, the person could still make wrong predictions about what is going to happen. In fact, one does not have to be particularly crazy for this - many people, just this season, claim with complete belief and certainty that they will very soon do certain things over which they have full and complete control: start exercising, eat healthily, quit smoking, improve their behavior in some other way, etc. Many of them will discover, some time later, that it did not happen. Does that mean something other than their own will made it so? No, I think it does not - it just means they were fallible.

So it looks like the experimenter being wrong is neither a necessary nor a sufficient condition for the existence of a reality independent of the experimenter. (Note, of course, that I don't claim such a reality does not exist - I just say the proof offered in this paragraph does not prove what it intends to prove.)


I think he also paraphrased, fairly well, a quote I like. I don't feel like reading that whole paid-by-the-word essay again, but it goes along the lines of "reality is the thing that exists whether or not we believe in it." A reasonable analogy would be "truth is the thing that is accurate regardless of our opinion about it." I am obviously unfairly dismissing relativism outside a particular range of variation, since to do otherwise makes most discussions about anything pointless.


Yudkowsky has some very good writings, but this isn't among them.

I would be very thankful if you could point me towards his better writings!


Lots of stuff here: http://wiki.lesswrong.com/wiki/Sequences

It's less about the quality of writing or how quickly you can absorb it and much, much more about fascinating ideas and concepts.

And I have thoroughly enjoyed "Harry Potter and the Methods of Rationality" http://www.elsewhere.org/rationality/


That post is part of a sequence called "Map and Territory" [1], in which Eliezer explains that your beliefs are a map, which may or may not correspond to reality, the territory the map is supposed to represent.

In this post, Eliezer is mocking philosophical arguments about the meaning of "truth", to illustrate that truth isn't that complicated: a belief is true if and only if it corresponds to reality. Eliezer has recently written a much clearer version of this viewpoint [2], in which he again uses the idea of walking off a cliff to illustrate the difference between strong beliefs and true beliefs.

[1] http://wiki.lesswrong.com/wiki/Map_and_Territory_(sequence)

[2] http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/


OK, this latter link explains a lot. Now I see where the cliff argument fits - it's actually not an argument about proofs, but an illustration of truth being defined as a match between expectations and actual results. It also explains where the shoelaces thing fits in, thanks.


Does he have an original take on the correspondence theory of truth or is he rehashing things from early 20th century philosophy?


I don't know much about philosophy, but I know that Less Wrong's opinion of it is pretty low [3].

Eliezer probably talks about quantum mechanics more than most philosophers, so his recent explanation [4] of how statements about day-to-day objects can be true despite the fact that only fundamental particles really exist is probably original.

[3] http://lesswrong.com/lw/frp/train_philosophers_with_pearl_an...

[4] http://lesswrong.com/lw/frz/mixed_reference_the_great_reduct...


As far as I can tell, the introduction of a lot of terms from physics doesn't fundamentally change what he's doing philosophically, which is largely a rehash of logical positivism. I could be wrong, but he's setting off all sorts of "don't waste your time here" bells in my head, so I'm not going to really dig into him and see whether it's a waste of time to do so.


Don't logical positivists usually assume that only testable assertions are meaningful? See: http://lesswrong.com/lw/ss/no_logical_positivist_i/


As someone in the comments notes, he incorrectly dismisses his own example that illustrates why he's not a logical positivist. A momentary cake in the heart of the sun would have physical consequences in the universe that a (sufficiently advanced) being could detect (or not detect), so a logical positivist would agree with him that the statement is meaningful.

I don't know that it's fair to say he's just a logical positivist. But he seems intent on layering LP with a lot of jargon from physics and information theory to create distinctions without difference.


To me, all his arguments about "truth" boil down to "I'm into AI, don't bother me with sociology", which is a terrible shame once he actually tries going into sociology, since his own advice of deferring to whoever has done the research is contradicted there. All he does is dismiss every argument he does not like as "discussing politics". Which is fine if you are an AI researcher, but very, very poor advice for the other 99.999% of the world's population.


The core bit is:

>“The sheep interact with things that interact with pebbles…” I search for an analogy. “Suppose you look down at your shoelaces. A photon leaves the Sun; then travels down through Earth’s atmosphere; then bounces off your shoelaces; then passes through the pupil of your eye; then strikes the retina; then is absorbed by a rod or a cone. The photon’s energy makes the attached neuron fire, which causes other neurons to fire. A neural activation pattern in your visual cortex can interact with your beliefs about your shoelaces, since beliefs about shoelaces also exist in neural substrate. If you can understand that, you should be able to see how a passing sheep causes a pebble to enter the bucket.”

I think the main thrust of the article is about quantum reality. (Note also the part about 'private universes').


Still not sure I understand the point. This Mark character must be a clinical idiot if he does not understand how one thing can cause another to move. How does he put on his toga (or whatever those Romans wear) and eat his breakfast in the morning? It is kind of strange to have it explained to you that one event can cause another event - we observe this thousands of times daily. Why does he need that explained in such a complicated way? There must be some deeper point to it. What is he actually trying to explain here?


Quite a bit of it pokes fun at General AI researchers from the '80s, actually.


This post is very long and probably not worth the time of HN's readership. On this thread[1], there's an explanation of why it is like this:

"The Simple Truth" was generated by an exercise of this discipline to describe "truth" on a lower level of organization, without invoking terms like "accurate", "correct", "represent", "reflect", "semantic", "believe", "knowledge", "map", or "real".

And a summary:

The only way you can be sure your mental map accurately represents reality is by allowing a reality-controlled process to draw your mental map.

A sheep-activated pebble-tosser is a reality-controlled process that makes accurate bucket numbers.

The human eye is a reality-controlled process that makes accurate visual cortex images.

Natural human patterns of thought like essentialism and magical thinking are NOT reality-controlled processes and they don't draw accurate mental maps.

Each part of your mental map is called a "belief". The parts of your mental map that portray reality accurately are called "true beliefs".

Q: How do you know there is such a thing as "reality", and your mental map isn't all there is? A: Because sometimes your mental map leads you to make confident predictions, and they still get violated, and the prediction-violating thingy deserves its own name: reality.

[1] http://lesswrong.com/lw/66u/rewriting_the_sequences/4cj6
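
To make "reality-controlled process" concrete, here is a toy sketch in Python (my own illustration of the pebble-and-bucket idea, not code from the essay; all names are hypothetical):

    import random

    def pebble_bucket(pasture):
        # Reality-controlled process: each sheep passing the gate
        # causally drops one pebble into the bucket.
        return sum(1 for _sheep in pasture)

    def confident_guess():
        # Belief-controlled process: a number with no causal link to
        # the pasture, however strongly the shepherd believes it.
        return 7

    pasture = ["sheep"] * random.randint(0, 20)    # the territory
    print(pebble_bucket(pasture) == len(pasture))  # True: the map is drawn by reality
    print(confident_guess() == len(pasture))       # true only by coincidence

The bucket's count is "accurate" not because anyone believes in it strongly, but because the sheep, not the beliefs, control the pebbles.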


I always felt that the "there is no such thing as truth" argument was easily dispatched with the following logic:

1 - There is no such thing as truth

2 - If [1] is correct, it follows that [1] is internally inconsistent: [1] itself cannot be true if there is no such thing as truth.

3 - Therefore [1] is not correct.
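
As a sanity check, that self-refutation can be made fully formal. A minimal sketch in Lean 4 (my own formalization, not from the thread), reading [1] as "no proposition is true":

    -- Read [1] as: ∀ p : Prop, ¬ p ("no proposition is true").
    -- Instantiating the claim with itself yields a contradiction.
    theorem no_truth_self_refutes : ¬ (∀ p : Prop, ¬ p) := by
      intro h                       -- assume [1]
      exact h (∀ p : Prop, ¬ p) h   -- [1], applied to itself, refutes [1]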


What you are stating is, crudely, related to Gödel's incompleteness theorems.

http://en.m.wikipedia.org/wiki/Gödels_incompleteness_theorem...

Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic.

Essentially, any sufficiently powerful consistent system of axioms will contain self-referential statements of roughly the form you described that it can neither prove nor disprove. But this only says that all such models will be incomplete. Eliezer is questioning -- or perhaps reminding his readers of -- the relationship between models of reality and reality itself.

In other words, the only thing you "know" is that you cannot consistently model a system where truth is non-existent. It does not follow that truth therefore exists.

And more precisely, you are only pointing out a flaw in logical truth, and logic is merely one model.


The ancient sceptics tackled that problem by simply not asserting that there are no truths that can be known; they suspended judgement on the matter (which they did with literally everything - they must have been infuriating to debate with).


No such thing because we experience reality differently (or only through our 5 senses) and cannot determine 'actual truth'?

That's the most interesting part of the question for me - our own inescapable biases.


To me it seems inherently impossible to "see" a system objectively when you're part of it, as that would seem to create a paradox. I may be wrong but it sounds great IMHO.

However, I do think there is a reality that is not dependent on interpretation or observation to exist; I just can't experience it without a filter. But there is something there, and it's consistent, and I can communicate about it with others, etc. That's imperfect, it's often silly, but that's fine; I cannot dismiss my small, subjective truth just because it's not as good as some kind of objective knowledge of reality that doesn't even exist, and probably can't exist (I mean the knowledge of reality, not reality itself).

Truth is the thing we must never cease reaching towards, even though we will never reach it, or even come closer to it. Just because it's there :)


Do "correct" and "true" identical meanings?


I think that's essentially what Yudkowsky is claiming, if I got it right - that a true map is one that gives a correct description of the territory. Meaning: if we make a prediction using the map and then actually interact with the territory and the prediction turns out to be correct, that is supporting evidence for the map being true, and vice versa. I derive this from the links given here: http://news.ycombinator.com/item?id=4961488


Good catch. I spent a long time thinking about this. I don't know who downvoted you, but you were on to something.


"There is probably no such thing as truth."

Problem solved!


You can download the essay in audio format here: http://castify.co/channels/3-eliezer-yudkowsky-the-simple-tr...


This is from 2008 (should probably be mentioned in the title.)


My head hurts.


"Better Nate than lever."




