
I actually disagree with the Gettier thought experiment and don’t believe it demonstrates anything interesting.

When you see the cow (but it’s really a convincing model), your mind should assign some probability to a variety of outcomes. The main one would be a real cow, another might be that you’re hallucinating, and so on down the list; somewhere on that list would be the outcome “cow-like model”.
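To make that concrete, here is a minimal sketch in Python of what “probability assigned to a variety of outcomes” looks like as a distribution over hypotheses. The numbers are invented for illustration; only the shape matters:

    # Hypothetical probabilities over what the observer is seeing.
    beliefs = {
        "real cow":        0.90,  # the main outcome
        "cow-like model":  0.04,
        "hallucination":   0.01,
        "something else":  0.05,
    }
    assert abs(sum(beliefs.values()) - 1.0) < 1e-9  # a proper distribution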

From that point you can go in at least two directions. One would be something like a Turing test of the fake cow: beyond a certain point it’s a matter of semantics whether it’s a real cow or not. Or you could say that your “justified true belief” had to apply to the total state of the field: if you believed there was both a cow model and a cow behind it, that would be justified, but the existence of the cow behind the model would not justify the incorrect belief that the model was a real cow, in the sense of not admitting uncertainty over the things you see.




Your model of thinking is something like an idealized Bayesian one, not a human one. Once a human mind decides that something is true with sufficient probability, it stops seeing alternative explanations. That “something” becomes reality to you, not just a belief about reality. And that is not all. Your mind has ideas associated with the cow you’ve seen, and those ideas also become beliefs of yours: implicit beliefs you are probably not aware of, which you would only notice if they began to contradict the evidence in a sharp way. For example, you might implicitly decide that this is a soft-tempered cow that would like to lick your face if you came near. That detail was not important when you saw the model of a cow, so you never became consciously aware of your idea of the cow’s temper. But it was planted in your idea of reality. You might even have an intention to go near the cow to be licked in the face, and be unaware of that intention. The human mind can easily manage such tricks.

And this leads to a funny thing. You saw the model of a cow, and it made you believe that there is a cow in the field and that you saw it. Then you might find a heap of poo, which would strengthen your belief further. You might find a lot of evidence, and all of it would be explained under the assumption that you saw a cow. And this evidence would also strengthen your belief that you will be licked in the face when you come near the cow.

But you never saw the cow that made this heap of poo. The real cow is pitch black, with horns of gigantic size, and they are really sharp. The real cow has red glowing eyes, and it is going to kill you. But until you see the real cow itself, all the evidence pointing to there being a cow also reinforces the idea of a soft-tempered black-and-white cow. The longer you manage to keep yourself oblivious to the real cow’s traits, the more surprised you will be when you finally find it.
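To make the trap concrete, here is a minimal Bayesian sketch in Python. All numbers are invented for illustration; the point is that a heap of poo is equally likely under “friendly cow” and “demon cow”, so updating on it raises both at the expense of “no cow” while leaving their ratio untouched. Your confidence in the friendly cow soars only because your prior already favored it:

    # Hypothetical prior beliefs and likelihoods of finding a heap of poo.
    prior = {"friendly cow": 0.60, "demon cow": 0.01, "no cow": 0.39}
    likelihood = {
        "friendly cow": 0.80,  # any cow leaves poo...
        "demon cow":    0.80,  # ...so the evidence can't tell them apart
        "no cow":       0.05,
    }

    # Bayes' rule: posterior is proportional to prior * likelihood.
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    posterior = {h: p / z for h, p in unnorm.items()}

    print(posterior)
    # -> friendly cow ~0.946, demon cow ~0.016, no cow ~0.038.
    # Both cow hypotheses gained, but the friendly:demon ratio is still
    # exactly 60:1, the same as before the update. The evidence never
    # favored "friendly" over "demon"; the prior did all the work.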


> From that point you can go in at least two directions. One would be something like a Turing test of the fake cow: beyond a certain point it’s a matter of semantics whether it’s a real cow or not. Or you could say that your “justified true belief” had to apply to the total state of the field: if you believed there was both a cow model and a cow behind it, that would be justified, but the existence of the cow behind the model would not justify the incorrect belief that the model was a real cow, in the sense of not admitting uncertainty over the things you see.

You’re replacing the model it was criticizing with a different model, and then saying that it doesn’t say anything interesting about your model, so it’s not interesting. It isn’t an argument that knowledge is impossible; it’s an argument against the traditional definition of knowledge (justified true belief) as it was almost universally understood at the time.


I’m saying that the model it sets out to criticize is not an interesting or worthwhile model to talk much about.




