> A philosopher might say that these aren’t bona fide Gettier cases. True gettiers are rare. But it’s still a useful idea...
At least as presented, I see the idea being used to do more harm than good.
Take the first example, with the form not autofocusing. We're already not in a Gettier case, because the author didn't have JTB. The belief that he caused the bug obviously wasn't true. But it wasn't justified, either. The fact that he had rebased before committing means that he knew that there were more changes than just what he was working on between the last known state and the one in which he observed the defect. So all he had was a belief - an unjustified, untrue belief.
I realize this may sound like an unnecessarily pedantic and harsh criticism, but I think it's actually fairly important in practice. If you frame this as a Gettier problem, you're sort of implying that there's not much you can do to avoid these sorts of snafus, because philosophy. At which point you're on a track toward the ultimate conclusion the author was implying, that you just have to rely on instinct to steer clear of these situations. If you frame it as a failure to isolate the source of the bug before trying to fix it, then there's one simple thing you can do: take a moment to find and understand the bug rather than just making assumptions and trying to debug by guess and check.
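For instance, before touching the code, you can pin the symptom down with a quick check run against the last known-good commit and the post-rebase one (a TypeScript sketch for the browser console; the `#search` selector is hypothetical, since the post doesn't name the element):

```ts
// Run this on each suspect build; "#search" is a hypothetical selector.
const search = document.querySelector<HTMLInputElement>("#search");
console.log("autofocus attribute present:", search?.hasAttribute("autofocus"));
console.log("actually focused on load:", document.activeElement === search);
```

Run that on both sides of the rebase and the culprit range falls out immediately, no philosophy required.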
tl;dr: Never send philosophy when a forehead slap will do.
Belief: The pull request broke the search field auto focus.
Truth: The pull request did break it. There was an additional reason beyond the pull request unknown to the author, but that's not important to the Truth portion here.
Justified: This is the only one you can really debate on, just as philosophers have for a long time. Was he justified in his belief that he broke autofocus? I think so based on the original JTB definition since there is clear evidence that the pull request directly led to the break rather than some other event.
I think that when claiming it's not a JTB you're choosing to focus on the underlying (hidden) issue(s) rather than what the author was focusing on, which is kind of the whole point of Gettier's original cases. For example, Case I's whole point is that facts unknown to Smith invalidate his JTB. In this programming example, facts unknown to the author (that someone else introduced a bug in their framework update) invalidate his JTB as well.
His real belief was not exactly that the PR broke it, it's that the root cause of the break was isolated to his code changes. This is evident from the debugging procedure he described. And that distinction is very important, because that detail, and not some abstract piece of philosophy, is also the real source of the challenges that motivated describing the situation in a blog post in the first place.
What I'm really trying to say is that the article isn't describing a situation that relates to Gettier's idea at all. Gettier was talking about situations where you can be right for the wrong reasons. The author was describing a situation where he was simply wrong.
> His real belief was not exactly that the PR broke it, it's that the root cause of the break was isolated to his code changes. This is evident from the debugging procedure he described. And that distinction is very important, because that detail, and not some abstract piece of philosophy, is also the real source of the challenges that motivated describing the situation in a blog post in the first place.
Yes, but the exact same point can be made about the Gettier case. The problem is inappropriately specified beliefs, and the trouble is that it's impossible ex ante to know how to specify your beliefs correctly.
For instance, you could just say that the problem with the Gettier case is that the person really just believed there was a "cow-like object" out there. Voila, problem solved! But the fact of the matter is that the person believes there is a cow - just like this person believes that their PR broke the app.
I think I agree with the parent. While this can be made into a Gettier case by messing with the scope of the JTB (pull request broke it vs. change broke it), I don't think it really works as intended by the author, and it feels like a poor example in a field teeming with much more straightforward instances.
I can't simplify the explicit examples I have in my head enough to be worth typing up, but the gist is that I can be correct about the end behavior of a piece of code but completely wrong about the code path it takes to get there. I have good reasons to believe it takes that code path. But I don't know about, say, a signal handler or interrupt that leads to the same behavior without actually using the code path I traced out.
This happens to me reasonably often while debugging.
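Schematically, the shape is something like this (toy TypeScript, hypothetical names):

```ts
// "Right behavior, wrong code path": both paths yield the same value,
// so observing the output can't tell you which one actually ran.
let cache: number | undefined;

function compute(): number {
  return 42; // the path you carefully traced
}

function getAnswer(): number {
  // You justifiably believe compute() runs here; if something else
  // warmed the cache (the "signal handler" of this toy), it never did.
  return cache ?? compute();
}

console.log(getAnswer()); // 42 either way
```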
I think this is accurate, and not at all pedantic.
The idea that software has 'gettiers' seems accurate and meaningful. To some degree, making and maintaining gettiers is in fact the point of black-boxing. Something like a well-implemented connection pool is designed to let you reason and code as though the proxy didn't exist. If you form beliefs around the underlying system you'll lack knowledge, but your beliefs will be justified and ideally also true.
(One might argue that if you know about the layer of abstraction your belief is no longer justified. I'd argue that it's instead justified by knowing someone tried to replicate the existing behavior - but one form of expertise is noticing when justified beliefs like that have ceased to be true.)
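As a sketch of that black-boxing (illustrative types, not any real driver's API), the pool implements the same shape as a raw connection, so code written against `Connection` stays justified in ignoring it:

```ts
// The facade: callers see the same query() shape whether or not
// a pool sits between them and the database.
interface Connection {
  query(sql: string): Promise<unknown[]>;
}

class Pool implements Connection {
  private idle: Connection[] = [];

  constructor(private create: () => Connection, size: number) {
    for (let i = 0; i < size; i++) this.idle.push(create());
  }

  async query(sql: string): Promise<unknown[]> {
    const conn = this.idle.pop() ?? this.create();
    try {
      return await conn.query(sql);
    } finally {
      this.idle.push(conn); // return the connection for reuse
    }
  }
}
```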
And yet this story isn't about facades breaking down, it's just a common debugging error. Perhaps the precise statement the author quotes is true and justified, but the logic employed isn't. And it's an important difference: being aware of environment changes you didn't make is a useful programming skill, while being aware of broken abstractions and other gettier problems is a separate useful skill.
Agreed. This is not some unusual philosophy case. This is a case of a programmer who should know better (rebase is not exactly a no-op change!) ignoring one of the most obvious possibilities.
I think you're interpreting the author's belief in a way that you want rather than what he actually says. I read the belief based on this statement from the article:
"When I released the new version, I noticed that I’d broken the autofocusing of the search field that was supposed to happen on pageload."
That's it. That's the belief - he broke autofocusing when he released the new version. This was true. The later digging in to find the root cause is merely related to this belief. And yes I agree that Gettier's cases were meant to show that correct belief for the wrong reasons (maintaining the three criteria essentially), but this case meets that intent as well. The author is correct that he broke autofocus via his pull request, and thus JTB holds, but the actual reason for it is not his personal code and thus the Knowledge is incorrect.
Typically, philosophers would not consider a belief about a formal system justified unless that belief is backed by a proof.
In software, for known behavioral specs, you don't have a real justification until you write a formal proof. Just because formal proofs are uneconomical doesn't mean there's some fundamental philosophical barrier preventing you from verifying your UI code. Doing so is not just possible, there are even tools for doing it.
So really, this is not a Gettier case, because in formal systems a true justification is possible to construct in the form of a mathematical proof.
An example of a Gettier case in a software system would be formally verifying something with respect to a model/spec that is a Gettierian correct-but-incorrect model of the actual system.
There are almost no software systems where we have true JTB (proofs), so there are almost none where Gettier cases really apply.
Uncertainty in software is more about the economics of QA than it is about epistemology or Gettier style problems, and that will remain true until we start writing proofs about all our code, which will probably never happen.
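For concreteness, here is what a toy formal justification looks like in Lean 4 (the `focusOnLoad` spec is made up for illustration). The proof is airtight only relative to the model; if the model omits something the real system does, you land in exactly the correct-but-incorrect territory described above:

```lean
-- Made-up spec: the page focuses the first field in tab order.
def focusOnLoad (fields : List String) : Option String :=
  fields.head?

-- A real justification *relative to the model*: whenever any field
-- exists, the first one gets focus. If the model omits, say, a
-- framework that steals focus, the proof remains valid while the
-- belief about the running system is wrong.
theorem focus_first (f : String) (rest : List String) :
    focusOnLoad (f :: rest) = some f := rfl
```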
Maybe I am arguing the same point as you here, but I am uncomfortable that you are painting the justification criteria as being debatable in these situations.
In particular, I think your criterion for justification is too low. The standard for justification is this: however much is necessary to close off the possibility of something being wrong.
I find the JTB concept useful to remind us (1) that the concept of knowledge is an ideal and (2) how vulnerable we are to deception.
As an idea survives rounds of falsification, we grow confidence that it is knowledge. But, as Descartes explained in the evil demon scenario, there is room for doubt in virtually everything we think we know. The best we can do is to strive for the ideal.
> however much is necessary to close off the possibility of something being wrong.
This is borderline self-referential with respect to the whole Knowledge definition, though. If you have enough information to remove the possibility of a belief being wrong, then there's no point in defining Knowledge at all. The whole debate around the definition is that humans have to deal with imperfect information all the time, and deciding what constitutes Knowledge in that environment is a challenge.
Philosophy major here. Didn't read the article, but will point out:
The significance of Gettier problems as we investigated them is the exposure of an entirely _wrong_ mode of philosophy: philosophizing by intuition. Ultimately, Gettier problems are significant because the textbook Gettier problem works _for philosophers_: it captures their intuitions of knowledge and then proves that the case fails to be knowledge.
Most normal people (i.e., not philosophers) do not have the same intuitions.
After Gettier, analytical philosophers spent decades trying to construct a definition of knowledge that revolved around capturing their intuitions about it. Two examples are [The Coherence Theory of Knowledge][0] and [The Causal Theory of Knowledge][1]. Ultimately nearly all of them were susceptible to Gettier-like problems. The process could be likened (probably) to Goedel's incompleteness proof: they could not construct a complete definition of knowledge for which there did not exist a Gettier-like problem.
Eventually, more [Pragmatic][2] and [Experimental][3] philosophers decided to call the analytical philosophers' bluff: [they investigated whether the typical philosopher's intuition about knowledge holds true across cultures][4]. The answer turned out to be: most certainly not.
More pragmatic epistemology cashes out the implicit intuition and just asks: what is knowledge to us, how useful is the idea, etc. etc. There's also a whole field studying folk epistemology now.
> The significance of Gettier problems as we investigated them is the exposure of an entirely _wrong_ mode of philosophy: philosophizing by intuition. Ultimately, Gettier problems are significant because the textbook Gettier problem works _for philosophers_: it captures their intuitions of knowledge and then proves that the case fails to be knowledge.
And to concretely tie this directly back to software[0]:
Intuition is a wonderful thing. Once you have acquired knowledge and experience in an area, you start getting gut-level feelings about the right way to handle certain situations or problems, and these intuitions can save large amounts of time and effort. However, it’s easy to become overconfident and assume that your intuition is infallible, and this can lead to mistakes.
One area where people frequently misuse their intuition is performance analysis. Developers often jump to conclusions about the source of a performance problem and run off to make changes without making measurements to be sure that the intuition is correct (“Of course it’s the xyz that is slow”). More often than not they are wrong, and the change ends up making the system more complicated without fixing the problem.
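A minimal sketch of the measure-first habit (standard `performance.now()`; `suspectXyz` is a stand-in for whatever "the xyz that is slow" turns out to be):

```ts
// Time a candidate hot spot before "fixing" it.
function timeIt<T>(label: string, fn: () => T): T {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
  return result;
}

// Usage: wrap the suspected culprit (and a baseline) and compare,
// and only then decide where the time actually goes:
// timeIt("xyz", () => suspectXyz(input));
```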
You make a good point, but I think the framing of “justified true belief” may still be useful for the reason you illustrated. It gives a developer a nice way to sanity check their instinct. Instead of “I’m sure my code broke this,” asking “is this belief justified” may help them pause and consider “well I did rebase so I can’t really be sure, I need to eliminate what’s in the diff first.”
An example would be the above comment about "There is a cow in that field". The sanity check would be to then ask: did I see the cow move, eat, or do anything else you would expect a cow to do, rather than just going on the silhouette?
I generally think the point is that you oftentimes have beliefs about why something broke and that it is important to check your beliefs.
Arguably the Gettier example of the cow falls into the same realm. Cows move, eat grass, and go 'Moo'. Did the observer see it do these things, or just go on the silhouette? If not, they were not rigorous enough to have a justified belief.
But then we can get back to a better Gettier case if we suppose that Boston Dynamics happens to be testing their new Robo-Cow military infiltration unit in the field at the time.
What if the facsimile does all of those things? My initial reaction to the example was, "so what"? The guy _knew_ there was a cow but was wrong. The fact that a _real_ cow existed behind the facsimile is irrelevant. All knowledge is contingent and relative. How could it be otherwise? What if 100% of all cow facsimiles ever displayed in the past were displayed on a farm with real cows--would that alone be sufficient to justify an inference of a cow nearby? To say that a belief is _justified_ simply begs the question: how can you _really_ know the truth of something?
In the scientific method you can never prove hypotheses, you can only disprove them. 'nuff said.
I find the Gettier problem entirely uninteresting myself. Someone elsethread explained it in the context of technical discourse in philosophy, in which case I could maybe appreciate the issue as a way to test and explore epistemological theories in a systematic, rigorous manner, even if for many theories it's not particularly challenging to overcome.
It's interesting because until Gettier nobody had actually proved that a justified true belief does not constitute what we commonly consider to be knowledge.
It's all very well dismissing it as inconsequential, but several generations of philosophers and scientists have grown up in the post-Gettier world, before which the Justified True Belief account was widely considered to be unassailable. Yes, _now_ we all know better and are brought up knowing this, but Gettier and his huge influence on later thought is the reason why, and it's just that not many people are aware of this.
But plenty of philosophers prior had suggested, if not argued, the contingency of knowledge. Kant famously discussed how knowledge is shaped and circumscribed by our faculties (e.g. our sense of space and time).
Positivism was already well developed long before 1963.
I could go on. I've studied enough philosophy to be comfortable with my criticism. But I'll grant that it may not have been until after 1963 that all these philosophical strains became predominant. That doesn't excuse people for not seeing what was coming, though.
I agree with your point, I think; however, all of the Gettier cases that I know of seem to be examples of a particular framing of the fact. In the cow example, he had a justified true belief that there was a cow in the field, but if you had phrased it more naturally as 'I know that that black and white shape I see in that field is a cow', it would not have been true.
I suppose the point is just that having a justified true belief purely by chance almost certainly depends on the justification itself being a false belief, even in true Gettier cases.
Anyway, I find the whole thing fairly unmysterious - I just take 'know' to be a particularly emphatic form of 'believe', and I like your conclusion.
Yes, this was my gut reaction to the Gettier exceptions as well. But you picked out the specific detail that seems to clinch it for me: the framing of the "fact" under question matters greatly.
From the author's first example, the framing of the statement was also critical. Rebasing introduced the bug, and it would be a correct statement to say "something I just did broke autofocus." However, it would be incorrect to state "my code in the last commit broke autofocus."
In many ways, programmers need to be as fussy in their statements as philosophers. Since computers are stupid, and do exactly what you specify (usually...) it is important to pay close attention to these exact details. Assuming that the new code contains the bug is incorrect, and proper debugging requires careful attention to these details.
I've certainly had bugs that were caused by some other, hidden factor like this, and typically the best way to find them is to carefully examine all your assumptions. These may be ugly questions like "is the debugger lying to me?" or "did the compiler actually create the code I intended it to make?" So while these may not be strict Gettier cases (and the author admits this in the article), they nevertheless are fairly common classes of similar problems, and framing them as such does provide a useful framework for approaching them.
The author did have a JTB that his pull request caused the bug. The point is that we hesitate to say that he knew that his pull request caused the bug, because he had the wrong justification. If you take issue with the justification, suppose that he was supposed to be the only one working on the code that day, and that the person who introduced the bug was working from home without telling anyone. He’d then have a JTB but still lack knowledge. If even that isn’t enough justification for you, then you have a problem with the entire JTB definition, and you’re essentially offering a circular definition as an alternative (since the only justification you’ll accept is justification that itself provides knowledge).
I think sometimes the spirit of these observations can be lost in the specifics.
Maybe this doesn't conform strictly to the philosophical definition, but as an analogy I find it succinct and useful.
Many times I've fallen in the trap of absolutely _knowing_ in my head the reason things are the way they are, only to find out later I'm completely wrong. In most cases the evidence was right in front of me, but software is hard and complex and it is easy for even the best developers to miss the obvious.
Putting a term around it and having an article like this to discuss with the team are both useful for reinforcing the idea that we need to continually challenge our knowledge and question our assumptions.
One countermeasure I use for this is to substitute the phrase "I'm pretty sure that..." in place of "I know...". That gives both you and others the mental room to consider alternative causes. The Five Whys is another, more formalized technique: https://en.wikipedia.org/wiki/5_Whys
The auto-complete quandary comes from the ambiguity of language. The phrase "auto-complete is broken" is ambiguous in the article, encompassing at least two definitions:
I know _that_ auto-complete is broken (I see it, the test fails)
but
I do _not_ know _why_ auto-complete is broken (some other dude did it)
But I still think it's very interesting to talk about if for no other reason than that it clarifies terms and usage.