His thought experiment had the opposite of its intended effect on me. He describes a bug in a framework that could very plausibly have been used as a vector for an XSS attack. If browsers implemented strict parsing, it is far more likely that this bug would have been noticed sooner. The bug might never have made it off the original developer's machine.
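A quick sketch of what I mean, in Python. The markup snippet is a made-up stand-in for a trackback payload; the point is just that a strict parser blows up on the developer's machine, while a lenient one shrugs and carries on:

    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    # Hypothetical malformed payload: <b> and <img> are never closed.
    payload = '<p>unclosed <b>bold <img src="x.png"></p>'

    # Strict parsing: fails immediately, so the bug never ships.
    try:
        ET.fromstring(payload)
    except ET.ParseError as err:
        print("strict parser rejected it:", err)

    # Liberal parsing: accepts the same input without complaint.
    class Collector(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("lenient parser saw:", tag)

    Collector().feed(payload)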
The case in favor of strict parsing is always that you'll catch bugs sooner.
The point of his argument is that you're forgetting about pesky humans and real-life software, where strict parsing means a computer that's much less useful.
(In his particular example, which is just an example and not the whole point, you could parse trackbacks strictly and still give people browsers that just work. And strictly parsed but poorly filtered third-party HTML will give you XSS attacks, too. And his example happened to be buggy filtering of charsets, not specifically poor parsing. So I don't really see the objection.)
Agreed. I'm generally a believer in applications being "strict in what they emit, liberal in what they accept," which seems to be what this guy is advocating. After reading it, though, I immediately equated this with an XSS attack and found myself re-evaluating the benefits of the strictness-always approach.
Good point. Also, if the browser is able to recover from malformed markup, why can't you have a module on the server do the same thing and rewrite the malformed markup into well-formed markup before serving it?
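Something like this is easy with off-the-shelf libraries today. A rough sketch, assuming the bs4 and html5lib packages are installed (html5lib implements the same error-recovery algorithm browsers use); the input markup here is made up:

    from bs4 import BeautifulSoup

    # Hypothetical malformed third-party markup.
    dirty = '<p>hello <b>world<p>second & unescaped'

    # Parse leniently the way a browser would, then re-serialize
    # as well-formed markup before it ever reaches a client.
    soup = BeautifulSoup(dirty, "html5lib")
    print(soup.body.decode_contents())

Of course this only fixes well-formedness; as noted above, you'd still need proper filtering and escaping to actually stop XSS.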
Classic 2004 article from Mark. I somehow wandered onto it just now and was reminded of how awesome it was, and thought people here might be interested if they hadn't seen it before.