
> One of the ways that DWiki (the code behind Wandering Thoughts) is unusual is that it strictly validates the query parameters it receives on URLs, including on HTTP GET requests for ordinary pages. If a HTTP request has unexpected and unsupported query parameters, such a GET request will normally fail.

This runs counter to the robustness principle: Be conservative in what you do, be liberal in what you accept from others (https://en.wikipedia.org/wiki/Robustness_principle).




The robustness principle is widely considered to be a bad practice these days.

Strictly failing is a good thing, generally. Accepting arbitrary query parameters is pretty much a de facto exception, though.
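
A rough sketch of that de facto exception (made-up parameter names; strict about the values of parameters you know, silent about ones you don't):

    from urllib.parse import parse_qs

    # Hypothetical parameters: validate the ones we care about strictly,
    # but drop anything unrecognised rather than failing the whole request.
    KNOWN_PARAMS = {"page", "sort"}
    ALLOWED_SORTS = {"new", "top", "old"}

    def validate_query(query_string):
        params = parse_qs(query_string, keep_blank_values=True)
        errors = []
        if "sort" in params and params["sort"][0] not in ALLOWED_SORTS:
            errors.append("unsupported sort value")
        if "page" in params and not params["page"][0].isdigit():
            errors.append("page must be a non-negative integer")
        known = {k: v for k, v in params.items() if k in KNOWN_PARAMS}
        return known, errors

    # validate_query("page=2&sort=new&s=21") -> ({'page': ['2'], 'sort': ['new']}, [])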


As an example, rejecting unrecognised query parameters can cause ossification of web protocols. I do a lot of work with OAuth, and it is now accepted security best practice to use PKCE [1] on all authorisation requests [2]. However, despite the OAuth spec saying that authorisation servers MUST ignore unrecognised parameters [3], there are many major services that fail if you add a PKCE code_challenge parameter. This means generic client libraries either don't implement PKCE, or have to maintain a list of sites that don't work (which gets stale over time), or resort to risky downgrade logic.

So there is a tradeoff between strict validation and allowing future expansion and upgrades. By all means strictly validate the parameters you need, but maybe don't reject parameters you don't recognise.

[1]: https://oauth.net/2/pkce/ [2]: https://tools.ietf.org/html/draft-ietf-oauth-security-topics... [3]: https://tools.ietf.org/html/rfc6749#page-19
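
For concreteness, the extra parameters look roughly like this (endpoint and client_id below are placeholders; the PKCE construction follows RFC 7636):

    import base64, hashlib, secrets
    from urllib.parse import urlencode

    # code_verifier: 43-char base64url string; code_challenge: its SHA-256, base64url-encoded.
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    code_challenge = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode()).digest()).rstrip(b"=").decode()

    authz_url = "https://auth.example.com/authorize?" + urlencode({
        "response_type": "code",
        "client_id": "my-client",              # placeholder
        "redirect_uri": "https://client.example.com/cb",
        "code_challenge": code_challenge,      # per RFC 6749, servers MUST ignore
        "code_challenge_method": "S256",       # parameters they don't recognise...
    })
    # ...but some authorisation servers reject the request outright when they
    # see code_challenge, which is the ossification described above.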


The same thing happened with TLS!

TLS 1.2 has a field for the protocol version during negotiation. The spec says that you're supposed to proceed with the lower version if the other party claims a version higher than yours.

Turns out in practice that some implementations just decided to give up instead. This meant that it wasn't possible to increase the version number for TLS 1.3, because advertising it would break a lot of people's connectivity. Instead, TLS 1.3 keeps the 1.2 version number and adds ANOTHER field containing the real version.
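
Concretely, the compromise looks something like this in the ClientHello (a byte-level sketch per RFC 8446, not a TLS implementation):

    # The ClientHello keeps claiming TLS 1.2 in the legacy version field, and
    # the versions actually offered ride in the supported_versions extension.
    legacy_version = bytes([0x03, 0x03])        # "TLS 1.2", frozen forever

    versions = bytes([0x03, 0x04, 0x03, 0x03])  # TLS 1.3, then TLS 1.2
    ext_body = bytes([len(versions)]) + versions
    supported_versions = (43).to_bytes(2, "big") + len(ext_body).to_bytes(2, "big") + ext_body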

So Google introduced GREASE in RFC 8701. Long story short, Chrome now advertises a bunch of bogus extension values, thus making sure that implementations can't get away with choking on unknown values instead of ignoring them.

This makes it a lot more likely that future cipher additions will not break older clients, which is pretty damn important for something as critical as TLS.


Funnily enough, I actually suggested we adopt something like GREASE for OAuth parameters: https://mailarchive.ietf.org/arch/msg/oauth/Rqxs-QDqqdQkt36S...

There wasn’t a lot of support.


That list archive suggests that although there wasn't a clamour of applause for the idea, at least some people were receptive. However, it also suggests that there were plenty of more subtle incompatibilities which GREASing just this one place wouldn't solve.

It also seems like you admit that in other places ForgeRock is also doing something that's unsound:

> (Basically there are quite a few clients that use JSON mapping tools with enum types - List<JWSAlgorithm>. I know there are parts of our own codebase where we do this too).


Well like any large codebase there will be things that need improving. The handful of places where we have similar issues are relatively unlikely to cause a real problem in the near future.


Windows once used a similar fuzzing technique[1] to mitigate a video driver bug.

[1] https://devblogs.microsoft.com/oldnewthing/20040211-00/?p=40...


I often wonder why implementors do such hacky gymnastics instead of just breaking the clients of the corps that use janky noncompliant middleboxes and causing people to replace their broken shit.

Other than those corps who bought crap, nobody would blame Google, for example, and I bet those orgs would only do so temporarily.

Make life slightly more difficult for those whose poor decisions CAUSED the ossification.


"Last Change Broke It" is the rule. Nobody cares whose "fault" it notionally is that something doesn't work now, they will focus on whatever change made it stop working. This is not least the case when you paid $1M for this security appliance two years ago, and then a free Chrome update broke everything on Tuesday morning.

Are the vendors going to give back your $1M? No they are not. Are they going to fix the appliance? Maybe. Although perhaps not unless they can see how that'll bring in more money. So you blame Chrome, because that's what broke.

Now, for the sake of security sometimes, very gently, the browsers will push to fix things because there's no alternative. They have to co-ordinate (if one of the big browsers defects then users switch to "the one that works" and you all lose) and it takes a long time to do. So that's the last option.

For example, TLS 1.3 as shipped includes an anti-downgrade mechanism. It was not possible to ship this in draft versions so it was not widely tested. It turns out that some middleboxes from popular vendors did something so stupid and dangerous that it broke the anti-downgrade. (Specifically they copied the Server.random field from a session they were having with an actual HTTPS server into the Server.random field they delivered to a browser, even though the standard is emphatic that you need to choose your own random numbers or it isn't actually secure).

So, for about a year Chrome did not enable anti-downgrade in order to allow for a fix to be developed, shipped and installed at most organisations that used the affected middleboxes. But today your Chrome (and presumably Firefox etc.) actually has anti-downgrade and any remaining middleboxes with that bug are just broken.
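
For reference, the anti-downgrade signal is just a magic suffix in the server's random value (RFC 8446 section 4.1.3); roughly, the client-side check is:

    # A TLS 1.3-capable server that gets negotiated down to 1.2 (or lower)
    # overwrites the last 8 bytes of ServerHello.random with a fixed sentinel.
    DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")   # "DOWNGRD" + 0x01
    DOWNGRADE_TLS11 = bytes.fromhex("444f574e47524400")   # "DOWNGRD" + 0x00

    def looks_like_downgrade(server_random, negotiated_tls13):
        # A client that offered 1.3 but ended up lower, and sees the sentinel, aborts.
        return (not negotiated_tls13) and server_random[-8:] in (DOWNGRADE_TLS12, DOWNGRADE_TLS11)

    # A middlebox that copies the upstream server's random verbatim also copies
    # the sentinel, so the browser "detects" a downgrade the middlebox itself caused.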


> Are the vendors going to give back your $1M? No they are not. Are they going to fix the appliance? Maybe.

This means that the organizations never learn plainly which of their purchases were jank and which were not, and do not improve their decision-making over time.

I think this disables/obscures an important market feedback mechanism. If a vendor sells you expensive crap with the implied promise that it is forward compatible/spec compliant, and then the version field increments and it turns out your box isn't and it breaks, that's likely to damage the reputation of the vendor (and perhaps of the person within the organization who bought the janky device).

The game is an iterated one. Rewarding the not-diligent vendor/purchaser with diligent effort on the part of client software development teams does three things, none of which are desirable:

1. it shields shitty vendors from the market consequences of selling crap.

2. it shields shitty customers from the business consequences of buying crap.

3. it massively raises the barrier to entry for someone who wants to develop good and widely-accepted client software, keeping it in the realm of basically only a few large multinationals, none of whom really give a fuck about your privacy.

It's bad for everyone.


I think you've drunk too much Free Market kool aid and have begun to mistake a mere means for an end in itself. We're not trying to make the market set prices efficiently, we're trying to make the Network better; an efficient market is exciting for prophets of the dismal science, but a Network that works better is good for all the actual people.


The network is better in the long run if people are more careful about which vendors they choose and vendors are disincentivized to ship broken shit.


I feel that you’re right - it’s increasingly considered bad practice - but I’ve not seen this conversation play out.

Of course, this often comes up in collaborative coding exercises (work, open source), so seeing a discussion on the topic would be helpful.

Anyone have a link to such a discussion?



If you're designing things where you expect this to be an issue you can do a few things to ease your path:

Firstly you want to have a clear distinction between "Things that it's fine to just ignore because you don't understand them" and "Things that, if you don't understand them, mean you're screwed, abort". For example, in PNG some of the bitflags cunningly concealed in the capitalisation of the chunk type names indicate whether it's OK to just ignore a chunk you don't understand, and whether it's OK to copy it across into a modified image without understanding it. In X.509 there's a bitflag (the "critical" flag) that means "If you don't understand this part of the certificate then you mustn't rely on the certificate at all".
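
To make the PNG case concrete, those flags live in bit 5 of each of the four chunk-type bytes (a sketch following the PNG spec):

    # Lowercase letter = bit set. First letter: ancillary (safe to ignore);
    # second: private; third: reserved; fourth: safe to copy without understanding.
    def chunk_properties(chunk_type):
        assert len(chunk_type) == 4
        return {
            "ancillary":    bool(chunk_type[0] & 0x20),
            "private":      bool(chunk_type[1] & 0x20),
            "reserved":     bool(chunk_type[2] & 0x20),
            "safe_to_copy": bool(chunk_type[3] & 0x20),
        }

    chunk_properties(b"IHDR")  # critical, public, not safe to blindly copy
    chunk_properties(b"tEXt")  # ancillary (ignorable), public, safe to copy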

Then you need a way for people to agree on how to allocate the extensions. The IETF has an impressive array of different mechanisms available, from "You need to actually write and publish a Standards Track document" to "First Come First Served" which suit different purposes. Consider just copying some or all of them. https://tools.ietf.org/html/rfc8126

TLS had big problems with (non-)interoperability, and as a security feature it couldn't just shrug and put up with never moving forward, so it had to endure painful upgrades each time. This time they got GREASE (RFC 8701) to try to stop this ever happening again. Basically they agree how to extend things, then reserve some of the extension namespace and routinely shove crap into it on purpose. Things that properly understand how extensions work will go "Eh, I don't understand this" and ignore the crap. Everything else explodes, and you catch the explosions early because they happen everywhere, all the time, with all the GREASE you're using.
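
A sketch of both halves of that bargain (hypothetical handler table; the reserved values are the ones from RFC 8701):

    # GREASE: reserved bogus code points a client sprinkles into its offers,
    # which a correct peer must skip without complaint.
    GREASE_VALUES = {0x0A0A + 0x1010 * i for i in range(16)}   # 0x0A0A, 0x1A1A, ..., 0xFAFA

    def process_extensions(extensions, known_handlers):
        # extensions: list of (ext_type, body) pairs from a hello message.
        for ext_type, body in extensions:
            handler = known_handlers.get(ext_type)
            if handler is None:
                continue   # unknown (possibly GREASE) extension: ignore, don't abort
            handler(body)

    # An implementation that raised an error on unknown ext_type values would
    # blow up on every GREASEd hello, so the bug is caught immediately rather
    # than years later when a real new extension ships.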

GREASing new features has become a go-to strategy and you should consider this too. For example Encrypted Client Hello (the current iteration of ESNI) is intended to be used for all connections. Basically every server you talk to, you always say "Also here's an encrypted hello". Maybe the server can read it and the plaintext hello was just a dummy, or maybe it just reads the plaintext hello; an eavesdropper can't tell the difference. So if they want to block encrypted hello, they have to block everything, even though probably very few servers will offer this feature at first.


You have to ensure you don't over-engineer specifications, or they will be ignored as well. HTTP requests are one thing, but imagine if we were strict on HTML validation. We now have robust browsers, and I am not sure that is only a bad thing. Although it would be nice if people would check their documents once or twice.

edit: I don't think dismissing the principle as bad is a sensible approach, as in theory it doesn't mean you should ignore errors.


> imagine if we were strict on HTML validation

You don't have to imagine it, this actually happened. I don't remember the details, but back in the HTML 4 era, there was XHTML, and if you gave it an XML media type, many browsers would parse it as XML and reject the entire document if there was any error (and display nothing at all but an error message).

There were a number of articles and blog posts about this back in the day.
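
The failure mode is easy to reproduce with any strict XML parser (an illustrative sketch; real browsers displayed a parse error page instead of the content):

    import xml.etree.ElementTree as ET

    # One well-formedness error and the whole document is rejected.
    broken_xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
      <body><p>Unclosed paragraph</body>
    </html>"""

    try:
        ET.fromstring(broken_xhtml)
    except ET.ParseError as err:
        print("Document rejected:", err)   # e.g. "mismatched tag: line 2, ..."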


All the worse for the ‘robustness’ principle then.

https://tools.ietf.org/id/draft-iab-protocol-maintenance-04....

(Shame this draft just expired before being finalised.)


Which we’ve increasingly moved away from because of some of the absurdities (and security problems) that it’s caused.

Nowadays the W3C and ES committees work hard to try to ensure no ambiguity is left in the spec, have tests available early on, and do interop comparisons between engines to see if there's anything missed.


A big issue with being accepting is that you risk breaking things tomorrow when you no longer want to be as accepting. And it can prevent adding features that would play badly with others' out of spec implementations.

Like if Twitter adds these s=NN parameters, that means you can't use an s=NN parameter for your own purposes.



