
You seem to have missed a good chunk of what I said.

But to address some particular points:

> sometimes the application he is using for bookmarking, sharing, etc. drops the slash

Do you happen to have any evidence of this? I’ve heard it mentioned very occasionally, but never seen it myself, including any trace of it in logs (though I have seen more bizarre things), and the only ways I can imagine it happening would break many other things too, so it doesn’t seem likely.

> perhaps you've been inconsistent along the years, thus having redirects saves the dead links...

And so I strongly advocate for retaining such redirects. Just not gratuitous support for other things.
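
To make the distinction concrete, here’s a minimal sketch in Python (the paths and targets are hypothetical): redirect only URLs that are known to have worked in the past, and let everything else 404 rather than guessing.

    LEGACY_REDIRECTS = {
        # URLs that demonstrably worked at some point (hypothetical examples)
        "/posts/old-slug": "/posts/new-slug",
        "/blog/feed.xml": "/feed.xml",
    }

    def resolve(path: str) -> tuple[int, str | None]:
        """Map a requested path to (status, redirect target)."""
        if path in LEGACY_REDIRECTS:
            return 301, LEGACY_REDIRECTS[path]  # earned its redirect
        return 404, None  # never worked before: no gratuitous support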

> It is when you don't want dead links.

I said “for the sake of it”. If by “dead links” you mean existing URLs that worked in the past, that’s not “for the sake of it” but good cause. But if you’re speaking of proactively allowing things that never worked in the past, that’s exactly what I’m arguing against. I want robust justification for every extra URL that is supported: which machine or human is likely to encounter it, and why. (As an example, I’d honestly quite enjoy returning 400 for requests with unknown query string parameters, which in the context of static websites mostly means any query string, in order to truly have only one URL for the content; but I acknowledge that this is not pragmatic, because it’s not uncommon for additional query string parameters to be injected, typically for the purpose of spying on users via unwanted utm_* parameters and the like.)
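
In that strict scheme, the pragmatic compromise would be a small tolerated list rather than open-ended acceptance. A sketch of the idea, with the tolerated names purely illustrative:

    from urllib.parse import parse_qsl, urlsplit

    # Grudgingly tolerated because third parties inject them; anything
    # else is unknown and gets the 400. (Names here are illustrative.)
    TOLERATED_PREFIXES = ("utm_",)

    def status_for(url: str) -> int:
        for key, _value in parse_qsl(urlsplit(url).query,
                                     keep_blank_values=True):
            if not key.startswith(TOLERATED_PREFIXES):
                return 400  # unknown parameter: one URL per resource
        return 200

    assert status_for("/page?utm_source=x") == 200
    assert status_for("/page?foo=bar") == 400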


