HTTP Security headers you should be using (ibuildings.nl)
278 points by relaxnow on Jan 23, 2014 | 61 comments



> By default jQuery sends the X-Requested-With header. It was thought that the mere presence of this header could be used as a way to defeat Cross-Site Request Forgery.

By who? Who thought the mere presence of an arbitrary header made the request "safe"?

Seriously, who the fuck thought this was a good idea? I have so far seen one single use for X-Requested-With - returning a page as "content only" - i.e. omitting the header, nav, footers etc for XHR calls.


Loads of people thought it was a good idea. Traditional CSRF attacks work by pointing an HTML form on an evil third party site at an endpoint on the trusted site. HTML forms can't include custom HTTP headers, and you can't make Ajax requests (which can have custom headers) to different domains due to the same-origin policy - so the presence of an X-Requested-With header should be enough to "prove" that the request came from the same domain as you and not from a site run by an attacker.

Unfortunately certain versions of the Flash and JavaScript plugins allow requests to be made to other domains with custom headers, which left CSRF holes open.
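The (now-discredited) server-side check described above amounted to something like this sketch (function name is illustrative; jQuery really does send the value `XMLHttpRequest`):

```python
def looks_like_same_origin_xhr(headers):
    """Historical (and broken) CSRF defense: HTML forms cannot set custom
    headers, so the header's presence was taken as proof that the request
    came from same-origin JavaScript. Flash/Java plugin bugs that allowed
    cross-domain requests with custom headers invalidated that assumption."""
    return headers.get("X-Requested-With") == "XMLHttpRequest"
```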


What do you mean by "JavaScript plugin"?


Think it's a typo for Java.


Ah, that makes sense! Been such a long time since I've run a Java applet that I don't think about them anymore.


My guess was browser plugins (implemented in Javascript).


Great article. A lot of times these "you should be doing this" articles are really annoying. It is rarely true that some advice is good for all situations. This article, however, takes the time to explain, list the purpose, and point out relevant caveats for each header. I was not aware of all of these headers, so thank you for bringing them up!


As an author that doesn't write publicly all that often, thank you for your kind words. Lots of feedback is amazing in that it helps me grow, but the occasional compliment from a stranger does feel really really good.


Can you specify domain wildcards in Content-Security-Policy script-src? For example:

    Content-Security-Policy: script-src 'self' *.somedomain.com
Also, while it looks great, it's usually very difficult to integrate this header without breaking existing JS functionality. For example, if you set this header, inline JavaScript will not be executed, including event handlers added directly on DOM elements. eval(), and string arguments to setTimeout() and setInterval() like:

    window.setInterval("alert('hi')", 10);
are forbidden from executing as well. Finally, using 3rd-party libraries (Mixpanel, Google-hosted libraries, Intercom.io) becomes nearly impossible without explicitly whitelisting domains in the header (a huge hassle to maintain).


> Can you specify domain wildcards in Content-Security-Policy script-src?

Yes, you may: http://www.w3.org/TR/CSP/#source-list (see the host ABNF grammar).

> while it looks great, usually its very difficult to integrate this header, without breaking existing JS functionality

I haven't read much about 1.1, but as far as I know the nonce and hash sources added in 1.1 are meant to deal with whitelisting inline scripts.
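For illustration, a CSP 1.1-style nonce policy could be built roughly like this (a sketch based on the draft spec; the function name is illustrative, and the nonce must be freshly generated per response):

```python
import secrets

def nonce_policy():
    # Generate a fresh, unguessable nonce for this response.
    nonce = secrets.token_urlsafe(16)
    header = "Content-Security-Policy: script-src 'self' 'nonce-%s'" % nonce
    # Only inline scripts carrying the matching nonce attribute may run;
    # an injected script won't know the nonce and is blocked.
    script_tag = '<script nonce="%s">init();</script>' % nonce
    return nonce, header, script_tag
```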

Please see http://w3c.github.io/webappsec/specs/content-security-policy...

Reference: https://bugzilla.mozilla.org/show_bug.cgi?id=855326

https://bugs.webkit.org/show_bug.cgi?id=89577

http://lists.w3.org/Archives/Public/public-webappsec/2013Jun...

Also, if you are interested in client-side security, Mike West (from Google, one of the editors of CSP) has given a talk recently. http://www.parleys.com/play/529bee0be4b039ad2298ca0b

edit: remember that support for 1.1 is relatively low and incomplete, as the webappsec group is still voting on whether to move it to WD (Working Draft). So for cross-browser compatibility you are still better off with 1.0, which is at this point very stable in major browsers.


Awesome. Thanks for the note about version 1.1 of the standard. Wasn't aware nonces and hashing were added.


The inline restriction burned us as well when we tried to implement this.

The solution is, of course, to do everything in separate script files, but it's not a trivial migration effort.


I definitely see it as being the biggest roadblock for most sites. The report-only mode is useful for getting a work list of what needs fixing.

Plus, long term, moving inline JS to separate files must improve maintainability and version control, and also means your caching might work more effectively. So there is a little gain for the pain (aside from just the content policy).


It looks like you can enable inline and eval stuff.

    Content-Security-Policy: script-src 'self' 'unsafe-eval' 'unsafe-inline'


But in that case you don't benefit much from it... (afaik)


"For example, if you set this header, inline JavaScript will not be executed, this includes added event methods directly on DOM elements. Also eval() and writing setTimeout() and setInterval() like: window.setInterval("alert('hi')", 10);"

Sounds like a feature, not a bug.


> Can you specify domain wildcards in Content-Security-Policy script-src?

Yes: http://www.html5rocks.com/en/tutorials/security/content-secu...


While it may be hard to migrate existing applications, for about two years now I've been building all applications with externalized JavaScript.

So sticking to that rule for new stuff can greatly increase security.


It isn't that difficult to whitelist a few domains. It's a grand total of 1-5 lines of code that contains a string you can change...

CSP is super valuable and it's worth the few extra lines


To answer your first question: yes[1]

[1] http://www.w3.org/TR/CSP/#matching


Content-Security-Policy is a tricky one. For starters, enabling CSP if you're using Angular causes a 30% slowdown[1]. Secondly, a lot of performance optimizations can be gained by inlining JavaScript. The latter one can be circumvented using SPDY server push, but that isn't widely used yet.

[1] http://docs.angularjs.org/api/ng.directive:ngCsp


Wow. There are more implications than I would have guessed with CSP. Thanks for pointing that out!


We wrote an extension for Google Chrome which provides a toolbar interface for testing the security of a page. It looks at the HTTP security headers, enumerates the configured values, and provides guidance on the most secure settings. Other things like metadata and form fields with security-specific settings are also reported. If you're a developer, software tester, or security professional, you might find it a useful addition to your toolbox.

Details here:

http://www.recx.co.uk/products/chromeplugin.php#httpheaderan...

Or search for 'Recx HTTP Header and Cookie Security Analyser'


Nice work!


Sorry for brevity as I'm posting from mobile, but you should have a look at the OWASP page on useful HTTP headers

here:

https://www.owasp.org/index.php/List_of_useful_HTTP_headers


I must admit it is a little ironic that your site doesn't use the security headers you advocate.


The irony is not lost on me; we have a saying in the Netherlands: "the carpenter's doors are the creakiest". Also, we don't use them all the time yet, even for customers; this blog post (and an accompanying internal training next week) is meant to remedy that.


Indeed they are. Looks like comments on your blog don't work. And there's no RSS feed :(


Tell me, which of the four headers he talked about --- CSP, XFO, XCTO, and HSTS --- are going to cause serious problems for a blog?


CSP would be useful defense in depth if the blog has comments and an admin interface on the same domain or subdomain within TLD/public suffix — XSS in a comment could lead to session fixation/hijacking.

XFO also might be useful if the blog's admin interface has a predictable layout; a logged-in blog admin could be tricked via clickjacking into performing potentially undesirable actions.


The directory entries at http://ibuildings.nl/robots.txt suggest that they're hosting much more than a blog on that domain.

Edit: As others pointed out, it's a Drupal installation. Trusting Drupal to be 100% safe against malicious attacks is not a good idea. I know this is nitpicking; it is, however, still ironic.

http://www.cvedetails.com/vulnerability-list/vendor_id-1367/...


It's a Drupal-powered site. It's no different from WP, though, IMO.


At least he should enable XFO if he doesn't want to be framed.

His HTTPS endpoint is not trusted by the browser, either self-signed or missing some intermediate cert... If he wants to enable the contact form for business or personal inquiries, it might be worthwhile to get some $10 cert, if he has already gotten as far as enabling port 443.

XCTO is pretty cheap to enable and doesn't hurt. The only problem is that, IIRC, IE has a different MIME list than Firefox and Chrome.

It's pretty cheap to enable some of these security headers, just as it is relatively easy to disable some of the server-type headers (e.g. x-powered-by, which apparently he doesn't have exposed, I think).


> 2. X-FRAME-OPTIONS

Please don't use that one everywhere, frame-based embedding is very useful in e.g. web-based feed reader (provides the original view of the site without having to go out of the reader and into a new tab).


Unfortunately, as the web stands today, for our developers I do advocate including it by default, unless a specific use-case comes up to not include it.

While framing should be okay for a simple content page, the truth is very, very few pages are purely content. As soon as you have something like a comment form, there is a chance that framing it could be used to reveal some private information (autofill / password manager leakage). And who is going to manage which pages are frameable and which aren't? The customer would need a UI and training, and developers can't always see what will be hosted on a page.

Also, as an advertiser I would very much like you to visit the full page instead of trying to view it through some limited frame, cutting off the sidebar with ads.


Am I the only person who thinks that relying on client behaviour for "security" is, well, a bit naive?


It's part of "defense in depth." Mess up one input validation? No problem, your CSP prevents client-side execution of injected scripts for most users.

Returning user, temporarily on an untrustworthy network? No problem, your HSTS header ensures they only attempt to talk to you over SSL.

It's the same reason you should set cookies to `secure; HttpOnly` -- you don't expect untrustworthy scripts to run on your page, but if they somehow do, you've got a second line of defense.
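As a sketch, emitting such a cookie header could look like this (the attribute names are standard; the cookie name and helper are illustrative):

```python
def session_set_cookie(value):
    # Secure: the cookie is only ever sent over HTTPS.
    # HttpOnly: the cookie is hidden from document.cookie, so an
    # injected script cannot exfiltrate the session token.
    return "Set-Cookie: session=%s; Secure; HttpOnly; Path=/" % value
```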


For X-FRAME-OPTIONS there is no alternative way to protect yourself server-side. This one really has no other option; you could hack something together with checks on window.top in JavaScript, but then you are still relying on client-side behaviour.

For the others you shouldn't rely on them, just use as backup.

And to be nitpicky, you are always relying on client-side behaviour. What if Firefox one day suddenly allows cross-site requests in JavaScript, or starts making random requests to other sites containing all your cookies, or allows executing JavaScript in embedded iframes?


It is not about securing your server. It's about securing the user who connects to your server.


Client security is about enabling your clients to be secure. Securing yourself from your clients is a totally different topic. If a client goes out of their way to break client security, the only risk is to themselves.

Headers like these are somewhat analogous to reminding people to lock their doors at night. Not everybody is going to listen to you, but you might help those who do.


Well, it makes perfect sense. You are protecting the client itself, by dictating which sources can be used for fetching information and code. XSS is about malicious code injected on your site, in their browser; it's not the client who's misbehaving.


No one said you should rely on only these techniques. These should form part of your security approach. Yes, use HSTS, but also set your webserver to always redirect HTTP to HTTPS, etc.

Remember, an attacker only has to find one way in, but you need to defend against everything. You should make it as hard as possible for an attacker. Every brick in the wall helps.


In this case you are protecting the client from other connections it is making at the same time, such as when they have multiple tabs open.


Would these headers break ads that come from unpredictable sources through ad networks?


You probably should not use ad networks that run JavaScript from random domains. You'll end up serving malware, infecting your users, and getting banned by search engines' safe browsing.

You can still whitelist valid ad network domains.


Content-Security-Policy would break ads. That's the main reason why so few sites use it.

X-Frame-Options is fine, since modern ads use JS and not IFRAMEs.

X-Content-Type-Options is fine because you are essentially telling the browser to trust the MIME type and not speculatively parse the resource. You know what the MIME types should be, so no problem here.

Strict-Transport-Security is fine as well. It also has the added benefit of forcing SSL for all traffic, and thus could be used to force SPDY instead of HTTP for 50%+ of a website's visitors, which is a huge performance win.
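Taken together, the four headers discussed here might be set roughly as follows (a sketch only; the ad-network domain and max-age value are illustrative):

```python
# One plausible baseline configuration for the four headers above.
SECURITY_HEADERS = {
    # Scripts may load from our own origin plus a whitelisted ad host.
    "Content-Security-Policy": "script-src 'self' ads.example-network.com",
    # Only pages from our own origin may frame us.
    "X-Frame-Options": "SAMEORIGIN",
    # Tell the browser to trust Content-Type and not sniff.
    "X-Content-Type-Options": "nosniff",
    # Force HTTPS for one year after the first secure visit.
    "Strict-Transport-Security": "max-age=31536000",
}
```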


> X-Frame-Options is fine, since modern ads use JS and not IFRAMEs.

Also because X-Frame-Options restricts what can frame you, not what you can load in frames.


The only downside of Strict-Transport-Security is that the browser must have visited the HTTPS endpoint at least once.

Therefore, the sane approach is to 301-redirect all HTTP to HTTPS, and the HTTPS response headers must include Strict-Transport-Security.
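That behaviour can be sketched as follows (hostname and helper are illustrative):

```python
def respond(scheme, host, path):
    """Sketch of HSTS-friendly behaviour: permanently redirect plain HTTP,
    and send Strict-Transport-Security only over HTTPS (RFC 6797 forbids
    sending the header over non-secure transport)."""
    if scheme == "http":
        # Never send HSTS over HTTP; just redirect permanently.
        return 301, {"Location": "https://%s%s" % (host, path)}
    return 200, {"Strict-Transport-Security": "max-age=31536000; includeSubDomains"}
```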


Should the HTTP response also include the Strict-Transport-Security header?


No:

  Note: The Strict-Transport-Security header is ignored by the browser when your site is
  accessed using HTTP; this is because an attacker may intercept HTTP connections and
  inject the header or remove it.  When your site is accessed over HTTPS with no
  certificate errors, the browser knows your site is HTTPS capable and will honor the
  Strict-Transport-Security header.
https://developer.mozilla.org/en-US/docs/Security/HTTP_Stric...

Also read RFC 6797, section 7.2 in particular.

  An HSTS Host MUST NOT include the STS header field in HTTP responses conveyed over
  non-secure transport.
and

  If an HSTS Host receives an HTTP request message over a non-secure
  transport, it SHOULD send an HTTP response message containing a
  status code indicating a permanent redirect, such as status code 301
  (Section 10.3.2 of [RFC2616]), and a Location header field value
  containing either the HTTP request's original Effective Request URI
  (see Section 9 ("Constructing an Effective Request URI")) altered as
  necessary to have a URI scheme of "https", or a URI generated
  according to local policy with a URI scheme of "https".
https://tools.ietf.org/html/rfc6797


No. The header is honored on HTTPS connections only.


> How would you like to be largely invulnerable to XSS? No matter if someone managed to trick your server into writing <script>alert(1);</script>, have the browser straight up refuse it?

I don't quite get how this header makes you invulnerable to XSS, would someone mind explaining?

It seems like it only prevents XSS attacks from loading remote javascript files. What's to stop the attacker from just injecting the entire script inline? If you can get a small piece of javascript to execute you should be able to get a larger piece to execute just fine.

I can see how it makes XSS more inconvenient, but I don't understand how it makes you largely invulnerable to it.


Inline JavaScript is outright banned by default. Read the HTML5Rocks article they linked; it is much more comprehensive and answers most questions like that one.


Ahh, now that makes sense, thanks! The HTML5Rocks article cleared up my concerns.

For anyone reading this later, it was the section starting with this paragraph:

> Inline Code Considered Harmful

> It should be clear that CSP is based on whitelisting origins, as that’s an unambiguous way of instructing the browser to treat specific sets of resources as acceptable and to reject the rest. Origin-based whitelisting doesn’t, however, solve the biggest threat posed by XSS attacks: inline script injection. If an attacker can inject a script tag that directly contains some malicious payload (<script>sendMyDataToEvilDotCom();</script>), the browser has no mechanism by which to distinguish it from a legitimate inline script tag. CSP solves this problem by banning inline script entirely: it’s the only way to be sure.


'unsafe-inline' is also treated as a separate source keyword. If your policy is script-src 'self', that only means same-origin code is allowed to run, and inline JS is disabled.


Not bad... I seem to have got at least 3 of them right :)

http://jimmyislive.tumblr.com/post/67125455740/securing-ngin...


* CONTENT-SECURITY-POLICY So, other than covering for sloppy sanitization, what's the use?

* X-FRAME-OPTIONS I feel like the number of times this has prevented a useful action vs prevented a bad action is many:0

* X-CONTENT-TYPE-OPTIONS So, instead of browsers actually honoring the Content-Type header, we have to ask with a "pretty please"?

* STRICT-TRANSPORT-SECURITY Doesn't prevent problems on a first connect, but definitely a good idea (if your site supports SSL that is:)). Blanket TLS is a good thing.

Just my 2¢


* CSP is actually the header that most security professionals are most excited by (in my experience), as it gives you more control over which resources are and are not allowed. Especially when you're thinking of a future where you want to securely 'mash up' content, being able to set policies is essential.

* XFO, if you're doing API-first there really is very little reason to frame a page. Maybe Twitter-style widget support? But in that case you can have a separate URL for that.

* XCTO, yes, welcome to the 'organic' web :p

* HSTS, the first connect is difficult, maybe one day via DNSSEC? But I must confess to knowing very little about DNSSEC.


I wasn't saying HSTS is useless, just pointing out one of the issues with it. I think HSTS is a great thing!

With XFO, doing something like adding 'reddit.com/' before the domain to see if it's been submitted becomes much more computationally intensive on reddit's side if they can't just put it in a frame. This is where I run into issues mostly, with tools such as that. That said, there are other things it could do (take me to a submit page or a discussion page instead of framing the page). And frames suck anyway, so there is that.

I can see the usefulness in CSP, I just feel like it's a band-aid on larger problems.


> X-FRAME-OPTIONS I feel like the number of times this has prevented a useful action vs prevented a bad action is many:0

I am rather curious where you got this claim. Any stories I should check?

And also, what's the larger problem? Again, I'd really love to hear you elaborate on your thoughts. Thanks.


As for XFO, I specifically said that that was my opinion and my experience. I even give a specific example: adding reddit.com/ before the domain of hackernews, for instance, won't let reddit put it in a frame (in order to put the reddit toolbar above it). I've only ever encountered tools such as that breaking because of XFO.

Also, it's my user-agent; it's supposed to do what _I_ want it to do, not what the content author wants it to do, and I can't find a way to disable honoring XFO.

For CSP, one example of a larger problem would be accepting and storing unsanitized input. If it's going out to the user as (otherwise executable) JavaScript, are there other places where you're putting unescaped user-submitted data that could be an issue (a SQL statement, perhaps, or the API of a site that trusted you to sanitize things, although they shouldn't)?



