This is a welcome change, as it prevents unrelated sites from making credentialed XHRs to your site (a risk already partly mitigated by browser restrictions on credentialed requests and the need for very loose Access-Control-Allow-Origin server headers), but I'd like to point out that if someone stole your cookies via an XSS attack, they could free themselves of browser restrictions entirely by simply making requests without a browser.
This is particularly a problem for sites that have XSS vulnerabilities and don't set the HttpOnly attribute on sensitive cookies, and for non-HTTPS sites on open wifi networks. Sites should use a combination of Secure (cookie transmitted over HTTPS only) and HttpOnly (JS can't access the cookie) cookies and redirect all HTTP traffic to HTTPS. Ad networks need to get better at helping sites monetize HTTPS-only traffic; it's a huge barrier for many sites making the switch, and fixing it would help the internet finally relegate plain-text HTTP to the history books.
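To make the Secure + HttpOnly combination concrete, here's a minimal sketch using Python's stdlib cookie machinery (the cookie name and value are hypothetical placeholders):

```python
from http.cookies import SimpleCookie

# Sketch of a hardened session cookie: both flags are standard
# Set-Cookie attributes, set here via the stdlib Morsel interface.
cookie = SimpleCookie()
cookie["session_id"] = "opaque-random-value"
cookie["session_id"]["secure"] = True    # only ever transmitted over HTTPS
cookie["session_id"]["httponly"] = True  # invisible to document.cookie / JS
cookie["session_id"]["path"] = "/"

header = cookie["session_id"].OutputString()
print(header)  # e.g. session_id=opaque-random-value; Path=/; Secure; HttpOnly
```

The resulting string is what a server would emit as the value of its `Set-Cookie` response header.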
The primary problem this hopes to solve is actually CSRF. Simply generating an HTML form targeting a website and submitting it sends the cookies of the target website, regardless of where the form is hosted. XHR isn't as much of an issue, as it is subject to same-origin policy restrictions.
This spec allows you to set cookies that turn this outdated, age-old security policy on its head: instead of having to generate and validate cryptographically derived, client-correlated tokens on every form (CSRF tokens), we can simply set this flag and refuse to send these cookies from any other site. This has long been known to be the right thing to do, which is why newer web policies like CORS omit cookies entirely by default.
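To make the contrast concrete, here's a rough sketch (all names hypothetical) of the token machinery SameSite lets you drop: an HMAC-derived per-session token validated on every state-changing request, versus a single attribute on the cookie:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # hypothetical server-side signing key

def make_csrf_token(session_id: str) -> str:
    """The old way: derive a token tied to the session, embed it in every form."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf_token(session_id: str, submitted: str) -> bool:
    """...and verify it, in constant time, on every state-changing request."""
    return hmac.compare_digest(make_csrf_token(session_id), submitted)

# The new way: one attribute on the cookie, no per-form bookkeeping.
SET_COOKIE = "session=opaque-token; Secure; HttpOnly; SameSite=Strict"

token = make_csrf_token("sess-42")
print(check_csrf_token("sess-42", token))    # True
print(check_csrf_token("sess-42", "forged")) # False
```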
The HttpOnly flag is meant to mitigate cookie theft risk via XSS. To my knowledge this particular innovation actually does nothing to that risk.
Yep, CSRF is a completely opt-in problem to have. There are pretty much zero valid reasons to need cookies anymore. Although I agree this spec is an improvement, its main purpose should be to make legacy systems more secure. The best course of action would be to avoid cookies entirely.
It seems that this could be used to easily disallow hotlinking of content. It should be enough for a content-hosting site to set any cookie (it doesn't need to be a secret cookie) with the SameSite parameter and drop all requests without the cookie.
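A sketch of what that server-side check might look like (cookie name hypothetical). Only the cookie's presence matters, not its value, since a cross-site `<img>` or `<script>` request simply wouldn't carry it:

```python
def allow_asset_request(request_cookies: dict) -> bool:
    # "visited" is any marker cookie previously set with the SameSite
    # attribute; hotlinked (cross-site) requests won't include it.
    return "visited" in request_cookies

print(allow_asset_request({"visited": "1"}))  # True: same-site request
print(allow_asset_request({}))                # False: hotlink, cookie not sent
```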
Not really. Hotlink protections at present check the Referer header to see if the request was initiated by another site. Not only can this be bypassed at the browser level, HTML5 allows <a> tags to carry the rel="noreferrer" attribute, which prevents the referrer from being set at all.
This proposal allows sites to set a cookie (essentially an authorization token) that must be actively sent for data to be available, and which will only be sent when the site being browsed matches the domain on the cookie.
I've used it for many years. And in all that time I've only had to make a half-dozen rules (i.e., whitelist sites, though you can write the rule narrowly) to get specific sites working with it; otherwise it's invisible.
It's based on academic research; be sure to look around the site if you're interested.
For the time being, it won't, unless you intend to provide CSRF protection only for the browsers that support this extension (and can a server even detect whether the UA is capable or not?). I think for at least a few years, cookie + POST data is the only reliable option.
> I think for at least a few years, Cookie+POST data is the only reliable option.
Not entirely. If you're willing to require a custom header, you can use one as a trivial anti-CSRF measure. Just requiring a header like "X-Totes-Not-CSRF" would suffice, as CORS prevents arbitrary sites from sending such a header cross-origin. Its value does not matter.
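A server-side sketch of that check (header name taken from the comment above; the client would attach it via XHR/fetch, which the browser only allows same-origin without a preflight):

```python
CUSTOM_HEADER = "X-Totes-Not-CSRF"

def passes_csrf_check(request_headers: dict) -> bool:
    # Presence alone is enough: a cross-origin page can't attach a custom
    # header without triggering a CORS preflight the server would refuse.
    return CUSTOM_HEADER in request_headers

print(passes_csrf_check({CUSTOM_HEADER: "anything"}))  # True
print(passes_csrf_check({"Cookie": "session=abc"}))    # False
```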
A similar anti-CSRF measure is implemented in some application frameworks by default. For example, when performing XHR requests in AngularJS, "the $http service reads a token from a cookie (by default, XSRF-TOKEN) and sets it as an HTTP header (X-XSRF-TOKEN). Since only JavaScript that runs on your domain could read the cookie, your server can be assured that the XHR came from JavaScript running on your domain. The header will not be set for cross-domain requests."
This is an effective approach because unless an attacker has already compromised the relevant cookie, they will be unable to spoof the X-XSRF-TOKEN header in a cross-origin request. On the server-side, you just need to validate that (a) the X-XSRF-TOKEN header was sent and (b) it contains the expected value for each HTTP request received.
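The server-side half of that cookie-to-header pattern is small; a sketch (dict-based request representation is a stand-in for whatever your framework provides):

```python
import hmac

def xsrf_header_matches(cookies: dict, headers: dict) -> bool:
    """Validate the AngularJS-style cookie-to-header token described above."""
    cookie_token = cookies.get("XSRF-TOKEN")
    header_token = headers.get("X-XSRF-TOKEN")
    if not cookie_token or not header_token:
        return False  # (a) the header must be present at all
    # (b) it must match the expected value; compare in constant time
    return hmac.compare_digest(cookie_token, header_token)

print(xsrf_header_matches({"XSRF-TOKEN": "t0k3n"}, {"X-XSRF-TOKEN": "t0k3n"}))  # True
print(xsrf_header_matches({"XSRF-TOKEN": "t0k3n"}, {}))                          # False
```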
However, that does cause an extra round trip for the OPTIONS preflight request (slowing down your site). The preflight can be avoided if you don't set any non-simple headers (plus a couple of other restrictions), but then you lose this particular protection.
It is not about the domain scope of a cookie, but about defining when a cookie should be attached to an outgoing HTTP request.
Right now, when evil.com does an XHR or form submission to google.com, the browser attaches all google.com cookies automatically. The request still happens even though the response can't be read back by evil.com. With the SameSite attribute on all cookies, a request to google.com from evil.com won't send any cookies. This will severely limit CSRF and other types of attacks.
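A simplified model of that attach decision (this deliberately ignores details of the draft, such as Lax mode's safe-method requirement):

```python
def should_attach_cookie(cookie_site, request_initiator,
                         samesite=None, top_level_navigation=False):
    """Decide whether the browser sends a cookie on an outgoing request."""
    if request_initiator == cookie_site:
        return True                  # same-site request: always attached
    if samesite is None:
        return True                  # classic cookie: attached cross-site too
    if samesite.lower() == "lax":
        return top_level_navigation  # e.g. the user clicking a link
    return False                     # Strict: never attached cross-site

print(should_attach_cookie("google.com", "evil.com"))              # True (today)
print(should_attach_cookie("google.com", "evil.com", "Strict"))    # False
```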
In fact I'm surprised no one proposed this earlier. Hindsight bias of course.
The current spec has a serious flaw for CSRF prevention: it doesn't include the protocol in the definition of a site, only the domain. This allows a MITM'd HTTP page to CSRF an HTTPS site. The same flaw exists in cookies themselves; a cookie set over HTTPS is sent on HTTP requests too.
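The flaw is visible if you sketch the "same site" comparison. This simplified version compares hostnames only (a real implementation would compare registrable domains via the Public Suffix List), but the point is the same: the scheme never enters the comparison.

```python
from urllib.parse import urlsplit

def same_site(url_a: str, url_b: str) -> bool:
    # Simplified: hostname-only comparison. Note the scheme is ignored,
    # which is exactly the flaw described above -- HTTP and HTTPS pages
    # on the same host count as the "same site".
    return urlsplit(url_a).hostname == urlsplit(url_b).hostname

print(same_site("https://example.com/login", "http://example.com/evil"))  # True
print(same_site("https://example.com/", "https://attacker.net/"))         # False
```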