Changes to SameSite Cookie Behavior – A Call to Action for Web Developers (hacks.mozilla.org)
231 points by weinzierl on Aug 5, 2020 | 110 comments



I tried to inform HMRC (and GDS: id4181490) about a bug in their Gov.UK Verify authentication process for Self Assessment caused by this behaviour change.

They have no email address for reporting problems, the live chat agent wasn't technical enough to understand the issue (and blamed me), and GDS put their hands up and said it was nothing to do with them.

https://github.com/webcompat/web-bugs/issues/56216

I have no faith this'll get fixed on many sites before it breaks a much wider proportion of users' workflows.


The trick to notifying companies via live chat agents is to format your message like this:

> I have a technical message for the person or team responsible for building the XYZ website. Please ensure the following message gets delivered to them. Not acting on this message could cause the website to stop working entirely.

> [Technical details]


Have you seen a message like this work?

I'm asking because support agents sometimes receive messages from people who offer consulting services, often related to website security. Some of them are automated or not useful, but to a layperson they look similar to your template above.

I could imagine a scenario where a support agent dismisses the above message as spam.


I guess finding a manager of some sort within the org would be your best bet.


To me, that wording looks like a scam attempt or a malware threat, and most first-level techs I've worked with would reject it out of hand for that reason.


I suspect this would just get ignored as it would sound potentially like a phishing attempt. They'll have it drummed into them that there is no reason for such technical matters to be discussed in that channel and that they should politely brush it off.

Such training will likely be stepped up after the recent events at Twitter and Garmin, both of which seem to have stemmed from the phishing variety of "human factors" engineering on the part of the attackers, making this sort of message even more likely to be ignored.


I just tell them their software is broken on X browser, then wait till it gets escalated high enough for someone to care about the technical details.


Poked a GDS engineer I know, and it's been passed on to HMRC.


Thanks a bunch, I must admit this was one of the motivations behind me complaining about it here ;)


Maybe that explains why I’ve had numerous expensive problems with the HMRC site, including missing VAT filing deadlines. By trial and error I found the only way to successfully use their site is to delete all HMRC and UK gov cookies before logging in.


Weird issues like that are a big reason why I use in-private browsing all the time now.


As someone who used to work in the Civil Service I can't say I'm surprised. The range of technical competence across the different government departments is staggering. Some are really making leaps and bounds while others are bloated, technical monoliths.


I know it's not the best way, but you could try raising an issue against an active project in the same area using HMRC's github? e.g. https://github.com/hmrc/auth-client


GDS people should be fairly easy to find on twitter and github, if you were so inclined.


Good change. It's sad that Firefox can only make these sorts of changes under the cover of Safari's or Chrome's larger market share. I imagine they would instigate more such changes sooner if they had a more dominant position.


I feel like anyone who has the slightest interest in the web, or even tech in general, should use Firefox. I don't even think whether Chrome or Firefox is the superior browser is relevant in this context.

Firefox has become the only real cross-platform and non-Google-controlled alternative to Chrome/Chromium. It needs as much marketshare as possible in order to stay relevant and prevent Google from completely dominating the web.


After seeing how Google treats users of other browsers as second class (YouTube and search, to name a few), I decided to switch to Firefox before Google gains complete control over the open web.


Google might be #1 but they're there because people chose them and people can just as easily choose something else. The only company that actually has control of the web is Apple because the 1.5 billion iOS/iPadOS users have no choice in browser engines and so Apple can basically prevent any standard from progressing by refusing to ship it in Safari iOS. On Mac if they don't ship people can choose Firefox or Chrome or whatever else. No other company has that kind of power over the web, certainly not Google since at any time people can switch away from Chrome.


I disagree. While I agree with you that it is highly problematic that iOS/iPadOS users have no alternatives to WebKit in terms of what rendering engine their browser uses, I disagree with the sentiment that Apple has a level of control over the web that is remotely comparable to Google’s.

Why? For one, Chrome has much higher marketshare than Safari on mobile. While Apple has a huge marketshare in terms of revenue, Android devices are much more popular than iOS/iPadOS devices in terms of sheer numbers, and these devices predominantly run Chrome.

As such, Chrome dominates both the mobile and desktop browser market, and the only way for the consumer to work against that is, simply put, to run Firefox/Gecko on their computer and their (Android) phone, or, if you're basically anti-Google like me, WebKit on your iPhone.


No, people don't choose them because they like Chrome, people choose them because Google makes sure that their services don't work well in other browsers!

Just to point out one of the many such complaints https://youtu.be/ELCq63652ig


> No other company has that kind of power over the web, certainly not Google since at any time people can switch away from Chrome.

It seems the Android situation is not much different. Yes, you can install something else, but people won't.

It's Windows+IE all over again.


It's the only browser that gives me complete control over my session logins in containers, and it respects the URL: it never meddles with it, conceals part of it, or autocompletes it... I use Chrome only when forced to by a lack of Firefox support.


> conceals part of it

Since a couple of versions ago, the history dropdown in the URL bar (but not the URL bar itself) has been hiding the "https://", at least for me. Which is incredibly annoying when you're used to "not having an https:// prefix" implying "use http:// as the prefix", since it makes it look like every site in the history dropdown is insecure. And it's inconsistent with the URL bar itself: type "example.org" in the URL bar, and not only will it go to http://example.org, but it will also hide the http://.


* about:config
* search: browser.urlbar.trimURLs
* double-click to toggle it to false, and the non-HTTPS protocol is now displayed

Source: https://support.mozilla.org/en-US/questions/881261


Agreed. As a bonus, it really is a very good browser! It's not a charity-case at all.


People aren't going to do that, though. There aren't enough people who understand the issue and are willing to work through the problems of using Firefox.

This is a classic case of chasing after the ideal solution and ending up nowhere instead of making a compromise and actually improving things. It would be far better for everyone if Mozilla used the Chromium engine and built features on top of that like Microsoft is doing. Instead they are putting massive amounts of effort into a rendering/js engine that will always be "broken" because it's not exactly the same as Chrome.


You had me until

> It would be far better for everyone if Mozilla used the Chromium engine

No. Please no. That puts web standards in even more danger than they are now.


I have this option enabled and you would be surprised how many major web sites it breaks.

We can blame Chrome and Safari all we like, but the real blame is on web devs who had 1-2 years to fix this and just couldn't be bothered (until forced by Google or Apple).

Edit: I generally address this by contacting the site admins, showing them screen dumps of their broken site and saying they should demand their web devs fix this for free since their design goes against industry best practices (I often include a link to the Mozilla article on this).

If that doesn't help, I try again with the higher-ups, telling them their store has silently been rejecting paying customers and their IT people have refused to fix it. This usually fixes the issue quickly, only for it to come back after the next major site update.


> I have this option enabled and you would be surprised how many major web sites it breaks.

It's caused two major outages for us. Rather insidious as well, since Chrome does A/B testing and was gradually rolling this change out.

> I generally address this by contacting the site admins, showing them screen dumps of their broken site and say they should demand their web devs fix this for free since their design goes against industry best practices

That's just not how the world works. Sites today are a mishmash of different application engines, SaaS providers, libraries and frameworks. If the version I built on two years ago doesn't set the SameSite property, that's not my (or the vendor's) fault. They gotta pay me to upgrade to fix it. And that's hoping that everything I depend on has upgraded their stuff to fix it.


Requiring the whole world to change instead of just changing a choke point like three major browsers is an unreasonable ask and only postpones actual progress.

Nothing is tangible to anyone until something is broken.


While I agree that much is on the web developers, there is also management to blame for not allowing time for fixes, until stuff is truly broken and lies in pieces.


Mozilla is no longer in a position to make such a change. Unfortunately, it's very unlikely that management will care in most cases and the errors resulting from this will be blamed on FF and not on the site in question.


Chrome, Safari and Edge are also implementing SameSite...


I don't really recall them doing so back when they had real marketshare.


Firefox never had the dominant position that Chrome has now.

Chrome can get away with breaking the web because most people wouldn’t even think to switch to Firefox or IE at this point.


It was important enough around 2010 to have a real impact (30+%).

Why wouldn't people switch to FF anymore? I'd say because they prioritize their product development in the wrong way. It's really sad. Competition would be helpful for the market.


I don't really think that's it; I think people just don't believe their personal choice of browser is going to make enough of a difference to privacy and competition to actually positively affect them; they just don't care, in other words. And... I'm not sure they're wrong. Sure, more competition would be good, but it's hard to see that being enough to address the monopolistic issues browsers currently face. And on iOS you don't even really have a choice, given Apple's rules.


I have tried switching from Chrome to FF multiple times in the past 18 months. It just doesn't work. Neither for development nor for general use. It's sadly just not competitive anymore.


That I don’t understand personally since I use Firefox for both my work as a web developer and for personal use.

It works perfectly fine and if anything I more frequently encounter compatibility issues with iOS Safari.

But there are so many people who have a Chrome-only attitude that I don’t see firefox’s market share going anywhere but down in the next decade.


I like to use my Bluetooth headset (or to be more specific, the touch bar on my Mac indirectly) to control YouTube video playback. It's 2020 and Firefox still does not support it. It's really a basic feature that any other app on the Mac supports. Don't get me started on its performance in the dev console and in general. It just cannot keep up performance-wise, neither with Safari nor with Chrome. Instead, we now got a Mozilla VPN. I am not at all surprised their market share won't go up.

In fact, it's just sad. I would really love to make the switch to support Mozilla as an alternative to the Google Ads monopoly. But not at the expense of giving up basic features that I have gotten used to. Maybe I'll try again once Google has managed to completely screw up on uBlock Origin and other beloved filter extensions.


You've got to admit that's a pretty hyper-specific feature there. It shouldn't surprise you that many users haven't run into it. But a quick google finds an add-on for that: https://addons.mozilla.org/en-US/firefox/addon/media-keys/ - does that do what you want?

Stuff like this is largely a matter of habit. You're used to chrome's details - so they make sense. But examples the other way around exist too - e.g. I'd find it hard to miss the picture-in-picture feature from FF when watching youtube in chrome. 'course, there's an extension for that too ;-).


So the browser as an integrated media playback system is an edge case? Are you serious? Media control keys support for software that claims media playback as one of its main features is hyper-specific? I mean if it's so rare, then why did Google decide to implement it? And why do other FF users not run into this? I'll tell you. Because there are virtually no FF users anymore. They don't even install it.

And btw. thanks for the media keys recommendation. From the extension's description page, literally the first thing that comes up when you look at it:

1. The browser window must be active for media keys to be detected due to a Firefox limitation.

2. Only Play/Pause is supported due to a Firefox bug.

3. Pinned tabs get priority over all other tabs when pressing a media key.

4. macOS is not supported due to Firefox bug.

It's literally mentioning three FF bugs preventing you from building something useful with it on its front page. Meanwhile, that feature should have been something that doesn't require an extension on any supported platform in the first place.

None of these bugs/limitations are present in any Webkit/Chromium based browser I tested so far on macOS.

I won't even get into other unrelated "curiosities" here, like the order of DOM mutation events being wrong when debugging, but correct when the debugger is not attached in FF.


Hey, no need to twist words. I think the use of Bluetooth controls for desktop browser media playback is an edge case. On Windows at least, most people use the website directly, not via media buttons; I'm not sure I've ever seen anyone use the buttons like that, Chrome user or not. Still, it's a nice feature, sure! Shame the extension doesn't work.

As to a feature being something that should not have required an extension in the first place - I don't agree with this. It's not just that the feature may or may not be attractive to a small slice of users, it's that we're quibbling not over the functionality, but over how it's delivered. Not every project needs to increase its scope to cover every possible use-case, even if they're valuable; that's kind of the point of add-ons.

As to mutation events - https://caniuse.com/#feat=mutation-events - that's deprecated, so not exactly surprising to see stagnant debugging support. Or perhaps you meant its replacement, https://caniuse.com/#feat=mutationobserver? Is this an issue of insufficient backwards compatibility (the observers that replaced the events are around 8 years old), or an issue in the newer API?

In any case, I'm not begrudging your personal bad experience with FF - it's your experience after all. But as you say, it's a shame there isn't more competition and diversity, so I'm curious as to what the root cause is for its lack of competitiveness.


As a fun exercise, if you want to, type "media keys" in Google and look at the autosuggestions coming up :) Must be an edge case also.

I was talking about MutationObserver. I didn't have the urge to self-injure even more, so I didn't dive deeper into why FF is buggy there as well. I accepted it and moved on. By now, I am sure you will be able to find a way of arguing for FF regardless of how buggy it is, so peace out bro – let's not spin this further :)

Just one last opinion: As to the root cause for its lack of competitiveness: My opinion is, as you may have guessed, that FF's codebase is pretty rotten and their product ownership is incapable of prioritizing correctly, both tactically and strategically. But of course, I won't be able to prove that. And you won't be able to disprove it. So let's leave it there.

FF's global desktop market share is at 8.6% [1] as we speak. Safari (macOS) is about to overtake it. When looking at global browser market share overall, it's at about 4.2% [2]. Safari now has double the market share of FF.

[1] https://gs.statcounter.com/browser-market-share/desktop/worl...

[2] https://gs.statcounter.com/browser-market-share


Chrome announced this change a year ago.


A welcome change indeed. Not that it couldn't be a good mechanism, but abuse has shown that these steps are necessary. I would go even further.

But I think this can be an argument to switch browsers. Common users might not have the technical insight, but are certainly not keen on being tracked.


Needs a lot more imagination than what you see on the Mozilla discussion boards, which is mostly people just reacting to issues like a mom-and-pop store reacting to Walmart. But timelines in brick-and-mortar land are very different from how things work on the internet.

Since things move fast these days (see TikTok), I think it's a better strategy to just speed up Google's goal of turning the Internet into an exploitative cesspit.

Their main goal is to increase sewage flow and ensure all flows move through their useless cloud and devices. Every time a cell divides, if DNA has to be read from Google through Google, that would be optimal. It's all great and visionary, but with those increasing flows the stink of their creation constantly rises.

The more it stinks the more people can't ignore it.

So let's speed it up. Mozilla, get out of the way, guys. They will just use you guys, as they have, to signal their generosity or openness to learning or whatever bullshit. Why allow that?

Everyone likes to focus on how fast things can scale UP.

But they can collapse fast too. We need tools to do both.

So if Google and Facebook and Twitter want to scale up and give us 20 Trumps and Brexits this year and 200 next year and 2000 the year after, how can we help them do it faster? Thats the only way change will happen.

If that's not interesting to Mozilla, it could just sell Firefox to Samsung or whoever and go off and do something totally different, as the WhatsApp and Signal guys did.


Funniest angry comment I've read recently, thank you. If the topic weren't sad, I'd probably be laughing by now.

The problem is that many big ones, including government, and in consequence smaller fish too, won't quit the Googlenet; for the sake of short-term profit they will stay in it, no matter how much it stinks. (Oh, you got no Google shopping card? How are we going to check your identity then? No, we cannot deal with you here, we are sorry!) This might make it very difficult for people to get official service without involving big corp.


Please, browser developers, just make it easy to switch HTTPS requirements off during development.

There’s any number of circumstances in which my development machines don’t have HTTPS.


This is so annoying. Our app requires HTTPS when not in debug mode. If you run the app locally and accidentally forget to turn on debugging then it will try to redirect to HTTPS. Firefox sees this and then from then on tries to force HTTPS, even after you've restarted the app with debugging. And now you simply can't access the non-HTTPS local site and I have no idea how to make Firefox forget about the HTTPS redirect. Infuriating.


Clear it from your HSTS cache; I bet you're setting that in the app when HTTPS is hit.


It's not HSTS. I've tried multiple methods to clear localhost from Firefox HSTS cache and it's not in there. But it still forces the HTTPS redirect.


301 cache then?


With Chrome you can go to chrome://flags, find “unsafely treat insecure origins as secure”, and allowlist your dev server.
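
If you prefer the command line, the same thing can be done with a launch flag (the binary name, profile path and origin here are just examples; a separate user data dir is required for the flag to take effect):

    chrome --user-data-dir=/tmp/chrome-dev-profile --unsafely-treat-insecure-origin-as-secure=http://localhost:3000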



Maybe there just needs to be better tooling around generating certificates or the "invalid certificate" warning should be toned down.


Set up a local nginx reverse proxy and test and develop in a production-like environment. It's really not hard.
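
A minimal sketch of what that can look like, assuming a plain-HTTP dev server on port 3000 and a locally trusted certificate (e.g. generated with mkcert); paths and names are just examples:

    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate     /etc/nginx/certs/localhost.pem;
        ssl_certificate_key /etc/nginx/certs/localhost-key.pem;

        location / {
            proxy_pass http://127.0.0.1:3000;   # the plain-HTTP dev server
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }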


Here's a reminder about some of the fun complexity of this with Safari: https://www.chromium.org/updates/same-site/incompatible-clie...

It's worth noting that, as well as third-party tracking cookies, this has impacts on things like the SAML Browser POST Profile.


If I want to iframe a website that doesn't want to be iframed in my browser so I can make some quick local tool, how do I even do it now? Previously I could just nuke x-frame-options and the CSP headers (that's all accessible via extension hooks), but cookies aren't easy enough to handle that way.

Like, go ahead and try iframing linkedin.com, with x-f-o and CSP nuked it'll work on Firefox today but not Chrome. Even trying to intercept the cookie doesn't work.

I wish there were a way to just easily turn this stuff off so I could manipulate my user agent more easily.
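
For reference, the header-nuking hook I mean is roughly this (a sketch using Chrome's Manifest V2 webRequest API; it assumes the extension has "webRequest", "webRequestBlocking" and <all_urls> permissions):

    // background.js: strip the response headers that block framing, for subframe responses only
    chrome.webRequest.onHeadersReceived.addListener(
      (details) => ({
        responseHeaders: details.responseHeaders.filter(
          (h) => !['x-frame-options', 'content-security-policy'].includes(h.name.toLowerCase())
        ),
      }),
      { urls: ['<all_urls>'], types: ['sub_frame'] },
      ['blocking', 'responseHeaders', 'extraHeaders'] // newer Chrome needs 'extraHeaders' to expose these
    );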


Isn't it the case that you can easily turn it off, in your own user agent?

For example, in this case (Firefox), to turn this new behaviour on ahead of time, the article says you can go to about:config and set network.cookie.sameSite.laxByDefault and network.cookie.sameSite.noneRequiresSecure. I'm sort of hoping/assuming that after this becomes the default, you'll still be able to override it using those exact about:config properties.


You should probably also do that in a separate FF profile, which is easy enough. After all, you probably shouldn't disable security/privacy features like this for your normal browsing.


Firefox does have that, thank you, but I think the Chrome equivalents don't do the trick. I believed for no reason that would be the problem on Firefox too.


For Chrome:

chrome://flags/#same-site-by-default-cookies

chrome://flags/#cookies-without-same-site-must-be-secure

I found that at least the #same-site-by-default-cookies flag behaved as I would expect when setting it to "Disabled". The use case was to load an iframe that sets its own cookies with no SameSite value and still have that value default to "None".

We fixed this issue properly ("SameSite=None; Secure" in the cookie set in the iframe), but using the #same-site-by-default-cookies flag was a workaround for a little while.
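
Concretely, the proper fix just means the iframe's responses set the cookie along these lines (name and value are placeholders):

    Set-Cookie: widget_session=abc123; SameSite=None; Secure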

What was a bit strange was that the default behavior on Chrome was different for users even on the same Chrome version. They seemed to be rolling it out in phases. More info on that here: https://www.chromium.org/updates/same-site


We figured it out. They moved some of CSP/X-F-O to `extraHeaders` in a recent release and it was being rolled out slowly across the world, so we were on an early batch of rollouts. Works now.


Hmm, I did toggle those flags over to no avail. Perhaps linkedin.com is using additional protections that don't show up. It _is_ a 200, but the Chrome page just shows the broken icon with "www.linkedin.com refused to connect." even with x-f-o suppressed, CSP suppressed, and those flags tripped. All on 84.0.4147.105 on Linux/Mac.

Ah well, I guess I take the L on this.


In general: Use a custom proxy instead of using your browser as a proxy, in essence.

I'm curious too - what's the use case for this? Why would you want to iframe linkedin?


It was a trivial half-hour project that I sunk-cost-fallacied into a couple of hours. I wanted to embed LinkedIn into our recruiting lead qualification tool so that you don't have to open a new tab (it's right on the page in a two-pane display). It was enough of an improvement that people could just start their day, qualify some 10 leads, and then bang on with the day. We had a chap build it and it worked really well actually till SameSite became required on Chrome. Opening in a new tab was frustrating enough that people would feel like maybe doing a couple and then push it to the rest of the day and it would fall by the wayside against their "real job", so to speak. Besides, they'd be doing it feeling like an obligation which I don't think puts one in the right frame of mind.


Yeah, so I think this is pretty much the kind of scenario the feature is designed to break: an essentially stock browser navigating to some site that can then wrap a trusted site and potentially be a stepping stone for phishing or other nastiness. Obviously, if you have detailed control over the client, everything goes! But for fairly standard browsers, you don't want to allow this kind of nesting; at least not when the two sites aren't cooperating (and clearly LinkedIn isn't).

It's possible there's some addon api to work around this, otherwise you'll need to use a proxy - which isn't too hard, really. Essentially: you need to have admin level control over the browser to allow this, or some other proxy; the browser won't allow a random website to see another site's cookies, even indirectly - which is by design.

So in your case: since this is a feature, not a bug - a trivial workaround looks unlikely.


Oh yeah I totally understand why it is what it is. But I like being able to choose a configuration combination that releases me from the rules.


There are better ways than iframing for test/dev, e.g. Fiddler or Selenium.


Many sites have been blocking iframes for a decade or more.

They’re not good tags; if you want a link, then make a link.


If you're happy using extension hooks already, you could consider a Greasemonkey userscript, allowing you to run JavaScript when the browser is already at linkedin.com.


I did exactly this in Chrome the other day. What worked was to put the iframe on a chrome-extension:// page and give the extension <all_urls> permission.


Everyone’s already moved on to using service workers to implement third-party cookies. I know because we are using it in production.

-edit-

This is how we use it:

We have two domains.

Our main site `widget.com` and `widgetusercontent.com`.

`widget.com` contains an iframe from `widgetusercontent.com`.

We have a service that runs on `widgetusercontent.com/service` that is controlled by us, but it needs to be authenticated with a temporary credential (doesn't matter if it gets leaked) and cannot run on the same domain as `widget.com` since it also contains user generated content.

We used to embed the URL `widgetusercontent.com/service?auth=$AUTH_COOKIE` on `widget.com` and have `widgetusercontent.com/service` set the cookie, but this no longer works because it is a third-party cookie, and it has never worked on Safari.

The solution is to load `widgetusercontent.com/service` as a blank page containing only a service worker. We ask this page to load `widgetusercontent.com/service/sw.js?auth=$AUTH_COOKIE`. The service we control returns a service worker with the auth cookie embedded in it, and is set to rewrite every request under `widgetusercontent.com/service/*` and inject the $AUTH_COOKIE into the request header.
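
A rough sketch of the two pieces (names, header and helper here are illustrative, not our real code):

    // blank page served at widgetusercontent.com/service:
    // register the worker, passing the temporary credential in the query string
    const AUTH_COOKIE = getTemporaryCredential(); // hypothetical helper
    navigator.serviceWorker.register(
      '/service/sw.js?auth=' + encodeURIComponent(AUTH_COOKIE),
      { scope: '/service/' }
    );

    // /service/sw.js as generated by the server, with the credential templated in
    const AUTH_TOKEN = '%%AUTH_TOKEN%%'; // substituted server-side when sw.js is served

    self.addEventListener('fetch', (event) => {
      const url = new URL(event.request.url);
      if (url.origin === self.location.origin && url.pathname.startsWith('/service/')) {
        // re-issue the request with the credential attached as a header instead of a cookie
        const headers = new Headers(event.request.headers);
        headers.set('X-Service-Auth', AUTH_TOKEN);
        event.respondWith(fetch(new Request(event.request, { headers })));
      }
    });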

I came up with this myself but I assume others would have as well.

I don't know if this'll work for tracking and other nefarious stuff ad networks use but this is a legitimate use case for us.

Yes it works in Safari. Cookies don’t work but this does.


Interesting, thanks for the update. In my understanding most browsers now partition their cache for third-party resources like iframes, making it dependent on the combination of the embedding and embedded domains (as otherwise it would provide a trivial way to set third-party "cookies" e.g. via ETags). So if you'd load the service worker from "widgetusercontent.com" on another site, the browser would not use the cached version it has from "widget.com", so you wouldn't be able to pass the same auth cookie to your service (I haven't tested that though).

That said for your use case I think you could set a first-party cookie on "widget.com" and pass that to the iframe. That would require a script on the page though as opposed to an iframe, so maybe your service worker version is easier to implement.


> and cannot run on the same domain as `widget.com` since it also contains user generated content.

Why not host at usercontent.widget.com? You can pass credentials via a frame postMessage, you are protected by the same-origin policy, and cookies would be considered first-party.
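
A sketch of what that could look like (the message shape and handler are made up):

    // on widget.com, once the iframe has loaded
    const frame = document.getElementById('usercontent-frame');
    frame.contentWindow.postMessage(
      { type: 'auth', token: tempAuthToken },  // the short-lived credential
      'https://usercontent.widget.com'         // restrict which origin may receive it
    );

    // inside the usercontent.widget.com frame
    window.addEventListener('message', (event) => {
      if (event.origin !== 'https://widget.com') return; // only accept messages from the parent site
      if (event.data && event.data.type === 'auth') {
        handleToken(event.data.token); // hypothetical handler
      }
    });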


I don't understand why you don't just specify SameSite=None for your cookies.


Incognito mode blocks third-party cookies by default, and apparently there is a bug in Safari up to iOS 12 that treats None as Strict:

>Versions of Safari and embedded browsers on MacOS 10.14 and all browsers on iOS 12. These versions will erroneously treat cookies marked with `SameSite=None` as if they were marked `SameSite=Strict`. This bug has been fixed on newer versions of iOS and MacOS.

https://www.chromium.org/updates/same-site/incompatible-clie...


That's the root of it, right? Many app servers or WAFs inject/validate CSRF tokens on requests/responses. There may be a way to set the SameSite flag on cookies at the server level without even having to touch app code: "if SameSite isn't set then set it to None".
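
For example, if nginx (1.19.3 or newer) sits in front of the app, something along these lines rewrites the flag on proxied Set-Cookie headers without touching app code (the backend address is just an example; note it applies the flags to every matching cookie, not only those missing SameSite):

    location / {
        proxy_pass http://127.0.0.1:8080;            # the app server behind the proxy
        proxy_cookie_flags ~ secure samesite=none;   # '~' matches every cookie
    }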

I've been running into this issue in a number of projects, all involving SSO and custom in-house IdP implementations. It's an easy fix, but getting the teams together and coordinated has been the hardest part.


Please explain how service workers relate to third-party cookies?

Surely service worker HTTPS requests are bound by the exact same cookie rules as any third-party script, resource, etc.?


Service workers are a lot "stickier" than that. If you think about it, they are all about caching. A service worker activation could count as a cookie itself since it can modify future requests, and if you modify the requests in a way that emulates a cookie being set, then it is effectively a cookie.


Is this currently blocked by uBlock Origin/uMatrix?


Nothing blocks service workers except the Brave browser.


You can turn them off in browser settings; I have in Firefox :)


For completeness, the config setting in Firefox is dom.serviceWorkers.enabled (set it to false, obviously).


(By default) obviously.


True!


I block service workers

    // remove the Service Worker API entry points so pages can't register workers
    delete Object.getPrototypeOf(window.navigator).serviceWorker
    delete window.ServiceWorker
    delete window.ServiceWorkerRegistration
    delete window.ServiceWorkerContainer


Love to hear more about how this approach works :)


Does this work for you in Safari? ITP partitions service workers, but I could imagine this approach would be a workaround.


Care to elaborate how this works?


I updated the parent post.


The article recommends setting network.cookie.sameSite.noneRequiresSecure to true - this sounds like it would ignore the https only flag (on cookies that set it)?

Is samesite=lax, cookie = https only not a valid config?


They recommend that setting if you want to try out what will become the defaults when they release this. The other setting is the other half of it.

If you enable only the setting you specified, you won't be able to set SameSite=None on non-Secure cookies; only cookies marked "Secure" (and therefore HTTPS-only) can use it.

If you enable the other setting, cookies will default to Lax, regardless of "secure" status for the cookie.

If you enable both, the only way to have a SameSite=None (totally insecure) cookie is over HTTPS and by manually specifying None.
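
Concretely, with both settings flipped (cookie names/values are placeholders), and yes, SameSite=Lax plus Secure remains a perfectly valid combination:

    Set-Cookie: id=abc                          (now treated as SameSite=Lax)
    Set-Cookie: id=abc; SameSite=None           (rejected: None now requires Secure)
    Set-Cookie: id=abc; SameSite=None; Secure   (allowed: usable cross-site, HTTPS only)
    Set-Cookie: id=abc; SameSite=Lax; Secure    (still valid: HTTPS only, Lax rules apply)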


I got caught out by this when upgrading all our sites to Django 2.1, as that version sets SameSite: Lax by default.

So, I'm assuming that any Django 2.1+ site will already be compatible with this.


Is this the same change that Chrome recently introduced? Or will we have to go through a lot of changes and uncertainties again?


Seems to be the same change.

"This is an industry-wide change for browsers and is not something Mozilla is undertaking alone. Google has been rolling this change out to Chrome users since February 2020, with SameSite=Lax being the default for a certain (unpublished) percentage of all their channels (release, beta, canary)."


This must be infuriating for webdevs if some of their users report an error because they're in the percentage while the devs who have to fix things aren't in it, unable to reproduce the error.


The browser dev tools warn you about it (with links to how to fix it) and I think if you’ve got the change they also tell you which cookies have been blocked. I don’t mean that to sound holier-than-thou, even with the messages I spent a couple of hours last week debugging the exact problem you mention on an internal tool.


This is why web developers and testers should test pre-release browser versions. Better to find out that a code change in Chrome Canary or Firefox Nightly broke your website 4-8 weeks before the new browser ships than after it ships. If the breakage is a browser bug, you still have a chance that Google or Mozilla can fix the regression before it affects your users.


There's an argument to send pertinent A/B study information in a request header of some sort, for this reason. It's now no longer enough to just look at the UA.


Can confirm it is infuriating.

Even worse when it's retail customer facing and it works for all developers. This broke our chat widget and is probably the explanation for our declining chat numbers. Nobody connected the dots until some developers got included in the rollout. Now all hell broke loose because the business realized we've been serving a broken chat feature to some users for months.


> For any flows involving POST requests, you should test with and without a long delay. This is because both Firefox and Chrome implement a two-minute threshold that permits newly created cookies without the SameSite attribute to be sent on top-level, cross-site POST requests (a common login flow).

Anybody have some background on this note?


Perhaps the type of login flow they’re getting to is that of an OIDC form_post response method? An auto-posting form is returned from the identity provider, which is then submitted to the relying party.

https://openid.net/specs/oauth-v2-form-post-response-mode-1_...

At least in .NET Core I observe a cookie for the OIDC nonce (.AspNetCore.OpenIdConnect.Nonce - defends against replay attacks) and a correlation cookie (.AspNetCore.OpenIdConnect.Correlation - tracks session through the redirect handshake). Both of these are created during the login redirect sequence and not intended to live beyond it.

The correlation cookie is set to SameSite=None here

https://github.com/dotnet/aspnetcore/blob/master/src/Securit...

The nonce cookie is set to SameSite=None here

https://github.com/dotnet/aspnetcore/blob/master/src/Securit...

I’m not familiar with many other auth flows outside of OIDC/OAuth2 but I wouldn’t be surprised if other SSO-like flows have similar mechanisms


Magento2 discussion on this, including a contributed module to work around it by overriding the built in cookie management: https://github.com/magento/magento2/issues/26377


My site https://5pagesaday.com/ is showing Cookie “_ga” has “sameSite” policy set to “lax” because it is missing a “sameSite” attribute, and “sameSite=lax” is the default value for this attribute.

How do I fix this? Anyone have any clue?


That's Google Analytics. Remove it, or ask Google to fix their own stuff.


Removing it has the advantage of fixing it not only for you, but also for your visitors.


The latest Firefox also logs warnings in the console if a site cookie lacks SameSite, or has SameSite=None without Secure.


"an unacceptable amount of site breakage"

I am already seeing that happen with FF, unfortunately.



