> Blocking pop-up ads in the original Firefox release was the right move in 2004, because it didn’t just make Firefox users happier, it gave the advertising platforms of the time a reason to care about their users’ experience.
Wow, I had completely forgotten about what a scourge pop-ups used to be, and what a relief it was to finally be free of them. The fact that what used to be such a prevalent scummy tactic could be completely abandoned due to pushback from browser manufacturers gives me a tiny bit of hope that maybe pervasive tracking isn't an irreparably permanent feature of the web after all.
The real scourge at the time was actually pop-unders: you could never figure out where they came from, and you might only see them hours after visiting the site that produced them.
It's honestly an improvement that sites now have to obscure their own content (cutting off their nose to spite their face) and nothing else in order to give you intrusive ads. You'll always know which websites don't value their own content that way.
Eh, it's kind of different when they're literally exploiting something in new versions of browsers that are actively updated and are going to get fixed.
Back then it was just supported behaviour and the dominant browser went several years without a major update. Like night and day.
Chrome provides many content settings that tend to toggle between "disabled, ask, allow". It should be possible to provide an option for spawning windows even when originating from a user event (although there isn't one currently).
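The "disabled, ask, allow" semantics (including the stricter gesture-ignoring option the parent wishes existed) can be sketched as a tiny policy function. This is purely illustrative, not Chrome's actual code:

```javascript
// Illustrative policy sketch -- not Chrome's real implementation.
// Models how a per-site "popups" content setting might interact with
// whether window.open() was triggered by a user gesture, including the
// wished-for "disabled" option that blocks even gesture-initiated popups.
function popupDecision(setting, fromUserGesture) {
  switch (setting) {
    case "allow":
      return "open";
    case "ask":
      // Today's behavior: gesture-initiated popups slip through;
      // everything else prompts the user.
      return fromUserGesture ? "open" : "prompt";
    case "disabled":
      // The missing option: block regardless of user events.
      return "block";
    default:
      throw new Error("unknown setting: " + setting);
  }
}
```

The point of the "disabled" branch is that it never consults `fromUserGesture` at all, which is exactly the option browsers don't currently expose.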
Your https://www.youtube.com/watch?v=VcFQeimLH1c is a 4.5-hour video, the first 5 minutes of which shows a completely static video track with an audio track of a guy talking slowly.
The funny thing with these under a tiling window manager is that they turn into pop-sideways-es :D (unless you are in a stacking mode). That makes it a bit easier to kill them immediately and also makes it more obvious where they came from. Haven't come across these in a while now though.
Let me turn off Javascript more easily. And back on again easily. I usually need it gone for like 5 seconds, and then I need it back on, each of which is one key and one click in Safari and an entire journey to the moon and back in Firefox.
But it lacked fine-grained control, at least the last time I used it. With uMatrix I can tell the browser to disable something (cookies/scripts/frames/media, etc.) here but keep it enabled everywhere else, and the other way around. It gives the finest possible control, but of course requires some time to whitelist the good stuff on new pages. A minor annoyance compared to the control it offers.
I wish it allowed for even finer control, though. What if it could prevent one specific function from running? Or replace tracking functions with no-op ones? Or hook into HTTP APIs in order to let you audit, modify or interrupt communications?
I guess you could use GreaseMonkey for that, though I can't imagine how you could keep track of everything. Might be useful for smallish modifications though.
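A GreaseMonkey-style userscript can do the no-op replacement described above. A minimal sketch, where every name (`neuterTracker`, `fakePage`, `sendBeaconTo`) is a placeholder for illustration; a real script would target whatever global the site actually defines, via `window`/`unsafeWindow`:

```javascript
// Userscript-style sketch: swap a site's tracking function for a no-op
// that keeps the same call signature, so page code calling it doesn't
// crash. All names here are placeholders, not a real tracker's API.
function neuterTracker(globalObj, name) {
  const original = globalObj[name];
  globalObj[name] = function noopTracker() {
    return undefined; // swallow the call; could log it for auditing
  };
  return original; // keep a handle in case you want to restore it
}

// Stand-in for a page global; in a real userscript you would pass
// `window` (or `unsafeWindow` in GreaseMonkey).
const fakePage = { sendBeaconTo: (url) => "sent:" + url };
neuterTracker(fakePage, "sendBeaconTo");
```

Keeping the original around matters: some pages check that their tracker exists and misbehave if the property is simply deleted.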
The biggest difference between the two, in my experience, is UX. uMatrix is the best UX for a content blocker I've ever seen - very intuitive and simple.
(That’s command comma. Which I described as one key, cheating slightly, but hey they are so close to each other.)
Preferences panel comes up.
If you leave it on the Security tab, the checkbox comes right up. Uncheck it and now the site is usable without JS.
Another click on the background, or Esc, dismisses the dialog if you want, but for some things that's not even needed (hence the low click count I claimed... ymmv).
the problem with using a plugin for something like this is that as soon as the plugin becomes popular enough, a third party purchases it from the original developer and adds a bunch of telemetry. stylish is a recent high-profile example, but it happens to small, benign-looking extensions even more often.
the problem has gotten so bad that i no longer use the plugin "store". i only install plugins that are open source, and only by building the source myself... meaning i miss out on automatic updates (but at least i know my plugin isn't spyware!).
It is also difficult to see which extensions have been granted which permissions. Right now I'm sticking with known extensions, but maybe I should be compiling my own.
Literally yes, it will never be that powerful, and that's intentional. The old way was so powerful, able to do anything, because well, it basically just ran your addon in the core browser's execution context. Way too scary to give add-ons that much power.
However, I genuinely do want stuff that isn't yet implemented. For example, I want add-ons to get access to Firefox's copy of the Public Suffix List, which is what you usually wanted when your add-on treats TLDs as special. I just this evening sent a PR to PassFF, a popular password manager, to do something crude as a stopgap while we wait for access to the PSL via WebExtensions.
That sounds great, but it seems like a Hard Problem, given that lots of sites/apps use modals for necessary things like logins and shopping carts. I suppose one approach would be to inject some kind of 'close and stay closed' control into any elements that cover the page (nice for paywall gray screens, too...)
I'm not sure it's all that hard. It's never been a great idea to make core user flows reliant on modals (by all means use them, but make sure you either practice progressive enhancement or have a fallback). Sites doing so are at risk of failing their users. That said, I'd love for a solution akin to popup blocking: block the popunder but alert the user that it happened and allow them to show certain ones.
Logins are not a real purpose for modals. Just go to the login page, and redirect back to where you were. I'll give you confirmation yes/no modals, but that's about it.
I particularly hate every idiot who decided that scrolling up meant I was done and wanted your garbage top bar to obscure half the screen (including, no doubt, the stuff I was trying to read).
It’s also distracting even if it doesn’t obscure what I am looking at. Sudden unexpected movement in my peripheral vision! It’s one of the worst web design trends. The iOS Safari bar is done right, at least.
In all seriousness, it's partly that I have dumb habits. I highlight text as I read it (and that causes all sorts of idiot pop ups to share the quote on social media etc) and micro scroll up and down even when not actually trying to actively change the field of view.
I can't even justify why I do it. But I do. It's just how I read longform text on computers. On the plus side "reader view" can be a lifesaver
I use the top of the browser's viewport as a transient "bookmark." That is, I read for a while and then scroll until what I have read scrolls out of view. Now the line of text at the top of the viewport is the first line of what I have left to read.
Scrolling in most browsers is quantized to a value much greater than the line height of the text I'm reading. This means I must occasionally scroll past the part I've read and then retreat to reveal half a line or so of what I already read.
Unfortunately, all the algos that trigger new shit on a change of scroll direction disrupt my own manual adjustment algorithm. Even on a news site, a directional change in scrolling triggers some douchebag horizontal menubar that animates itself into view and obscures the next line of text I want to read. I have to scroll back down to make the douchebag menu disappear. When it disappears it reveals... a line of text which I already read!
So now my bookmarking technique is ruined because the disappearing douchebag menu is guaranteed to reveal at least a line and a half of text I already read.
I don't like this behavior because it disrupts my completely rational and reasonable behavior as a user of the browser. I am the one in charge, and if a web page tries to challenge me on that I swear to god I will copy the damn text and paste it into a contentEditable about:blank.
Then they can only sit there and dream about all the places I'm scrolling.
You're helping your eyes stay on point when reading, just like some people reading a physical book/printout will use their finger or a pen for that purpose. It's perfectly normal. Your habits are not dumb. What those sites are doing with floating headers and bullshit on-select popups, that's what's dumb.
I do it all the time too, and hate it when that triggers something. I've seen sites that also start loading the next article if I click on the right side of the text column...
Pretty good, because I'm in the habit of scrolling down approximately a windowful (using the Page Down key), then, when that scrolls a little too far (because I have low vision and consequently have the text size set unusually large, which many websites don't handle correctly), using the up arrow key to find my place again.
Where I work, the marketing department is always putting horrible newsletter popups on the web page, etc. I used to say, "That's going to detract from people signing up, not encourage it!". But they've got the data and they've shown it to me. As you say, it works.
I wondered why for a while, because as soon as I see those shenanigans, I'm gone. But then I realised: I'm not the target market for the websites we have at work (a full service travel tour company). Most of our clients are booking travel for a group of 30 people from their company 6 months or a year in advance. They are coming to our website because they want the damn newsletter. Took me a long time to understand that.
Do you have any citations for that? I see people either signing up because they don't realize they have a choice because the button to say "No" is intentionally smaller and obscured, but then they mark the newsletters as spam, or simply throw them out. I also see lots of people adding fake addresses like "screw@you.com" just to get past them. Has anyone verified that they actually do some real amount of good for the business putting them up beyond the simple metric of "we got tons of signups for our newsletter after doing it!"?
You can reduce the problem by turning off CSS animation and blocking elements with Stylus[1][2]. Also turn off all CSS and JS by default and enable resources on a site-by-site basis with umatrix[3][4]. If a page isn't readable, you can turn on Firefox reader mode to format it with one click. The system works well for me.
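For reference, the animation kill-switch can be a single global user style in Stylus. This is a deliberate sledgehammer: the universal selector is an assumption of mine, and sites that wait on animation/transition end events may misbehave:

```css
/* Global user style: disable CSS animations and transitions everywhere.
   Broad by design; some sites that listen for animationend or
   transitionend events may break. */
*, *::before, *::after {
  animation: none !important;
  transition: none !important;
}
```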
Arguably, that creates a more user-hostile internet browsing experience compared to just allowing for those in-page modal pop ups. At least with those they're few and far between and you can just close the tab and forget about the site.
I don't think "hostile" is the correct word, since it improves my browsing experience. It does more than stop modals -- it prevents 90% of the design, advertising, and tracking garbage on the Web, and I can now read websites the way I want.
If one browses with NoScript in default-deny mode, then many of those "popups over the content in the window" simply do not occur.
As well, when they do occur, going into Dev-Tools and deleting the <div> (or whichever element it happens to be) that is the actual popup is often sufficient to remove the popup and make the content viewable again.
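The same deletion can be done from the DevTools console. A sketch, where the selectors are guesses that vary site to site; inspect the page to find the real ones:

```javascript
// Sketch of the DevTools-console approach: find the element acting as
// the popup/overlay, delete it, then re-enable scrolling. The selector
// strings are guesses; inspect the page for the actual class names.
function removeOverlay(doc) {
  const overlay = doc.querySelector(".modal, .overlay, [class*='popup']");
  if (overlay) overlay.remove();
  // Sites often set overflow:hidden on <body> to freeze the page
  // behind the popup; undo that too.
  doc.body.style.overflow = "auto";
  return overlay !== null;
}
// In a browser console you would call: removeOverlay(document)
```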
uBlock Origin has a little icon that looks like an eyedropper. You can use it to pick an element you don't want to see anymore on a website. It works on mobile too.
Also reachable using the mouse: right-click on the page, select the "Block element" option near the uBlock Origin icon in the menu, then pick the element to block and "Create" the rule in the small window that opens in the lower right as soon as one picks the element.
It also supports wildcards, useful to block similar elements or elements used in numbered pages, so that one * will nuke all of them.
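A few filter-rule shapes for illustration; every hostname and class name below is a placeholder, not a real rule:

```
! Cosmetic filter: hide one element on one site
example.com##.newsletter-modal
! Entity wildcard: apply across example.com, example.org, ...
example.*##.newsletter-modal
! Attribute wildcard: hide elements whose id starts with "promo-"
example.com##div[id^="promo-"]
! Network filter with a path wildcard
||example.com/ads/*$image
```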
Now we have interstitials that behave like SPAs, so siblings are calling this a modal, but a modal is more specifically a dialog box you can't click outside of until it's dismissed.
An interstitial is the entire content until you get past it, like Forbes when you have a content blocker on.
> I had completely forgotten about what a scourge pop-ups used to be
Used to be? IME they still are, just not as bad as they used to be. If you go browse a few adult video sites (for science) you'll see pop-ups/unders are still a thing. Even on Chrome you'll see a variation of them: making the current tab the ad after the user has opened the link in a new one. Instead of WebGL, wasm, etc., I'd love to see a browser vendor focus on making a good browser again.
It's also hard to take their privacy promises seriously when they've got google analytics embedded in their blog.
> ... gives me a tiny bit of hope that maybe pervasive tracking isn't an irreparably permanent feature of the web after all
I don't think you can protect yourself from tracking unless you stop using the internet and mobile phones. Anywhere you have an account, a cookie, or simply make a TCP connection is a place that can track you. Especially if they cross-correlate with other tracking services.
I remember a prank site that you could direct people to with fake href links and it would open a couple of hundred popup windows before you could close your browser.
I’m a big fan of Mozilla, and I’m really glad to see them focusing on Firefox instead of troubled projects like FirefoxOS.
While they are not immune from poor decisions I really believe they try to do the right thing in the end. That’s more than I can say for many other companies.
FirefoxOS was our last hope for a good and accessible mobile operating system - it was supposed to give people who can't afford an iPhone something open, instead of the evil data-slurp that is Android.
Without it, billions of people in the developing world are going to have every second of their lives snooped by Google.
Agreed; even from the start I had a dim view of FirefoxOS's chances for success, but the sheer audacity of the move (in conjunction with the implicit acknowledgement of how important it is to have influence over the underlying platform, for the long-term sake of both Firefox specifically and the web in general) earned Mozilla a lot of goodwill from me. (The other audacious move from that period that won my goodwill was their investment in Rust/Servo, for which I initially had a similarly dim outlook and have been happily surprised at how they've far exceeded my expectations; goes to show that "audacious pie-in-the-sky idea" doesn't have to imply "tire-spinning boondoggle".)
Rust and Servo are indeed impressive projects. Too bad Servo is jumping the shark by moving away from building a functional web engine to doing VR experiments. :(
Servo's end goal remains to produce a browser engine, and the current focus is on high-throughput low-latency graphics performance, which happens to dovetail with Mozilla's WebVR R&D. Servo hasn't stopped focusing on producing components for integration with upstream Firefox, e.g. WebRender and Pathfinder, which are available in Firefox Nightly.
What annoyed me the most is that FirefoxOS failed because it was mismanaged and badly planned -- it was a fantastic idea, and the concept worked -- people were buying the phone, but they just picked the absolute wrong market to aim at.
I think it was a huge mistake to abandon FFOS but not enough people felt that way.
Compare how FFOS did to how Ubuntu Phone did which was essentially vaporware IIRC.
> I think it was a huge mistake to abandon FFOS but not enough people felt that way.
Sometimes you need to pick your fights and when even MS decides they don't stand a chance I guess it is time to stop bleeding money and try another approach.
I wonder if a main reason why they shut it down might not be financial but rather related to focus:
Firefox the browser was falling behind. While it was always my favourite it was totally eclipsed by Chrome for a while.
After they started focusing on Firefox again a number of great things have happened:
- Firefox is getting faster
- Firefox is getting safer
- Firefox is gaining mindshare
- Techies are starting to use and recommend Firefox again
- etc
All this puts Mozilla in position where they can do things like they now announce: they will make big improvements again, this time by squelching 3rd party tracking.
Still, they could have kept alive the low-level underpinning (what is called the "gonk" port) to let a community project go on. Mozilla's leadership chose not to do so, for reasons that were never substantiated by any data.
> Also, which other approach are you talking about?
Other approaches to furthering their mission: if getting their own phone to market is too expensive, regroup behind the main product(s) and use them to launch new approaches like we are seeing in the linked article.
Yeah, but there's a fundamental difference -- Mozilla is a non-profit. They have to make money, of course, to stay afloat, but making it seem like a binary decision between FFOS and Firefox is a mistake.
I haven't looked at their books, but surely there was enough money to fund FFOS if they just stopped their rapid expansion into every single emerging market they could find. Make one high-quality, expensive handset that nerds will buy (OnePlus did this), and keep working on the software.
IMO there are more than enough nerds out there (myself included) who would fork out $400-500 for a FFOS phone, given how different it was from other OSes.
> IMO there are more than enough nerds out there (myself included) who would fork out $400-500 for a FFOS phone, given how different it was from other OSes.
Very good point.
But you really need to nail the marketing on such a thing:
- you really want people to buy it to support mozilla and FFOS
- but you don't want to look desperate
- you want people to talk about it
- but there are a number of reviews and articles you don't want to be written. ('FFOS phone arrives and is already outdated', 'Too late, too little from Mozilla')
- etc
It is still early in the morning and I'm in a hurry, so I cannot name any, but I have a strong hunch that this has happened to comparable initiatives in the past few years.
You're right -- but I think initially you could drop a lot of those requirements and just market to the diehard F/OSSers out there. It's exactly what Librem is doing, and Ubuntu Phone, and all those other things -- the mainstream will follow once you're established in some niche, especially when the blowback from tracking on all the other platforms (well less so iOS) is so prevalent.
I know it's naive to think so, but fuck marketing posturing: just make a good thing, in a strategic market, and stick to it. They literally did the hard work of making the platform and getting big apps to add compatibility (LINE, a huge messaging app here in Japan, had a FirefoxOS app), which was also easy for them... Then you just throw up your hands because of rough waters in literally the hardest arena you could have gone into (the low-margin arena)... Also, people in other developing countries were starting to use the phone, and it is way easier to develop for.
They really let go of something that could have changed the game. I see how their other products have benefitted, but it really doesn't seem like they lacked the money to do it; it seems like they lacked the money to do it the stupid way they were trying to do it.
There's the Firefox team, and the Thunderbird team. I know Mozilla does a lot of other shit, but maybe stop doing that other shit if you want to be an alternative to Google/Microsoft/Amazon-level players; a mobile OS is strategic. Maybe stop trying to get clicks with IoT shit (the gateway is cool though, so props) and just hunker down? They don't have a board in the traditional for-profit-company sense, so I dunno wtf.
I can't remember where I read (assuming I did) that Mozilla's C-level team suffers a lot of turnover because people just come in, do whatever they want with Mozilla's direction, and then leave for some for-profit company.
Sorry, this is more of a rant, but I dunno, I just really feel like Mozilla screwed the pooch. I literally flew to another country to try and buy the highest-spec FFOS phone I could find (LG's FF zero phone, I believe), and bought multiple because I didn't want one to die eventually. I can't be the only one who felt that way.
Librem5 has iPhone class pricing. It's no alternative to FirefoxOS. I earn decently, have mid-range need for security, and I won't buy iPhone nor Librem5.
Okay. Well, Purism's OS (PureOS, a modified Debian of course) is already free. It's definitely an alternative to FirefoxOS. The Librem 5 and iPhone are hardware, so you are correct that they are not software alternatives. The Librem 5 is priced at $600 probably because they've never built a phone before. It could be cheaper once it takes off.
You can already run PureOS on other machines (laptops, desktops, etc.). The mobile version will essentially be PureOS with a GNOME mobile UI. I'm sorry to read that you don't think an open device that will have its drivers available, and will probably work with Android apps, is worth the cost.
Cheap android devices don't have free software drivers, and I thought some connection between those drivers and Google is what holds back a free Android.
Yeah, software is always going to be a challenge early on (whether it’s Microsoft or Librem). However much of the web is available already. I think as long as devs are willing to throw their html 5 apps into something then it will be not too shabby.
There's a few alternatives to the Android/iOS ecosystem, notably Librem 5, KDE Mobile, Ubuntu Touch, microG (e.g. with LineageOS), and last but not least Sailfish.
"As part of the investment, KaiOS will be working on integrating Google services like search, maps, YouTube and its voice assistant into more KaiOS devices, after initially announcing Google apps for KaiOS-powered Nokia phones earlier this year."
Google is everywhere. What if you don't want it on your KaiOS-powered Nokia phone and prefer open source alternatives? If I want Google, I'll get Android with OpenGapps. It has terrific support for Google's products.
There's WebOS and Tizen. I haven't used either, but heard Tizen's codebase is abysmal. WebOS is by LG, Tizen is by Samsung.
MeeGo was a combination of Intel's Moblin and Nokia's Maemo. I'm not sure what became of Intel's efforts after MeeGo. Sailfish is the successor of Maemo/MeeGo, whereas Mer is an open source mobile Linux OS which Sailfish uses as a base. Both utilise libhybris [1] for the Android compatibility layer. Backwards compatibility = important; no apps / ecosystem = no users, and that curve is very steep.
Another interesting effort I saw the other day is actually from Google. An effort to easily build an app which is easily ported to Android and iOS. That might directly benefit Android and Google most, but indirectly it could benefit libhybris users. I'm unsure how good Sailfish 3's Android compatibility is these days. It used to be Android 4.4 compatibility for a long time.
I think a hard fork of Android might have been a better approach. The AOSP has a lot of good, it's just controlled by Google and is more of a source dumping ground and less of an open source project.
They might have even been able to work with Amazon, who has a vested interest in Android a la FireOS, and possibly forced Google to make AOSP a truly open project.
I don't think Android is a good starting point for an operating system. And the big elephant in the room doesn't go away. If most Androids are Google Androids, and developers don't want to write their app three times, they're going to write it for iOS and Google Android, and not actually-open-Android. So either you aggressively do not maintain compatibility, to which developers still develop for Google's platform instead of yours, or you're stuck trying to maintain compatibility so devs release apps on your platform, which means you're still beholden to do whatever Google wants.
You might as well start clean with an OS designed for the modern era, Android's over a decade old now, and it shows. You're stuck with the app gap either way.
I think you are being a touch dramatic; I am perfectly happy with my Google-free LineageOS setup. Though I agree with what you are saying, and the developing world is not really using LOS, I still think there is more hope now than then.
I run the same thing and have zero complaints. The only app I've found that actually requires Google services to start is YouTube.
A lot of other apps pop up a dialog saying they need Google services but then work perfectly without them. I'm not an Android dev, but it's almost like that behavior is the default even if the developer doesn't use any of Google's services?
How was FirefoxOS better than Android ?! Giving back control of the OS to carriers. Do you remember/know how hard it was for an indie dev. to publish an app that could be used by millions of people in the J2ME/BREW/RIM era ?
> Without it, billions of people in developing world are going to have every every second of their lives snooped by Google.
Still sounds like a better deal for them than no smartphones at all. Do they even care about Google's data collection? I find it to be a perfectly fair tradeoff for cheaper smartphones and free services.
It's good to see an article about Firefox enhancing privacy, rather than backdoor-installing plugins for Mr Robot, integrating the closed-source Pocket (the server is still closed source despite the Mozilla acquisition), or opting users in to sending browsing history to commercial companies like Cliqz without explicit consent.
Mozilla has done incredibly well to have Firefox survive at all against competition from Microsoft and Google, and has undoubtedly had to make some tradeoffs (such as DRM), but it's at its best when it sticks to its principles.
I'm posting this from Firefox. I think this article, on top of other recent efforts by Mozilla, is the final push to switch me from Chrome to Firefox. I trust Mozilla a lot more than I trust Google these days.
The thing that tipped it over for me was the Multi-Container Tabs[0] Mozilla extension. Aside from being a fantastic privacy and security tool, it is also wonderful for development.
My favorite hidden feature in Firefox is having multiple profiles[0],[1]. Unlike containers, profiles have separate bookmarks, extensions, etc. I have found profiles even more useful than containers for development.
[0] about:profiles (paste into URL bar in Firefox)
Try the Temporary Containers plug-in; it's even better. It will automatically remove all site cookies when you close the tab, and can be configured to open every link in its own container.
Basically every site you visit will be completely isolated - and fully cleaned up afterwards.
1. I want my history saved locally (private browsing deletes history on close).
2. Private browsing requires a new window; I want to have a private tab.
3. All tabs in a private browsing window share the same cookies, so I can still be tracked* within a session; I want a new container with each new tab.
*trivially. I'm well aware that I can still be tracked by fingerprinting, etc, and I have other add-ons to help protect against that.
In Safari each incognito tab has separate cookies but in Chrome the cookies are shared across the entire incognito session across multiple tabs and even multiple windows.
I stopped using them when I realized they can switch into a "Facebook" container when I visit Facebook, but they can't switch back out of it when I subsequently visit another site.
This was a common feature request. Has it been implemented, yet?
Seems high-risk to me: All Facebook needs to do is set up a redirector that isn't at a domain considered "Facebook", and include a tracking ID in the redirector URL, and they'll be able to try and de-anonymize your non-"Facebook" container when your tracker-laden link opens in it.
This looks similar to profile switcher / People menu in Chrome [1]. Chrome's implementation restricts the entire window to belonging to one person, but I use separate windows for this use case already anyway. I'm not sure that I see the use case for the in-between abstraction.
Can you selectively enable an add-on / extension for only one container but not others in Firefox with this extension?
I made the switch soon after the Firefox improvements (Quantum?). Chrome was becoming so slow and unstable (OSX), and although there are many sites with Firefox compatibility issues (or at least I assume it's compatibility), I haven't looked back. It kinda reminds me of the day way back in time when I first discovered Firefox and was liberated from IE.
Thank you very much Firefox/Mozilla team!
edit: One of the "compatibility" issues I always have with Firefox is sites complaining about the adblocker I'm using. The thing is, I'm not. lol
You and me both. Have you noticed that "some" apps Google has developed run like garbage in Firefox? Also, have you switched search engines as well?
Just today I noticed Google serves a different version of search to Firefox on Android when compared to Chrome. It is missing some features and harder to use. Apparently the fix is to change your useragent string so Google will give you the full version [1].
The YouTube experience is way worse for me in Firefox than in Chrome. Many times, after seeking inside a video 3 or 4 times, it just stops working. This never happens in Chrome.
I'm not a conspiracy theorist, but seeing this so often...
I don't think there is a single malicious force making google products sucky on browsers other than Chrome. However, I'm sure they do poor testing (any at all?) in Firefox, and are quick to adopt extensions to HTML/CSS that they are trying to push as standards. 50k people not caring about a thing is a powerful force.
That's not a conspiracy theory; it can be explained without assuming nefarious anti-competitive intent. Google has a front-end framework called Polymer, which it is using to push certain new web standards ("Web Components"). It's also pushing those by implementing them early in Chrome. Since other browsers, Firefox among them, have not yet fully implemented those standards, Polymer reproduces them in Javascript, which is slower than the browser supporting them.
A YouTube developer once said on HN that YouTube uses Polymer for non-technical reasons. One could imagine serving as a testbed for Polymer could be that reason, with slower performance on Firefox as an accepted downside.
I’m not the only one!! It simply keeps on spinning (remember buffering?). In this day and age, I can deduce that it’s not my internet speed. It has to be Google intentionally slowing it down. When I switch over to Chrome, it works like a charm.
Don't forget that ISPs have been known (at least in the past) to slow traffic down to certain sites. You shouldn't assume your bandwidth is necessarily routed fairly or anonymously.
As a personal story, a few years ago I was having trouble accessing youtube videos, buffering, timeouts, etc. I couldn't figure out why, tried upgrading my hardware, software, router, everything. But eventually I started suspecting my ISP[1].
I eventually tried using a proxy service to access the youtube videos. I think I routed my traffic through Iceland or something. Lo and behold, perfect video streams _through_ a proxy routing traffic from another country. I had plenty of bandwidth. My ISP was just throttling my traffic.
Try a VPN, see if your ISP is messing with your traffic.
I don't trust Google either, and they already own me. But I fear my ISP more and don't believe that Google would be intentionally slowing you down. It's probably more of a function that Google optimizes for Chrome, less that they are trying to force you to change browsers.
[1] I will refrain from naming directly, but whose name starts with a 'C' and ends with an 'omcast'
I had Centurylink and I ran into the same issues with Youtube. I changed everything I could, but the only thing that worked was using a VPN or proxy. I used a different ISP at work and whenever there were youtube issues I'd hop on that and it worked fine every time.
I know that it could be any number of issues not related to youtube or centurylink, but that's been my experience 100% of the time so far.
I use Firefox on Windows as my main browser. I haven't noticed any Youtube issues. I believe I did run into them running Firefox on Linux, but it's been a while.
Most recently for me it was Google Calendar that was crazy sluggish in FF. Another time it was search results that would drive my processor to 100 percent twice or more a minute, again if I used FF. This would happen even if they weren't active in any way.
While I wouldn't put it past Google to do something nefarious to make this happen, I have noticed they like to take super bleeding edge features and run with them before they're standard, simply because they can. Could be the result of that.
Mine do. Direct debit for those companies I trust, scheduled payments for those I don’t. Welcome to the 21st century where the coffee is great and financial automation is a thing.
> But Mozilla trusts Google more than any search engine provider. That would be the only explanation for Google being the default search engine.
The real explanation doesn't have anything to do with trust: Google pays Mozilla a lot of money to be the default search engine on Firefox. Deals like that are Mozilla's main source of revenue.
Sure it can! Google could be the only option in Firefox. But that's not the case. In any case, the vast majority of users simply do not care and will happily use whatever is given to them. The people that do care have the ability to change the defaults.
I'm not really sure why you're viewing this as the end of the world, especially since all the money that Google pays to Mozilla enables the development of Firefox in the first place.
Uh oh, I just learned that Mozilla injects Google Analytics tracking code into Firefox itself. When you go to about:addons, it sends tracking data to Google's servers.
Is there a way to get rid of that? It seems to be not blocked by a default umatrix for example.
I think we really need a browser by a more trustworthy party. Maybe Debian could make a Firefox fork that is more user friendly in terms of privacy? Is there a way to vote for this or sponsor such a development?
No, you cannot block it via umatrix or any other extension. If you read the whole discussion you will see that this only was possible in the old extension tech that Mozilla meanwhile replaced with webextension. And those can't.
They injected a non-removable external tracking system right into the browser that they market as privacy focussed.
Actually, if you read to the bottom of the discussion [0] you'll see that it was fixed, and FF respects the Do Not Track setting.
In addition, they negotiated with google special terms for their analytics. This is the description [1] and this is the resulting options they got [2].
It is not fixed. You still cannot disable the tracking via an extension like umatrix.
And no, I do not set the 'do not track' thing. Because that is one more bit of data sent out. To every website. Not just to Mozilla.
Actually more than a computer 'bit' by the way. What percentage of users use the 'do not track' setting? Let's say 1%. Voila. Setting it is worth about 7 bits of data to identify you.
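The arithmetic behind that "7 bits" estimate, taking the 1% adoption figure as an assumption rather than a measured number:

```javascript
// If a fraction p of users set a flag, observing the flag set carries
// -log2(p) bits of identifying information about the user.
const p = 0.01; // assumed: ~1% of users enable Do Not Track
const bits = -Math.log2(p);
console.log(bits.toFixed(1)); // "6.6" — roughly the "7 bits" claimed
```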
I don't mean to be rude, but if you're worried about the "do not track" setting identifying you, you honestly shouldn't be on the internet. Or you should be using it like rms does [1] (scroll to "How I use the internet").
I agree with you in that I don't like it - I was just providing some more information on the issue, from presumably the same source you made your comment from.
In terms of a Debian firefox etc, I would worry about two things:
1. You'd be fragmenting the non-corporate* browser market, weakening the good that can come of that. Mozilla are invited to the table at browser discussions, Debzilla probably won't be.
2. You're reliant on the upstream from Mozilla, so you're still needing them to be big enough to continue to generate the base software the fork is coming off.
I don't consider Mozilla to be a bad actor and in fact like them a lot (although you may feel differently) however they have done multiple anti-user actions I don't agree with (this would be one of the lesser ones).
How are firefox design choices steered? Is it just at the whim of the corporation? If not, what would be the best way to become politically active in steering design choices like these in a pro-privacy pro-user direction? It seems like there are enough people with a similar sentiment on Hacker News to provide political weight to these issues.
*yes, strictly speaking Mozilla are corporate but I would say there are appreciable differences between them and Google, Microsoft, etc
> 1. You'd be fragmenting the non-corporate* browser market, weakening the good that can come of that. Mozilla are invited to the table at browser discussions, Debzilla probably won't be.
You mean the tables where they then give in to making copyright a standard and give legitimacy to that standard by staying at that table?
Firefox has a blacklist of sites that can't be modified by extensions, specifically addons.mozilla.org and testpilot.firefox.com (which can both install extensions). It's there to protect against malicious extensions that might try to install further extensions or otherwise escalate permissions by making use of those sites' permissions.
That block is also going to keep uMatrix from being able to block anything specifically on addons.mozilla.org
It's not for malicious reasons, FF blocks extensions from fucking around with various core mozilla sites like the Addon store and the Testpilot plugins. Otherwise malicious plugins could do a lot of damage there.
Can one block in hosts? Or does it use too many hostnames/IPs?
Edit: OK, so I've played some with Wireshark. And it seems that Firefox is talking to many Google servers. So blocking google-analytics.com in hosts seems to do nothing. But then, this is a Firefox install with several extensions, so it's impossible to say who's doing what. So hey, I guess I need to check this out in a LiveCD VM.
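For what it's worth, here's what the hosts-file approach looks like, sketched against a scratch file rather than /etc/hosts itself; as the Wireshark capture suggests, this only covers hostnames you explicitly list, so other Google endpoints still get through:

```shell
# Null-route two common Google Analytics hostnames.
# (Shown against a scratch file; the real entries belong in /etc/hosts.)
HOSTS=./hosts.demo
printf '0.0.0.0 www.google-analytics.com\n' >> "$HOSTS"
printf '0.0.0.0 ssl.google-analytics.com\n' >> "$HOSTS"
# Confirm the entries were added:
grep -c 'google-analytics' "$HOSTS"
```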
You cannot anonymize data and we've known this since the Netflix data was famously deanonymized. "Anonymized data" is what people call it when they're lying or don't know enough to be trusted with privacy. The primary issue is that data that seems harmless in isolation can be powerful in aggregate, that any data unique enough to be informative to a third party can be used to triangulate an individual's identity, and that data which cannot be matched up with a user today might be in the future when more information is available or statistical tools are improved. If Firefox actually does say they "anonymize" your data, that's a pretty damning indictment on their lack of trustworthiness and diligence.
I'm sure every tracking company out there says they do only good with the data they track. If that is what it takes to make tracking ok, then why bother at all?
No I’m not assuming either way. But we know the incentives are there for tracking to be effective as opposed to ineffective. I’d argue it’s more rational to take the landscape as it is, rather than how we wish it to be.
What does this mean? Why is Google interested in our about-page behaviour? The only motive I can imagine is to understand what fraction of users they are missing and why. Why should we tolerate handing out such strategic data?
I've been blocking 3rd-party cookies for a decade now. I have never witnessed any website breaking. And I do regularly buy from online shops (ebay, amazon, lots of others) while blocking 3rd-party cookies.
I'd really want to know which websites do break, and if they do, in which fashion.
Maybe this one? Embedded comments in an iframe, and if you login, there'll be a session cookie, in the iframe. There's no tracking — each blog has its own unique sub domain with unique cookies, and no cookies on the 2nd level domain. Still, disabling 3rd party cookies in Chrome seems to break login:
The immediate thought is "sites that rely on content being discussable" through things like disqus and the like (mostly because that covers one of my websites). The third party cookies allow for off-site autologins. Of course, allowing them also lets BS like facebook in so I'd love to see browser get built-in automatic third-party blocking with per-site whitelisting of 3rd party domains (I would love to auto-clear disqus on sites I actively use, but not on sites that happen to have it slapped onto a random 1 paragraph "news" item).
They fix this by opening a new window and having you login directly to Disqus.com so the cookie can be set. Disqus login still works with third-party cookies disabled...
However, don't use Disqus because they inject vulnerable JavaScript onto the page from unvetted ad networks and inject their affiliate links into every external link on your page[1].
Whitelisting while keeping every website moderately functional seems impractical for the Mozilla team, which means the burden of choosing what to allow falls on the end user.
While that's a great approach for privacy, the usability loss would probably drive the average person away from Firefox. I think the listed approach is likely best for the average user, but I think it would be nice to have an option for a power user to turn on a whitelist-only mode. (One could argue that "install an extension" is an appropriate "option" for the power user, but as you mention, it's nicer to not need to rely on third party extensions)
> Whitelisting while keeping every website moderately functional seems impractical for the Mozilla team, which means the burden of choosing what to allow falls on the end user.
I've used NoScript for a long time, and the hardest thing is knowing what the domain is doing so I can decide what to allow. It's hard to tell the difference between opaquely named ad-networks and opaquely named media player providers.
It would be nice if someone could start compiling a database that
1. groups together the domains used by different sites and services (e.g. website.com and website-images.com) and
2. includes a brief description of their purpose or business.
So, doubleclick.com and doubleclick.net could be grouped and easily identified as an ad network, google tag manager is a tracker, etc.
I doubt such a list would take any more effort to maintain than the current ad-blocker lists.
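A minimal sketch of what an entry in such a database might look like (the shape and the `describe` helper are hypothetical; the domains are the ones named above):

```javascript
// Hypothetical shape for the domain database suggested above: each
// entry groups a service's domains and describes its purpose, so a
// NoScript/uMatrix user can make an informed allow/block decision.
const domainInfo = [
  {
    service: "DoubleClick",
    domains: ["doubleclick.com", "doubleclick.net"],
    purpose: "Ad network (Google)",
  },
  {
    service: "Google Tag Manager",
    domains: ["googletagmanager.com"],
    purpose: "Tag/tracker loader",
  },
];

// Look up what a domain is for before deciding whether to allow it:
function describe(domain) {
  const entry = domainInfo.find((e) => e.domains.includes(domain));
  return entry ? `${entry.service}: ${entry.purpose}` : "unknown";
}

console.log(describe("doubleclick.net")); // "DoubleClick: Ad network (Google)"
```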
Yeah, but for non-techies? As a fellow uMatrix user I love it, but normal people don't want to set up a whitelist for every site they visit and get broken websites without.
> I wonder how closely Mozilla analyzes the addons they offer for download? Are they as trustworthy as Firefox itself?
This has been a contentious discussion of late. Mozilla does manual (human) code review of add-ons after they're submitted to AMO and made available for install, but they don't tell you which add-ons have been reviewed by a person and which haven't. It's apparently too costly and slow a process to review every submission, so much of this is automated, for better or worse.
Sophisticated users can download them and look at the code, as .xpi files are really just zip archives, but everyone else is on their own.
I'm not entirely sure why the solution to the tracking problem isn't seen as simply scoping cookies to the domain of the top level page. I'm dubious that there's any value to cross site cookies that isn't in service of tracking, since oauth-style flows provide a better and more open mechanism for providing cross- or multi-site login mechanisms anyways.
Is there some value I'm missing? Why blacklist this? I'd rather whitelist it.
See also about:telemetry to see if anything is enabled. If enabled, they are sending a bunch of information with a unique id. Even if disabled, the telemetry is still always being gathered, just not sent.
about:addons is an iframe that includes the addons.mozilla.org website - which happens to use GA with a special contract. This page now respects the Do-Not-Track setting.
The fact that someone in the organisation actually thought "oh, let's approach Google and ask for this special agreement" shows a naïvety that really shouldn't exist in a company I'm supposedly trusting to protect my privacy.
I really hope this kind of non-sense starts changing in Mozilla soon. This post is promising, but—as the gp points out—still not without glaring irony/hypocrisy.
What's naive about it when they _got Google to agree under contract_? Let's turn that around instead: I really hope this kind of sensible demand becomes wider spread, with more people going to Google saying "we only want to use your tools if the data is anonymised and you do this under a contractual obligation that we can sue you over if you ever violate, as a partner agreement rather than a service/customer relation".
Plenty of huge sites out there could quite easily demand the same thing without affecting ABC's bottom line while making the lives of others better in incremental steps.
"Mozilla went through a year long legal discussion with GA before we would ever implement it on our websites. GA had to provide how and what they stored and we would only sign a contract with them if they allowed Mozilla to opt-out of Google using the data for mining and 3rd parties.
We now have two check boxes in our GA premium account that allows us to opt-out of additional usage of our data. Because Mozilla pushed Google so hard, those two check boxes are available to every other GA user in the world regardless if they have a premium account like we do. GA also doesn't track IPs or store PII within the tool."
--
Seriously, those bastards.
Only having internal controls and debate, sustained legal engagement, and ultimately DNT-obedience.
"GA had to provide how and what they stored and we would only sign a contract with them if they allowed Mozilla to opt-out of Google using the data for mining and 3rd parties."
(Emphasis mine.) Am I the only one who finds the use of the word provide odd? Literally it means GA had to list what and how they stored it (insinuating Mozilla blindly trusts GA to do only what it says it does). Read non-literally, the flow of the phrase makes it seem like they want to convey "GA had to prove how and what ..." without actually making that claim. If Mozilla does have proof, why don't they share the anonymization framework along with the proofs? If they don't, are we supposed to be OK with their feigned naivety?
Google is a company that have a track-record of breaking the law to contravene user-privacy. They are also Mozilla's primary competitor (albeit also a large revenue source). Please tell me how, as a company selling oneself on user-privacy, approaching such a company to negotiate a contract that ensures you can continue sending them your users' data is not naïve? Calling it naïve is kind, as the alternative is malice.
No matter what way you cut it, Mozilla is sending your data to Google's servers, and they're deciding what to do with that data. An opt-out contract doesn't change any of that.
> I really hope this kind of sensible demand becomes wider spread, with more people going to Google saying "we only want to use your tools if...
To turn that around, you're going to Google saying "your market dominance makes your tools so indispensable to our business that we would rather go through an expensive year-long legal discussion with you to negotiate better terms than consider alternative competing solutions"
Hey, don't tell me, tell Mozilla if you know of any established alternatives. Bearing in mind this is part of a multi-year funding agreement between two (not even close to competitor) companies, where the "offer our search" is the price for receiving literally hundreds of millions of dollars in funding. If Google was really Mozilla's competitor, we wouldn't even be talking about this: you don't give your competitor money to keep them afloat. Not in America.
And incredibly, Mozilla talked Google down from their normally "and we get everything your users do" conditions to only "and you make us a trivially changed default search engine, and we are under contractual obligation to anonymise the data we get through our GA channel. And offer everyone in the world the option to have that same anonymisation turned on. For free". That alone is worth one-time changing a search engine after installation. After all, you get this product for free, and you're even given every possible way for you to customize it should you not like any of the default settings, from default search engine to default skin to default webgl hardware binding settings.
So if you still think none of that was worth doing, and just seeing google.com find you search results, but clicking three times to change that is too much work, then... I don't know man. I don't think browsers are for you.
Not really, unless a fine of 0.002% of their revenue counts.
> and change their ways?
On that specific issue, after being investigated for it, it seems so. On any other handling of personal data, one can only make assumptions based on their ethical record.
It's safe to assume that in a company as large as Google it's impossible to know what happens with the data, where it's stored, where it gets commingled, and when maybe even devs are granted access for "debugging". If you give the data away, it's gone, no control. Facebook's issues taught us that.
I don't work for Mozilla, but presumably they will be at a disadvantage in identifying and thus prioritizing features that their customers want compared to competing browsers. Fewer features -> fewer users -> less revenue from Google for paying to be the default search engine, which then completes the reinforcement cycle as they will have less money to spend on features.
Now they probably don't NEED it, but with every user they lose to Chrome, they get less and less money, until the only market for them would potentially be the privacy-focused one (although I hear Chrome's got really good privacy features nowadays), which is such a small population that they wouldn't have enough customers to justify the revenue needed to match competing browsers' features. Thus, you would end up with a lackluster browser that cannot match competing browsers and whose only niche is privacy people.
Main difference is that browser makers (Mozilla, Apple, Google for non-Google ads) are becoming hostile to ads and tracking rather than trying to be neutral.
The Facebook Container plugin has features that aren't available in the Multi-Account Containers plugin, such as limiting the container to a domain so it doesn't follow you when you click a link. Facebook containers should just be a default/recommended setting for the core containers plugin rather than something that requires its own special plugin.
I really like Facebook Container. I completely forget it's there most of the time. Last time I remembered was because I saw an advert before an embedded Youtube video - I pay for Youtube so it doesn't have adverts, but of course the Container doesn't know that. The partitioning that results makes lots of sense for me.
They should expose the switch to turn off Javascript... this should be table stakes for any browser. Yes I realize it can be accessed by a series of steps that include leaving the page you’re on, but there is too much friction in that interaction flow.
The most annoying thing from the current version is that when a site doesn't work, even completely turning off your privacy plugins won't necessarily fix it. You have to remember that this is another thing that you may have to disable to use a site. Other than that, it should be fine.
I use another browser for those sites. So Firefox with all the fancy add-ons for 99% of my browsing and then Edge for those times I really can't have it failing on me (any sort of payment, that's pretty much it).
“Firefox will strip cookies and block storage access from third-party tracking content.”
This sounds ambiguous to me. Does it mean they won’t block third party cookies for NON tracking content?
We rely on third party cookies for Single Sign On auth. The question is, how will this continue to work?
Ideally, these browsers should finally allow access to client side certificates functionality so you can authenticate with websites without being tracked by the certificate’s issuer!
XAuth was a step in this direction. We need a place to store these certs or private keys. But are all major browsers even close to supporting it?
Update: Firefox supports them but it’s so clunky. Focus on letting any site install a certificate with the user’s permission, firefox!! Apple already allows web based download of configuration profiles, which is far more insecure:
This is the control I would like (and maybe someone has tried it...): for a given URL, I control which additional URLs I permit for that page. So, for example, if I type "nytimes.com" as the top level URL, I might allow "nyt.com" as well. The GET or POST for any other URLs on that page will not be sent by my browser.
I'm developing an embedded commenting system that runs in an iframe. Each blog has cookies at `blog-name.example.com` and one gets different cookies at different blogs — there's no tracking.
Still, Privacy Badger and apparently the iPhone believe the cookies are tracking cookies. Privacy Badger doesn't see the difference between unique cannot-track cookies on `per-blog-sub-domain.example.com` and tracking cookies on `example.com`.
If you have time: How will the new Firefox browser deal with such cookies? (unique per blog cookies, different on each subdomain)
Maybe I'll have to make the commenting system work completely without cookies in any case, because of iPhone and Privacy Badger.
I recently started a new job and took a fresh laptop as as good an excuse as any to switch to Firefox. It's great! It's fast, dev tools are good. I don't miss anything from chrome apart from the automatic google translate - joys of being in the EU.
Unclear from this article is how they determine what is a "tracker"? I assume there'll be a curated block list, maybe based on anonymously collected data. But that detail is key to whether this is a success or not.
Apple's Intelligent Tracking Prevention (version 2 of which will ship in Safari/iOS 12 in September[0]) uses some sort of ML-based solution to decide what is and what is not a tracker, blocks cookies from being sent to domains that haven't been visited in a first party context, and has an explicit way for the user to opt-in to cookies being sent upon interaction in an iframe (e.g. the FB "like" button). Unclear how this Mozilla version stacks up.
I'm curious to see how the blocking of slow-loading trackers will affect browsing ability. I put a pi-hole on my network and have found that some sites become extraordinarily slow to load (like more than 20x as slow as without the pi-hole), presumably because there's something being blocked that prevents the rest of the site from being loaded. Needless to say, this makes the pi-hole a win-some-lose-some proposition.
Hopefully Firefox's implementation will avoid this pitfall!
Dear Mozilla:
I hope your browser has all its internal data flows accounted for, i.e. how much of your cookies, location, keystrokes, and files goes to each site should be presented to the user. This can be complicated due to cross-site login credentials and whatnot, but it can be visualized nicely in a graph format. This would probably require engine-related changes and affect performance, but it's definitely a direction I'd like to see in the future.
What's going to happen is they will just change their code to use postMessage or similar to facilitate the tracking. It's really not hard to persist a UUID across many different TLD's using techniques other than tracking cookies.
I'm sorry but this isn't going to help motivated companies who have businesses and teams of engineers. It's just going to be some JIRA ticket that says "fix tracking for firefox users".
Shameless plug - for browsers that don't yet block trackers, I have a little social sharing buttons project that cuts out all the usual cookie and http-request garbage you usually get from social icons:
As a Web Dev I am split on a movement like this. I understand the need for users to control what data they allow companies to have on them, but if other browsers do not follow suit, what will that mean for deployment and making sites consistent across browsers? If Firefox blocks popular libraries or scripts and that ends up breaking some webpages, the user experience still suffers.
I think that's effectively already the world we live in. As a uBlock user I occasionally run into sites that are broken because they wrote code assuming that some function provided by an analytics library would exist, and it doesn't because that library got blocked. I file bug reports when I can, and I'm conscientious about handling these cases in my own code.
Writing code to handle the failure to load of third party scripts like this should really be a best practice anyway. Even if you use subresource integrity checks on all the external scripts you load, what if some analytics provider's site is down for a while? Do you want your site to still work? I do. (Obviously this does not apply to scripts that are actually necessary for the core functionality of your site, but that doesn't really apply to analytics/tracking tools for the most part.)
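That defensive pattern can be sketched like this (`safeTrack`, `analytics`, and `trackEvent` are hypothetical names, standing in for whatever third-party global your page expects):

```javascript
// Sketch: call a third-party analytics function only if its script
// actually loaded. If the library was blocked by an ad blocker or
// tracking protection (or its CDN is down), the global it defines
// simply won't exist — so check before calling, and degrade silently.
function safeTrack(globalObj, event) {
  if (globalObj.analytics && typeof globalObj.analytics.trackEvent === "function") {
    globalObj.analytics.trackEvent(event);
    return true;
  }
  return false; // analytics unavailable; core site functionality unaffected
}

// Simulated environments:
const withLib = { analytics: { trackEvent: (e) => e } };
const withoutLib = {}; // script blocked by uBlock/tracking protection

console.log(safeTrack(withLib, "pageview"));    // true
console.log(safeTrack(withoutLib, "pageview")); // false — no crash
```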
Making this the default behavior of FF will make this sort of breakage more visible to more people, it's true. If anything maybe this will encourage sites to write their code to handle failure more elegantly and I'll spend less time annoyed. One can dream.
Site owners were given freedom for a long time and they consistently chose to do the wrong things with no sign of ever stopping. Now browsers have to step in and moderate them.
You can set a list of fonts to use and the browser will keep trying until it finds one that is installed. Usually the last one in your list is serif or sans-serif that always works.
That's true, but my comment goes further than that. Webfonts require network requests if they aren't available locally. It might make sense to scan the available font list for similar fonts before initiating a download.
Couldn't a browser do that without necessarily revealing the list of fonts?
if (!isFontAvailable(font)) {
downloadFont(font);
}
It's not like a tracking script is going to try to iterate over every single existing font out in the wild for finger printing purposes. Doing so would be too easy to detect and block at the browser level. In the meanwhile, a script can get the list of fonts directly.
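A sketch of that idea, with the availability check passed in as a function: in a real browser it could be backed by the CSS Font Loading API's `document.fonts.check()`, which answers yes/no for a single font without exposing the whole installed-font list.

```javascript
// Only fetch webfonts that no acceptable local font can cover.
// isFontAvailable is a yes/no oracle (e.g. document.fonts.check in a
// browser); the font names below are hypothetical.
function planFontLoads(requested, isFontAvailable) {
  // Return only the fonts that actually need a network request.
  return requested.filter((font) => !isFontAvailable(font));
}

// Simulated local font list:
const local = new Set(["Helvetica", "Georgia"]);
const toDownload = planFontLoads(
  ["Helvetica", "Fancy Display"],
  (f) => local.has(f)
);
console.log(toDownload); // only "Fancy Display" needs a request
```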
What are the requirements for what gets blocked and what doesn't get blocked? Is there a heuristics based approach to tracking blocking or is it all still cat and mouse with block lists? I'm not up to speed on any breakthroughs in that regard.
It’s a great initiative, but I am afraid it will only change the business model of tracking providers to offer self-hosted software that will no longer be 3rd party ...
That alone doesn't allow cross-site tracking. You need a way to identify that the person on one site is the same person on another site, which is the role the third-party cookie plays. Firefox is eliminating the third-party cookie, and working to eliminate cookie-less fingerprinting techniques as well. Who hosts the information is not the concern.
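A toy simulation of that mechanism, using hypothetical site and tracker names: one cookie-jar entry, keyed by the tracker's domain, is visible from every site that embeds the tracker, which is exactly what links the visits together.

```javascript
// Toy model of a browser cookie jar, keyed by the domain that set the
// cookie. A third party embedded on many sites sets one ID cookie and
// then sees the *same* ID everywhere — unlike per-site cookies, which
// can't be correlated.
const browserCookieJar = {};

function visit(site, embeddedThirdParty) {
  // First visit anywhere: the third party sets an ID; later visits
  // (from any site) reuse it.
  if (!(embeddedThirdParty in browserCookieJar)) {
    browserCookieJar[embeddedThirdParty] = "uid-12345"; // hypothetical ID
  }
  return { site, seenAs: browserCookieJar[embeddedThirdParty] };
}

const a = visit("news.example", "tracker.example");
const b = visit("shop.example", "tracker.example");
console.log(a.seenAs === b.seenAs); // true — same ID on both sites
```

Blocking third-party cookies removes that shared jar entry, which is why self-hosting the tracker script alone doesn't restore cross-site tracking.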
Good point, that’s a great improvement. I just believe you would still be able to identify and fingerprint individuals across sites if you simply pass data through the self-hosted solution to a 3rd-party aggregator
What's worse, knowing you have no privacy, or having the illusion that you have privacy? Companies with deep pockets and skin in the advertising game can still track you. Anytime a defense is put up, they just pivot to another method of tracking. When you have an army of the world's best engineers and unlimited resources a way will be found. We need to start living with the reality that privacy is already dead. Weep if you will, but at least accept the truth that Mozilla can't protect you, and that your data IS out there somewhere.
This is a fatalist attitude that ignores things like ethics still remaining in developers, the fact that the cat-and-mouse means the defenses ARE working, and the fact that the engineers you are so awed by aren't really magicians.
Yes, there are data breaches and tracking, and it will continue. But the fight has moral and practical value, and I appreciate Mozilla for continuing it.
I agree that this privacy/ads arms race currently favors ads, but all it takes is one law (think GDPR) to dramatically reshape the conflict. Luckily that power still resides with the users.
I was going to say that they aren't financed by Google, there's just a search contract, but I found this tidbit online...
"Historically, search engine royalties have been the main revenue driver for Mozilla. Back in 2014, the last year of the Google deal, that agreement brought in $323 million of the foundation’s $330 million in total revenue."
The default search contract went to Yahoo between 2014 and 2017 then back to Google after that. Looks like they do get most of their money from Google.
Flipping "privacy.resistFingerprinting" in about:config to true should be sufficient in recent versions (also realize that this will change your default UTC prefs, some timing prefs, default window size, and a bunch of other stuff ...etc).
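If you'd rather pin that pref in a file than flip it in about:config, the standard user.js mechanism works too (place the file in your Firefox profile directory; Firefox applies it at startup — the pref name is real, the rest is the usual user.js convention):

```js
// user.js in your Firefox profile directory
user_pref("privacy.resistFingerprinting", true);
```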
Why not both? I agree with you that they should be blocked because they are wrong, but I'd also agree that they should be blocked because they are slow.
This is the wrong angle to take. Plugins are the correct solutions to these problems. A browser should implement the standards. Period. If this breaks a site users want or need, they will go to another browser. If Mozilla wants to fix things, they should be fighting for new standards. Start with a standard that says all third-party content requires a user prompt to enable, always, no whitelist, no blacklist, no way to disable the prompt. Same with all JS hardware API calls. That's how it should have been from the start. Any browser vendor that goes against the standard would be forced to admit they're enabling gaping vulnerabilities.
Maybe you're right. I looked quickly but didn't find anything. Is there no standard out there stating, for example, that JavaScript MUST NOT be allowed unrestricted access to local storage, or that the location API MUST request permission? Are these just accepted as obvious best practices?
> In the near future, Firefox will — by default — protect users by blocking tracking while also offering a clear set of controls to give our users more choice over what information they share with sites.
Sounds promising. However, having Google as the default search engine is a good enough reason to discourage one from using Firefox. Wonder if it would ever change.
And what would you suggest as alternative, given that normal users (e.g. people who've never even heard of HN and make up the majority of browser users) expect to search with Google?
As far as I recall, the Google search is part of the funding agreement (because Mozilla is still a non-profit, the money has to come from _somewhere_ and we, the users of the browser, sure aren't paying them?) with the explicit agreement that it is trivial to change the default search engine for people who don't want to search with Google.
Click on the drop-down on the left of the search field in the browser, you click "change search settings", you pick your preferred search (options for which include "duck duck go" these days) and you're done forever. That... feels like a perfectly fine way to go about offering people what they want: make what the majority wants the default, and make it trivial to change for people with different wants or needs.
What about no default? Just prompt the user after installation for which search engine they want to set as default.
HN users will switch to something they prefer; normal users wouldn't know anything other than Google exists. By making what you think the majority wants the default, you are forcing something on people who don't know any better.
Google pays Mozilla a bunch of money to be the default search engine, and Mozilla needs the money as they are a non-profit.
If somebody cares about their online privacy, they can change the default search engine very easily - I think what we should be doing is teaching the average web user what kind of tracking goes on, so that more people are willing to switch to things like Firefox/DDG.
No need to lie just to make a point. Firefox does not "compromise the privacy of their users" by using google.com as the default search engine. You're going to have to write quite the detailed explanation if you want to make that case and make a credible claim at the same time.
I don't think so. The vast majority of people outside of China use Google as their search engine, and would want to continue to do so. It'd be bad UX to force them away from this.
Within a few seconds I can easily adjust my search engine preferences in the options menu, and DuckDuckGo is listed as one of the default options.
Honest question - what is it you are considering as the alternative browser in this situation? The other big third-party (non-OS-supplied) browser is literally made by Google.
> having Google as the default search engine is a good enough reason to discourage one from using Firefox
Hm? The only browser that doesn't ship Google as the default search engine is Edge (with apologies to Lynx users). Is the alternative to suggest that people not on Windows simply shouldn't browse the web? Changing the default takes only four clicks: search box -> change search settings -> default search engine -> select from dropdown menu.
So if our SaaS service uses third-party cookies to do single sign-on, what does this mean for us? Can we apply for some kind of whitelist somewhere, or do we need to rethink how our entire product works? We aren't using third-party cookies for advertising purposes.