
From the article:

> Scripts are delayed only when added dynamically or as async. Tracking images are always delayed. This is legal according all HTML specifications and it’s assumed that well built sites will not be affected regarding functionality.

(emphasis mine)
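
For readers unfamiliar with the distinction the article is drawing, here is a rough sketch (illustrative only; the tracker URL is made up) of the three usual ways a page pulls in a third-party script. Per the quote, only the async and dynamically-added forms are candidates for the delay:

    // 1. A plain parser-blocking tag in the HTML is loaded synchronously
    //    and is NOT delayed by this change:
    //      <script src="https://tracker.example.com/t.js"></script>
    //
    // 2. The same tag with the async attribute IS a candidate for the delay:
    //      <script async src="https://tracker.example.com/t.js"></script>
    //
    // 3. A dynamically injected script, which the HTML spec treats as async
    //    by default, is also a candidate for the delay:
    const tracker = document.createElement("script");
    tracker.src = "https://tracker.example.com/t.js";
    document.head.appendChild(tracker);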




I don't know about you, but I don't browse the World Wide Implements-The-Specification-Perfectly Web.


If you aren't complaining because it violates the spec, what are you complaining about? Is any change to any detail about how a browser works necessarily bad?


I understand the complaint. Let me put it this way: how would you feel if your ISP was delaying your connections to a subset of websites for a few seconds? It wouldn't violate any specs, as far as I know. But a lot of people have expressed the sentiment that they don't want middlemen messing with websites. It's not clear to me that Firefox qualifies as an exception to the rule, especially if this becomes something other browsers adopt.


It's not even a net-neutrality-type middleman issue for me, though that is exactly what's going on here. The bigger problem for me is causing regressions for users. On top of that, they're causing regressions just because they don't like X traffic, and they're not even exclusively affecting X traffic - they're affecting unrelated traffic too. It's just an incredibly arrogant, annoying, bad thing to do to users who never requested this to begin with.


Should only affect pre-broken code. Like complaining that a compiler is doing something you didn't want with undefined behavior: I get that it's annoying, but maybe fix your code so it's not a problem?


This isn't even a broken code issue. This is a totally unnecessary functionality regression issue. Instead of just loading a page, they're waiting four seconds to load the page, because the page uses an asset on a domain they flag as a tracking domain.

This is like if the compiler generated loops with 4000ms sleeps because the app links a library the compiler thinks is annoying.

Technically the compiler never said it wouldn't add random sleeps into loops. It's totally in spec! What's the big deal?

Meanwhile, my app is slow now. Or in the case of some apps, actually broken for active use cases where it used to work fine. Which, again, is totally a regression by any QA standard.


> they're waiting four seconds to load the page

You make it sound like Firefox is just adding a wait for no reason.

The reality is that the page is asking Firefox to download dozens or hundreds of scripts [1]. Firefox needs to prioritize those loads somehow, because it generally doesn't want to open that many connections to the server in parallel. So it prioritizes the non-tracking bits over the tracking ones. If all the non-tracking bits are done loading, the trackers start loading at that point.

> This is like if the compiler generated loops with 4000ms sleeps

No, it's more like if your OS scheduler decided to prioritize some applications over others based on how much it thinks you care about them (e.g. based on whether they're showing any UI, or based on whether they're being detected as viruses by the virus scanner).

[1] For example, http://www.cnn.com/ shows 93 requests for scripts in the network panel in Firefox. If I enable tracking protection, that drops to 37 requests.

Or for another example, http://www.bbc.co.uk/news has 67 script requests and only 20 with tracking protection enabled.

Or for another example, https://www.nytimes.com/ has 150 script requests and only 40 with tracking protection enabled.


Much of this discussion is missing that the point is to _speed up_ page load/display. It is NOT like a compiler generating sleeps.

The bazillion tracking scripts loaded by pages are slowing down time to view/interaction on the page. Firefox is taking scripts that are _already_ being marked as loadable asynchronously/delayed, and delaying them until the page is otherwise loaded. That's it. It's not an arbitrary 'sleep', it's an attempt to prioritize UI responsiveness over tracking scripts.

To the extent it breaks or _slows down_ pages, that's an undesired side effect, not the goal. If it does that to a lot of pages, the feature won't be successful and will be rolled back, I bet.
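
To make the "prioritize, don't sleep" point concrete, here is a toy sketch of the idea in TypeScript (not Firefox's actual implementation; isTracker() and the URL it matches are invented for the example): tracker-flagged requests are simply queued behind everything else, with no artificial wait anywhere.

    // Toy illustration only; not Firefox's code.
    function isTracker(url: string): boolean {
      return url.includes("tracker.example.com"); // stand-in for a real blocklist
    }

    async function loadAll(urls: string[]): Promise<void> {
      const content = urls.filter((u) => !isTracker(u));
      const trackers = urls.filter((u) => isTracker(u));

      // Load the resources the user actually cares about first...
      await Promise.all(content.map((u) => fetch(u)));

      // ...then start the lower-priority tracker requests. Nothing sleeps;
      // the trackers are just scheduled after everything else.
      await Promise.all(trackers.map((u) => fetch(u)));
    }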


Your app is only slow now if you are blocking its content and/or most basic usability on the loading of external trackers - a lame yet increasingly common practice that needs to stop.

According to the article, they're only delaying these resources when loaded dynamically or async - so developers should be able to "fix" this by loading tracking scripts synchronously, which is what they are effectively doing already if this new FF behavior causes any noticeable impact.

It's hard to feel much sympathy for devs who have _explicitly_ prioritized the sending of their users' info to external parties, over their sites being baseline usable.


I would go to my package manager and install a new ISP, of course /s


You should expect it, though.


I have both written software for standards-based protocols, and used software written to standards-based protocols, so no, I would not expect it.

Not breaking backwards compatibility for existing users is the golden rule of software support. When an unfortunate pull-requester attempts to break backwards compatibility in Linus Torvalds' software, Linus has some very choice language for the practice. If they were attempting to break backwards compatibility just because they disliked some particular app or service or use case, he might even use foul language.

Fortunately, I am a very well behaved and good little HN user, so I will not repeat that language here. But imagine what Linus would have said, really loudly, with all capital letters. There. That's better.


Linux breaks backwards compatibility all the time. Just not for userland programs. But if you are expecting your kernel module to be low maintenance, you are in for a surprise...


The kernel has a clear definition of what will be backwards compatible and what never will be. In-kernel interfaces are never stable, and kernel-to-userspace interfaces are very stable, with an ABI docs directory breaking out what is and isn't stable.

https://github.com/torvalds/linux/blob/e7aa8c2eb11ba69b1b690...

If the kernel's user interface just started blocking on open for 4000 milliseconds for no apparent reason, people would not be happy. Firefox expects users to demand that app writers edit, recompile, test, and ship them a new app to prevent the block. This is <insert lots of not very nice adjectives>.


This is an interesting example, because the kernel's open() interface does block for 4,000 ms all the time: when the system is swapping heavily, or when the ext3 journal is full and some other app just called fsync().

Apps have to handle it. If they don't handle it (for example, because they access the disk while also trying to be a display compositor), then they are simply broken. It does not matter if the kernel is usually fast enough. Because sometimes it isn't.


Do you know the reason for this distinction? On Windows, kernel-mode APIs seem to stay quite stable as well... there are exceptions, mostly on the device driver side, because hardware tends to evolve (e.g. display/graphics drivers), but generic drivers (= kernel modules) generally seem to be able to rely on backwards compatibility too.


My understanding is that it's to intentionally discourage trying to keep things out of tree, where they will inevitably break in worse ways. It also makes the GPL enthusiasts happy, but I doubt that was Linus's big goal.


I see, thanks. I'd be curious as to why he feels it would "inevitably break in worse ways", seeing as how that's not really the case on other platforms.


Basically, if you're going to have drivers out of tree, the driver ABI has to be perfectly stable, which restricts internal refactoring. Otherwise things break - and this does happen elsewhere; lots of drivers for Windows XP don't work on 10.



