FWIW, Chromium devs have just responded[1] to the massive amount of feedback they got on the mailing list, like [2] and [3].
One of the main pain points raised was the lack of any way to dynamically add rules, as well as the low maximum number of rules allowed (30k). Seems they've decided to support dynamic rule addition, as well as increasing the number of rules, though probably not by orders of magnitude by the sound of it.
Hey, disclaimer: I worked on this study. Thank you for your comment.
To me this reaction from the Chromium devs misses one of the most fundamental issues. I'm not fundamentally against the declarative API because of technical limitations; I am against it because it is a strong innovation lock. The current extension ecosystem is flexible enough to allow hundreds (maybe thousands) of people to actively work on privacy-enhancing extensions (ad-blockers, anti-tracking, etc.), and the technologies, heuristics, and solutions to protect users' privacy on the Web are constantly evolving. The APIs are not used today the same way they were used two years ago.

If Chrome decides to "freeze" the blocking capabilities of the browser into a declarative API that no one but Chrome devs can improve, they will be preventing people from finding new solutions to tracking and advertising (at least from extensions). It does not matter if they replicate 100% of the capabilities of today's ad-blockers; as long as the API does not allow evolution and adaptation, it will become obsolete.

There is precedent in this matter: Safari has a similar API and it has been a huge pain for ad-blocker developers. The reason is simple: Apple and Google do not have the same strong incentives that we have to continuously improve the blocking capabilities of the user agent. My fear is that this declarative API will be an OK replacement for today's content blockers, but it will not allow the same kind of fast-paced development we benefit from today in the space of privacy extensions.
I ditched Chrome 8 years ago, first for Opera and then Firefox once Opera became Chrome. It still has memory issues[1], but overall I'm very happy with it.
[1]: it's probably not plain old memory leaks anymore, but because it uses a few long-lived content processes, pages/scripts that leak are an issue. Usually not a huge deal, though: once one of the processes starts using 2-3 GB I just kill it and refresh the affected tabs (coughSlackcough).
Would love to switch back to Firefox, but the general UX is just too terrible.
I also lack the trust that, even if they fixed the major issues (or at least allowed fixing them yourself), I would be able to rely on things working in the mid to long term.
Currently holed up on Vivaldi, where the things you expect out of the box are in fact working out of the box (vertical tabs, mouse gestures, ...).
Speaking as a user (and early developer) of Safari's content blockers, I have almost never run into an issue with them. What kind of development do you fear will be stifled by Apple and Google not having incentives to improve the blocking (which I find somewhat strange in the former case, anyways)?
* The blocking engine operated by either Safari or Chrome is a black-box and independent devs will have a harder time understanding it, tweaking it, improving it, debugging it.
* Chrome devs are playing nicely for now, gathering feedback and proposing some improvements to the APIs, but there is no guarantee this will happen again, or that they will keep investing time/energy in improving this part of the browser.
* It's harder to work with this API than with a JavaScript codebase you control.
* Chrome seems a bit better here, but for Safari the documentation is pretty poor.
* You also don't get feedback about which rules matched on a page, which makes it harder to debug or to give users nice insights (see the sketch at the end of this comment).
Those are only a few points from my personal experience, but I've discussed this multiple times with developers of other privacy-enhancing extensions/apps, and we share similar feelings.
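To make that last point concrete, here is a rough sketch of the kind of per-request feedback the current blocking webRequest API allows, assuming a Manifest V2 extension with the webRequest, webRequestBlocking and host permissions; isTracker is a hypothetical stand-in for real filter-list matching:

```typescript
// Sketch only: count and surface the requests actually blocked per tab --
// exactly the kind of feedback a purely declarative engine does not report back.
// Assumes a Manifest V2 extension with "webRequest", "webRequestBlocking"
// and "<all_urls>" permissions declared in the manifest.
const blockedPerTab = new Map<number, number>();

// Hypothetical matcher; a real blocker would evaluate its filter lists here.
function isTracker(url: string): boolean {
  return /(^|\.)tracker\.example$/.test(new URL(url).hostname);
}

chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    if (!isTracker(details.url)) {
      return {};
    }
    const count = (blockedPerTab.get(details.tabId) ?? 0) + 1;
    blockedPerTab.set(details.tabId, count);
    // Surface what was blocked on this page, e.g. as a badge count.
    chrome.browserAction.setBadgeText({ tabId: details.tabId, text: String(count) });
    return { cancel: true };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```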
> Chrome devs are playing nicely for now, gathering feedback and proposing some improvements to the APIs, but there is no guarantee this will happen again, or that they will keep investing time/energy in improving this part of the browser.
I think this is especially true. It is somewhat similar to many other Google products like Maps and Translate. They start as good free products, but as soon as they gain enough traction, the rules change.
I think once this declarative API is the standard for ad blockers in browsers, Google will start exercising its control over it for its own benefit.
This is their long game. To me, all the pushing Google did for HTTPS, certificate pinning, etc. now makes much more sense. I was wondering why they were pushing it so hard.
I mean, after they essentially blocked the ways to use a proxy to filter content, the next logical step is to restrict the API.
If you want to proxy your HTTPS traffic, you add a local CA, and Chrome does not apply certificate pinning to it. Pinning is only enforced for certs that chain back to the default CAs, specifically so that people who need to proxy can do so.
(Disclosure: I work for Google, though not on Chrome)
Sure, but then you're still at the mercy of the browser.
The API change is totally unnecessary, yet is happening despite many protests.
The stated concern was that it's a performance and privacy issue, which looks like total BS (even according to the link we are discussing).
Extensions are installed by the user, so why not let them decide what to do with their browser? If it's really a concern, I don't think anyone would object if Google educated users about which APIs a given extension uses.
> The blocking engine operated by either Safari or Chrome is a black-box and independent devs will have a harder time understanding it, tweaking it, improving it, debugging it.
I mean, both engines are open source, but yeah, I do agree that it would be nice to have this enshrined in a web standard rather than a de facto one driven by the whims of two large corporations.
> You also don't get feedback about which rules matched on a page, which makes it harder to debug or to give users nice insights.
The sites that display ads want to make sure ad blockers can't block them.
Currently both sides adapt; if the way blocking works is locked down, ad blockers will quickly become obsolete.
For example, a while ago most sites were creating pop-ups with ads. After it got bad, browsers started blocking pop-ups, initially by only displaying them when the user actively clicked something. So sites started opening a pop-up on the user's first click anywhere on the page; eventually browsers started blocking all pop-ups and just notifying the user that a pop-up was triggered, giving them the choice of whether to see it.
This solved the old pop-ups, but because of that, new pop-ups were created that use CSS to show an overlay within the browser window, covering the text. In addition, CSS layers were used to implement other attention-grabbing mechanisms, like an ad that stays in place even when you scroll, or an ad that suddenly appears in the middle of the text, etc.
This was targeted by ad blockers, which constantly adapt. Most ads are served from different domains, since the ad content is typically provided by a different company, but increasingly we see ads being served from the same domain as the rest of the website, or the website randomizing CSS class and id names, etc.
What the Chromium authors are doing instead is providing an API for ad blockers to list their rules and letting the browser do the blocking. Supposedly that's to improve performance. The problem is that it will essentially freeze ad blockers in one place: they will no longer be able to adapt, and eventually new kinds of ads will show up that ad blockers won't be able to block. And now, with other changes that Google successfully pushed, such as HTTPS everywhere, HTTP/2 and HTTP/3, it is nearly impossible to block ads through a proxy.
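For reference, this is roughly what the declarative model looks like (a sketch based on the declarativeNetRequest proposal; names and shapes may differ between versions):

```typescript
// Sketch only: under the declarative model the extension hands the browser a
// fixed rule list and never sees the requests themselves, so there is no
// per-request logic -- only pattern matching applied inside the browser.
const rules: chrome.declarativeNetRequest.Rule[] = [
  {
    id: 1,
    priority: 1,
    action: { type: chrome.declarativeNetRequest.RuleActionType.BLOCK },
    condition: {
      urlFilter: "||ads.example^", // pattern matching only, no heuristics
      resourceTypes: [
        chrome.declarativeNetRequest.ResourceType.SCRIPT,
        chrome.declarativeNetRequest.ResourceType.IMAGE,
      ],
    },
  },
];

// Rules can also ship as a static JSON ruleset in the manifest; either way the
// extension can only swap rule lists, not make per-request decisions.
chrome.declarativeNetRequest.updateDynamicRules({ addRules: rules, removeRuleIds: [] });
```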
This is why I personally prefer to stick with Firefox.
> And now, with other changes that Google successfully pushed, such as HTTPS everywhere, HTTP/2 and HTTP/3, it is nearly impossible to block ads through a proxy.
I agree with everything except this. First, HTTP/2 and HTTP/3 absolutely do not prevent blocking. If blocking proxies don't support them, then they're the ones lagging behind.
Secondly, most blocking software working at the network layer uses DNS, which still works just fine and will likely continue to work forever.
Thirdly, you can still, for the most part, MITM HTTPS connections on devices you own. You just need to install your own root SSL certificate. The only thing that would prevent this from working is HSTS preloading.
EDIT: Actually, adding your own root cert bypasses HSTS preloading.
> most blocking software working at the network layer uses DNS, which still works just fine and will likely continue to work forever.
I wouldn't be so sure. Have you heard of DNS over HTTPS? I'm using XPrivacy on Android and have noticed that applications using Android System WebView (based on Chrome) have started making requests to 8.8.8.8, 1.1.1.1 and other public DNS services. It's still possible to block domains via the hosts file, but I bet it's only a matter of time before Google decides it's "in our interest" to start using their DNS instead of the ISP's.
As for MITMing HTTPS, you are at the mercy of the browser; they are deprecating the API, and they might restrict this as well if it becomes the way to do filtering.
The difference is categorical. The current web request APIs can delay approval/rejection of any request for an indefinite amount of time to perform arbitrary computation (including IO or talking to other extensions) to decide. They can also be stateful.
This doesn't just knock the power of your ad blocker down from Type-0 to Type-3 of the Chomsky hierarchy; it also limits the inputs it can act on.
As an example, if I wanted an ad blocker that looks at the DOM or JavaScript state of a page before allowing a request to load an iframe, this would have to happen asynchronously, since it means communicating with the page context. You can't do this in a declarative style.
Or if one wanted to implement a "click to play" style tool for iframes, one would have to hold the request indefinitely until the user approves. This probably isn't a good idea for technical reasons, but at least it is a possibility with the current APIs.
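A rough sketch of what that looks like with the current APIs, assuming the WebExtensions browser.* namespace as implemented in Firefox (where a blocking listener may return a Promise); askUserOrPage is a hypothetical helper, not a real API:

```typescript
// Sketch only: an asynchronous, stateful decision that a declarative rule list
// cannot express. The request is held until the Promise resolves.
// askUserOrPage() is hypothetical -- it might message a content script to
// inspect the DOM, or show a "click to play" prompt, before deciding.
declare function askUserOrPage(details: { url: string; tabId: number }): Promise<boolean>;

browser.webRequest.onBeforeRequest.addListener(
  async (details) => {
    if (details.type !== "sub_frame") {
      return {}; // let everything that isn't an iframe load through
    }
    // Hold the request for as long as it takes to decide -- seconds or minutes.
    const allow = await askUserOrPage({ url: details.url, tabId: details.tabId });
    return { cancel: !allow };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```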
> The current web request APIs can delay approval/rejection of any request for an indefinite amount of time to perform arbitrary computation (including IO or talking to other extensions) to decide.
I believe this is the reason Safari introduced content blockers. It fits in very well with the traditional computing model on iOS of preventing unbounded, arbitrary computing where possible.
I don't see how that's a good thing. If latency is a concern, you can always inform the user what is causing slowdowns and let them decide whether the functionality is worth the cost or not, instead of taking the choice out of their hands.
And even if we suppose for a moment that it is a reasonable policy, it's still not all that relevant, since we're not talking about the Apple ecosystem here in the first place.
Safari's content blockers supported adding rules dynamically from the start.
Google tried to roll this out initially without that obvious must-have. It speaks to intent, and probably to future prospects for the API.
And, of course, a declarative API with pattern matching limits what you can do anyway. No heuristics, no behavior-based blocking, etc. You are pretty much stuck with cataloging the patterns of a ton of websites as your only approach.
Dynamic rule addition addresses a very small number of complaints with the proposal.
Beyond the ability to block requests conditionally based on arbitrary logic (not just a few pre-decided qualifications like request size), one point I want to keep coming back to is that there are actually legitimate reasons why an extension might choose to slow down requests. I use Tampermonkey scripts on a couple of social networking sites deliberately to slow them down so that I'll be less likely to impulsively refresh them.
I continue to believe that the manifest changes aren't written from the perspective of enabling creative, unseen uses of the API in the future. They're written from the perspective of, "let's decide up-front what extensions we want, and enable specifically them."
The feedback people have given on this is extremely broad, and is mostly ignored by this post. It's disappointing to see a response that at least somewhat suggests the Chrome team is dead set on shipping this, and is only willing to bend so far as it takes for them to enable the most popular adblockers that exist today. If it took that much feedback to get Chrome to even slightly tweak the design, then what possible feedback can people give going forward to make anything more significant happen?
Tampermonkey being limited is huge for us. I actually used to maintain a Chrome extension for work to automate interaction with some partner sites, but Tampermonkey is so much easier to manage, test with, and update. I'm dreading having to go back to extensions, especially since all of them will go through a review process, so there's yet another hurdle.
> I use Tampermonkey scripts on a couple of social networking sites deliberately to slow them down so that I'll be less likely to impulsively refresh them.
Could you use Chrome's simulated network throttling instead?
Do you mean through the dev tools? That would require me to leave the dev tools open while I browsed. I would also need to manually turn it on, which defeats the purpose of it being an automatic thing that interrupts an instinctual behavior.
I don't see an API anywhere that makes network throttling available to extensions, but let's assume Chrome adds one.
In that case, it still lacks granularity -- I only want to slow down some requests on some sites. One thing I've been thinking about doing if I turn this into its own extension is having it respond to your aggregate time on a site. So the more time you spend on a social site, the slower it gets, but if it's been closed for a while it starts to "recharge" and speed up again. In particular I'm thinking about that for sites like Twitter, where I don't mind checking it so much as browsing it.
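A rough sketch of that "recharge" idea, assuming a Firefox-style webRequest API where a blocking listener may return a Promise; the site list, the tick interval, and the scaling constants are all placeholders:

```typescript
// Sketch only: slow a site down in proportion to how long it has been open
// recently, and let the "budget" recharge while it is closed.
const SLOW_SITES = ["twitter.com"]; // placeholder list
const attentionMs = new Map<string, number>(); // accumulated time per site
let lastTick = Date.now();

// Every few seconds, charge open slow sites and recharge closed ones.
// Needs the "tabs" permission to read tab URLs.
setInterval(async () => {
  const elapsed = Date.now() - lastTick;
  lastTick = Date.now();
  const tabs = await browser.tabs.query({});
  for (const site of SLOW_SITES) {
    const open = tabs.some((tab) => tab.url?.includes(site));
    const current = attentionMs.get(site) ?? 0;
    attentionMs.set(site, Math.max(0, current + (open ? elapsed : -elapsed)));
  }
}, 5000);

browser.webRequest.onBeforeRequest.addListener(
  async (details) => {
    const site = SLOW_SITES.find((s) => details.url.includes(s));
    if (!site) return {};
    // Delay grows with accumulated attention, capped at a few seconds.
    const delayMs = Math.min(5000, (attentionMs.get(site) ?? 0) / 60);
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // hold the request
    return {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```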
There's a lot of interesting stuff that's possible with the current API that can't be replicated by just saying, "slow down everything across the board."
> Users need to have greater control over the data their extensions can access.
Ok, so then why
> In particular, there are currently no planned changes to the observational capabilities of webRequest
That's before you even consider that WebExtensions can run arbitrary code on the page and extract whatever information they want.
Edit: Also
> Increased Ruleset Size: We will raise the rule limit from the draft 30K value. However, an upper limit is still necessary to ensure performance for users.
But, as per the article
> All content-blockers except DuckDuckGo have sub-millisecond median decision time per request.
So that doesn't make any sense as a justification either.
I find arbitrary limits like this to be totally idiotic. The number of rules does not need to fit in a 16-bit variable. The "to ensure performance for users" line is simply patronising and attempts to divert attention away from the fact that they are trying to neuter this feature as much as they can without raising too much opposition. I should be able to filter millions of rules if I have the RAM and CPU power available.
"Seems they've decided to support dynamic rule addition"
I find it telling that it was left out in the first place. I can't think of any plausible reason to have omitted it, other than to purposely hobble adblockers.
> One of the main pain points raised was the lack of any way to dynamically add rules, as well as the low maximum number of rules allowed (30k). Seems they've decided to support dynamic rule addition, as well as increasing the number of rules, though probably not by orders of magnitude by the sound of it.
Proof is in the pudding though.
[1]: https://groups.google.com/a/chromium.org/forum/#!topic/chrom...
[2]: https://groups.google.com/a/chromium.org/forum/#!topic/chrom...
[3]: https://groups.google.com/a/chromium.org/forum/#!topic/chrom...