Well while this is being seen by many, I just want to take the opportunity to reiterate what is said in the README of the project:
> Without the preset lists of filters, this extension is nothing. So if ever you really do want to contribute something, think about the people working hard to maintain the filter lists you are using, which were made available to use by all for free.
Specifically, EasyList, EasyPrivacy, Fanboy Social, Malware domains, and many more are the basis of many other blockers: Adblock Plus, AdBlock, AdGuard, BlueHell, and many others I am sure.
Maintaining these lists is a tedious amount of work, and this needs to be acknowledged -- none of the above blockers would do very well without them.
I've been using this port instead of Adblock Plus for a few weeks. I like the low overhead, but usability is bad.
- It's hard to list what filters are applied (though not impossible, if you like pain: hunt for the right icon using hovertext, find an empty page, hunt for icons again this time without hovertext; one icon will trigger something that will fill up a log, but it stalls a few seconds first and there's no progress indication).
- The big green “power” icon is the wrong metaphor. IMHO uBlock should just stop using icons.
- ABP has simple and obvious text menus, uBlock fails at making its features discoverable. This despite ABP being much more feature-complete.
- There's no way to enable and disable filters from a page.
- There's no easy way to reach uBlock preferences (though not impossible, if you click random areas of the main panel).
- There's no rule editor. ABP's rule editor makes the simple cases easy and the tricky ones possible.
- The element hiding picker is unusable. The ABP picker is well polished, but ABP also integrates with the Firefox developer tools. That simple feature makes it unnecessary to code a custom picker.
I hear what you're saying but to me the advantage of ABP has been never having to see its preferences. I've used it for many years and I can't remember ever going into the preferences for any reason.
I've said this before but being a habitual user of noscript I feel that all I really need from an adblocker is to skip those annoying youtube commercials. Everything else noscript handles pretty well. So I'll have to see if uBlock can do that for me.
You need to at least go into the preferences to select extra lists (if you want them) and to disable whitelisted sites (also, if you want to do that, since ABP enables "some" ads by default).
(I'd probably enable the whitelisted stuff... IF I wasn't more concerned about adware than anything else over ad-networks)
I decided to dump extension-based privacy stuff and installed Privoxy instead.
I used to have Ghostery, Disconnect, ABP, and some other things running simultaneously. When you think about it, that's a lot of JS running and iterating the DOM multiple times every time you load a page.
Now there's a single, purpose-built, standalone process written in C doing it. Not going back.
This doesn't work with the push to HTTPS.
Either you use proxy CONNECT, in which case Privoxy is no better than /etc/hosts blocking, or you rely on the proxy's poorer and slower implementation of TLS. Certificate verification will suffer (no certificate pinning, no certificate blacklist, no OCSP stapling, no way to verify incomplete chains), and you lose SPDY and the sunsetting of bad ciphers and protocols.
There is no reason in principle, and you can even do it for kicks.[1] The only issue here is a practical one, as the parent poster described: from an end-user perspective, browsers do SSL/TLS better than ad-blocking proxies.
Unfortunately HTTPS makes proxy-based privacy stuff unusable without invalidating your SSL/TLS certificate in the browser. Extensions are the only reliable method AFAIK for browsing ad-free, securely. That is, if you consider the extension secure.
You could move all SSL validation off to your proxy, and generate certificates from your own CA on the fly. Then within your browser or OS you can remove everything except your own CA from the trust store.
That would open up sniffing on your local machine/network would it not? I see the benefit of blocking malicious domains at a proxy or hosts file level. But aren't you sacrificing a lot of usability by blocking at a network level?
I don't recall any problems, but I use Foxyproxy to connect to it. In the worst case, you can just toggle it off quickly for that site, then turn it on again.
I've used Privoxy & friends (Glitterblocker on OS X) since the days of junkbusters.com, but with SSL on the rise this no longer works very well for me.
It really ought to be possible to have a dynamic browser hook in Firefox's parsing phase to allow pre-manipulation of the DOM (or raw HTML) in native code. However actually finding the right spot in the giant codebase to do this is another matter.
What about when you're doing development work on a site that uses content you'd normally block (Analytics, ads, etc)? When I spend a day working I don't want to have to toggle Privoxy every time I want to search for something.
You really probably want to use a separate browser profile with no plugins installed. It really sucks when you're trying to debug a problem and it turns out it's some combination of extensions you have installed causing the problem.
Unfortunately, Disconnect (new version is $5/month for any protection) is not as good as Ghostery and the sketchy part of it is Ghostrank which is easily disabled.
Your answer was unrelated to my question... mdellabitta stated he was running Disconnect with ABP and Ghostery all together which is redundant and I was asking why he would be doing that.
I've found it effective to just watch Skype with Fiddler, add whatever ad servers it touches to my hosts file, and redirect them to 127.0.0.1. Entries like these work for me:
#BEGIN: Skype ad servers
127.0.0.1 rad.msn.com
127.0.0.1 a.rad.msn.com
127.0.0.1 secure.adnxs.com
127.0.0.1 m.adnxs.com
127.0.0.1 ads1.msads.net
#END: Skype ad servers
I tried this out for a couple of weeks after the "Adblock makes things slower" article came out, and I found it was blocking more stuff - but not necessarily the right stuff. I was finding that sites were breaking, stuff was disappearing, and it was because of uBlock. I think I was trying to log into Medium and the Twitter and Facebook buttons had been hidden, literally breaking the functionality of the site. That's not what I want from an ad blocker.
Well, as the README says, it's not an "adblocker" but a generic blocker.
Hiding Twitter and Facebook log-in functionality on Medium is a feature not a bug for me. By including those javascripts, both companies can build up a history of the sites I visit.
So I gather this is more of a privacy+ad blocker rather than strictly ad blocker.
Unfortunately, even with EasyPrivacy and "Fanboy's Social Blocking", you will find that blockers (ABP, AdBlock, uBlock, etc.) do not block 100% of connections to Facebook, Twitter, and whatnot.[1]
This is one of the reasons I see dynamic filtering as a key feature: users have the last word, not the filter lists.[2]
For example, I currently block all Facebook, Twitter, Disqus, and any of similarly ubiquitous domains by default using dynamic filtering, so that I have now 100% certainty that no connections to these domains occur on any page, while such certainty is not possible when relying solely on the filter lists.
Oh, it's you! Great. I'm confused. I've been using HTTPSwitchboard now for the last several months and love it. Is µBlock the successor to that, or are they for different things?
edit: Ah, I followed the link to µMatrix and I see now, I think. HTTPSB became µMatrix, and µBlock is the "easy" version of µMatrix. So I guess I should upgrade my HTTPSB to µMatrix, then. Is that right? Thanks for all your work on these tools!
uBlock will do it for you while keeping the point-and-click ability to un-block on a per-site basis. For example, a site which breaks if it can't connect to Facebook.[1]
This is what I like about Ghostery. It will pop up a list of blocked sites in the corner each time you navigate, and from this list you can whitelist on a site-by-site basis.
Here's a good guideline: uBlock is not yet the right blocker for people that don't want to configure anything. It's not clear to me that it is even trying to be that thing, but it's clear enough that it isn't there yet.
Can't find the article now, but it was posted on HN that these social sites' share/like/login plugins track your online activity, so blocking them is a privacy feature, not a bug. Ghostery blocks these buttons by default too but allows them to be enabled in settings.
Great feature. I don't have Twitter or Facebook, and I have to add separate extensions to block those annoying tracking features on every other site.
To me, any website that relies on Facebook or Twitter to log in is broken, and I just don't use them.
I recently had one that required sending my info to xiti to display a js popup to log in with email. Too bad they lost a potential client for their service because their web page sucks and is designed with them in mind instead of the actual user.
I have been using µBlock for 3 weeks now (previously Adblock Plus). There is one irritating thing that almost makes me go back to Adblock again.
It often produces false positives.
If a developer has named one of their asset files with a social media name like "twitter/facebook/etc", it blocks the file right away. Since it does not have custom configuration options for each site, you can either disable µBlock or live with the broken site.
Not all the sites are written professionally by experts, and this becomes an issue when you are trying to purchase stuff from small online shops. (to support local sellers)
I am too busy for a pull request these days, but maybe this will be fixed soon by someone else. My point for µBlock is for now 5/10.
It's the filter lists. Use the same filter lists with any other blockers which support ABP-filter syntax, and you will get the same result.
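For reference, ABP-compatible filter syntax distinguishes network filters from cosmetic (element-hiding) filters; a couple of illustrative entries (the domain and class name here are made up):

```
! Network filter: block any request to this (hypothetical) ad server
||ads.example.com^
! Cosmetic filter: hide elements with this class, on one site only
example.com##.social-share-buttons
```

Network filters decide whether a request is made at all; cosmetic filters only hide elements after the page loads, which is why two blockers fed the same lists block the same things.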
> named one of their assets file with social media names like "twitter/facebook/etc", it blocks the file right away
I removed Fanboy's Social Blocking list from the pre-selected lists a while ago.[1]
I have to say that I had assumed -- wrongly obviously -- that all users knew that what is blocked or not depends completely on what filter lists are selected. So in your case it would have been a matter of un-selecting Fanboy's Social Blocking list.
I wonder what would happen if you defaulted to 0 rules active, with a huge mention in as many places as possible that rules needed to be configured (and perhaps a short list of recommendations).
I expect you would get more complaints, saying it was stupid and didn't work. But it would be interesting to see the outcome.
That's an issue with the filter lists you choose, not uBlock. Also uBlock does allow you to disable on a domain or subdomain, so this is easily worked around.
uBlock is a bit more of a power user blocker than AdBlock and its variants, but I find it to be far better, both in customisability of filters, and resource utilisation.
µBlock enables Fanboy's social blocking list by default. If you want your Twitter/FB buttons and so forth back, you can turn it off in the subscription options.
The problem is that it doesn't just block the buttons, it blocks assets that just happen to have those names that are unrelated to the buttons as well.
Both the Chrome and Opera versions did for me, and quite recently too: only switched on Chrome about a month ago at most, and I've only even had Opera installed for about a week.
I thought this was one of those "used it for <x> <y>s and everything is super duper" and wanted to add my "+1". Anyway, I've been using it for 1-2 months and it's been smooth sailing so far. Faster than AdBlock, with 99.9% of the coverage (the misses were usually some obscure popup ads, and they got fixed pretty fast). And no false positives for me at all.
> Since it does not have custom configuration options for each site
What's the purpose of such tool really? It might sound weird, but I was happiest with opt-in (ad) blocking. Every single blocked content was (semi)manually entered by me. I really don't care if I see ads on a site I visit once in a lifetime.
Same here. I only use it because it has lower resource consumption, but it annoys me when it blocks Disqus even though I have added their domains to the whitelist.
Embedded Disqus works fine from here. If you have a URL where there is a problem, enter an issue on Github [0] -- and enumerate which filter lists you have selected.
Usually it all comes down to the filter lists, in which case I redirect to EasyList forum [1].
Similarly, it blocks all FB, Twitter buttons across all websites and there seems to be no way to white list those across all websites. Am I missing something?
It's possible in theory. I have customized my ABP rules a ton, switching off rules I didn't want (impossible in uBlock), adding targeted rules for anything from annoying animated avatars to slow domains or bloated mastheads you can't scroll past (shit's almost as bad as modal pop-ups), and I've been unable to do any of it in uBlock because it's way, way too much hassle.
It seems like, if one were an evil web developer, one could thwart blockers like these with a few nasty tricks. Server-side: on each request of a resource, salt all linked resource URIs and encrypt (after the host name) before sending the final document to the client. The resources can still be fetched, because the server can decrypt any requests, but the client has no way of knowing whether a particular resource is an ad or not. That thwarts URI based filtering. To thwart DOM filtering, randomize element classes and ids.
Though impractical, I thought it was an evil/funny idea.
Messing with the URIs isn't going to work because most ads are from ad networks and their domains can be blocked. And no ad network is going to trust some system where they need to rely on the client server to not lie about ad impressions.
If the resource names are still "static" your resources can still be blocked. But if they are altered slightly every time, you may also thwart caching, right?
Right, which is why I added the bit about salting the URIs before encryption. The salt could be per request, per session, or rotated after some period of time (say once per day). If only rotated once per day, it would still thwart ad blocking, but have less of an impact on caching. It would turn into an economic question of whether the extra ad impressions are worth the extra bandwidth.
The idea is that the ads would be served from the original host. Combined with encrypted URIs and the ad resources become indistinguishable from content resources. Thus thwarting blocking. As others have pointed out, yes this evil trick is incompatible with existing ad networks (which use their own domains). But it wasn't my intention to put forward a viable idea. Just an interesting one (namely, ciphering URIs to prevent blocking).
Ad programs such as Google AdSense forbid modification of the ad code through their ToS. So sure, you could encrypt (and JavaScript-side decrypt) your ads, but Google would (or might) throw you out.
That would burn bandwidth and loading time, since the only way to check the hash is to download the content. Part of the point of ad blockers is to stop the request for the ad as early as possible.
Regardless, it's easy enough to salt the ads for each request, thwarting hash checks.
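For fun, here is a minimal Python sketch of that salted-URI trick. Everything in it is hypothetical (the secret, the daily rotation, the "/r/" prefix), and it uses opaque HMAC tokens with a server-side lookup table rather than reversible encryption, which achieves the same end: the client cannot tell an ad URI from a content URI.

```python
import hashlib
import hmac
import time

# Hypothetical server-side sketch: each resource path is rewritten into an
# opaque token keyed by a secret plus a salt that rotates once per day
# (per-request rotation would also defeat caching, as noted above).
SECRET = b"server-side-secret"
URI_MAP: dict = {}  # token -> original path

def daily_salt() -> bytes:
    # One salt per calendar day since the epoch.
    return str(int(time.time() // 86400)).encode()

def obfuscate(path: str) -> str:
    # Rewrite a resource path into an opaque, unguessable URI.
    token = hmac.new(SECRET + daily_salt(), path.encode(),
                     hashlib.sha256).hexdigest()
    URI_MAP[token] = path  # the server remembers the mapping
    return "/r/" + token

def resolve(uri: str) -> str:
    # On an incoming request, look the opaque token back up.
    return URI_MAP[uri.removeprefix("/r/")]
```

A blocker sees only `/r/<hex>` paths on the first-party host, so neither URI patterns nor DOM class names give it anything to match on (which is also why, as noted above, this is incompatible with third-party ad networks).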
This is awesome! I recently read this ticket[1] mentioning that porting of µBlock seems to be a starting point for gorhill to port µMatrix. In my perfect world, I would be using Firefox with nothing but µMatrix, as it is right now I'm having to use Chrome with µMatrix and sadly can't get rid of µBlock until µMatrix can hide blocked frames. Still, getting closer to that perfect world! The work done by gorhill is amazing.
Thanks for those links. While the earlier versions for Safari (like v0.8.2.0, found here https://github.com/gorhill/uBlock/issues/117) work well on Safari 5, these new versions on your Safari link do not. They crash the browser during installation.
If the author is here, please turn off the counter by default. We're blocking distracting things in our browser, we don't need another pointless distracting thing in our browser toolbar.
Counter opinion: I like the number because if there's a website working not quite right, I can easily see the number and disable uBlock to test out if that's the cause.
But you can't, though. uBlock shows the total number of blocked items since you started Firefox, not the number for a given page or site. That makes it a basically superfluous, mostly useless data point.
On Safari and Chrome, it displays the number of filters applied on the current site. This is (super) useful. If on Firefox you're getting a global count then that's a bug (so please report it on Github).
Perhaps changing the color from red to something more "off" (grey?) would help distract less.
Useful why? It's blocking stuff, that's great. I don't care how many things it blocks on a specific page. It make no difference to me while browsing. The little counter changing just looks like an annoying animated GIF in the corner of my browser.
I get that a subset of users would derive some pleasure at knowing how many things are blocked on every single website they visit or users who'd like to use it to troubleshoot a specific page that doesn't load when blocking is enabled (in which case having a quick right-click toggle to turn it on so users can use it to troubleshoot would be ideal), but I'd wager that most users do not. Users that want it on can turn it on. (cue subset of users responding that they either 1. enjoy seeing the count or 2. use it to troubleshoot the rare page that doesn't load as a result of blocking)
This is super useful feature for debugging. Happens sometimes that something goes wrong and an ad is regenerated after being blocked. So the ad blocker would block it again, thus generating an infinite loop that can take up all the memory (if the ad blocker saves any information about the blocked ads on the current page in memory) and crash your machine in less than a minute in the worst case. In the best case it would just lock up your browser window and you wouldn't know why. Knowing the number of ads blocked would quickly help debug such situations.
> You can't compare directly the figures between the browsers
Still, yes, Chromium uses more memory. A good part of this is because of its per-process tab architecture. Once Firefox gets the same per-process tab architecture, it will be easier to compare the two browsers.
Yep, Chrome / Chromium are now the #1 reason to bump my ram to 16GB. While using a laptop for coding / VMs etc. I still find my browser eating more ram most of the time - it simply doesn't handle paging out gracefully, and is horrific for my uses with < 4GB ram on any OS.
I'd appreciate details of CPU consumption as well. Minute amounts of memory are of no concern for many people, but everyone prefers snappier browsing, every time.
I was aware of the contents of the front page, having skimmed it again today (and read more carefully before, although with little interest for Chromium extensions). I'm only interested in performance characteristics in Firefox, but am unlikely to test this myself before it arrives in AMO.
One would expect any ad blocker that actually blocks ads to reduce memory consumption significantly. Ad blockers that merely hide ads exist and have succeeded in earning reputations as ad blockers despite only doing half the job.
That second bug is lacking in technical details, but the first one is clearly about the memory consumption of the CSS rules used to hide elements. If you only use rules that match URIs instead and not element ids, ABP is extremely efficient.
It's worth remembering that ABP only implemented the element-hiding feature so that things like inline text ads could be blocked, but now the popular rule lists are using a large number of broad rules to hide all kinds of elements, many of which could probably have been effectively blocked with URI-matching rules.
Additionally, it's very unfair to the ABP devs to be criticizing the performance or memory usage of ABP when the problem is really the memory usage of EasyList. Using a more restrained and targeted rule list makes the problems go away.
True, but when I uninstalled Adblock Edge, I gained almost a gigabyte of memory in Firefox. (I'm now using Ghostery to block ads. It isn't as effective but I'd rather reduce the memory footprint.)
uBlock supports this - you can right click on any element and select "Block element", which then pops up an editor allowing you to refine the CSS selector to only get the bit you want removed.
If you just want to remove elements from the page, rather than block connections to third-party servers, I recommend Stylish. You can just write custom CSS rules and make those elements display:none. Alternatively, write a user script that does the same.
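A user style along these lines would do it (the domain and selectors are made up; `@-moz-document` is the scoping syntax Stylish uses on Firefox):

```css
/* Hide a (hypothetical) ad container and social bar on one site only */
@-moz-document domain("example.com") {
  .ad-banner,
  .social-share-bar {
    display: none !important;
  }
}
```

Note the trade-off: the requests still happen, so this helps with clutter but not with tracking or bandwidth.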
AdBlock (getadblock.com), competitor to Adblock Plus, has a wizard for blocking elements. Right click on the page, AdBlock -> Block This Ad -> adjust the slider for element-level specificity.
That creates a custom filter that you can go edit if you want.
Can someone familiar with the code explain the difference between this and AdBlock/Disconnect etc.? Why is this lighter? (a little technical/algorithmic detail would be good to know)
From my experience of using this for a few weeks now on Chrome, I would recommend this over AdBlock Plus etc. I haven't done any specific performance tests and personally have no numbers to back it up. However, I do have the subjective impression of improvement.
I'm curious: how do you use it? Do you have to mass-enable stuff using the 'power' icon, for many sites? Or do you just enable the things you know you need, on a per-site basis? Or are your default just good enough?
The reason I ask is that I was curious about it, so tried it out for a couple of days. I found many sites were not usable with the default settings, e.g. Udemy's login box didn't show up until I turned off µMatrix.
I haven't used uMatrix, since I use Firefox, but I can speak for RequestPolicy, which AFAICS is basically the same thing. It's very, very rare that I end up whitelisting everything that's requested on a site. The more you use the extension, the more you learn to recognize which domains you need to allow for a site to work as intended, and this way you end up building your own whitelist: allowing requests from a domain is a one-time thing most of the time (additionally, there's an extension you can install that syncs your whitelist to your Firefox Sync account).
> the more you learn to recognize what domains you need to allow in order to have the site working as intended
Sites using cloudflare are a major issue here, specifically their cryptic subdomains. If someone were to hide an ad network behind those they would be hard to tell apart from the rest, at least for humans.
Yes this is true. If a site is not working and the only reasonable domain to allow is something from cloudflare, I resort to allowing it temporarily. Not many ways around that, I think.
On my slightly older MBP (2011) the plugin caused performance issues, mainly hanging when opening a new tab and navigating somewhere. Other extensions have had the same problems, so I might need to move to filtering ads at the network level.
Unfortunately I doubt the auto-updates will work when the add-on's .xpi URL is not stable because it includes uBlock version number. He might be able to workaround that limitation by hosting a stable .xpi URL on a github.io website.
I like it, but so far anyway, I'm not feeling discernible differences in performance. That's just me though!
I would like to see something akin to Ghostery's interface where I can see exactly what trackers/ads are hitting me at a site, and whitelist by domain.
Does anybody know if I really need the "Malware domain list" if I have Chrome's malware and phishing protection activated at the same time? Sounds like they might contain the same sites anyway.
I've never had an (Ad) blocker as I'm relatively happy with most sites but there are some that are not behaving well.
When I last looked (at least two years ago) there wasn't a single blocker that supports a blacklist-only mode. Meaning: Allow everything unless I block a certain domain.
Does anyone have an idea if something like this exists now?
I don't want to maintain any block lists myself. I want to use all the 3rd-party Filters available but I only want them to be active on certain spammy websites. As far as I can tell ublock doesn't let me do that either.
I haven't been able to do performance tests, which is supposed to be the best advantage of this extension. But at least, it seems to block ads properly on Chrome.
Has anybody made some tests outside of the developers of the extension?
Good question. It's a pity that the only response was a flag-killed irrelevancy. Only last week, I downloaded and started using Privacy Badger as an alternative to Ghostery. I'm liking it so far but my current laptop is 7 years old and only has 1GB RAM (running LXDE) so resource consumption is my main concern after safe-guarding my privacy.
Bluhell is touted as a lighter weight solution. I haven't taken measurements, but Firefox Mobile certainly feels snappier with Bluhell than with Adblock Plus/Edge.
It's a replacement for AdBlock. JavaScript is not blocked unless it's part of an ad. The Easy Privacy list is part of uBlock by default, so there's some effective overlap with Ghostery but they're still different.
Blocking at the DNS level produces a lot of requests to 127.0.0.1, which can be quite annoying, e.g. if you are using a local webserver for development. Also, if you do not have a local webserver on 127.0.0.1, the browser will still wait for answers to your requests, which can be an annoying experience. It would be interesting to measure how much CPU and memory is actually wasted with many tabs waiting for elements from 127.0.0.1.
The better approach is to not let these requests happen in the first place, right in the browser, which is what this extension does, if I understand it correctly.
A warm and big THANK YOU to the developer of this great software, it is so important and good to see that many developers are helping users to protect against the morally challenged who are stealing the privacy of millions every day.
If you choose to send the blocked requests to an IP address where you also choose to put a web server, then what do you expect? There's nothing stopping you from sending blocked requests to 127.0.0.2 and configuring your dev web server to listen on 127.0.0.1.
"the browser still will wait for answers to your request" - Shouldn't have to wait. You should get a near instant rejection if you attempt to connect to a TCP port on your local host and there's nothing listening there... Unless you screwed up your firewall.
DNS blocking works quite well. It's also very effective if you can set it up on your gateway so that it blocks ads/malware for all devices (PCs, phones, tablets ... even your guests will get adblocking for free!). However, while the /etc/hosts trick works I don't think it's the best way. The reason is that while parsing a plaintext file is easy and generally fast, it doesn't scale well especially when you have to traverse several megabytes of it for _every_ DNS request.
The way I have done it is by installing unbound[0]. If you aren't using something else like dnsmasq, you will notice a speedup from the DNS caching alone. While unbound isn't supposed to serve authoritative answers (i.e.: don't use it to manage your zone) the possibility is there. The unbound-block-hosts script[1] can be used to convert Dan Pollocks' hosts file to the appropriate unbound syntax.
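The converted entries end up looking roughly like this (sketch of unbound.conf syntax, with a made-up domain):

```
# unbound.conf fragment: answer blocked hostnames locally
# instead of resolving them upstream
server:
    local-zone: "ads.example.com" redirect
    local-data: "ads.example.com A 127.0.0.1"
```

The `redirect` zone type also catches every subdomain of the listed name, which a flat hosts file cannot do.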
To avoid the timeout from localhost, my first approach was to set up a firewall rule that rejects the request (the browser receives a connection refused message _immediately_).
Another way of proceeding is to set up nginx to serve a 1x1 transparent GIF for every request it receives. If you find nginx to be too big a dependency, there are alternatives such as pixelserv.[2]
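With nginx this does not even require a GIF file on disk, since the stock `empty_gif` module answers with a built-in 1x1 transparent GIF; a minimal sketch:

```nginx
# Catch-all server for blocked hostnames pointed at 127.0.0.1
server {
    listen 127.0.0.1:80;
    location / {
        empty_gif;  # built-in 1x1 transparent GIF, served from memory
    }
}
```

Anything the resolver redirects to 127.0.0.1 then renders as an invisible pixel instead of a connection error.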
The issue is that you can't block a specific element, but you still catch a fair share of ads.
I have explored the possibility of running a http proxy to address this, but I haven't got around to it yet. (In addition, I'll probably need to MITM myself if I want to block elements in HTTPS; still needs some thinking :)).
Am I reading right - serving data for responses on localhost is "cheaper" (for some value of cheap) than letting those requests fail? Or does it just return faster, is it faster than using firewall rules?
Why serve a gif, why not a single byte or something? Does the browser require the data to be parseable?
No, certainly not cheaper (for any value of cheap). I haven't measured but I can't see how it would be possible, as in the first case the network stack takes care of it (in kernel land) and in the other case you have to go all the way back to userland.
The reason I tried the transparent GIF trick is because some sites have frames with ads and whatnot and having the firewall refuse the connection will result in having an error message displayed in the browser. Not really aesthetically pleasing. While I don't care, because I know the reason it is displayed; some less technical people might start thinking their Internet is broken.
In addition to that, the GIF results in a cheap form of "element hiding", since you end up replacing a 50x50 banner with a 1x1 transparent square. Now that you mention it though, I wonder what would happen if I set the server to serve a null byte instead.
I also replace 127.0.0.1 with 0.0.0.0 and duplicate each entry to have it for IPV6 as well, as in this script except I don't do the iptables rules: https://gist.github.com/teffalump/7227752
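A tiny Python sketch of that rewrite (the choice of 0.0.0.0 and of `::` for IPv6 follows the comment above; the assumption that each entry is a simple "address hostname" pair is mine):

```python
def rewrite_hosts(lines):
    """Point 127.0.0.1 entries at 0.0.0.0 and add an IPv6 twin for each."""
    out = []
    for line in lines:
        parts = line.split()
        if len(parts) == 2 and parts[0] == "127.0.0.1":
            host = parts[1]
            out.append(f"0.0.0.0 {host}")
            out.append(f":: {host}")  # IPv6 unspecified address
        else:
            out.append(line)  # keep comments and other entries as-is
    return out
```

Using 0.0.0.0 instead of 127.0.0.1 avoids connection attempts to a local web server you may actually be running.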
I use this on Windows. For a large hosts file (several megabytes), I find the DNS client service hangs until it reads the whole file. Further, there is also a noticeable delay in the order of a few seconds until the hostname is resolved. For smaller files, I find there is no noticeable lag. Can't offer you any numbers though.
Slower DNS responses with big hosts file is something to be expected.
The solution is to feed the data to a DNS caching resolver such as unbound or dnsmasq, as I have explained in reply to UserRights.
Answering here also to point out I originally implemented this on a 5y old netbook (Atom CPU, 1GB RAM) because I tend to have a lot of tabs open in Firefox, and ABP's memory usage quickly becomes noticeable, and then comes the swapping. Running unbound+nginx scales remarkably well, without hogging too much memory at startup. (Sorry, I don't have access to the netbook right now to report real numbers.)
Personally, I hope that this doesn't get added to addons.mozilla.org. Their arbitrary, human-controlled editor policy would cause stagnation of innovative extensions such as this one. If the author goes the self-hosted route, automatic updates are still achievable (and can even arrive faster).
ublock doesn't work on quite a few sites, whereas I've never had an issue with adblock. Couple of examples off the top of my head:
1. giphy: it removes the share and twitter buttons
2. doesn't block ads on hulu
There were others as well that I seem to be forgetting (I remember having to disable it on quite a few sites). Good intention (lowering memory consumption), but the execution still has some way to go.
this is just different filter presets. You can change the filters. In my case, I had ABP doing all that same blocking, only I had to opt-in, whereas with ublock that blocking is opt-out.
Fair enough, but if you're going to compare uBlock to ABP, as a user my expectations of it are going to be like those of ABP. By default it should work like an ad blocker, with an option to block other stuff. For users like myself, who don't really care about tracking, just about ads cluttering up a page or delaying a video, blocking stuff other than ads just 'breaks' the site.
Some of those share/tweet/like buttons can track you without any interaction, just by loading on the page. Removing them might be intentional for this reason.
True, but imho an ad blocker should just block ads by default. I'm not sure if I'm representative of most users, but I don't really care as much about tracking, just about ads cluttering the page.
Where's the law that says I have to view web content as its creators intend? You send me a stream of data; I decide what to do with it, what parts to render, what parts to ignore.
There's no law. I work at a newspaper. We are digitally aware and head towards a digital distribution. That means infrastructural changes and that costs money. Those advertisers pay the bills. They provide the means for us to reach you. Subscription models are (currently) not enough.
I'm not trying to guilt anyone into anything, but this will be an interesting field to watch in the coming years. How will journalism fund itself and maintain some semblance of integrity? If ad revenues fall, it will be a bumpy ride indeed.
In the December 2014 issue of Le Monde diplomatique there is an interesting article that investigates possible future press models within the French context: http://mondediplo.com/2014/12/13press (in English, but behind a paywall). The authors argue that for public-interest press (news, investigative journalism, etc.), a shared publication and distribution institute/company should be funded by the government. That way, the magazines/newspapers/whatever only have to finance the costs of the journalists and editors; publication costs are carried by this joint/shared open publisher. The article is focused on the French situation, stresses that the government already hands out a yearly €1.6bn subsidy to the press, and ends by concluding that funding a joint open publication institute would not be that much different money-wise from the current situation.
I agree that journalism is a public good. Having the government fund it sends shivers down my spine. Recently, where I live, the public broadcasting corp. was threatened with cutbacks due to a perceived slant in coverage. The threats were then carried out after the politician in question was given chairmanship of the parliamentary committee on the budget. That is so troublesome to me that I don't consider it viable.
I guess this discussion is way off-topic at this point :-) But I find it interesting enough to keep going. The real challenge is obviously to have this government-funded publication company stay free of direct government interference. The article mentions a sort of tax that is levied before wages are paid out and delivered directly to the joint publication house, much in the same way as some European social security contributions are collected today. That would keep the funding away from the yearly government budgeting rounds.
No matter what system we think of, the powers that be will always try to influence the press and information. We can also see that in the current system, where large stakeholders can pressure/influence journalists and publications. I don't think the state has a monopoly here; on the contrary.
> How will journalism fund itself and maintain some semblance of integrity?
I suspect journalists can do that by charging more for subscriptions. If society truly values the work of journalists, rather than treating it as a conduit for advertising, then it will be paid accordingly. If not, then yes, it's a shame, but it really means that nobody values the work of a true journalist (especially investigative journalism), and it will die out. But that's reality, as sad as it would be if it were true.
>> yes, it's a shame, but it really means that nobody values the work of a true journalist (especially investigative journalism)
> I find your fatalism disheartening.
It's not fatalism but rather a political statement.
OP argues from within the superficial logic of capitalist exchange: the value of a commodity is whether and for how much it sells.
Uninterestingly enough, the thing which the OP will not be sad to kill is investigative journalism, which is pretty much the only branch of intellectual work that can undo the corruption that is brought on by the above psychotic logic.
What you think is fatalism is actually poorly disguised pro-corruption propaganda.
I think conflating "viewing ads" with "fighting for good things" is going a little far. Grandparent isn't against people caring, and specifically mentioned the possibility of showing that via higher subscriptions.
So this is a slight digression, but while we're on the subject: how does an advertiser know whether an ad has been successfully served or blocked? I've never had that part fully explained to me. If they're just basing it on the number of ad clicks, I rather doubt whether I use an ad blocker makes much difference, because I can more or less guarantee that in the ~18 years I have actively surfed the web, the number of ad links I have consciously clicked is between 0 and 5.
So I assume, then, that they have some way of tracking whether the ad was presented to the end-user?
Ad servers keep track of the number of times the graphic (or other asset) for a particular campaign was served. Lots of other "pixels" also get bundled into the calls depending on what kinds of ad tech the advertiser, publisher, or middlemen are running.
How many people (living, breathing, checkbook-possessing people) actually see an ad is an entirely different question from how many it is served to, and there is (of course) a whole 'nother area of ad tech companies working in that space.
Not my expertise but to show an ad there is usually a request to the server to send the graphics or similar so I guess they could track if that happens. Not sure if they do in practice. Occasionally a website notices I have an adblocker and moans at me or warns that it may mess up the functionality.
Funding comes from 3 sources: Owners, advertisers and the public (either as donationware or subscription fees). Each one comes with a set of challenges.
Owners tend to exert some sort of editorial control, making the coverage slanted towards their end-goal.
Advertisers tend to be unpalatable to the reader; editorial control by advertisers is sometimes a problem too, though occasionally the pressure runs the other way (advertisers stop using a paper due to consumer pressure over an editorial problem).
Subscription means your content is not as widely distributed as it could be. Yes we want to get paid but primarily journalists are in it to be read. Getting that many people to part with money is also a hard problem when compared to the other two sources.
It seems like if you're interested in making money online from advertising with "journalism", there's strong pressure to turn yourself into BuzzFeed... that is, low effort, cheap tricks, information pornography.
No, I am genuinely interested in how to fund real journalism. How do we go about getting money for people to expose the aggregation of wealth and power against the interests of the plebeians? (Note: "plebeians" is not used here as a derogatory.)
To fund real journalism for the filthy peasants you need to seek public and private funding for an endowment (not direct operational funding) combined with a small amount of tasteful advertising and merchandising, subscription fees, and a policy of graduating content to public domain after a certain short period (a few months and not much more).
Merchandising could be prints of artwork, cartoons, expanded versions of articles or source materials as well as branded things like coffee mugs.
The key is being lean to start and developing an endowment so you don't always have to be seeking dollars (NPR-style fund drives are bad). Offer high-value, medium- to high-priced merchandise, and keep your journalistic standards high with a low supply of advertising (keeping demand high).
Endowment contributors will be motivated by a history of quality journalism and the fact that you open your content to the public after a certain reasonably short time. If you give people a way to buy it without abusing DRM or being otherwise obnoxious, people will pay.
I'm not sure most of the people exposing the aggregation of wealth and power against the interests of the plebeians are doing it for the money; they would probably do it even if unpaid. Though for what it's worth, the main guy I read doing that, Krugman, gets paid pretty well by the NYT, which makes money from subscriptions, which works because some people are willing to pay.
For anyone unfamiliar with Krugman here's an example:
Hehe. Surely some DRM-friendly folks dreamed about a personal computer platform where you would be forced to sit through commercials, as with television/video, before accessing content.
Unfortunately, ad blocking has become a part of protection against malware -- especially for non-tech users.
Blame malvertising that loads exploit kits, and ads (on search engines and elsewhere, with fake download buttons, etc.) that point to pay-per-install malware -- the kind of spyware/adware that's super easy to install and monitor-smashingly hard to remove. The creators of those have now gone as far as adding ring-3 rootkits to their DLLs that hook into browsers.
That would be like saying a service that blacked out the ads in newspapers is theft; it's complete rubbish. Besides, educate yourself on the definition of theft:
"theft is the taking of another person's property without that person's permission or consent with the intent to deprive the rightful owner of it."
At best it's unauthorized modification of a creative work without distributing it, which is not illegal and never will be.
Don't know why the parent comment was killed when the article contains a similar assertion (the other way). While not a lot is gained by using loaded words like "theft", it is a simple fact that using ad blockers causes the owners of many of the sites you visit to receive less revenue than they otherwise would. One can argue that one is perfectly justified in doing this (many typical justifications appear elsewhere in this thread), but one cannot truthfully say that one is not taking money away from the sites one visits. (Yes, I know about PPC ads and how you never click them. Other than AdSense, the vast majority of advertising out there is sold on a CPM basis.)
That's present in uBlock too, despite some fairly unintuitive UI - the big green power icon isn't a full disable for the extension, it disables it for the current site only.
You could make a case that an ISP or proxy operator that replaces advertising with other advertising is stealing from the operator of the original website. But blocking ads is the same as not looking at ads in the paper: if you're not going to act on them or read them, you might as well block them for the bandwidth savings and speed improvement. Besides that, not all advertising has your best interest at heart; quite a bit of malware uses distribution via advertising networks as its vector onto your machine. So now ad blocking has the perfect fig leaf: it keeps malware off your machine. This makes ad blocking mandatory within organizations that are savvy about this -- why take the risk of having to re-image a bunch of machines, or allow a bunch of drive-by malware to gain a foothold behind your precious corporate firewall, when you can simply block it at the source?
When are these ads being forced on you? When you willingly and knowingly visit content providers who make no secret of being supported by advertisements?
Often, we land on a web page before knowing the history of the website or its funding model, privacy policy, terms of service, etc. This means that, upon visiting a new website for the very first time, consent to view advertising is not an informed decision; it's forced on us.
But aside from that, I've consented for the content provided by the content providers to execute in my browser. I did not extend the same consent to the ad agencies. If content providers wish to host the ad code, images, etc. on their own servers, then your argument starts to make sense. I would still argue that it's up to the content providers to publish whatever they see fit and it's my right as a consumer to control how the information they publish is assembled and presented to me on my machine. Not theirs.
If content providers were to host advertising materials on their own servers, it would also make it harder to use ad networks for watering-hole attacks, and thus the internet would be more secure.
I use Privacy Badger, NoScript, and RequestPolicy to manage my browser's security, and EMET to help keep the rest of my system safe from the browser. I don't rely on AdBlock or any similar tools to achieve this.
I wonder if sites could make it so with a EULA or something. Could they say that viewing an article without ads violates something? Or using a service (for example, reddit) without ads? Is it legally possible?
The consideration is the content of the website. EULAs haven't been tested in court, sure, but I don't think the average person would be able to do battle against a large corporation should it decide to enforce one.
Contract Consideration - Something of value given by both parties to a contract that induces them to enter into the agreement to exchange mutual performances.
In your first comment, you seemed to be against altering the presentation of websites; you offer a binary choice: go to the site, or do not, based solely on the content. Seemingly advocating that sites should be accepted or rejected whole-cloth with no modification. Yet here, you seem to be just fine with blocking Javascript and cookies, despite the fact that both these technologies are often used as part of ad services to generate revenue.
I'm not against altering the presentation of websites at all; I use Ghostery myself (plus Greasemonkey etc.). I just think it's dishonest to pretend that this has anything to do with rights violations.
No, silly, they want all of the content and functionality that the developers worked so hard for; they just don't want the developers to be compensated for it.
... I'm not asking why you'd want one. I'm asking why you'd make a new one, and who would pay for your work on a new one when others already exist?
But that is the same question. Not everyone is trying to maximize their wealth all the time.
Do you ask "why did Linus write Linux when there were other operating systems available at the time and no one was paying him for it?" or similarly the Atheos.cx guy? Gnome when there was KDE? Gimp when there was Photoshop? Facebook when there was MySpace?
The last one isn't comparable to the others. The original Facebook made by that other guy Aaron Greenspan arguably was just someone doing something because it was useful. Zuckerberg and co. made what is today's Facebook specifically to gain profit and power.
But your point is well-made otherwise. The answer to "why?" for uBlock and others is "because people care enough to do it."
One concern is that the EasyPrivacy list -- which is in my opinion overly aggressive -- is enabled by default. Most pertinently, it entirely disables Google Analytics on every site.
> it entirely disables Google Analytics on every site.
Some might find being tracked across most of the Internet to be a privacy concern.
People put google analytics on their site because of the 'free' data they get from google, and I suspect most don't consider that there is a cost - it's just their users that are paying it.
Yes, lots of people don't use gmail for this reason.
GA does a lot that webmasters couldn't normally do -- specifically: to cross reference those requests with requests to almost every other website, and use that data to build a full profile of the user. (Granted, the full profile is not available to the webmaster, only to Google's ad targeting platform).
>> If you think GA is a privacy concern, you should consider services like gmail to be a privacy concern too. Perhaps they should block that too.
I'm not sure what one has to do with the other. GA is something developers (or marketers) decide to pull into their sites, exposing their users to having their activity tracked across all sites that use GA. Gmail is a product that someone elects to use. Apples and oranges.
>> "Paying for it"? GA doesn't do much that a webmaster couldn't otherwise learn by analysing their webserver log files.
That's the unfortunate truth; and there are also many free packages that will do similar things -- all of which can be done without tracking people across most of the sites they visit.
Still, none of them are as quick and convenient as dropping in a line of javascript. I just wish that more developers would give thought to whose data they're giving away when they drop in that line of javascript.
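To make the "you could learn this from your logs" point concrete, here's a minimal sketch of first-party analytics in Python over Common Log Format lines (the log entries and paths are made up for illustration):

```python
import re
from collections import Counter

# Tally pageviews per path from Common Log Format lines -- the kind of
# first-party analytics a webmaster can do without any third-party tracker.
LOG_RE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"\s+(\d{3})')

def pageviews(lines):
    hits = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m and m.group(2) == "200":  # count only successful responses
            hits[m.group(1)] += 1
    return hits

log = [
    '1.2.3.4 - - [01/Dec/2014:10:00:00 +0000] "GET /article HTTP/1.1" 200 5120',
    '1.2.3.4 - - [01/Dec/2014:10:00:05 +0000] "GET /missing HTTP/1.1" 404 200',
    '5.6.7.8 - - [01/Dec/2014:10:01:00 +0000] "GET /article HTTP/1.1" 200 5120',
]
print(pageviews(log).most_common(1))  # → [('/article', 2)]
```

None of this follows the visitor to any other site, which is exactly the difference from GA.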
If most geeks end up blocking GA (in most cases without realising it) then the audience of geek-focused content might end up being severely under-reported to publishers. This will only accelerate the mainstreamification of our culture.
You're right. We should all start reading all our spam messages and actively buying all the products we can tolerate. Otherwise, the spam will only be for crap we don't like…
Not directly, but it will affect the webmasters of non-commercial non-advertising websites if such block-lists become a default.
If you've ever implemented GA, you know that Google has put a lot of effort into making their implementation extremely efficient and asynchronous. I personally think they should be rewarded for being considerate towards users' time and resources.
I see no reason to reward a mugger just because they're efficiently taking the money out of my wallet. Yes, they might get me on my way minutes faster than the less-professional muggers out there, but I still don't like the process or end result.
Arguably the sites they visit will be receiving skewed analytics against their particular use-case, so the site owners won't cater to their needs as much. That said, I still block GA.