When I got down to Wizz Air's statement about "bugs in adblockers" making browsers "act unexpectedly" and thereby triggering the robot detection code, I was reminded of the robot detection functionality in a very common enterprise WAF middle-box, which injects background JavaScript into the page to detect bots. The code supposedly produced no user-visible change, but would participate in some challenge/response fluff.
We ended up never deploying it because the false positive rate was absurdly high (on the order of 38%) with no tuning options available, short of falling back to a captcha. That said, I'd expect this is a very common practice. I also suspect that blaming ad blockers for lazy middle-box usage (if that's indeed what this particular case turns out to be) is _not_ going to age well.
I used to work for an airfare marketing company. We would get our 3rd-party scripts onto an airline's booking engine so we could run our own analytics and gather data for ads. We mainly collected things like prices and the number of available seats, because it turns out airlines can't really give you those answers through an API without it costing too much, so we piggybacked on real customer searches.
Almost no one in the office ran adblockers, which was weird to me. When our analytics traffic dropped by like 50% one day, I was the only one to notice that our domain made it onto EasyList.
I created a GH issue about it and had a productive chat with a maintainer about what data we collected and which of it they considered PII. If we wanted our domain unblocked, we could either remove the PII from the requests or create a secondary domain that only received the non-PII data.
We were gathering data that was not legally considered PII under something like GDPR, but I understand why an adblocker would be even stricter than the legal minimums. I brought this up with the executives, and instead they threatened the maintainers of the blocklist and tried to educate them on how "technically this isn't personally identifiable data according to this legal spec".
The maintainers stopped responding (rightfully so) and our data collection was forever halved.
I remember an instance where Lowe's website was broken on the corporate network: our proxy re-arranged the order of the HTTP request headers, and Akamai took that as malicious behavior.
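The failure mode above can be sketched in a few lines. This is a hedged illustration, not Akamai's actual detection logic; the baseline header order and the `order_matches` helper are assumptions made up for this example.

```python
# Hypothetical sketch of a header-order check: browsers emit headers in a
# stable order, so a middle-box that re-sorts them no longer matches the
# baseline and can get flagged as non-browser traffic.

BROWSER_ORDER = ["Host", "User-Agent", "Accept", "Accept-Language",
                 "Accept-Encoding", "Connection"]

def order_matches(header_names, baseline=BROWSER_ORDER):
    """True if the headers that are present appear in the baseline order."""
    expected = [h for h in baseline if h in header_names]
    observed = [h for h in header_names if h in baseline]
    return observed == expected

# A typical browser request keeps the native ordering:
print(order_matches(["Host", "User-Agent", "Accept", "Accept-Encoding"]))  # True

# A proxy that alphabetizes the same headers breaks it:
print(order_matches(["Accept", "Accept-Encoding", "Host", "User-Agent"]))  # False
```

The point is that nothing about the individual headers is wrong; only their ordering differs, which is exactly the kind of signal a fingerprint-based WAF can key on.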
Lowe's website has been 100% broken for me ever since I enabled Resist Fingerprinting in Firefox.
I can load exactly one page, but on any navigation or refresh I get:
=====
Access Denied
You don't have permission to access "http://www.lowes.com/" on this server.
Reference #18.cc69dc17.1661724957.fe4ef4
=====
Result: unless I use the profile with fingerprinting enabled, I just have to buy elsewhere.
Drupal.org triggers "prove you're not a robot" every few page navigations with Resist Fingerprinting enabled. Walmart.com too.
FedEx package tracking errors out (seemingly because the API server refuses the connection) if Resist Fingerprinting is enabled. Amusingly, if you use the website's help bot and say "track XXXXX", that does work to get some basic information.
For the last one, make sure it's not on your end. I've seen order tracking HTTP requests be blocked by uBlock Origin simply because the URL contains `/tracking` or something.
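The false-positive mode described here can be illustrated with a toy version of substring filtering. The patterns and URLs below are invented for illustration; real filter lists (EasyList/EasyPrivacy) use a richer syntax with anchors and options.

```python
# Naive sketch of how a substring rule like "/tracking" catches a
# legitimate shipment-tracking endpoint along with the ad-tech it targets.

PRIVACY_FILTERS = ["/tracking", "/analytics", "/beacon"]

def blocked(url, patterns=PRIVACY_FILTERS):
    """Block the request if any pattern is a substring of the URL."""
    return any(p in url for p in patterns)

# The ad-tech request such a rule is aimed at:
print(blocked("https://ads.example.com/tracking/pixel.gif"))      # True
# ...but a shipment-status API gets caught by the same substring:
print(blocked("https://api.example.com/shipment/tracking?id=1"))  # True
# A differently named endpoint sails through:
print(blocked("https://api.example.com/shipment/status?id=1"))    # False
```

This is why checking uBlock Origin's logger before blaming the site is good advice: the block may have nothing to do with the site's own anti-bot measures.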
This is in a profile with no addons and only Resist Fingerprinting enabled. I've also had it confirmed by someone else, so it should be fairly easy to reproduce: just create a clean profile, enable Resist Fingerprinting in about:config, and try the tracking page.
I have RFP enabled and it works fine for me. I did get the "Access Denied" error you mentioned on my first try, but after switching VPN servers it worked fine.
On Lowes.com? Retest in a clean profile. It seems that once they trust you, you're OK for a while, at least judging by a friend's test, who was able to reproduce it in a clean profile. But maybe it's IP-linked and takes a little while to accumulate. Did you only enable privacy.resistFingerprinting recently?
Also, double-check that it's enabled. I'm using Firefox Nightly; it may be that resist fingerprinting is more robust there.
BTW, this isn't using a VPN or anything that might seem suspicious. Just my bog standard US broadband.
> Sorry to belabour this, but by "it" you mean the setting in about:config called privacy.resistFingerprinting right?
yes, it's definitely the about:config option.
> If so, welp, no idea (aside from the Nightly thing). It consistently breaks for me and others though. Guess you're just lucky.
Just for fun I tried with various VPN servers across two different providers and got
5 / 5 working on provider A
6 / 6 working on provider B
One possibility is that they fingerprinted me and determined that my fingerprint was "good" (despite having RFP enabled), so all the subsequent attempts were whitelisted. The other possibility is that RFP spoofs the user agent to be the latest ESR version, and this causes issues on Nightly because it might have different fingerprinting characteristics (e.g. TLS fingerprint) than the actual ESR release. An anti-bot system might flag that inconsistency as suspicious and ban you for it.
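The second possibility can be sketched as a toy consistency check. This is speculative: the fingerprint hashes, version strings, and the `suspicious` helper are all invented placeholders, and real systems (e.g. JA3-based ones) are far more involved.

```python
# Speculative sketch of the mismatch described above: an anti-bot system
# that maps TLS fingerprints to expected browser builds could flag a
# Nightly TLS stack paired with an RFP-spoofed ESR User-Agent.

KNOWN_FINGERPRINTS = {
    "aaaa1111": "Firefox/102.0",  # hypothetical ESR build
    "bbbb2222": "Firefox/106.0",  # hypothetical Nightly build
}

def suspicious(tls_fp, user_agent):
    """Flag unknown fingerprints, or a UA that contradicts the TLS stack."""
    expected = KNOWN_FINGERPRINTS.get(tls_fp)
    return expected is None or expected not in user_agent

# Consistent: ESR TLS stack with an ESR User-Agent.
print(suspicious("aaaa1111", "Mozilla/5.0 ... Firefox/102.0"))  # False
# Inconsistent: Nightly TLS stack claiming to be ESR.
print(suspicious("bbbb2222", "Mozilla/5.0 ... Firefox/102.0"))  # True
```

Under this model, RFP's UA spoofing would make a Nightly user look *more* suspicious, not less, which would be consistent with the breakage being worse on Nightly.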
FWIW, I just replicated the exact same behaviour in Stable in a brand new profile (plus resistFingerprinting enabled). So, maybe it's something special about VPN IPs :) (I thought the Nightly theory was a bit of a long shot since I was pretty sure my friend tested in Stable)
Perhaps they whitelist generic profiles coming from VPN services.
> Result: unless I use the profile with fingerprinting enabled, I just have to buy elsewhere.
Have you tried contacting them about it and telling them that you're taking your business elsewhere because their site blocks you? If enough people do that, they may actually do something about it.
I did in fact send them an email. Never got a reply. I'm guessing it went to the web team then straight to the trash.
I suspect a registered letter might help, but to be honest I don't actually care that much; as much as I like my local Lowe's, switching to Home Depot works just fine too.
And you know... given that FedEx suffers from this too, I kind of suspect it's a generic failure of an Akamai service they both enabled, since they both use Akamai.