The problem with Access Denied is that denied clients retry with a vengeance, so you end up draining more resources than you'd like. I run a content-blocking DoH resolver, and this happened to us when we blocked a particular IP range and the result was... well... a lot of bandwidth for nothing.
This is what I was wondering. My wild guess is that they don't have that level of firewall access, and the blocking was done by webserver-level filtering returning an access denied response.
But why bother with a deny? Just send a blank text file (or one with as little data as needed to satisfy the rogue adblocker) to the "blocked IPs" to mitigate the traffic for now. If firewall access exists, just drop the offending incoming traffic entirely.
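For the webserver-level variant, a minimal sketch of what this could look like in nginx, using the `geo` module to flag clients by IP (the CIDR range here is a placeholder, not any real blocklist):

```nginx
# Map client IPs to a flag; 203.0.113.0/24 is an example range only.
geo $blocked {
    default        0;
    203.0.113.0/24 1;
}

server {
    listen 80;

    location / {
        # Empty 200 instead of 403: satisfies the client cheaply and
        # avoids triggering aggressive "access denied" retry loops.
        if ($blocked) { return 200 ""; }
        # ... normal request handling ...
    }
}
```

The design point is that a 200 with no body tends to make a misbehaving client stop, whereas an error status often makes it retry.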
> Just send a blank text file (or one with as minimal data as needed to satisfy the rogue adblock) to the "blocked IPs" to mitigate the traffic for now.
The HTTP body we sent was blank, but I believe we were still sending the HTTP headers...
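Even with an empty body, the headers alone add up under a retry flood. A rough back-of-the-envelope sketch (the header set below is illustrative, not our actual config, and this ignores TCP/TLS framing overhead on top):

```python
# Byte cost of an empty-body response's headers, per request.
headers = (
    "HTTP/1.1 403 Forbidden\r\n"
    "Date: Mon, 01 Jan 2024 00:00:00 GMT\r\n"
    "Server: nginx\r\n"
    "Content-Type: text/plain\r\n"
    "Content-Length: 0\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)
per_response = len(headers.encode())  # on the order of 150 bytes
print(per_response, "bytes per response")

# Hypothetical misbehaving client retrying 10x/sec, all day:
retries_per_day = 10 * 60 * 60 * 24
mb_per_day = per_response * retries_per_day / 1e6
print(round(mb_per_day), "MB/day from one client, body-less responses")
```

So "blank body" still means meaningful egress once enough blocked clients are hammering you.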
> If firewall access exists, just drop the offending incoming traffic entirely.
True, but the provider we were using at the time didn't offer an L3 firewall, so we ended up moving elsewhere, after paying the bills in full, of course.
That reminds me of the absolutely insane amount of traffic my mother's Roku TV shits out when it can't resolve/reach its spyware and telemetry services. It's like 95-98% of the blocked traffic on her network.
Is there a clean solution to this problem these days? Like some kind of adblocking router that resolves these addresses correctly but then routes packets destined for these services into a black hole, so the requests eventually time out? That would at least slow the repeat request floods down significantly.
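On a Linux-based router this is roughly a one-liner, assuming you can enumerate the telemetry endpoints' addresses (the CIDR below is a placeholder). The key is DROP rather than REJECT: DROP makes the TV's TCP connects hang until they time out, which throttles its retry loop, whereas REJECT answers immediately and invites a fast retry:

```shell
# Silently discard forwarded traffic to the telemetry range.
# 203.0.113.0/24 stands in for the actual resolved addresses.
iptables -A FORWARD -d 203.0.113.0/24 -j DROP
```

The catch is that these services move IPs, so in practice you'd pair this with something that resolves the hostnames periodically and refreshes the rule set (e.g. an ipset populated from DNS), rather than hard-coding ranges.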