
The thing with Access Denied is that these denied clients retry with a vengeance, so you end up draining more resources than you'd like. I run a content-blocking DoH resolver, and this happened to us when we blocked IPs in a particular range; the result was... well... a lot of bandwidth for nothing.


Why serve any HTTP replies to those at all? If you are doing it at the IP level, why not just drop all inbound packets from the L3 address?
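For setups that do have host-level firewall access, dropping at L3 is a one-liner. A sketch (the 203.0.113.0/24 documentation range stands in for the offending block, and the nftables rule assumes an existing `inet filter` table with an `input` chain):

```shell
# nftables: silently drop everything from the offending range
nft add rule inet filter input ip saddr 203.0.113.0/24 drop

# legacy iptables equivalent
iptables -A INPUT -s 203.0.113.0/24 -j DROP
```

Note that DROP (as opposed to REJECT) sends nothing back, so the client hangs until its own connect timeout rather than getting an immediate error to retry against.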


This is what I was wondering. My wild guess is that they didn't have that level of firewall access, and the filtering was being done in the webserver, which returned the Access Denied response.


We were on Netlify way back then, so no L3 blocks. Now on pages.dev and workers.dev, but we haven't needed to enforce any rules yet.


But why bother with a deny? Just send a blank text file (or one with as little data as needed to satisfy the rogue adblocker) to the "blocked IPs" to mitigate the traffic for now. If firewall access exists, just drop the offending incoming traffic entirely.
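A minimal sketch of that idea using Python's stdlib http.server (the class name and port here are illustrative, not from the thread): every request gets a 200 with a zero-length body, so blocked clients see a "successful" fetch instead of an error worth retrying.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class BlankResponder(BaseHTTPRequestHandler):
    """Reply 200 with an empty body to every GET, so blocked
    clients see a 'successful' fetch instead of an error that
    triggers retry storms. (Illustrative sketch only.)"""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", "0")
        self.end_headers()  # headers only; no body follows

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet


def make_server(host="127.0.0.1", port=0):
    # port=0 picks a free ephemeral port
    return HTTPServer((host, port), BlankResponder)

# To serve: make_server(port=8080).serve_forever()
```

As the replies below note, even this still spends bandwidth on response headers per request, so an L3 drop is cheaper when it's available.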


> Just send a blank text file (or one with as minimal data as needed to satisfy the rogue adblock) to the "blocked IPs" to mitigate the traffic for now.

The HTTP body we sent was blank, but I believe we were still sending HTTP head...

> If firewall access exists, just drop the offending incoming traffic entirely.

True, but the service we were using at the time didn't have an L3 firewall, and so we ended up moving out, after paying the bills in full, of course.


This is the correct answer, and basically you have to set up round-robin DDoS protection that provides these "wrong" answers.

While still trying to allow valid traffic through.


That reminds me of the absolutely insane amount of traffic my mother's Roku TV shits out when it can't resolve/reach its spyware and telemetry services. It's like 95-98% of the blocked traffic on her network.

Is there a clean solution to this problem these days? Like some kind of adblocking router that resolves these addresses correctly but then routes packets destined for these services into a black hole, so the requests eventually time out? That would at least slow the repeat request floods down significantly.
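One low-tech approximation, assuming you control a Linux router on the network: let DNS resolve honestly, then install blackhole routes for the telemetry ranges you've observed, so the TV's SYNs vanish and each attempt waits out a full connect timeout. The 198.51.100.0/24 documentation range below stands in for a real telemetry host:

```shell
# Drop all packets routed toward the telemetry range; the TV's
# connect() calls now hang until its own timeout expires.
ip route add blackhole 198.51.100.0/24

# Verify the route is installed
ip route show type blackhole
```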


In this particular case that's not what's happening; they don't retry. They just really want to download updates REALLY often.



