
At my colo that is exactly the strategy they use; they have bought a bunch of routers that use FPGA-based filters for that.

The one scenario I can think of where that might be a problem is if they started flooding all the known hosts in a network for the specific purpose of overwhelming the routers. And even those hardware-based filtering tools have upper limits.




There's really no need to do that. A large enough botnet can take out almost any host by doing nothing more than connecting and issuing a GET / HTTP/1.1. Few websites can handle a sudden surge in traffic from 100,000 bots.


The idea behind an ACL for such an attack is that the same hosts will be used over and over again, drawn from a large pool of zombies. So, to take your example of 100,000 bots: an ACL in the upstream router (as seen from the host) can be checked against to identify the packets coming from the zombies and discard them. After all, a single burst from all 100,000 bots at once will only bring the host to its knees until the connections time out, and then it will bounce back up again. So, to increase the effectiveness of the attack, the bots reconnect after every dropped connection and ask for another resource. If they're smart they'll vary user-agent strings and other characteristics to make it hard to narrow down who is and who is not legit.
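To make that concrete, here is a minimal sketch (in Python) of how you might flag the zombies that keep reconnecting and turn them into deny rules. The threshold, the log format and the rule syntax are all invented for illustration, not a real router's ACL language:

  # Flag source IPs that reconnect far more often than a normal visitor
  # would, and emit deny rules for them. Thresholds are assumptions.
  from collections import Counter

  RECONNECTS_PER_MINUTE_LIMIT = 30   # assumed cutoff, tune for real traffic

  def suspect_ips(connection_log):
      """connection_log: iterable of (timestamp_minute, src_ip) tuples."""
      per_minute = Counter()
      for minute, ip in connection_log:
          per_minute[(minute, ip)] += 1
      # An IP that opens tens of fresh connections every minute, each asking
      # for another resource, looks like one of the zombies described above.
      return {ip for (minute, ip), n in per_minute.items()
              if n > RECONNECTS_PER_MINUTE_LIMIT}

  def build_acl(blocked_ips):
      # Emit deny rules in a generic, router-agnostic form.
      return [f"deny ip host {ip} any" for ip in sorted(blocked_ips)]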

Initially you don't have much to go on during such an attack, and packet filtering is a reasonably expensive operation when you have to do it against a large number of addresses. So the strategy is to route all the traffic destined for that particular host through a router whose ACLs are large enough to hold the full IP list of the botnet attacking the host, as those IPs become identified.
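A rough sketch of that diversion, with a hypothetical victim address and a very simplified packet model, just to show why only the victim's traffic pays the filtering cost:

  # Only packets destined for the attacked host take the expensive ACL path;
  # everything else is forwarded untouched. Names and addresses are assumptions.
  from dataclasses import dataclass

  @dataclass
  class Packet:
      src_ip: str
      dst_ip: str
      payload: bytes

  VICTIM_IP = "192.0.2.10"          # hypothetical attacked host
  ZOMBIE_ACL = set()                # grows as bot IPs are identified

  def forward(packet, fast_path, scrub_path):
      if packet.dst_ip != VICTIM_IP:
          fast_path(packet)         # most traffic never hits the filter
      elif packet.src_ip in ZOMBIE_ACL:
          return                    # identified zombie: silently discard
      else:
          scrub_path(packet)        # remaining traffic for the victim gets
                                    # closer inspection, possibly adding ACL entries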

You don't want to route all your traffic through there because then you'd have to do the relatively expensive filtering on all of the packets, even those not destined for that particular host.

Now, if an attacker were targeting the hosting facility itself, they could thwart this strategy by sending requests to a larger number of hosts in the network, making life much harder for the crew fighting the attack. After all, you can no longer partition the problem into traffic targeted at the one host and 'normal' traffic: for every receiving host, any given packet could be bot traffic or legitimate traffic.

Being able to partition the problem into a smaller one, where you let, say, 90% or more of the traffic through unfiltered and concentrate only on the remaining 10%, makes solving it a bit easier. On the other hand, if the attackers are silly enough to re-use the same bots against different hosts, they've actually handed you a clue as to which IPs are bots.
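A quick sketch of that last observation, with an assumed threshold: a source IP that shows up during the attack in the traffic of many unrelated hosts in the facility is a strong bot candidate:

  # Correlate attack-time flows across hosts: re-used bots give themselves
  # away by hitting many destinations. The threshold is an assumption.
  from collections import defaultdict

  DISTINCT_HOSTS_LIMIT = 5          # assumed: normal clients rarely hit this many

  def cross_host_suspects(flows):
      """flows: iterable of (src_ip, dst_host) pairs seen during the attack."""
      hosts_per_ip = defaultdict(set)
      for src_ip, dst_host in flows:
          hosts_per_ip[src_ip].add(dst_host)
      return {ip for ip, hosts in hosts_per_ip.items()
              if len(hosts) >= DISTINCT_HOSTS_LIMIT}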

I hope that makes sense :)



