What's missing is effective international law enforcement. This is a legal problem first and foremost. As long as it's as easy as it is to get away with this stuff by just routing the traffic through a Russian or Singaporean node, it's going to keep happening. With international diplomacy going the way it has been, odds of that changing aren't fantastic.
The web is really stuck between a rock and a hard place when it comes to this. Proof of work helps website owners, but makes life harder for all discovery tools and search engines.
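For concreteness, the proof-of-work idea is roughly a hashcash-style challenge; this is a toy sketch with an invented difficulty and nonce format, not any particular product's protocol. The server verifies with one hash, the client has to grind:

    import hashlib
    import os

    DIFFICULTY = 20  # leading zero bits required; illustrative value

    def issue_challenge() -> str:
        # Server side: a random challenge tied to the request or session.
        return os.urandom(16).hex()

    def solve(challenge: str) -> int:
        # Client side: brute-force a nonce until the hash has enough leading zero bits.
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
                return nonce
            nonce += 1

    def verify(challenge: str, nonce: int) -> bool:
        # Server side: a single hash, so verification stays cheap.
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

A human pays that cost once per page or session; a crawler hitting millions of URLs pays it millions of times, which is the whole point and also the whole problem for legitimate indexing.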
An independent standard for request signing, plus some sort of reputation database for verified crawlers, could be part of a solution, though that invites websites to feed crawlers different content than users see, and it does nothing to fix the Sybil attack problem.
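For what it's worth, a standard like that wouldn't need much machinery. A minimal sketch, assuming Ed25519 keys via the cryptography package; the signed-string layout and the reputation map are invented for illustration, not any existing spec:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    def sign_request(key: Ed25519PrivateKey, method: str, path: str, date: str) -> bytes:
        # Crawler side: sign the parts of the request the site will check.
        return key.sign(f"{method} {path} {date}".encode())

    def verify_request(pub: Ed25519PublicKey, sig: bytes,
                       method: str, path: str, date: str) -> bool:
        # Site side: verify against the crawler's published public key.
        try:
            pub.verify(sig, f"{method} {path} {date}".encode())
            return True
        except InvalidSignature:
            return False

    # The "reputation database" could then be a shared map from key
    # fingerprint to a score that sites report into (hypothetical entries).
    reputation = {"crawler-key-fingerprint": 0.9}

Which is exactly why it only helps with identification: nothing stops a bad actor from minting a thousand fresh keys, which is the Sybil problem again.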
I don't want governments to have this level of control over the internet. It seems like you're paving over a technological problem in the way the internet is designed by giving some institution a ton of power over it.
The alternative to governments stopping misbehavior is every website hiding behind Cloudflare or a small number of competitors, which is a situation that is far more susceptible to abuse than having a law that says you can't DDoS people even if you live in Singapore.
It really cannot be overstated how unsustainable the status quo is.
I think the alternative is to recreate the internet with more p2p-friendly infrastructure. BitTorrent does not have this same DDoS problem, and mesh networks are designed with Sybil resistance in mind.
No, it really isn't. Unless you mean on the BGP level. But it's p2p in the sense that you have to trust every party not to break the system. It's like email or Mastodon: it doesn't solve the fundamental Sybil problem at hand.
>BitTorrent is just as susceptible to this,
In BitTorrent, things are hosted by ad hoc users in numbers roughly proportional to the number of downloaders. It is not unimaginable that you could staple a reputation system on top of it, like private trackers already do.
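The private-tracker version of that reputation system is basically just ratio accounting; a toy sketch, with the thresholds invented:

    from dataclasses import dataclass

    @dataclass
    class Peer:
        uploaded: int = 0     # bytes seeded to others
        downloaded: int = 0   # bytes taken from the swarm

        @property
        def ratio(self) -> float:
            return self.uploaded / max(self.downloaded, 1)

    def allowed_to_download(peer: Peer, min_ratio: float = 0.5) -> bool:
        # Private trackers gate access on roughly this kind of check:
        # a grace allowance for new users, then a minimum share ratio.
        return peer.downloaded < 10 * 2**30 or peer.ratio >= min_ratio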
This is already kind of true of every global website. The idea of a single global internet is one of those fairy-tale things that maybe existed for a little while before enough people used it. In many cases it isn't really ideal today.
It's not necessarily going through a Russian or Singaporean node, though: on the sites I'm responsible for, AWS, GCP, and Azure are in the top 5 sources of attackers. It's just that they don't care _at all_ about that happening.
I don't think you need worldwide law enforcement; it would be a big step forward if you made owners & operators liable. You can cap the exposure so nobody gets absolutely ruined, but anyone running WordPress 4.2 and getting their VPS abused for attacks currently has zero incentive to change anything unless their website goes down. Give them a penalty of a few hundred dollars and suddenly they do. To keep things simple, collect from the hosters; they can then charge their customers, and suddenly the hosters are interested in it as well, because they don't want to deal with that.
The criminals are not held liable, and neither are their enablers. There's very little chance anything will change that way.
The big cloud providers need to step up and take responsibility. I understand that it can't be too easy to do, but we really do need a way to contact e.g. AWS and tell them to shut off a customer. I have no problem with someone scraping our websites, but I do care when they don't do it responsibly: slow down when we start responding slower, don't assume you can just go full throttle, crash our site, wait, and then do it again once we start responding again.
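"Responsibly" could be as simple as this on the client side; a rough sketch, with the delays and thresholds invented for illustration:

    import time
    import urllib.request

    def polite_fetch(urls, base_delay=1.0, max_delay=60.0):
        # Back off when the server slows down or errors; speed back up gently.
        delay = base_delay
        for url in urls:
            start = time.monotonic()
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    resp.read()
                elapsed = time.monotonic() - start
                if elapsed > 2.0:
                    # Server is struggling: give it room.
                    delay = min(max_delay, delay * 2)
                else:
                    delay = max(base_delay, delay / 2)
            except Exception:
                # Errors and timeouts are a signal to slow way down, not retry harder.
                delay = min(max_delay, delay * 4)
            time.sleep(delay)

None of that is hard; the operators of these crawlers just don't bother.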
You're absolutely right: AWS, GCP, Azure and the others do not care, and AWS and GCP especially are massive enablers.
I'm very aware of that, yes. There needs to be a good process; the current situation, where AWS simply does not care or doesn't know, isn't particularly good either. One solution could be for victims to notify AWS that a number of specified IPs are generating an excessive amount of traffic. An operator could then verify against AWS traffic logs, notify the customer that they are causing issues, and only after a failure to respond would the customer be shut down.
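The victim's half of that is already easy to automate: pull the offending IPs out of the access log and attach them to the report. A rough sketch against a common-log-format log (the path and threshold are just examples):

    from collections import Counter

    def top_offenders(log_path: str, threshold: int = 10_000):
        # Count requests per client IP; the IP is the first field in common log format.
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                counts[line.split(" ", 1)[0]] += 1
        # Only IPs above the threshold go into the abuse report.
        return [(ip, n) for ip, n in counts.most_common() if n >= threshold]

    for ip, n in top_offenders("/var/log/nginx/access.log"):
        print(f"{ip}\t{n} requests")

The missing piece is entirely on the provider's side: someone who reads the report, checks it against their own logs, and acts on it.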
You're not wrong that abuse would be a massive issue, but I'm on the other side of this and need Amazon to do something, anything.
I don’t think this can be solved legally without compromising anonymity. You can block unrecognized clients and punish the owners of clients that behave badly, but then, for example, an oppressive government can (physically) take over a subversive website and punish everyone who accesses it.
Maybe pseudo-anonymity and “punishment” via reputation could work. Then an oppressive government with access to a subversive website (ignoring bad security, coordination with other hijacked sites, etc.) can only poison its clients’ reputations, and (if reputation is tied to sites, which have their own reputations) only temporarily.
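The "only temporarily" part falls out naturally if reputation penalties decay over time; a toy sketch, with the half-life picked arbitrarily:

    import time

    HALF_LIFE = 30 * 24 * 3600  # damage halves every 30 days (invented value)

    def current_score(base: float, penalties: list[tuple[float, float]]) -> float:
        # Each penalty is (timestamp, amount); its weight decays exponentially,
        # so a one-off poisoning by a hijacked site fades instead of being permanent.
        now = time.time()
        decayed = sum(a * 0.5 ** ((now - t) / HALF_LIFE) for t, a in penalties)
        return base - decayed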
> but then, for example, an oppressive government can (physically) take over a subversive website and punish everyone who accesses it.
Already happens. Oppressive governments already punish people for visiting "wrong" websites. They already censor the internet.
There are no technological solutions to coordination problems. Ultimately, no matter what you invent, it's politics that will decide how it's used and by whom.
Good points; I would definitely vouch for an independent standard for request signing + some kind of decentralized reputation system. With international law enforcement, I think there could be too many political issues for it not to become corrupt.