Whether or not they must filter customers is a matter of law.
However, putting the responsibility to mitigate this problem on them in its entirety is very inefficient and ineffective. If Cloudflare had a team dedicated to this effort, bad actors would simply switch providers, defeating a $200k/year effort with a couple of clicks.
Notice that the malware ultimately takes effect when the user executes the file.
This sounds more like an interaction design problem that should be solved at the OS level; the OS interface is one of the logistical bottlenecks in the malware delivery path.
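To make that concrete, here is a rough sketch of what that bottleneck could look like; the prompt and policy below are entirely hypothetical and not any real OS API, just the shape of the idea:

    # Minimal sketch (hypothetical policy, not a real OS API): gate execution of a
    # freshly downloaded file on an explicit user decision that shows its origin.
    def confirm_execution(path: str, download_origin: str) -> bool:
        print(f"'{path}' was downloaded from {download_origin}.")
        answer = input("Run it anyway? [y/N] ")
        return answer.strip().lower() == "y"

    if confirm_execution("invoice.pdf.exe", "https://random-words.trycloudflare.com"):
        print("executing...")  # a real OS would hand the file to the loader here
    else:
        print("blocked")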
Everyone running a service on the internet has a responsibility to prevent abuse of that service. They should all have and monitor an abuse@ address where they accept notifications about problems they're causing others and they should act on those notices within a reasonable amount of time. When someone fails in that responsibility they should/will get blocked.
I hadn't heard of trycloudflare.com before, but it's blocked on my network for now. If I need to, I can re-evaluate that later.
Anyone running a service online can get caught off guard and be taken advantage of by scammers and assholes. It's an opportunity to shore up your security and monitoring. The bad actors will eventually move on to abuse easier targets, and that's fine. When they do, that doesn't invalidate the work someone put into making sure their service wasn't being repeatedly/routinely used to harm others.
That responsibility only goes as far as other people are willing to block them for not doing it. There's no law of the internet that says you have to, but if your customers can't access your service because their ISP or whatever blocked you, that's when it's your responsibility to yourself to clean it up. If you're too big to block, then it's OK to ignore abuse.
The internet is a community. Some people in a community feel that they have no responsibility to anyone but themselves, which is why we need laws and regulations.
We want service providers on the internet to police themselves and make sure that they're not turning a blind eye to crimes taking place right on their own servers, because the alternative is that laws and regulations come into play. There's an argument that internet companies that are too big to block could still be negligent, an accessory to crimes, liable for the very real and significant damages the poor management of their service enabled just so that they could save a little money, etc.
Just like with banks, there are people who would say that if a company is too big to fail/be blocked then they are too big to exist and should be broken up.
Personally, I'd rather that a service provider just do a better job keeping their corner of the internet clean, keeping the people who use their services safer, and preventing their services/equipment/IP space from being used to carry out criminal acts.
In the end it'd improve their service, improve their image, make the internet a safer place, and as a bonus it would force criminals to waste their time looking for a new company that'll be too cheap/lazy to kick them off their services. Hopefully they'll eventually end up only being able to find ones that the rest of us feel we can block.
The internet _was_ a community. Now it's a wall of commercial property, riddled with victimising criminals and advertisements that watch you. There are still some communities in there, but the bulk of it is a set of actors with no social interests in common with the users.
The abuse mechanism you describe exists in theory, but... commercial.
There is community between the NOCs of tier 1 ISPs, but they mainly care about routing.
In your picture, I'm imagining, say, CenturyLink stomping on a retail ISP, and I question whether this pans out like swatting. Can I get someone taken down by abusing abuse reports?
> I question whether this pans out like swatting. Can I get someone taken down by abusing abuse reports?
Not generally, no. Typically, abuse departments at ISPs don't blindly cut off people's internet access just because someone complains. They require evidence (server logs, message headers, etc) and there will be an investigation as well as multiple communications between an ISP and a user being accused of violating the ISP's terms of service. The same is true when the issue is between ISPs and their upstream providers. Keep in mind too that for both ISPs and upstream providers, everyone is naturally and strongly incentivized to not cancel the accounts of the customers who pay them.
There is one situation where false reports can get someone taken down. DMCA notices have this potential. ISPs can face billions in fines if they refuse to permanently disconnect their customers from the internet based on nothing more than unproven/unsubstantiated allegations made by third party vendors with a long history of sending wildly inaccurate DMCA notices. So far, media companies have been winning in courts and ISPs have been losing or (more often) settling outside of court. Everyone is still waiting to see how the case against Cox ends (https://torrentfreak.com/cox-requests-rehearing-of-piracy-ca...)
There is a solution for this at the OS level. It's domain names, validated through DNS. Those let the user decide if they trust the other side of a connection.
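As a rough sketch of what that decision could look like in practice (the denylist and helper below are hypothetical, not anything an OS actually ships):

    # Minimal sketch, assuming a locally maintained denylist of zones the user or
    # admin has decided not to trust.
    from urllib.parse import urlparse

    UNTRUSTED_ZONES = {"trycloudflare.com"}

    def is_trusted(url: str) -> bool:
        host = (urlparse(url).hostname or "").lower()
        # Any subdomain of an untrusted zone counts too, since services like
        # trycloudflare.com hand out random subdomains per tunnel.
        return not any(host == z or host.endswith("." + z) for z in UNTRUSTED_ZONES)

    print(is_trusted("https://random-words-1234.trycloudflare.com/file.zip"))  # False
    print(is_trusted("https://example.org/"))                                  # True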
Here Cloudflare is showing they shouldn't be trusted, but because they are so big, we can't act on that. Blocking them would be bad, so mocking them is the second-best option.
It isn't really "putting the responsibility to mitigate this problem in its entirety" on them so much as it is putting the responsibility to mitigate this problem *on their service*.
Large software companies seem to enjoy passing the buck in recent years if it might impact their profitability, which is fine, but to say they could not do anything about it is incorrect. It may not be feasible to do so and still operate the service, but that doesn't mean it isn't possible.
That said, they're also using the "utility argument": just as your phone provider won't screen every call you make, your electricity provider won't lock your supply until you attest to non-nefarious use, and your ISP won't content-filter, Cloudflare says it won't police per-use other than when under explicit legal mandate (court injunctions). That's fair enough, at least to me.
Sure, but in this instance, they're offering an anonymous service. Just require a sign-up and a captcha, like you do for all of your other products, FFS. Are they on drugs? Do they want more botnets, to drive DoS mitigation sales?
Either discontinue the service, or serve each pipe from a subdomain that encodes the original source. Something that lets security tooling block known bad sites, without having them block a lot of legitimate sites.
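A rough sketch of the second option, with the source identifier, label format, and domain reuse all assumed purely for illustration:

    # Minimal sketch (the source identifier, label format, and base domain are
    # assumptions): derive a stable subdomain from whoever opened the tunnel, so
    # security tooling can block that one label instead of the whole domain.
    import hashlib

    def tunnel_hostname(source_id: str, base: str = "trycloudflare.com") -> str:
        # Every tunnel the same source opens lands under the same label.
        label = hashlib.sha256(source_id.encode()).hexdigest()[:16]
        return f"{label}.{base}"

    print(tunnel_hostname("account-1234"))  # e.g. '<16 hex chars>.trycloudflare.com'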
(note that I don't necessarily agree but that statement is loaded)