> What you're doing is called externalizing costs. It's generally recognized as antisocial behavior. So if you're going to claim benefits to yourself at the expense of other people, at least recognize that you're doing it.
Remember his preface - cranky old-school network operator.
Let's say you have a hundred networks all connected together into some sort of "inter-net" system. If one AS starts sending out malicious traffic, what makes more sense:
1. That AS starts policing their users.
2. The other 99 ASs have to deal with the malicious traffic.
You're expecting the other 99 groups that are being targeted by the one group to bear the cost of dealing with that group's malicious users. Who exactly is externalizing costs here?
In a system without any real rules or authority, I think "those adversely affected choosing to block the bad actor" is a fairly democratic solution to the problem. You either play nice or you get voted off of the island.
> In a system without any real rules or authority, I think "those adversely affected choosing to block the bad actor" is a fairly democratic solution to the problem.
That's the part which is adverse to the rest of your argument. You're not voting off the bad actor, you're voting off everyone in the bad actor's country.
We know how to deal with this problem. You go to a website, you sign up for an account, it can be pseudonymous but to get it you have to put up some collateral. Money/Bitcoin, proof of work, vouching by an existing member, whatever you like. Then if your account misbehaves you forfeit your collateral.
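A minimal sketch of that collateral idea, with all names hypothetical: a new pseudonym is admitted only if it puts up a deposit, and abuse forfeits the deposit, so throwaway identities carry a real cost.

```python
# Hypothetical collateral-backed pseudonymous accounts. A new account
# posts a deposit; misbehavior forfeits it, so creating and burning
# identities is no longer free.

class CollateralLedger:
    def __init__(self, deposit_required):
        self.deposit_required = deposit_required
        self.deposits = {}  # pseudonym -> collateral currently held

    def register(self, pseudonym, deposit):
        """Admit a pseudonym only if it puts up enough collateral."""
        if deposit < self.deposit_required:
            raise ValueError("insufficient collateral")
        self.deposits[pseudonym] = deposit

    def report_abuse(self, pseudonym):
        """Forfeit the deposit and evict the account; returns the amount seized."""
        return self.deposits.pop(pseudonym, 0)

    def in_good_standing(self, pseudonym):
        return pseudonym in self.deposits
```

The deposit could equally be proof of work or a vouch from an existing member; the point is only that misbehavior costs the misbehaver something, not their neighbors.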
But this isn't a CloudFlare-level problem. They're trying to solve it at the wrong layer of abstraction. Identity isn't a global invariant, it's a relationship between individuals. Endpoints identify each other with persistent pseudonyms. The middle of the network should have nothing to do with it.
> That's the part which is adverse to the rest of your argument. You're not voting off the bad actor, you're voting off everyone in the bad actor's country.
The bad actor is the organization or person responsible for administering the network where the abuse is originating.
When I'm being attacked by someone's VPS, I report them to their host. After the fourth time I report them only to have their host pass along my report but take no further action, the host becomes, maybe not a bad actor, but a "bad citizen".
My choices are to allow them to externalize the costs of their lack of enforcement (or decision not to enforce) and attempt to find a way to block the specific actor under their purview, or just to block that host and accept whatever collateral damage that occurs. (And yes, sometimes that "bad citizen" may end up being most of a country - it doesn't change the equation for me.)
It's the only method I have to exert any pressure on the host to act responsibly. If enough people agree with me, then it quickly becomes "their problem" rather than "my problem" as they get blackholed from everywhere on the internet.
> The bad actor is the organization or person responsible for administering the network where the abuse is originating.
The bad actor is the individual who acts bad. The Post Office is not a bad actor for delivering letters.
> allow them to externalize the costs of their lack of enforcement
Tor is not an enforcement agency. Neither is CloudFlare. The costs of bad actors are your costs. You have the technical ability to retaliate against common carriers for not allowing you to push those costs onto them, but that doesn't make you right to do it in any sense other than might makes right. And you should realize that in doing it you're knowingly hurting innocent people.
Of course they're not an enforcement agency, that's kind of the point - there is no enforcement agency. We all have to contribute by being good citizens.
I subscribe to the idea that it's an ISP's responsibility to police its own network for abuse and my responsibility to police mine.
You apparently subscribe to the idea that it's my responsibility to just accept whatever shit you fling at me, that it's my problem to deal with, and that somehow I still have a responsibility or moral obligation to provide services to you and your customers.
There are two ways to deal with badness. The first is to try to identify bad people and then stop them from doing anything whatsoever. The second is to identify bad acts and stop anyone from doing bad acts.
The second one is the only one that works without massive collateral damage.
Identifying bad people is only an abstraction over identifying bad acts and it leaks like a sieve. A reformed thief is entirely capable of buying an apple without incident, because not all acts by bad people are bad acts. But a thief has no reputation as a thief until after they steal for the first time. The only way to stop bad things is to detect bad things.
But the true failure of reputation systems is that as soon as multiple people share an identity they disintegrate entirely. Innocent people get blamed for malicious acts of other people through no fault of their own and with no ability to prevent it. The only way reputation systems can work at all is if people can prevent other people from using their identities.
Which means that IPv4 addresses can't be identities, because we don't have enough of them for them not to be shared.
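The failure mode above can be shown in a few lines. This toy per-IP reputation tracker (entirely hypothetical) blocks an address after a few bad acts; once several users sit behind one NAT'd IPv4 address, one bad actor's behavior blocks all of them.

```python
# Toy per-IP reputation tracker, illustrating why a shared identity
# breaks reputation: everyone behind one address shares its fate.

class IPReputation:
    def __init__(self, ban_threshold=3):
        self.ban_threshold = ban_threshold
        self.strikes = {}  # ip -> count of bad acts observed

    def record_bad_act(self, ip):
        self.strikes[ip] = self.strikes.get(ip, 0) + 1

    def is_blocked(self, ip):
        return self.strikes.get(ip, 0) >= self.ban_threshold

rep = IPReputation()
shared_ip = "203.0.113.7"   # one NAT address, many users behind it
for _ in range(3):          # a single spammer behind the NAT
    rep.record_bad_act(shared_ip)
# Every innocent user on the same address is now blocked too,
# through no fault of their own and with no way to prevent it.
```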
And forcing common carriers to stop doing business with anyone who has ever done anything bad has another problem. It imposes the death penalty for jaywalking. You send spam once -- or get falsely accused of sending spam -- and you're blacklisted. It puts too many innocent people into the same bucket as guilty people and then the innocent people fight you alongside the guilty. It creates the market for these VPN services because too many servers are wrongly using IP addresses as identities. Then the bad people also use the VPN services and bypass your "security" because it was never security to begin with, so you block the VPN services which destroys those and they're replaced with others you haven't blocked. Meanwhile the real bad people also use botnets which are unaffected, so you aren't actually blocking the bad people, you're only blocking the one IP address that they share with the good guys.
You don't want this fight. Most of the people you're fighting are innocent. People need to learn to detect bad acts, not "bad IP addresses."
"The bad actor is the individual who acts bad. The Post Office is not a bad actor for delivering letters."
If they actively ignored death threats, or didn't pass them along to the police when they came to their attention, they'd become responsible.
"The costs of bad actors are your costs."
Cloudflare is sick and tired of getting attacked by people from the Tor network. I don't blame them for the ban. It's costing them money... and Tor isn't going to raise the funds to pay them for the lost bandwidth and customer revenue.
We used to have a bigger problem with mail spam because server operators constantly would leave anonymous relaying open. How did we stop a great deal of it? By blacklisting the IP until the problem is fixed. It has worked out pretty well.
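The blacklisting mechanism referenced here is a DNSBL (DNS-based blacklist, e.g. Spamhaus's ZEN zone): you reverse the IP's octets, append the blacklist zone, and do an ordinary DNS lookup; an answer means "listed", NXDOMAIN means "clean". A sketch, using Spamhaus's zone as the example:

```python
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the DNSBL lookup name: reverse the octets, append the zone.
    E.g. 1.2.3.4 -> 4.3.2.1.zen.spamhaus.org"""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip, zone="zen.spamhaus.org"):
    """True if the blacklist zone answers for this IP (i.e. it's listed)."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:  # NXDOMAIN -> not listed
        return False
```

This is exactly the shared-fate mechanism debated above: the lookup key is the IP, so it blocks everything behind that address until the operator fixes the relay and requests delisting.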
"And you should realize that in doing it you're knowingly hurting innocent people."
I guess we have to determine which 'innocent' people are more important: The ones getting their websites attacked and hacked anonymously, the people that can't access those websites because they are down/DOS attacked, or the random people that want to use the Tor network.
> If they actively ignored death threats/didn't send them off to the police when it came to their attention, they become responsible.
The Post Office reads your mail?
> We used to have a bigger problem with mail spam because server operators constantly would leave anonymous relaying open. How did we stop a great deal of it? By blacklisting the IP until the problem is fixed. It has worked out pretty well.
Only if you're willing to disregard innocent people.
> I guess we have to determine which 'innocent' people are more important: The ones getting their websites attacked and hacked anonymously, the people that can't access those websites because they are down/DOS attacked, or the random people that want to use the Tor network.
The "random people" who use Tor aren't doing it because it's trendy. They're doing it because it's the only way they can access the internet. Or because not using it would get them stoned to death by religious fundamentalists or imprisoned by an oppressive government.
We're not dealing with 99 ASs blocking another bad actor. We're talking about one service that sits in front of many popular services on the internet deciding to block another for dubious reasons. Cloudflare has near monopoly power here.
I guess I look at it differently - every site using CloudFlare made the decision to delegate their web security to them. I don't see it as "one entity blocking another" but "all of those individual sites blocking a single network".
In that context, it's a lot of votes for Tor to find a solution to this problem.
Really, I'm surprised at CloudFlare's restraint here. A lot of their customers probably couldn't care less about Tor, but they've been putting a lot of effort into trying to avoid blocking Tor users (actually blocking, not inconveniencing) or compromising their anonymity.
That's exactly the problem: customers who don't care.
Based on what I've seen in the thread of the bug report that spawned this debate, most of the browsing activity that these CAPTCHAs get in the way of is read-only. The only ways (nominally) read-only requests can cause harm are DDOS and exploiting vulnerabilities in the server software. Tor doesn't have enough bandwidth to be a big contributor to a DDOS, and sticking CAPTCHAs before some users is at best a probabilistic solution, as it only avoids exploits that:
(a) are untargeted (scanning the whole internet - if a human attacker cared about your site in particular then they could easily switch to one of the following methods to circumvent the CAPTCHA);
(b) use Tor rather than going to more effort to get access to less tainted IPs (VPS, botnets...) - assuming that the attack itself doesn't gather bad reputation (in cases where CloudFlare can detect malicious traffic by inspection, it can do better than IP blocking);
(c) don't use a service that farms CAPTCHA solving out to humans - which increases the attacker's cost but not by much.
Since the harm reduction is so minor, I suspect that for most sites, if the administrators had even a small incentive to support Tor users and the time to think about it, they would not choose CloudFlare's coarse-grained CAPTCHA approach. Rather, they'd make sure to have their own CAPTCHAs before anonymous write actions and before user registration - which they should be doing anyway - and leave read-only access alone. And the benefits of Tor to users in repressive countries should be enough to provide that small incentive, if they cared.
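The policy described in that paragraph can be sketched in a few lines; the helper and its parameters are hypothetical, not any framework's real API. The rule: never gate read-only requests, and demand a CAPTCHA only for anonymous write actions.

```python
# Sketch of "CAPTCHA anonymous writes, leave reads alone".
# All names here are illustrative, not a real framework's API.

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}  # nominally read-only HTTP methods

def needs_captcha(method, authenticated, captcha_solved):
    """Gate anonymous writes behind a CAPTCHA; never gate reads."""
    if method in SAFE_METHODS:
        return False               # read-only: always allowed through
    if authenticated:
        return False               # registration already gated this user
    return not captcha_solved      # anonymous write: demand a CAPTCHA
```

Under this policy a Tor user browsing the site is never challenged; only anonymous comment posts, registrations, and the like are, which is where the site-level CAPTCHA belongs anyway.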
But they don't care. They don't want to change anything (like adding CAPTCHAs) unless there's a problem, and if CloudFlare can reduce that problem without their having to think about it, then that's the path of least resistance and they'll go with it even if there are consequences. I suspect most site owners, if asked about Tor, would say "just block it", which is why CloudFlare has - admirably - gone out of its way to make doing so difficult in its UI. This is (a large portion of) who CloudFlare represents and I agree with you that they're showing restraint.
But here's where I differ: I don't think mass apathy counts as "votes for Tor to find a solution to this problem". While the magnitude of harm is of course completely different, that's like saying that in the case of discrimination against a minority group, since most majority group members just want the issue to go away, they're voting for the minority to "find a solution" - when the only real solution is for the majority to change and stop discriminating. I mean, maybe they will vote that way in actual elections, but apathy votes don't reflect the "wisdom of the crowd" as much as others do; the minority shouldn't just consider themselves overruled and give up.
CloudFlare is already "defying" those votes to some extent, and if there is no good solution that can make both parties happy, I'd say it would be the right thing to do for them to go a little further and open up a little more for Tor users, even if it's not what their customers would decide in a knee-jerk reaction. I hope that this blinded CAPTCHA idea will turn out to be such a solution, though. It's not ideal, since having any kind of CAPTCHA blocks potentially-legitimate automated traffic, but I think it's a good enough compromise for now - sites that care could still turn it off entirely. I hope the Tor developers won't let the perfect be the enemy of the good.
Oh, and - I think there is one act for which CloudFlare deserves some blame: signing up those customers in the first place with the promise to provide "security" at the CDN level. It's not that what they do is useless, but given the fundamental limits of (all) "web application firewalls" that only see the application from the outside and thus can only guess heuristically what is an attack, less technical customers are probably misled somewhat about the necessity and benefit of them. Most people, including less technical site administrators or owners, don't even understand the difference between DDOS and "real" attacks, let alone what WAFs do or, say, what concerns apply to Tor in particular. I'm not sure what CF could do to fix this short of not advertising security at all, and that would undersell what they do provide. Even so...