
Maybe I'm a cranky, old-school network operator, but this is a very cut-and-dried problem. Tor runs a network that is rife with abuse and fraud. Tor needs to clean up and police its network. If it doesn't, it will be put on blacklists and customers will take active measures to block traffic from it.

This is no different than a network or AS that is spammer-friendly, botnet-friendly, carder-friendly, etc. All of those networks eventually end up on blacklists or Spamhaus lists and their efficacy goes down. Eventually, the network dies out and the criminals move somewhere else. Yes, it's a game of whack-a-mole, but it's proven to work well.

I know Tor doesn't want to be in the network regulation business, but they need to be if they want their product to thrive. Otherwise, goodbye, Tor.




This is exactly what is wrong with this form of idealism. People create these things which remove accountability/reputation, it works great for a while and is lots of fun (just like a mask party), and then the leeches move in and use it for spam/trolling/illegal stuff. It's usually the leeches who are the real long-term beneficiaries of these kinds of networks. However, the idealistic people who originally created it don't want to admit that their experiment failed and they actually created something which is now serving the interests of something not so idealistic and perhaps even quite sinister.

Bitcoin has the same issue: there are lots of legit uses of it, but to make it a good, widely used currency, a reputation system is going to emerge, and from there you've already erased half the benefits of using Bitcoin. However, in the meantime Bitcoin is used by a bunch of people purely interested in speculation or as a way of avoiding taxes/money-laundering laws/etc. There are people, just like with Tor, using it for legit reasons, but my bet is that it's mostly used for reasons nobody in the Bitcoin community likes to admit.


Exactly. But I strongly disagree.

You don't blame mask manufacturers for malicious people wearing masks.

It's like city guards banning everyone with a mask from entering and issuing IDs to them. Then they use those IDs to determine what those people should and shouldn't see in the city, tracking them everywhere, "across cities", etc.

In the interest of privacy, it is best to instead use the dynamic nature and types of the requests to figure out what the behavior is like.

Going with the mask analogy, they should instead check if a person is brute forcing lock combinations. Maybe even condition on the fact that they're wearing a mask.


> Going with the mask analogy, they should instead check if a person is brute forcing lock combinations. Maybe even condition on the fact that they're wearing a mask.

That's what they're doing. They are seeing brute forcing come from a bunch of IPs and they're blocking those. What do you expect them to block on? The people using the anonymous service voluntarily identifying themselves on every request (cookies, browser fingerprinting, or pretty much anything else coming from the client side that can be faked)?


Instead of an IP-based reputation system that persists for quite a while, they could have a time limit per IP for specific kinds of requests.

Like, if you fail to log in to a site, a 2^(attempts) timeout from that IP for that page only (see the sketch below). It can also integrate a combination of request headers. Sure, it's still IP-based reputation, but it doesn't persist and is much less intrusive.

Most sites require specific cookies on consecutive requests, and such blocking should be on the app side only.

There are solutions in each case and all of them are harder than IP-based blocking. However, in the interest of privacy, they should adopt these more nuanced solutions.
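
Something like this, as a minimal sketch (the names and thresholds are made up, not anything CloudFlare actually runs):

    import time

    # failed-login throttle keyed by (ip, page); nothing here persists
    _failures = {}  # (ip, page) -> (fail_count, blocked_until)

    def allow_attempt(ip, page, now=None):
        now = now or time.time()
        _count, blocked_until = _failures.get((ip, page), (0, 0.0))
        return now >= blocked_until

    def record_failure(ip, page, now=None):
        now = now or time.time()
        count, _ = _failures.get((ip, page), (0, 0.0))
        count += 1
        # 2^(attempts) timeout: 2, 4, 8... seconds, for this page only
        _failures[(ip, page)] = (count, now + 2 ** count)

    def record_success(ip, page):
        # reputation expires immediately on good behavior
        _failures.pop((ip, page), None)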


So a single IP address can DDoS each page of a website for a little while before CloudFlare blocks them? That makes the whole protection pretty useless. I guess it would stop someone from brute-forcing password attempts, but that's not the only thing they're trying to protect against here.


Not necessarily. These work in combination.

If they're requesting a specific type of content like images, or making some weird request that queries the DB, these would be grouped together.

What I'm saying is gather more information for each request and use it more wisely to expire IP reputation quicker - within minutes as opposed to months.

The DDoS problem is actually easier than the rest because you need a large volume of requests to do anything. Usually these requests are very similar, come in rapid succession and come from the same bunch of IPs.
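
As a hedged sketch of that grouping, key a short sliding window on a request "signature" instead of the bare IP (the fields and limits here are illustrative assumptions):

    import time
    from collections import defaultdict, deque

    WINDOW = 60.0  # seconds of history to keep (assumed)
    LIMIT = 300    # similar requests allowed per window (assumed)

    _windows = defaultdict(deque)  # signature -> recent timestamps

    def is_flood(ip, method, path, content_type, now=None):
        now = now or time.time()
        # very similar requests from the same IP group together
        q = _windows[(ip, method, path, content_type)]
        while q and now - q[0] > WINDOW:
            q.popleft()  # reputation expires within minutes, by itself
        q.append(now)
        return len(q) > LIMIT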

Edit:

Going with the mask analogy again, it's like you see 1000 masked people rush into a bar and block the entrance with their bodies.

Is the solution really to ban wearing masks everywhere?


A single IP can't "DDoS" anything.


Ha! Seriously though, if some set of IPs is DoSing, then they have to take action against at least some of the IPs in the set.


> It's like city guards banning everyone with a mask from entering and issuing IDs to them.

The flaw in this analogy is that in this case the mask makes every person completely indistinguishable from every other person wearing the mask. In this case, one ID is issued to every person wearing the mask.

When 90%+ of the people with this ID are criminals and vandals, blocking anyone with this ID is a pretty obvious and effective way to prevent crime and vandalism. It seems pretty reasonable to me when presented this way.


That's a crazy thing to do. Why would you block everyone? This would completely erode privacy online.

As I said elsewhere, if you see 1000 masked people rush into a bar and block the entrance with their bodies, is the solution to block all masked people from going to all establishments?

Clearly, if this happened IRL, people would just put a limit on the number of masked people entering that bar until there wasn't a group of 1000 of them trying to get in.


>Clearly, if this happened IRL, people would just put a limit on the number of masked people entering that bar until there wasn't a group of 1000 of them trying to get in.

IRL, the bar would call the police and anti-riot forces would move in with crowd control equipment. Tear gas would be launched at the masked people and a lot of the masked people would be hauled to the police station, where their identity would be recorded and a background check would be performed. It's not pretty but it's reasonable.

I have used Tor out of a legitimate wish for privacy. I have cursed Cloudflare and Google in passing to myself for their captchas presented to me when I've browsed through Tor.

Captchas in general are a royal pain in the butt, but they are among the most effective at protecting sites from abuse, so even though they annoy me at times, I hold the view that they are a net positive.

If you want to help preserve anonymity, I think the best course of action is not to focus on Cloudflare, but instead to help maintain one or more communities on onion sites. The change must come from within. Once it has been shown that an onion site is able to provide useful services over time with privacy but with the same level of protection from abuse and bad people, then, in my view, it is time to reach out and educate the wider 'net on how this can be done.


Except you can't tell the individuals apart, and so you can't limit it to 1,000 individuals. You can't just let some in if you can't tell them apart. There is no door that works that way in this case.

It's best to imagine it's a walk-up bar rather than one with a door.


> As I said elsewhere, if you see 1000 masked people rush into a bar and block the entrance with their bodies, is the solution to block all masked people from going to all establishments?

Try wearing a mask into a petrol station or convenience store. They've already performed the assessment of 'potential sale vs getting robbed', and decided the risk factor they'd like to accept.


That's because IRL you're already pseudo anonymous.

Now imagine a convenience store that demands that you tell them where you've been this month and doesn't let you in otherwise.


It's a problem inherent in any system offering pure anonymity in an unrestricted way.

It's really a shame. An opinionless platform offering anonymity cannot flourish in an opinionated world. At some point if these things want to succeed, they need to play by the rules of the world that they exist in. But I don't think anyone's figured out a common set of systemic restrictions that Tor, 4chan, etc. can implement that avoid taking away their primary affordance: freedom.


The main point of Tor is that nobody knows where the traffic comes from. Realize you're asking them to break their own service.

Your premise seems to be that you can't be bothered to protect your networks so you want to put that responsibility on someone else. It's called intermediary liability and it's terrible because the intermediary has all the wrong incentives.

You demand that the intermediary eliminate malicious traffic but they suffer much less than individual users if they also eliminate non-malicious traffic, so they set up a system with a high rate of false positives and harm many honest people. YouTube does this with Content ID. Spam registries do this with innocent small mail servers. CloudFlare does this with Tor.

What you're doing is called externalizing costs. It's generally recognized as antisocial behavior. So if you're going to claim benefits to yourself at the expense of other people, at least recognize that you're doing it.


> What you're doing is called externalizing costs. It's generally recognized as antisocial behavior. So if you're going to claim benefits to yourself at the expense of other people, at least recognize that you're doing it.

Remember his preface - cranky old-school network operator.

Let's say you have a hundred networks all connected together into some sort of "inter-net" system. If one AS starts sending out malicious traffic, what makes more sense:

1. That AS starts policing their users.

2. The other 99 ASs have to deal with the malicious traffic.

You're expecting the other 99 groups that are being targeted by the one group to bear the cost of dealing with that group's malicious users. Who exactly is externalizing costs here?

In a system without any real rules or authority, I think "those adversely affected choosing to block the bad actor" is a fairly democratic solution to the problem. You either play nice or you get voted off of the island.


> In a system without any real rules or authority, I think "those adversely affected choosing to block the bad actor" is a fairly democratic solution to the problem.

That's the part which is adverse to the rest of your argument. You're not voting off the bad actor, you're voting off everyone in the bad actor's country.

We know how to deal with this problem. You go to a website and you sign up for an account; it can be pseudonymous, but to get it you have to put up some collateral: money/Bitcoin, proof of work, vouching by an existing member, whatever you like. Then if your account misbehaves you forfeit your collateral.
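
For the proof-of-work flavor of collateral, a hashcash-style sketch (the difficulty value is an arbitrary illustration):

    import hashlib, os

    DIFFICULTY = 20  # leading zero bits the client must find (assumed)

    def new_challenge():
        return os.urandom(16).hex()

    def check(challenge, nonce):
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

    def solve(challenge):
        # the client burns CPU once at signup; the server check is cheap
        nonce = 0
        while not check(challenge, nonce):
            nonce += 1
        return nonce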

But this isn't a CloudFlare-level problem. They're trying to solve it at the wrong layer of abstraction. Identity isn't a global invariant, it's a relationship between individuals. Endpoints identify each other with persistent pseudonyms. The middle of the network should have nothing to do with it.


> That's the part which is adverse to the rest of your argument. You're not voting off the bad actor, you're voting off everyone in the bad actor's country.

The bad actor is the organization or person responsible for administering the network where the abuse is originating.

When I'm being attacked by someone's VPS, I report them to their host. After the fourth time I report them only to have their host pass along my report but take no further action, the host becomes, maybe not a bad actor but, a "bad citizen".

My choices are to allow them to externalize the costs of their lack of enforcement (or decision not to enforce) and attempt to find a way to block the specific actor under their purview, or just to block that host and accept whatever collateral damage occurs. (And yes, sometimes that "bad citizen" may end up being most of a country - it doesn't change the equation for me.)

It's the only method I have to exert any pressure on the host to act responsibly. If enough people agree with me, then it quickly becomes "their problem" rather than "my problem" as they get blackholed from everywhere on the internet.


> The bad actor is the organization or person responsible for administering the network where the abuse is originating.

The bad actor is the individual who acts bad. The Post Office is not a bad actor for delivering letters.

> allow them to externalize the costs of their lack of enforcement

Tor is not an enforcement agency. Neither is CloudFlare. The costs of bad actors are your costs. You have the technical ability to retaliate against common carriers for not allowing you to push those costs onto them, but that doesn't make you right to do it in any sense other than might makes right. And you should realize that in doing it you're knowingly hurting innocent people.


Of course they're not an enforcement agency, that's kind of the point - there is no enforcement agency. We all have to contribute by being good citizens.

I subscribe to the idea that it's an ISP's responsibility to police its own network for abuse and my responsibility to police mine.

You apparently subscribe to the idea that it's my responsibility to just accept whatever shit you fling at me and it's my problem to deal with and yet somehow I have a responsibility or moral obligation to still provide services to you and your customers.

I suspect I'm never going to agree with you.


There are two ways to deal with badness. The first is to try to identify bad people and then stop them from doing anything whatsoever. The second is to identify bad acts and stop anyone from doing bad acts.

The second one is the only one that works without massive collateral damage.

Identifying bad people is only an abstraction over identifying bad acts and it leaks like a sieve. A reformed thief is entirely capable of buying an apple without incident, because not all acts by bad people are bad acts. But a thief has no reputation as a thief until after they steal for the first time. The only way to stop bad things is to detect bad things.

But the true failure of reputation systems is that as soon as multiple people share an identity they disintegrate entirely. Innocent people get blamed for malicious acts of other people through no fault of their own and with no ability to prevent it. The only way reputation systems can work at all is if people can prevent other people from using their identities.

Which means that IPv4 addresses can't be identities, because we don't have enough of them for them not to be shared.

And forcing common carriers to stop doing business with anyone who has ever done anything bad has another problem. It imposes the death penalty for jaywalking. You send spam once -- or get falsely accused of sending spam -- and you're blacklisted. It puts too many innocent people into the same bucket as guilty people and then the innocent people fight you alongside the guilty. It creates the market for these VPN services because too many servers are wrongly using IP addresses as identities. Then the bad people also use the VPN services and bypass your "security" because it was never security to begin with, so you block the VPN services which destroys those and they're replaced with others you haven't blocked. Meanwhile the real bad people also use botnets which are unaffected, so you aren't actually blocking the bad people, you're only blocking the one IP address that they share with the good guys.

You don't want this fight. Most of the people you're fighting are innocent. People need to learn to detect bad acts, not "bad IP addresses."


"The bad actor is the individual who acts bad. The Post Office is not a bad actor for delivering letters."

If they actively ignored death threats / didn't send them off to the police when they came to their attention, they would become responsible.

"The costs of bad actors are your costs."

Cloudflare is sick and tired of getting attacked by people from the TOR network. I don't blame them for the ban. It's costing them money...and TOR isn't going to raise the funds to pay them for the lost bandwidth and customer revenue.

We used to have a bigger problem with mail spam because server operators constantly would leave anonymous relaying open. How did we stop a great deal of it? By blacklisting the IP until the problem is fixed. It has worked out pretty well.

"And you should realize that in doing it you're knowingly hurting innocent people."

I guess we have to determine which 'innocent' people are more important: The ones getting their websites attacked and hacked anonymously, the people that can't access those websites because they are down/DOS attacked, or the random people that want to use the TOR network.


> If they actively ignored death threats/didn't send them off to the police when it came to their attention, they become responsible.

The Post Office reads your mail?

> We used to have a bigger problem with mail spam because server operators constantly would leave anonymous relaying open. How did we stop a great deal of it? By blacklisting the IP until the problem is fixed. It has worked out pretty well.

Only if you're willing to disregard innocent people.

> I guess we have to determine which 'innocent' people are more important: The ones getting their websites attacked and hacked anonymously, the people that can't access those websites because they are down/DOS attacked, or the random people that want to use the TOR network.

The "random people" who use Tor aren't doing it because it's trendy. They're doing it because it's the only way they can access the internet. Or because not using it would get them stoned to death by religious fundamentalists or imprisoned by an oppressive government.


We're not dealing with 99 ASs blocking another bad actor. We're talking about one service that sits in front of many popular services on the internet deciding to block another for dubious reasons. Cloudflare has near monopoly power here.


I guess I look at it differently - every site using CloudFlare made the decision to delegate their web security to them. I don't see it as "one entity blocking another" but "all of those individual sites blocking a single network".

In that context, it's a lot of votes for Tor to find a solution to this problem.

Really, I'm surprised at CloudFlare's restraint here. A lot of their customers probably couldn't care less about Tor, but they've been putting a lot of effort into trying to avoid blocking Tor users (actually blocking, not inconveniencing) or compromising their anonymity.


Developers care and they are very important to Cloudflare. It is in their best interest to actually do it properly.


That's exactly the problem: customers who don't care.

Based on what I've seen in the thread of the bug report that spawned this debate, most of the browsing activity that these CAPTCHAs get in the way of is read-only. The only ways (nominally) read-only requests can cause harm are DDOS and exploiting vulnerabilities in the server software. Tor doesn't have enough bandwidth to be a big contributor to a DDOS, and sticking CAPTCHAs before some users is at best a probabilistic solution, as it only avoids exploits that:

(a) are untargeted (scanning the whole internet - if a human attacker cared about your site in particular then they could easily switch to one of the following methods to circumvent the CAPTCHA);

(b) use Tor rather than going to more effort to get access to less tainted IPs (VPS, botnets...) - assuming that the attack itself doesn't gather bad reputation (in cases where CloudFlare can detect malicious traffic by inspection, it can do better than IP blocking);

(c) don't use a service that farms CAPTCHA solving out to humans - which increases the attacker's cost but not by much.

Since the harm reduction is so minor, I suspect that for most sites, if the administrators had even a small incentive to support Tor users and the time to think about it, they would not choose CloudFlare's coarse-grained CAPTCHA approach. Rather, they'd make sure to have their own CAPTCHAs before anonymous write actions and before user registration - which they should be doing anyway - and leave read-only access alone. And the benefits of Tor to users in repressive countries should be enough to provide that small incentive, if they cared.

But they don't care. They don't want to change anything (like adding CAPTCHAs) unless there's a problem, and if CloudFlare can reduce that problem without their having to think about it, then that's the path of least resistance and they'll go with it even if there are consequences. I suspect most site owners, if asked about Tor, would say "just block it", which is why CloudFlare has - admirably - gone out of its way to make doing so difficult in its UI. This is (a large portion of) who CloudFlare represents and I agree with you that they're showing restraint.

But here's where I differ: I don't think mass apathy counts as "votes for Tor to find a solution to this problem". While the magnitude of harm is of course completely different, that's like saying that in the case of discrimination against a minority group, since most majority group members just want the issue to go away, they're voting for the minority to "find a solution" - when the only real solution is for the majority to change and stop discriminating. I mean, maybe they will vote that way in actual elections, but apathy votes don't reflect the "wisdom of the crowd" as much as others do; the minority shouldn't just consider themselves overruled and give up.

CloudFlare is already "defying" those votes to some extent, and if there is no good solution that can make both parties happy, I'd say it would be the right thing to do for them to go a little further and open up a little more for Tor users, even if it's not what their customers would decide in a knee-jerk reaction. I hope that this blinded CAPTCHA idea will turn out to be such a solution, though. It's not ideal, since having any kind of CAPTCHA blocks potentially-legitimate automated traffic, but I think it's a good enough compromise for now - sites that care could still turn it off entirely. I hope the Tor developers won't let the perfect be the enemy of the good.
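
To make the blinded-CAPTCHA idea concrete, here's a toy RSA blind-signature sketch (textbook math only, not production crypto, and not CloudFlare's actual design): the issuer signs a token it can never link back to the request that earned it.

    import hashlib, secrets
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    nums = key.private_numbers()
    n, e, d = nums.public_numbers.n, nums.public_numbers.e, nums.d

    # client: blind a random token before handing it to the issuer
    token = secrets.token_bytes(32)
    m = int.from_bytes(hashlib.sha256(token).digest(), "big")
    r = secrets.randbelow(n - 2) + 2
    blinded = (m * pow(r, e, n)) % n

    # issuer: signs after the CAPTCHA is solved, never sees m itself
    blind_sig = pow(blinded, d, n)

    # client: unblind; (token, sig) now proves a CAPTCHA was solved
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == m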

Oh, and - I think there is one act for which CloudFlare deserves some blame: signing up those customers in the first place with the promise to provide "security" at the CDN level. It's not that what they do is useless, but given the fundamental limits of (all) "web application firewalls" that only see the application from the outside and thus can only guess heuristically what is an attack, less technical customers are probably misled somewhat about the necessity and benefit of them. Most people, including less technical site administrators or owners, don't even understand the difference between DDOS and "real" attacks, let alone what WAFs do or, say, what concerns apply to Tor in particular. I'm not sure what CF could do to fix this short of not advertising security at all, and that would undersell what they do provide. Even so...


So Tor should be a democracy that lets people vote spammers off the island? But again, the promise of Tor is anonymity, and this breaks anonymity.


Google does this with Linode servers. I route my HTTP traffic through a proxy on a Linode server. Google blocks me all the time for no reason other than that some other IPs in my range are doing bad things. I've tried to contact Google about this but they couldn't care less about a handful of users.


I've experienced the same issue from multiple service providers, including but not limited to Google, Akamai, Cloudflare, and more.

My Linode IP address was assigned to me years ago. I do not use it maliciously, do not share it with other people, and have never used it for Tor. Yet I find myself regularly blocked for no logical reason when I proxy my web requests through it.


Complain to a Google dev here on HN, I had the same issue with my OVH proxy.


The problem with this kind of thinking is that yes, while content ID does catch a lot of false positives, and yes, it results in creators being dinged for no reason, the reason it was created in the first place was that too many people were abusing YouTube to distribute pirated material. It's the same cause/effect here with CloudFlare, too many people are abusing the anonymity Tor provides to do shitty things to their network.

It's not as if CF set out to screw over Tor users; by the nature of Tor they'd have no way to do it with any kind of ease. Tor traffic just happens to have a whole lot of bad actors using it and that causes the reputation of those IPs to go down.


Content ID was created because Google wanted Hollywood content and Hollywood wanted to externalize costs.

Copyright in the context of the internet has costs that can only be paid by innocent people. It will either have many false positives or many false negatives. So the question is whether those costs should be paid by the innocent people who benefit from the system that created those costs or the innocent people who don't.


"...you can't be bothered to protect your network..."

Huh? Isn't this exactly what CF is attempting to do? And Tor traffic tends to be abusive so the good is caught up with the bad, but it's all in the name of protection.


> Isn't this exactly what CF is attempting to do?

No. The real bad people have botnets and can cycle through a million random IP addresses every time you block one. A real solution needs to be secure against someone you don't yet know is bad.

This is going to get a lot worse as we've run out of IPv4 addresses. ISPs are going to start to NAT many users behind one IP address. Some already have. Then you have the same issue as Tor where one user is malicious but shares the same IP address as a thousand innocent users. IP blocking isn't going to work anymore so you might as well find an alternative solution now.


Tor exit nodes could run an IDENT-like service that offers only a hash based on the source of the traffic... it would still be pseudo-anonymous, but easier to filter by the target of said traffic.
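
A sketch of what that could look like (the keyed hash and epoch rotation are assumptions; no such Tor service exists today):

    import hashlib, hmac, os, time

    SECRET = os.urandom(32)  # held by the exit node alone
    EPOCH = 3600             # tokens change every hour (assumed)

    def ident_token(circuit_source_id, target_host):
        # same client hitting the same target -> same token this epoch,
        # but the token reveals nothing about who the client is
        epoch = int(time.time() // EPOCH)
        msg = f"{circuit_source_id}|{target_host}|{epoch}".encode()
        return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]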


At a point in the recent past, around 90% of all E-mail traffic was spam. Now it's down to around 50% or so [1]. What happened? It could have been due to thousands of ISPs simultaneously cleaning up and policing their networks. But it also could be due to blocking tools getting better. Maybe the spammers moved away from E-mail to more profitable spam channels. Or is there just more legit traffic now, and the percentage is down because the denominator is larger?

1. http://www.bbc.com/news/technology-33564016


It's definitely because of the major botnets being taken down. I remember the McColo bust specifically; spam volumes never recovered to their previous levels after that bust. (PDF) https://www.rsaconference.com/writable/presentations/file_up...


Email didn't outrun the bear. It outran the other hiker.

The honeypot space for spam is now systems other than email. Social media. Blogspam. Web advertising. Dating services. SMS.

(I'm not sure precisely which, but pick from among there and you'll likely turn up the issues.)

Email is fairly well defended at this point, though not without considerable collateral damage (small / self-hosted email is quite difficult, most of us rely on a small number of high-volume providers who may present a considerable privacy risk).


> I know Tor doesn't want to be in the network regulation business, but ....

That is exactly why there is a Tor. Tor is for enabling anonymous communication. Deciding who can do what or why would limit use, and that would limit its ability to provide anonymous communication.


Definitely, but they shouldn't complain when the public Internet (Cloudflare) blocks them or views their traffic differently. Anonymity comes with costs, and this is one of them. I think TOR is an important project, but this blog post by them is completely ridiculous and ignores reality.

Why should TOR get a pass on this when network operators need to protect their network from abuse? The choices are either 1) let the abuse continue or 2) try to block the specific attack traffic in question by heuristics which is often difficult or impossible or 3) block abusive IP addresses.


Complain is exactly what they should do. People should care about privacy even when it's other people's privacy.


No one's privacy is being violated here.

The right to privacy is not the same thing as the right to access. If someone doesn't want to allow you anonymous access it is their right to block you. Anyone using CF and not whitelisting Tor is effectively saying: if you want to visit my site, you need to verify that you aren't a bad actor. If you don't want to do that for privacy reasons, then you can't visit.


Haha if CF get this kind of response to their gentle observations I'd hate to see what a frank honest complaint would get...


It's not a binary thing. You can regulate out fraud, abuse, and DDoS attacks without harming the legitimate use cases.

It's like selling alcohol (cigarettes, porn, gambling), but not to minors.

You can say "it's OK for this crowd, not OK for that crowd". Otherwise you'd probably claim that said regulation would go against the very thing that the merchant is trying to do: make as much money as possible.


"It's like selling alcohol (cigarettes, porn, gambling), but not to minors."

It's not like that at all, because selling to everyone other than minors requires identity. Sure, you just need to verify the subject is over 18, i.e. you don't need to know name, birth date or address. But you DO need to know identity to issue the credential that provisions the token proving an age > 18.

And that breaks Tor's raison d'être.


It's very easy to say that one can filter out certain things, but in practice I think it's far more difficult. Especially in Tor's case where the goal is to provide truly anonymous access.

Do you have a proposal for filtering out that kind of traffic while allowing other traffic? I can't think of a way off the top of my head and I'm sure the people behind Tor would at least consider a logical solution.

Edit: This sounds kind of confrontational, but I don't mean it like that. I honestly would like to hear of a potential solution to this because I really can't think of one.


>You can regulate out fraud, abuse, and DDoS attacks without harming the legitimate use cases.

How? If you think this was possible without completely compromising the Tor protocol, don't you think it would have been done already?


> It's not a binary thing.

Well, the thing is, how do you regulate totally anonymous users? How would you know what the binary 0s and 1s are doing in Tor?


> Tor needs to clean up and police its network

I think you don't understand what Tor is or how it works. Tor is a way to anonymize its users. You have no way to analyze a packet until it reaches an exit node, you have no way to analyze that packet if it's sent over https, and you have no way to block an IP from that exit node because it comes from another node where plenty of other IPs are coming from. If you start blocking this node, then the spammer can choose a different path and come from another node, or just change his exit node.

tl;dr: what you are saying goes against Tor's principles.


I think it's clear he understands how it works.

> what you are saying goes against Tor's principles

That's why it will probably never be cleaned up. That's also why more and more people will probably block access from Tor. CloudFlare says they get a 95% attack rate from it. A blog post the other day said FotoForensics gets about 91% attacks from Tor.

No one is going to put up with 91% attacks for long. And if that means Tor becomes its own walled garden that doesn't 'interact' with the public internet, so be it.


> I think it's clear he understands how it works.

Agree to disagree :)

> if that means Tor becomes its own walled garden that doesn't 'interact' with the public internet, so be it

Most websites are not using Cloudflare and have no idea how to block a range of IPs. So no, Tor is not going to become its own walled garden.


>> I think it's clear he understands how it works.

> Agree to disagree :)

I know very well how it works thank you very much. It doesn't mean I have to like it.

Since you're apt to throw around unfounded accusations, I'll join the party. I have more experience than you do at running large networks, dealing with fraud, dealing with abuse, dealing with malware, and dealing with law enforcement/government agents. The Internet has enough problems with the well-run networks that actually care. We couldn't care less about Tor users that might be blocked.


I work for a company that makes appliances to secure companies' networks.

I promise you that CloudFlare isn't the only group that offers to block TOR as an option (or just has it blocked by default).


Filtering of outbound traffic from Tor exit nodes can only be done by Tor exit node operators. The Tor project can kick out some defaults (like, say, blocking SMTP) but ultimately it wouldn't matter...the bad actors will just find ways around the filters, or devise new ways to abuse the system. Static filters will never work. It's like hoping everyone will do source address verification on egress.

Cloudflare just have to get smarter here. It's their business to do dynamic filtering and balance this stuff, not Tor's.


"Cloudflare just have to get smarter here. It's their business to do dynamic filtering and balance this stuff, not Tors."

It's not their business given Cloudflare doesn't make shit off Tor: it loses bandwidth and time fighting malice instead. It's actually their business to block it. Tor needs a reputation system or some other method to deal with this stuff.


The main point I took away from the article: from one exit node, many users originate. Some users are spammers. They contaminate the exit node's IP. CF blocks an IP for spam, but does not remove the block after some time (when the spammer has moved on).


That's definitely a legitimate point, but not the main point IMHO. The main point is that Tor makes zero effort to clean up the problem and uses the legitimate Tor users as helpless, scapegoated victims and a bullying tactic. "But think of the oppressed users!" Sorry, not buying it.

The blacklisted IP lifetime problem is real though. It's a problem I've had to raise several times with our product and network teams. People would see an abusive IP and just ban it...without a TTL or lifetime. This really upset me, as they seemed to think that was OK not just for the time being, but that it was good enough. When I describe IPv6 to them, their faces just melt as it sinks in that they can't just keep banning IPs and must do something higher up the stack to detect and block fraud.
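
A ban with a TTL is barely more code than a ban without one; a minimal sketch (the TTL value is arbitrary):

    import time

    BAN_TTL = 24 * 3600  # seconds before a ban lapses (assumed)

    _bans = {}  # ip -> expiry timestamp

    def ban(ip, ttl=BAN_TTL):
        _bans[ip] = time.time() + ttl

    def is_banned(ip):
        expiry = _bans.get(ip)
        if expiry is None:
            return False
        if time.time() >= expiry:
            del _bans[ip]  # the block lifts itself
            return False
        return True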


Interesting point, what would "Tor cleans up" mean?


What will happen when IPv6 takes off and anyone can just go get a unique, random, throwaway IP address each day? Do "IP reputation" systems work in such a world?


I guess reputation will be based on /64, that's the most practical solution.
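
Keying reputation on the /64 instead of the full address is a one-liner with the standard library (a sketch, assuming the /64 is the typical end-site allocation):

    import ipaddress

    def reputation_key(addr):
        ip = ipaddress.ip_address(addr)
        if ip.version == 6:
            # score the whole /64, not the throwaway interface ID
            # e.g. "2001:db8::1:2:3:4" -> "2001:db8::/64"
            return str(ipaddress.ip_network(f"{ip}/64", strict=False))
        return str(ip)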


Maybe coalesce neighboring bad IPs and subnets into larger bad subnets.
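
The standard library can do that coalescing directly; a sketch with a made-up blocklist:

    import ipaddress

    bad = [ipaddress.ip_network(n) for n in
           ("192.0.2.0/26", "192.0.2.64/26", "192.0.2.128/25")]

    # adjacent bad subnets merge into one larger bad subnet
    merged = list(ipaddress.collapse_addresses(bad))
    print(merged)  # [IPv4Network('192.0.2.0/24')]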


The somewhat-faster way to reduce this annoyance (I've been hit by this) is to 'block'/captcha the offending IPs for a time (depending on whatever metrics CloudFlare thinks are best), then unblock them.

At least this would reduce legitimate users' annoyance, instead of them being blocked indefinitely.

My own experience: I tried accessing wikialpha from my work LAN and got 'blocked' by an endless captcha (for a few months already), while opening the said wiki from my home network works perfectly fine...

Well, at least now I know why. It still does NOT make it OK from the end-user's perspective.


Given how much of TOR's traffic is malicious (94%), what you propose would mean all TOR traffic is always considered malicious.


Care to share a source for "TOR traffic is malicious (94%)"?

What does malicious percentage of traffic in this case mean? Malicious sessions? IPs used? Packets? Users?


The number is in the CloudFlare blog post that started this.

> Based on data across the CloudFlare network, 94% of requests that we see across the Tor network are per se malicious. That doesn’t mean they are visiting controversial content, but instead that they are automated requests designed to harm our customers.


Actually, I think the real problem is the idea that networks are responsible for policing their users, rather than the idea that servers should be responsible for policing their clients. The former is what people want (because it's easy: blame an IP, ban it, be done), but the latter is the reality.

CloudFlare's CAPTCHAs are an attempt to deal with that reality, but they're heavy-handed. Worse, they're at the wrong level: the protected site may have already verified that the user is legitimate, but CloudFlare imposes its block when the user's source IP changes again.

CAPTCHAs belong at the application layer, not the transport layer.


You've made one of the most compelling points I've seen in this discussion so far. Really - why mix an application security mechanism into the transport layer? DDoS attacks will happen whether Tor is blocked or not. Add to this some other interesting points I've read here regarding: IPv4 exhaustion, NAT and IPv6 growth. All signs seem to point to the need for an application level solution.


>they need to be if they want their product to thrive. Otherwise, good bye Tor.

The ironic thing is actually that by applying any kind of "network regulation" the Tor project would abandon its own primary purpose. The only way it can continue to exist is actually if it doesn't practice any kind of censorship of its users.


This is a very simplistic, binary view of the world. Just because the Internet's view of a product differs from the author's, it doesn't make the Internet wrong. It means the author probably needs to step back and re-evaluate things.

By your logic, gun owners would be able to shoot whatever they want. The primary purpose of a gun is to shoot things. To paraphrase: "To regulate what you can and can't shoot would go against the primary purpose of a gun and the gun manufacturers can't exist that way."


Something working or not is a binary.


As is Cloudflare. I have lost count of the sites pirating our software that are using Cloudflare.

Cloudflare know of the problem and refuse to do anything about it.


Thankfully. They're already the "Wifi Captive Portal" of the internet, be glad that they're not also the police of the internet.


I hate to admit it, but I think you and Tor both have fair points.

The facilitator of much abuse isn't anonymity per se, but impunity. The ability to act without consequence.

Finding a way to imbue reputation across an anonymised connection seems to be one way to operate. Not an easy problem. There's some work toward solutions, though none are yet widespread.


> I know Tor doesn't want to be in the network regulation business, but they need to be if they want their product to thrive. Otherwise, good bye Tor.

I'm not sure if you know what you're talking about. From your comment, it's crystal-clear that you don't understand what their service is.

Plus, calling a free service based largely on volunteers, universities, and non-profits a "product" is derogatory.



