OP is wrong. Weaver is (as usual) jumping to conclusions and sensationalizing (not that he did any of the work to turn up these search warrants or is deeply familiar with the subject matter... He predicted all the blackmarkets would be dead by now; how's that going?).
What we know, after Moustache, DeepDotWeb, and I have gone through the warrants & complaints, is that a handful (~78) of IPs were deanonymized as late as July 2014 after accessing the Silk Road 2 vendor .onion. This is almost certainly due to the UC account 'Cirrus' in some form, who was perfectly placed to de-anonymize the SR2 servers and insert a payload into the landing page. A former SR2 employee says they were even using crummy commercial software like 'DeskPro' on the SR2 servers.
In other words, right now the evidence is consistent with a Freedom Hosting redux scenario: the FBI obtained another Tor Browser (or other browser) exploit and had it phone home. It could also be the big mysterious attack, but why would they do that when they had an insider who was involved in hiring additional employees? And given that the SR2 vendor onion should have been seeing a lot of traffic from vendors, who almost all preferred to use it due to better uptime, a mere 78 IPs sounds low.
The conclusion is being jumped to because the timeline of the attack described in the July Tor blog post matches perfectly with the timeline of the investigation described in the SR2 court docs.
He's saying that the warrants and filings show the FBI had an insider positioned to subvert SR2's servers. If you have that vantage point, a worldwide traffic confirmation attack on Tor is an oddly elaborate way to scoop up SR2 vendors. Why not just use the SR2 servers to trick the vendors?
Observe that the relay-early traffic confirmation attack would be a technical effort unprecedented for the DoJ, whereas implanting a backdoor into a web app is already par for their course.
The Reddit threads 'gwern links to talk more about the implications of the timing. At first glance, it's an awfully big coincidence to write off, but there are mitigating factors.
> Why not just use the SR2 servers to trick the vendors?
Thank you.
We must assume that the FBI is a rational and intelligent adversary (even if we have a difference of opinion on specific public decisions or press releases that impact our perception of their intelligence).
They're not going to expend the resources (or, as The Grugq might say, burn a valuable capability) to compromise a server when they already had a man on the inside (Cirrus).
I would be willing to accept that it's possible they were on a dragnet fishing expedition around the same time to see what they could sweep up, and they just happened to use this information to confirm the efficacy of their Tor attack. This would also explain Doxbin going down, but so would Doxbin and another target of theirs being on the same bare metal server.
> At first glance, it's an awfully big coincidence to write off, but there are mitigating factors.
Agreed. Given what is already public knowledge, it's far from the most likely explanation.
Academics would have no such motivational constraints unless working for the feds. They burn capabilities with wild abandon and still serve their own self-interest.
Given a global deanon attack against Tor, the DoJ absolutely would have rounded up the members of the dozens of child abuse related hidden services. The fact that only a single site was compromised indicates it was most likely an insider attack against the SR2 servers.
What about the "parallel construction" scenario whereby NSA uses their alleged attack against Tor to deanonymize SR2, and the Cirrus UC is simply a construction?
We can't rule it out, no. I'd just note that it doesn't seem necessary at this point. Moustache's doxing of scout/Cirrus shows that she could have been easily busted, and we know she was subverted early on (it was rumored at the time, in fact - 'emailgate'). Once the account was taken over, it was perfectly capable of de-anonymizing a server (de-anonymizing is really easy; I've de-anonymized or helped de-anonymize at least 2 blackmarkets myself, and just today someone found another way SR1 could have been de-anonymized easily, although I can't go into details). And currently the seller/employee busts are consistent with the Freedom Hosting attack, which no one needs to ascribe to parallel construction (since we know that was an obsolete NSA attack which got reused).
If the operators seem reasonable, you contact them privately with the IP and point out the implications. If they don't seem reasonable, you post it on Reddit and destroy their reputation. Unfortunately, quite a few operators are fools, knaves, or both.
Based on the current public info, nothing can be confirmed completely or discounted completely.
Currently there are only two pieces of evidence pointing towards the Sybil attack being used against Tor:
1. Some Tor Hidden Services were uncovered
2. The dates just happen to match the dates a research project was running
But there is a lot more pointing away from it; the main points are:
1. The FBI already had a source in SR 2.0
2. It is easier to uncover hidden services using endpoint hacks rather than global deanon
3. Given the ability to deanon parts of the Tor network, the feds would have targeted the larger markets and got a lot more arrests. It isn't a coincidence they just happen to take down the sites with the worst software stacks, security architectures and operational techniques.
My technical summary is that we now think it is the feds who initiated this 6-month long attack [1], which consisted of them using "a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack." They ran many (115 to be exact) non-exit Tor relays on 50.7.0.0/16 and 204.45.0.0/16 (Sybil attack) to increase their chances of controlling both ends of a Tor circuit: the first relay (entry guard) reached by the SR2 server and the last relay used as a hidden service directory where the service is published. The feds' relays then actively modified traffic to inject a signal into the Tor protocol headers (bits encoded as a sequence of "relay" and "relay early" commands) to help them correlate traffic from one end of the circuit with the other end (traffic confirmation attack). So whenever the SR2 hidden service was being published (which happens whenever the server reconnects to the Tor cloud?), the last relay knew it was for the SR2 service (but didn't know the server IP), and could correlate it with the entry guard, which knew the IP address (but didn't know the service name). Once they knew the SR2 server IP, the game was over.
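The bit-encoding step above can be sketched in a few lines. This is an illustrative toy, not Tor code: it just shows how a colluding relay pair could treat the choice between "relay" and "relay early" cell commands as a covert one-bit-per-cell channel, which is the essence of the disclosed traffic confirmation attack.

```python
# Toy sketch of the relay-early covert channel (hypothetical names,
# not actual Tor internals). One malicious relay "writes" bits by
# choosing the command of each cell it forwards; the colluding relay
# on the other end of the circuit "reads" them back.

RELAY, RELAY_EARLY = "relay", "relay_early"

def encode_watermark(bits):
    """HSDir-side relay: pick a cell command per bit so the command
    sequence on the circuit spells out `bits`."""
    return [RELAY_EARLY if b else RELAY for b in bits]

def decode_watermark(commands):
    """Entry-guard-side relay: recover the bits from the observed
    command sequence on the same circuit."""
    return [1 if c == RELAY_EARLY else 0 for c in commands]

# e.g. a tag identifying which .onion descriptor was just requested
tag = [1, 0, 1, 1, 0, 0, 1, 0]
cells = encode_watermark(tag)
assert decode_watermark(cells) == tag
```

Note that, as discussed further down the thread, any relay on the circuit can read this command sequence, not just the attacker's own relays.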
I remember seeing a Def Con presentation about this by the guy who founded Derby Con. Can't find it right now, but will post it if I can find it again.
I'm getting some surprisingly mixed messages on here because I've come into the discussion with a couple of assumptions, so I'd like to try to clear those up.
1. Do we have the right to private, secure, anonymous communications?
2. If yes, do we have the right to be outraged when our government attempts to subvert our private communications, most especially with broadly-scoped warrants or tactics that can expose your communications to any potential listeners?
If nobody is trying to subvert your private communications, you don't need secure private communications in the first place. If you assume you need secure comms, you should assume people will try to subvert and break your secure comms.
I think his point is you can never be sure when someone is trying to break in to your house/subvert your communications, so it makes sense to take precautions.
I don't want the government actively working to weaken house lock standards for a lot of the same reasons I don't want them working to subvert privacy technology.
Admittedly, in this case the metaphor is kind of flawed since SR2 is the equivalent of a drug dealer's home, which the Feds would have no qualms about breaking in to.
Iceland has an interesting mechanism related to this. Some driveways have a gate with one vertical post on each side, and three removable horizontal beams. If the beams are moved aside, you may enter. If all three beams are hung in place, you must not enter. If two beams are up, you should seek permission to enter. That's how I remember it, anyway.
The point being, yes, you do need a lock or some mechanism to say "outsiders should stop here." You need this for two reasons: first so that honest people know where they are not welcome, and second so that dishonest people can be shown clearly to have broken a social contract.
We know that a global adversary exists, and that Tor is vulnerable to certain types of attacks by such an adversary (e.g. [1,2]). Given that the list of relays is public by design, that their (albeit encrypted) communication can be monitored by the adversary, that popular hidden services will be moving a significant amount of traffic to and from a single static IP address in a manner modifiable by any user of it, and that there are known methods of disrupting hidden services to effect a global change in this [3] - it follows that such hidden services are rather more vulnerable to deanonymisation, compared to most other users. Seems like running one is an incredibly risky thing to be doing.
So the FBI knowingly attacked a civilian communications network, potentially causing a great deal of harm to the entire network in order to catch a few bad apples.
Like nuking a city because you're pretty sure you'll get a few bad guys.
If the network can't ensure anonymity against bad actors, it was already fatally broken. If the consequence is improvements in Tor that break the attack, and the cost is a prosecution of someone breaking black-letter law, the result is a net positive for civil liberties.
We can't ignore the potential collateral damage. Even the attempt, due to its inherently indiscriminate nature, is a violation of citizens' fourth amendment rights.
By your reasoning we should expect that they can kick in every door in a city looking for a suspect, then blame the doors for being too weak.
Can you be more specific about the collateral damage you're referring to? I don't understand the argument you're making. Perturbing Tor routing isn't in any way equivalent to kicking down doors. It seems that by your logic, even a title 3 wiretap "kicks down doors", since modern wiretaps happen on packet switched networks and involve filtering out other people's traffic.
If the collateral damage you're referring to is loss of trust in Tor, that's not damage: that's new information.
> According to a Tor blog post, someone during that period was infiltrating the network by offering new relays, then altering the traffic subtly so as to weaken Tor's anonymity protections. By attacking the system from within, they were able to trace traffic across the network, effectively following the server traffic back to their home IP.
This is network-wide. They did not and could not target just Silk Road 2 and its users, they sabotaged the anonymity of every user and service on Tor.
A title 3 wiretap requires trust in both the government and network operator, which users of Tor may be avoiding for entirely legitimate reasons.
I'm not seeing the moral distinction between title 3 wiretaps and Tor infiltration. It can't simply be that people on Tor have expressed an unwillingness to be tapped. Every rational actor has that preference in both scenarios.
Wiretaps conducted by the FBI apply to installations in the US. Let's say I'm a dissident in my own (not the USA) country. An FBI wiretap does not compromise me wrt. my own government. Tampering with Tor on a large scale, however, has very much the potential to do so.
Every rational actor has that preference, but not that expectation. I'm well aware that my ISP can eavesdrop on everything that happens over their network, and can extend that ability to any party they choose.
I agree that in principle there is no moral distinction between a title 3 wiretap and Tor infiltration, but a title 3 wiretap is a passive listener while this Tor infiltration is not. The nature of the Tor infiltration caused anonymity to be stripped and made readable by anyone aware of the flaw. They used resources unavailable to others to expose that information not only to themselves but to everyone.
The equivalent would be streaming a title 3 wiretap sans filter to everyone on the internet.
If you go through the exercise of building a mental model of exactly what a title 3 wiretap facility on a packet-switched telephony network looks like, and then carefully study the covert channel traffic confirmation attack the Tor team disclosed and everyone presumes the FBI is using, you'll see that the technical differences are not that great. It's certainly not as black-and-white as "passive" versus "active".
I'm not sure I follow the point about how the FBI could have done grave damage to everyone's privacy. It's (hypothetically, assuming this is how the FBI did it) the relay-early traffic confirmation vulnerability that did that. The FBI didn't create that vulnerability.
I'm not sure if this is what Zykes is referring to. But this particular attack on Tor is different from a wiretap because it made traffic readable by the whole network, not only by the attackers.
In what I think you mean by a perturbation attack, the attacker would deanonymize traffic by influence timing of packets on one end and observing the other end. Only the attacker learns anything. But in this attack, the hidden service directories found a clever way of broadcasting the name of the requested service, in plaintext, to the rest of the circuit. The attackers could read the message, but so could anyone else running a Tor relay.
Given that the message could have been trivially encrypted, that does seem like pointless collateral damage.
Global = can view all network traffic.
Partial = can view some portion of network traffic, but not all.
Active = willing/able to modify data as it transits the network.
Passive = unable/unwilling to modify data as it transits the network.
The gold standard here would be breaking a specific user's anonymity without modifying the data, i.e. a passive attack by a partial adversary. The smaller the percentage of overall traffic that the system needs to observe, the better.
I feel like "global passive adversary" is kind of like a True Scotsman. There doesn't seem to be a fixed definition; rather we work backwards from random attacks and determine whether we think they were global adversary worthy or not, and if so then that makes the perpetrator a global adversary.
Could Lizard Squad have executed this attack? (I assume anybody with a botnet could start new Tor relays, so yes.) Is Lizard Squad a global adversary?
A global passive adversary is anyone who can execute a Sybil attack, basically. So the required size scales with the size of the network. A global passive adversary for a network with only ten nodes would only have to have five nodes itself.
The "global" adjective is just used, I think, because cryptographers presume a production deployment of the cryptosystems they discuss would be something like the Web: large enough (millions of nodes) to require globe-spanning resources (millions of other nodes owned by a single group) to execute the attack successfully.
Seen under that lens, neither Tor nor Bitcoin nor any other modern cryptosystem needs a "global" passive adversary to break it. Just a regular "passive adversary."
That's really interesting. So if there were multiple "global passive adversaries" then the network would become stronger and stronger? At least until one gives up and removes all their nodes at once.
Imagine if China (used for population reasons) managed to send 300 million spies to the US to socially-engineer their way into all US citizens' personal lives. Now imagine India (again, for population reasons) simultaneously trying the same thing: now, one half of the time, the Chinese are just spying on "American citizens" who are really Indian spies, the Indians are just spying on Chinese spies, and one half of Americans go unmonitored.
It's sort of the same game-theoretic advantage you get from participating in a battle royale competition over participating in a 1v1 competition: for each new adversary you face, that adversary is also dragged down by all the other adversaries and becomes that much easier to deal with.
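The dilution effect described above can be checked with a toy Monte Carlo sketch (made-up node counts, simplified "pick two random nodes" model standing in for circuit building):

```python
import random

def compromise_rate(n_honest, adversaries, trials=100_000, seed=0):
    """Estimate the chance that both ends of a random two-node sample
    belong to the SAME adversary. `adversaries` lists each rival
    adversary's node count; honest nodes are labeled 0."""
    rng = random.Random(seed)
    pool = [0] * n_honest
    for label, size in enumerate(adversaries, start=1):
        pool += [label] * size
    hits = 0
    for _ in range(trials):
        a, b = rng.sample(pool, 2)
        hits += (a == b and a != 0)
    return hits / trials

# One adversary with 1000 nodes against 1000 honest nodes:
solo = compromise_rate(1000, [1000])
# Two rival adversaries, 1000 nodes each, against the same 1000 honest:
duo = compromise_rate(1000, [1000, 1000])
print(solo, duo)  # solo > duo: the rivals dilute each other
```

With one adversary, roughly a quarter of samples are fully compromised; adding a second, equally sized rival drops the total below that, and each individual adversary's take falls by more than half, which is the game-theoretic point.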
This really only applies specifically to Sybil attacks, though.
It didn't teach us anything we didn't already know about the FBI, but it also does not excuse their actions any more than if we were to attack any of their networks.
This doesn't make sense either. LEOs routinely force entry into buildings to execute search warrants. It is not hypocrisy that you'd be prosecuted for doing the same thing without the legal authority.
As I understand this, the CMU team allegedly deanonymized SR2 in the course of their research, and then shared the information with the FBI. Once they had done that, they were allegedly forced to withdraw their Black Hat presentation. There's also the issue that the CMU team had NSA funding. I haven't seen any claims that the CMU team was initially focusing on SR2 and other "illegal" sites. But I wouldn't be surprised by that.
If that's what happened, it wasn't a wiretap. It was a tip.
The association of SR2 deanonymization with CMU's withdrawn Black Hat presentation is indeed speculative. But there's arguably more to it than innuendo. Time will tell.
It's well known that the DoD has funded Tor from the start. But it's at least decent of them to independently fund CMU to compromise it ;)
I'm thinking mainly of EFF, Bamford's books, and the Snowden releases. Are you claiming that parallel construction isn't SOP? Reaching way back, why do you think that the Weathermen got a pass in Chicago?
What about people who don't have 'fourth amendment rights' just because they're not US citizens? As far as I know, Tor doesn't 'belong' to the US, nor does it 'exist' only on US soil, so it's worth asking what this means for the rest of the world.
I agree with your last sentence, as long as the black-letter law that is being violated is itself moral. I question the morality of many drug laws, however.
There was a DEFCON talk about Tor where the speaker remarked: "All of my friends and contacts in Israel & the Middle East who didn't use TOR -- I no longer hear from them".
This isn't about the performance of the network. They could've brought it offline and that wouldn't have been as egregious an affront. They willfully subverted the network to invade the privacy and anonymity of the people using it. Regardless of the purpose, the potential damage to the people using it for any reason is too great to ignore. Political dissidents, whistleblowers, journalists, lots of people rely on Tor, and who knows how much of their information was exposed as a result.
Billions are "collateral damage" in US government's "cyberwar". We even know now that NSA routinely captures botnets and "repurposes" them - anyone remember the Wikileaks DDoS? That one was quite obviously done by the NSA/FBI, but they may do it against other targets as well, in a much more subtle way or even blame it on someone else.
Over 80% of Tor hidden service visits are related to child pornography.* A 'few bad guys' is stretching things a bit. Additionally, they didn't nuke anything, Tor continued to function, albeit with a wide open security flaw that multiple actors besides the ones mentioned here were exploiting.
The title of this Wired article is somewhat over the top. It is about Gareth Owen's (then) upcoming presentation at 31C3.
I just watched it [1] yesterday, and Gareth Owen himself says that this number is the count of a certain kind of hidden service request that should not be confused with "visits" or "visitors", for a number of reasons. The main ones I remember are:
- The specific kind of request measured is the first step towards connecting to the service, but may not always represent a complete connection that can be mapped to an individual
- Various anti-childporn organizations crawl the dark web constantly searching and indexing child pornography hidden services
A more accurate analogy would compare it to DNS requests instead of visits. I don't know that there's any scientific research out there correlating percentages of DNS requests to visiting a website. That said, it's not an unreasonable assumption that the overwhelming majority of requests for any given domain hosting a web server are for actual users to fetch the content hosted there.
And over 70% of emails are spam[1], so we should route every email we send through the FBI/NSA?
That other 20% of Tor usage may very well be political dissidents, whistleblowers, or average Joes that just don't want half the world watching everything they do and say online.
"That other 20% of Tor usage may very well be political dissidents, whistleblowers, or average Joes that just don't want half the world watching everything they do and say online."
Or me, browsing my local government website to check when the next recycling pickup day is. Because I feel some kind of duty to do my bit to make the entirely innocent portion of the haystack bigger...
Two points. First, the comparison of spam to child pornography is a bit... lopsided. Second, every email we send is made available to the FBI/NSA already.
Your analogy would be more apt if the poster said '90% of nodes in the tor network routed child pornography'.
If it is true that 80% of hidden service visits are for child porn, then that would be more akin to saying 'X% of cars on the highways of america are drunk drivers or drug dealers'
Tor is hardly a civilian network. It is quick and dirty secure communications for intelligence agents, developed by the Naval Research Laboratory. The civilians on tor are doing the useful job of adding routing complexity, which is probably why it was publicly released in the first place, as if it was only agents on there it would be useless for the job.
Further reading:
- https://www.reddit.com/r/DarkNetMarkets/comments/2sppy0/sr2_...
- https://www.reddit.com/r/DarkNetMarkets/comments/2t30hs/the_...