If the leaker visits this page from a regular browser to copy the onion URL before opening the Tor Browser, the whole thing is only as safe as SSL, since there will be a trail of the SSL connection just before the visit to SecureDrop. And they don't even explain how to avoid it.
(Securedrop dev here) This is a really good point. Unfortunately, we're "as safe as SSL" no matter what, unless the source has a separate way to verify the .onion address on the SSL-protected page. They can use the SecureDrop directory for that (and we're working on other schemes as well), but it's not automated so only a handful of very cautious sources would likely do this.
I'm not sure how we could explain how to avoid it - where would the explanation go? Visiting that page would be just as much of a correlation, no? It's a chicken-and-egg problem, unless the source is already using Tor.
Avoiding the "trail of the SSL connection" also suggests we should be doing something to combat website fingerprinting, which we have discussed but do not have a clear solution for yet.
Our current thinking is that just visiting the landing page is not enough to prosecute a source. We can do better, and are working on it, but it's difficult.
> Include an iframe for all (or a random subset of) visitors, loading this particular url (hidden).
Or, since the content of that page is mostly text, it could be included in the HTML of every washingtonpost.com home page request with very little overhead, and revealed by a non-tracked JavaScript action (a link or button), so it's all client-side and indistinguishable from a normal request to the home page.
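A rough sketch of what I mean (element ids and wording made up; the real landing-page text and .onion address would go in the hidden block):

```html
<!-- Shipped in every page's HTML, so requesting it is indistinguishable
     from any other page load. Hidden until the reader asks for it. -->
<div id="securedrop-info" hidden>
  <p>How to reach us securely: open Tor Browser, visit our .onion
     address (printed here), and follow the instructions.</p>
</div>
<a href="#" onclick="document.getElementById('securedrop-info').hidden = false; return false;">
  Contact us securely
</a>
```

Since the block is served with every page, revealing it never generates a request that the server (or a network observer) could log.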
Definitely! The challenge is getting the news orgs to change their entire site, which often involves a lot of complex, entrenched infrastructure and sometimes involves reluctant third parties such as ad networks.
We're working on a best practices guide for deployments [0]. I'll make sure these suggestions go in there. Feel free to take a look and comment if you're interested!
We've been working on this with some of our deployment partners for a while now :D Great idea! I didn't know anybody else did it, it's cool to hear about c't.
> I'm not sure how we could explain to avoid it - where would the explanation go?
You could put the instructions on pages that many people visit regularly, hiding them in plain sight. For example, put the instructions in abbreviated form in a box in the footer of your front page (or in the footer of every page).
Print a QR Code for SecureDrop in every issue of the newspaper. Hell, feature it as part of a story announcing SecureDrop the first time you print it. Then just print it in a consistent position with minimal explanation from then on.
This may be one of the rare cases where the use of a QR Code is justified.
Only if they visit the page just before. Seems plausible they would read about it, set it up and then drop their documents at a later date as a default behavior.
I agree it would probably be a good idea to put a warning about such a problem though.
There's a hard tradeoff that most people are willing to make between making things more 'secure' and making things usable by the general public. I just wish more attention were paid to the security side of things.
Ultimately, we can write descriptive documentation, but getting it read and understood is hard. Cryptoparties are, again, a great idea, but getting non-technical users involved is damned hard.
IMHO these things always come down to "how do we make it easy for the public, whilst keeping it REALLY secure?" How does security become a general piece of education, much akin to math, or at least history?
I don't see how that would help. The threat model here, and the reason to use Tor, is that the site could be compromised and forced to log; over Tor, it would not know the leaker's IP.
You only need the two "leak at time X, IP Y loaded this page at time X-5" datapoints to break this.
Either you misunderstood me, or I don't quite understand how that would not help.
My suggestion is to embed an iframe to the posted URL on every page on www.washingtonpost.com. Every article, everything. I'd assume this would blast the logs enough that if you look at "time X-5" you'll have too many data points to actually make something out of it. Because everyone who reads an article on wapo will have also visited that page. So yes, that embedded page would be loaded by every single viewer of any page on washingtonpost.com.
Edit: I just realized that there is a huge unfixable flaw in this approach. The request for an article will always show up in the logs shortly before the request for the SecureDrop page. Even if you iframed a random article on the SecureDrop page too, the logs would still show that it was loaded before the actual article, essentially rendering this approach useless :/
(Securedrop dev here) We often suggest ideas like this to deployment operators, and others as well. For example, we encourage deployments to mirror the Tor Browser Bundle so sources don't have to go to Tor's (monitored) website to get it. We encourage them to use SSL everywhere so the "trail to the landing page" is harder to spot. We encourage the exact "hidden iframes" idea you propose here. And we encourage them to deploy on a path, not on a subdomain (because hostnames are visible even with TLS). At least WaPo is doing the last one right!
Generally, it is very difficult to convince the operators of sites like the Washington Post to do things like this, but we're working on it!
Uuuh, hi there! Thanks for the effort you all put into making leaking safer for sources.
Other possible approach: load the landing page everywhere and show it with Javascript when the user clicks their way to it. I think it's an improvement on the iframe without drawbacks. How does it sound?
It shouldn't matter where you're downloading the TBB binary, since you're going to verify the signature before trusting it, right? Surely you wouldn't just assume it was legitimate, and then install it.
How about an iframe (with some simple cookie tracking) that loads a random number of seconds after the page loads (say 10 to 60)? That might spam the logs randomly enough that it couldn't be tracked. However, I think measures such as serving the SecureDrop page on a path under the root domain, SSL only, would be the simplest solution in this case.
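A rough client-side sketch of that random-delay decoy (the function name, probability, and delay bounds are all made up for illustration):

```javascript
// Hypothetical sketch: schedule a decoy request to the SecureDrop landing
// page a random number of seconds after an ordinary page load, so the
// landing page shows up in the logs for many unrelated visitors.
function decoyDelayMs(probability = 0.1, minMs = 10000, maxMs = 60000) {
  if (Math.random() >= probability) return null; // most visitors: no decoy
  return Math.round(minMs + Math.random() * (maxMs - minMs));
}

// In the browser, one might then do:
// const delay = decoyDelayMs();
// if (delay !== null) {
//   setTimeout(() => {
//     const f = document.createElement('iframe');
//     f.style.display = 'none';
//     f.src = '/securedrop'; // hypothetical landing-page path
//     document.body.appendChild(f);
//   }, delay);
// }
```

The probability knob keeps the decoy traffic from dominating the logs while still mixing real visits in with randomly triggered ones.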
Wouldn't matter: the GET for the article from a particular IP would still show up before the GET for SecureDrop. The actual timing is irrelevant if there is always an article visit followed by a SecureDrop request.
I guess you could randomize if you load the iframe or not. Then you couldn't be sure if a visit was an actual visit or an iframe that was randomly triggered (with a random delay).
But for this to be useful you'd still need to instruct sources to randomly browse the page before going to SecureDrop. Which might work if you force them to click a link on the main-page to get to the SecureDrop page.
But if they go directly to /securedrop it will fail again because the GET /securedrop will show up as the first request from that IP, giving away that the visit was intentional.
So my current idea would be to randomly generate the actual /securedrop path in a non-predictable manner per client. Maybe something simple like securedrop-sha1(...). Then link to that from WaPo's main page, forcing everyone to go through WaPo.com.
But then you still have the problem of making sure sources don't access that link from their history or something.
Please correct me if I'm wrong but, right now, at home, I visited that site. Hardly suspicious at all, since it's on HN front page. I could write down the .onion url on a piece of paper (or just print the page, as reference) and then later follow the instructions posted there, at a semi-anonymous Internet cafe, without having to visit that page, right?
That's like saying John Smith went to a bank and withdrew money at 1pm on Jan 1. Then the bank was robbed at 1:10pm on Jan 1; therefore John Smith robbed the bank.
I don't think you can connect visiting the info page and the very next SecureDrop file upload.
The threat here isn't only evidence that would be admissible in court:
* Your actions could put you on a shortlist of people to be more thoroughly investigated.
* Your actions could tip off the people whom your information threatens; maybe they stop communicating with you (or worse) to shut off the leak.
* Per the Snowden release, the NSA tracked the communications of people within something like 3 degrees of their targets. With standards that low, it's not a stretch to think someone would track everyone visiting the Washington Post's secure drop box.
That's a poor analogy for the threat. Basically, the problem is about attracting adversarial resources: any suspicious activity draws more attention and thus makes it more likely the adversary will find real evidence.
A Tor user at Harvard was successfully tracked when he sent a bomb threat, since he was the only user on the Harvard LAN using Tor at the time the threat was issued.
That wasn't proof, of course, but it didn't need to be proof, just a good lead for law enforcement to kick-start their investigation.
If memory serves, there were several people who had been or were using Tor at the time the threat was sent. When he was questioned by the police, however, he confessed.
That's possible, but it doesn't really change the point. By bootstrapping associations between identity-masking technologies and possible identities, you allow "normal" law-enforcement investigative techniques to unmask the identity.
OPSEC is hard.