Hacker News

If the leaker visits this page from a regular browser to copy the onion URL before opening the Tor Browser, the whole thing is only as safe as SSL, since there will be a trail of the SSL connection just before the visit to SecureDrop. And they don't even explain how to avoid this.

OPSEC is hard.



(Securedrop dev here) This is a really good point. Unfortunately, we're "as safe as SSL" no matter what, unless the source has a separate way to verify the .onion address on the SSL-protected page. They can use the SecureDrop directory for that (and we're working on other schemes as well), but it's not automated so only a handful of very cautious sources would likely do this.

I'm not sure how we could explain how to avoid it - where would the explanation go? Visiting that page would be just as much of a correlation, no? It's kind of a chicken-and-egg problem, unless the source is already using Tor.

Avoiding the "trail of the SSL connection" also suggests we should be doing something to combat website fingerprinting, which we have discussed but do not have a clear solution for yet.

Our current thinking is that just visiting the landing page is not enough to prosecute a source. We can do better, and are working on it, but it's difficult.


A few things that may be helpful:

1. Make the entire site available under `ssl.washingtonpost.com` (ideally without the `ssl.` prefix).

That way, the domain won't be as suspicious as it is right now. I suspect that this is more or less the only content hosted on the domain.

2. Include an iframe for all (or a random subset of) visitors, loading this particular url (hidden).

By artificially generating traffic to the endpoint it will be harder to distinguish these from other, 'real' requests.

Use a random delay for adding the iframe (otherwise the 'pairing' with the initial http request may distinguish this traffic).

3. Print the link, URL and info block on the dead trees (the paper), as others have suggested.

4. Add HSTS headers (http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security)


Also, if you can swing https://washingtonpost.com?page=securedrop, the request will just look like it's to https://washingtonpost.com since query parameters are encrypted with ssl.


So is the rest of the URL, it could just as well be https://washingtonpost.com/securedrop


Oh right, paths are too. Sorry!


> Include an iframe for all (or a random subset of) visitors, loading this particular url (hidden).

Or, since the content of this page is mostly text, it could be included in the HTML of all washingtonpost.com home page requests with very small overhead, and shown with a non-tracked javascript action (link/button), so it is all client-side and indistinguishable from a normal request to the home page.
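A rough sketch of that client-side reveal, assuming the instructions ship hidden in every page's HTML (the element ids here are hypothetical):

```javascript
// The SecureDrop instructions are already in the HTML of every page,
// inside a hidden element, so showing them causes no extra HTTP request
// that server logs could correlate with a later SecureDrop visit.
function revealSecureDropInfo(doc) {
  const panel = doc.getElementById('securedrop-info'); // hypothetical id
  if (panel) panel.hidden = false;
  return panel;
}

// Wire the reveal to an untracked link/button when running in a browser.
if (typeof document !== 'undefined') {
  const trigger = document.getElementById('securedrop-link'); // hypothetical id
  if (trigger) {
    trigger.addEventListener('click', () => revealSecureDropInfo(document));
  }
}
```

The key property is that the click handler does everything locally; nothing in it should fire analytics or fetch anything.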


Definitely! The challenge is getting the news orgs to change their entire site, which often involves a lot of complex, entrenched infrastructure and sometimes involves reluctant third parties such as ad networks.

We're working on a best practices guide for deployments [0]. I'll make sure these suggestions go in there. Feel free to take a look and comment if you're interested!

[0] https://securedrop.hackpad.com/SecureDrop-Deployment-Best-Pr...


> Unfortunately, we're "as safe as SSL" no matter what, unless the source has a separate way to verify the .onion address on the SSL-protected page.

Print it in the physical newspaper. The German computer magazine c't prints their PGP key fingerprints in the masthead.


We've been working on this with some of our deployment partners for a while now :D Great idea! I didn't know anybody else did it, it's cool to hear about c't.


Clever


> I'm not sure how we could explain how to avoid it - where would the explanation go?

You could put the instructions on pages that many people visit regularly, true security through obscurity. For example, put the instructions in abbreviated form in a box in the footer of your front page (or in the footer of every page).


Glad you are doing this. You should just stick this link/info in the footer of all Washington Post pages.


Good idea, but, like many of these ideas, easier said than done.


Print a QR Code for SecureDrop in every issue of the newspaper. Hell, feature it as part of a story announcing SecureDrop the first time you print it. Then just print it in a consistent position with minimal explanation from then on.

This may be one of the rare cases where the use of a QR Code is justified.


Only if they visit the page just before. Seems plausible they would read about it, set it up and then drop their documents at a later date as a default behavior.

I agree it would probably be a good idea to put a warning about such a problem though.


There's this hard tradeoff that most people are willing to make, between making things more 'secure' and making things usable by the general public. I just wish more attention would be paid to the security side of things.

Ultimately, we can write descriptive documentation, but getting it read and understood is hard. Cryptoparties are, again, a great idea, but getting the non-technical user involved is damned hard.

IMHO these things always come down to "how do we make it easy for the public, whilst keeping it REALLY secure". How does security become a general piece of education, much akin to math, or at least history?


I don't think SecureDrop is designed to be usable by the general public.


I'm happy to agree with you. Equally I feel that with a (small) amount of love, it could be used by whistleblowers! To me it's almost ready for that.



Embed the page as an iframe and scrub the referrer on every page a viewer visits.

That should make it hard enough to correlate any data; I guess they have enough visitors.


I don't see how that would help. The threat model here (the reason to use Tor) is that they could be compromised and forced to log, and through Tor they would not learn the leaker's IP.

You only need the two "leak at time X, IP Y loaded this page at time X-5" datapoints to break this.

An embedded page is not fetched by someone else.


Either you misunderstood me, or I don't quite understand how that would not help.

My suggestion is to embed an iframe to the posted URL on every page on www.washingtonpost.com. Every article, everything. I'd assume this would blast the logs enough that if you look at "time X-5" you'll have too many data points to actually make something out of it. Because everyone who reads an article on wapo will have also visited that page. So yes, that embedded page would be loaded by every single viewer of any page on washingtonpost.com.

Edit: I just realized that there is a huge unfixable flaw in this approach. The request for an article in the logs will always show up shortly before the request for the SecureDrop page. Even if you iframed a random article on the SecureDrop page too, you could see from the logs that it was loaded before the actual article. Essentially rendering this thing useless :/

So... Never mind... I guess.


(Securedrop dev here) We often suggest ideas like this to deployment operators, and others as well. For example, we encourage deployments to mirror the Tor Browser Bundle so sources don't have to go to Tor's (monitored) website to get it. We encourage them to use SSL everywhere so the "trail to the landing page" is harder to spot. We encourage the exact "hidden iframes" idea you propose here. And we encourage them to deploy on a path, not on a subdomain (because hostnames are visible even with TLS). At least WaPo is doing the last one right!

Generally, it is very difficult to convince the operators of sites like the Washington Post to do things like this, but we're working on it!


Uuuh, hi there! Thanks for the effort you all put into making leaking safer for sources.

Other possible approach: load the landing page everywhere and show it with Javascript when the user clicks their way to it. I think it's an improvement on the iframe without drawbacks. How does it sound?


Hard to verify that there are no ajax shenanigans.

It's a hard problem :/


Downloading Tor from an unofficial source sounds like a recipe for trojans though... I don't think most people will have Erinn's signature to verify against.


It shouldn't matter where you're downloading the TBB binary, since you're going to verify the signature before trusting it, right? Surely you wouldn't just assume it was legitimate, and then install it.


Business idea: a signature database with a web interface. So download from anywhere, and look up the signature in the database to verify it's authentic.


How about some simple cookie tracking and an iframe that loads a random number of seconds after the page loads (like 10-60)? That might spam the logs randomly enough that it couldn't be tracked. However, I think measures such as including the SecureDrop page as part of the root domain, only under SSL, would be the simplest solution in this case.
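A sketch of that randomly-delayed decoy iframe idea (the URL, the 10% sampling rate, and the 10-60 second window are all illustrative assumptions):

```javascript
// Inject a hidden iframe pointing at the SecureDrop landing page after a
// random 10-60 second delay, for a random subset of visitors, so decoy
// fetches don't line up neatly with the initial page request in the logs.
function randomDelayMs(minSeconds, maxSeconds) {
  return (minSeconds + Math.random() * (maxSeconds - minSeconds)) * 1000;
}

function maybeInjectDecoyIframe(doc, url, probability) {
  if (Math.random() >= probability) return; // only a random subset of visitors
  setTimeout(() => {
    const frame = doc.createElement('iframe');
    frame.src = url;                 // e.g. the SecureDrop landing page
    frame.style.display = 'none';    // invisible to the visitor
    doc.body.appendChild(frame);
  }, randomDelayMs(10, 60));
}

// Browser-only wiring; values here are hypothetical.
if (typeof document !== 'undefined') {
  maybeInjectDecoyIframe(document, '/securedrop', 0.1); // ~10% of page views
}
```

As pointed out elsewhere in the thread, this still leaves the ordering problem: a real source's first request to the landing page follows their article request, delay or not.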


Ah ok, I didn't get the "on all pages" bit, sorry.

I still believe it would not be enough, since such a thing could be silently disabled by WaPo if ordered to do so.

The point in SecureDrop is that they could not deanonymize the source even if they tried.


Maybe you could use JS to randomize the timing of the iframe load after the article load?


Wouldn't matter: the GET for a particular IP for the article would still show up before the GET for SecureDrop. The actual timing is irrelevant here if there's always an article visit and then a SecureDrop request.

I guess you could randomize if you load the iframe or not. Then you couldn't be sure if a visit was an actual visit or an iframe that was randomly triggered (with a random delay).

But for this to be useful you'd still need to instruct sources to randomly browse the page before going to SecureDrop. Which might work if you force them to click a link on the main-page to get to the SecureDrop page.

But if they go directly to /securedrop it will fail again because the GET /securedrop will show up as the first request from that IP, giving away that the visit was intentional.

So my current idea would be to randomly generate the actual /securedrop path in a non-predictable manner per client. Maybe something simple like securedrop-sha1(...). Then link to that from WaPo's main page, forcing everyone to go through WaPo.com. But then you still have the problem that you must make sure sources don't access this link from history or something.

Quite a lot of work, for still flawed security.


Please correct me if I'm wrong but, right now, at home, I visited that site. Hardly suspicious at all, since it's on HN front page. I could write down the .onion url on a piece of paper (or just print the page, as reference) and then later follow the instructions posted there, at a semi-anonymous Internet cafe, without having to visit that page, right?


I accidentally clicked the down arrow for your comment. Up-voted another to (somewhat) make up for this. Sorry.


That's like saying John Smith went to a bank and withdrew money at 1pm on Jan 1, then the bank was robbed at 1:10pm on Jan 1, and therefore John Smith robbed the bank.

I don't think you can connect visiting the info page and the very next SecureDrop file upload.


The threat here isn't only proof that is acceptable in court:

* Your actions could put you on a shortlist of people to be more thoroughly investigated.

* Your actions could tip off the people whom your information threatens; maybe they stop communicating with you (or worse) to shut off the leak.

* Per the Snowden release, the NSA tracked the communications of people within something like 3 degrees of their targets. With standards that low, it's not a stretch to think someone would track everyone visiting the Washington Post's secure drop box.


That is a poor analogy for the threat. Basically, the problem is about attracting adversarial resources: any suspicious activity will attract more attention and thus make it more likely the adversary will find real evidence.

I wrote up an analysis of exactly this problem last year: http://grugq.github.io/blog/2013/12/21/in-search-of-opsec-ma...


It all depends on traffic.

And "a group of 100 IPs including a coffee shop near NSA employee John Smith's home" is enough.


It's not about proving that John Smith robbed the bank, but raising suspicion so that he will be investigated.


The difference between a court of law and a court of force.


A Tor user at Harvard was successfully tracked when he sent a bomb threat, since he was the only user on the Harvard LAN using Tor at the time the threat was issued.

That wasn't proof, of course, but it didn't need to be proof, just a good lead for law enforcement to kick-start their investigation.



Enjoyed, thanks. Particularly liked "Let's call it half a win."


If memory serves, there were several people who had been or were using Tor at the time the threat was sent. When he was questioned by the police, however, he confessed.


That's possible, but it doesn't really change the point. By bootstrapping associations of identity-masking technologies with possible identities, you allow "normal" law enforcement investigative techniques to unmask the identity.


Or if the submitter accidentally leaves their cell phone on en route to or while at said public location ...


The leaker can always visit the SSL site via Tor, which would solve the problem.



