Distilling this down for regular users who don't use SSH or are intimidated by compiling KeePassX for Linux themselves, my tips would be:
1. Use a user-friendly password manager like Dashlane or 1Password with a long unique password and a second factor (that isn't SMS-based). Password re-use is the #1 way accounts are being compromised at the moment, and there are now good password managers that are easy to use with a low barrier to entry
2. Use an extensive ad blocker like uBlock Origin, and use multiple profiles in your browser to separate your serious accounts like webmail and banking from general web browsing. The other common way of being exploited is drive-by malware and web-based exploits; a combination of blocking third-party content and separating your browsing profiles will prevent a lot of it (a minimal profile example follows this list). Don't feel guilty about blocking ads - most publishers are extremely negligent with what they allow on their sites via ad networks. Bonus: switch to Chromium[0] (Firefox isn't sandboxed and exploits are too common), but alert yourself to Chromium updates with an IFTTT applet on the release blog to <pick your notification method>; or alternatively remove Google, Flash, Java etc.
3. Get a VPN subscription and set it up on your laptop & mobile devices. Seriously, don't use open WiFi networks or shared networks without wrapping your connections in encryption. sslstrip is extremely effective, and many apps either don't verify/authenticate SSL connections or don't pin certificates. IVPN, PIA, the Sophos VPN product - take your pick.
4. Most home routers are super shit and full of holes. Upgrade to a router that supports open firmware and pick one of OpenWrt, DD-WRT, m0n0wall, pfSense, etc. Bonus: run a UTM like Untangle (commercial) or Sophos (free up to 50 CALs IIRC)
5. Encrypt your stuff - VeraCrypt is a decent TrueCrypt fork, but most operating systems now have support for volume encryption - your local disk, USB sticks[1], or a file-based volume (a Linux sketch follows this list). Backups should be to encrypted media
6. Be anonymous - create a disposable email with a fake name to sign up to services with. Even better, sinkhole a random domain name you register. No service outside of banking, insurance, health, etc. really needs to know your actual identity details.
[0] https://download-chromium.appspot.com/
[1] http://www.theinstructional.com/guides/encrypt-an-external-d...
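To make the profile separation in item 2 concrete, here's a minimal sketch (the profile directory names are arbitrary choices of mine): each Chromium profile gets its own cookies, extensions, and history, so a drive-by in the everyday profile can't touch the banking one.

    # launcher commands for two isolated Chromium profiles
    chromium --user-data-dir="$HOME/.config/chromium-banking"
    chromium --user-data-dir="$HOME/.config/chromium-browsing"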
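And for item 5, a sketch of encrypting a USB stick with LUKS on Linux (my assumption of tooling - any of the mentioned options work; /dev/sdX is a placeholder, check the device with lsblk first, and note that luksFormat erases the stick):

    sudo cryptsetup luksFormat /dev/sdX        # set a passphrase; wipes the stick
    sudo cryptsetup open /dev/sdX secure-usb   # unlock as /dev/mapper/secure-usb
    sudo mkfs.ext4 /dev/mapper/secure-usb      # create a filesystem inside
    sudo cryptsetup close secure-usb           # lock it again after unmounting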
Your comment is the first time I've ever read someone recommending a browser over Firefox (when discussing security and privacy). I find it even more surprising because you're recommending possibly highly unstable Chrome/Chromium releases. I'd like to hear more from you and the HN community on this topic.
Firefox seems to be the only browser in which one can maintain privacy and security (e.g. all the privacy tweaks from privacytools.io). Chrome doesn't allow for most of the tweaks, for example WebRTC can't be disabled.
Not to mention, many Chrome extensions are completely compromised by adware/malware and sniff your traffic. The only one I trust is uBlock Origin. Firefox addons have somehow managed to avoid this fate. Also, Firefox has the best security addon, NoScript.
It doesn't appear to work properly. I read a thread on gorhill's GitHub page and the whole thing seemed really convoluted. I activated the function, but when I tested it my UA wasn't spoofed. Also, the list from which to choose/randomly assign is pretty short, though I think gorhill made it so by design.
Chromium for security - for privacy it isn't great because aside from the WebRTC issue[0] it also doesn't respect proxy settings. You need to run it in a VM in an isolating proxy setup to avoid the privacy issues.
For privacy, the Tor Browser - but even then only in a VM, because of the prevalence of exploits. Regular Firefox will just get you fingerprinted in any case.
> unstable Chrome/Chromium releases
The build site I linked to lets you switch between trunk/stable
The second difference, and a large advantage Google has, is the security team they've put together to find and fix bugs. Google, between Project Zero and engineering, probably has the best team in the world. Firefox doesn't really have an equivalent.
Then there is the legacy code that Firefox is built on and the problems that has led to. In Chrome a successful exploit requires a combination of 5-6 different bugs/exploits to bypass all the controls and sandboxes, while in FF many straightforward bugs become exploits.
This is reflected most in two places: first, the pwn2own contest, where FF does poorly[0]; and second, the price of 0days for Firefox versus Chrome. The Chrome exploit price has never been less than $100k, and at the moment $1M+ is being turned down, while OTOH Firefox started at $5-10k, is at $25-30k at the moment, and exploits are common (common in a browser-exploit sense).
The ideal situation would be a Chromium fork with the Firefox UI / extensions / settings / profiles etc. built on top of it. I've wanted to build this project for a long time and have a privacy/security-specific browser, but have never had the chance to do it. I hope at some point somebody does - it's really complicated today to recommend a browser that is both secure and private.
Curious, who is "compiling KeePassX for Linux themselves" when it's in the Ubuntu and Debian repos? I haven't checked Arch or other distros but KeePassX is a widely used and active project, and I suspect it is available in many package managers. I wasn't aware that people were having to compile it from source (and I agree this would be more intimidating to new users).
> Curious, who is "compiling KeePassX for Linux themselves" when it's in the Ubuntu and Debian repos?
It's in the OP:
"I highly recommend using KeepassX as a password manager, secured using a key file and not a password. Also, you should download the source code, compile it (using a Linux machine) and always look over the source code for rogue functions, you CANNOT afford a vulnerability inside the password manager."
Thanks, I saw that but was hoping people would not actually bother - that's deep into 'tinfoil-hat' territory. If you are already running a Linux distro and trusting the repos for OS updates, it isn't a big stretch to assume their build of the password manager is exactly as safe as the OS updates, and trusting one but not the other is pure folly.
> always look over the source code for rogue functions
I don't really understand the reasoning behind that - so rather than trusting a proven OS and its packages with lots of eyes on the critical code, I should rather read the code of a cryptographic product and then make a decision based on that? Common advice is not to write crypto code yourself, but reading it and deciding that there might be a backdoor based on gut feeling is right? And having identified some stuff I don't understand, then decide for another program that is goofy but easier to read, and compromise security?
It's more about looking over the source code of the KeepassX app and just that (not all the applications you use - of course we have to trust some of them). The code changes are, when there are changes, ~10 lines a month - not a huge amount to check.
What about all its dependencies? What if someone made some unsuspicious changes in a UI library that it uses that have a side effect on the security of KeepassX?
But that's the same argument I was making about KeepassX itself before. So if I'm trusting the dependencies to be watched well enough not to be compromised, why then look into KeepassX itself? Wouldn't that apply to the app as well?
Because KeepassX is way less used (and its source code less checked) than, say, Qt, so rogue functions added to KeepassX will have less visibility than ones added to zlib.
Ok, fair enough. Sorry, didn't mean to troll, was just curious. Still sounds like bad advice for the majority of users who can't really make that call, but for those who can it's consistent.
Don't worry about it :) Not all the steps are mandatory and not all are suited for the majority of Internet users. Everybody picks a base and starts from there.
When it comes to security, I usually worry about people creating complexity that scares people away and therefore weakens the overall security of all. Over-complex and annoying password procedures are one everyday example, forcing many people to just go for some stupid password that follows the rules but is easy to crack.
So if there are rules to improve the privacy of users, I would worry about making them look too complex or extreme. I suspect rather than picking some parts that work for them, people will go "OK, this I can do, this I can do, and... wait, compile it myself and read the source?! OK, I'll stick to post-it notes".
Why is it a good idea to use a key file instead of a password? Any malware I catch could use the key file to access all my passwords - or is there something I'm not understanding?
I'd recommend both; KeePass supports that. KeePass supports using a secure desktop to prevent keyloggers[1], but you never know with some of the USB keyboard exploits[2], and good old-fashioned looking over your shoulder. While KeePass has protection against dictionary attacks[3], why not use a key file? You can put it on a flash drive, and then no one can access your passwords without that USB key. (Obviously make sure you have multiple copies of your key file :)
I would personally use a combination of both. If you keep the key file on a USB thumb drive that you only insert when you need to unlock your files I guess it works out well then.
I use a combination of password and key file so that I can worry less about someone shoulder surfing or otherwise observing the input of my password.
My password database is stored on a USB key that I carry with me, with a regular copy made and securely stored.
The key file is stored on devices I use, in a directory restricted to my own access and on a drive which is encrypted. An encrypted copy is also stored on the USB key with the password database; this can be decrypted using a GPG key that is stored on a YubiKey and also carried. If a device can be trusted enough, this is how I move the key file around.
Access to the database requires 3 things rather than two. A long passphrase could be recorded by an observer, who could then take my USB key. The key file ensures that they still do not have all that they need.
That's a good idea - you can also configure a local bind/dnsmasq/unbound server to block based on these lists with ACLs (I'm sure if you Google each you'll find tutorials, like this one: https://github.com/jodrell/unbound-block-hosts)
Some of the better home router distros will also do this at the local network level.
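For example, with unbound each blocked hostname becomes a pair of local answers - this is the format the linked script generates; the domain below is just an illustration:

    # unbound.conf snippet: answer a blocked domain locally
    server:
        # 'redirect' covers the domain and everything under it
        local-zone: "doubleclick.net" redirect
        local-data: "doubleclick.net A 127.0.0.1"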
1Password relies upon a single, strong password for your "vault", which is vulnerable to sniffers and intermittent surveillance.
I don't understand why they don't offer (at least) 2-factor key for the vault.
Also, they support TouchID on iOS devices, which is very useful. But in the US there has been at least one case of someone being legally required to unlock via TouchID, whereas being compelled to offer up a passcode is still debatable.
So at least offer a short-PIN-and-TouchID option, and support some 2-factor like Google Authenticator.
They must have at least considered these things. I don't see any security issues with this; implementation is work, and perhaps a support hassle for them.
This is pretty much the nature of password managers. That password is only ever entered locally. If an attacker can grab local keystrokes, it's game over anyway.
>they don't offer (at least) 2-factor key for the vault
Neither TOTP nor any kind of push/SMS token can be used to secure data at rest. These are mechanisms to authenticate to a server. You could have "2 factor" for data at rest by storing part of the key separately, but there'd be nothing dynamic about it; copying the key material once would be sufficient to use it forever.
LastPass offers 2-factor to authenticate to the LastPass website, but your vault is cached encrypted on the client side, and such a cached copy can be opened using only the master password. (IIRC there is an option to disable this, which works by erasing the cached copy at the end of a session. Hardly bulletproof, and precludes having any sort of backup resilient to the failure of LastPass itself).
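To illustrate the "storing part of the key separately" point, here's a minimal Python sketch (not any real vault's scheme): the vault key is derived from both a memorized password and a key file, but both halves are static, so copying the key file once is enough forever.

    import hashlib, os

    def make_keyfile(path):
        # the static "second factor": 32 random bytes stored out of band
        with open(path, "wb") as f:
            f.write(os.urandom(32))

    def vault_key(password, keyfile_path):
        with open(keyfile_path, "rb") as f:
            keyfile = f.read()
        # scrypt slows brute force on the password half; the key file acts as salt
        return hashlib.scrypt(password.encode(), salt=keyfile,
                              n=2**14, r=8, p=1, dklen=32)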
Instead of a VPN subscription, I'd recommend a self-managed VPN solution. One can easily fire one up thanks to Streisand (https://github.com/jlund/streisand)
It depends on your goal. For privacy, a self-hosted VPN is a nightmare. To prevent sniffing and/or modification of content, a self-hosted VPN is great (your VPS company is less likely to target VPN traffic than a dedicated VPN provider).
I think a reputable VPN provider offers the better tradeoff, but there are legitimate reasons for self-hosting a VPN.
With a VPS-self-hosted VPN all your connections to the outside originate from a static, unique, unshared IP, making it trivial to track and correlate your behavior across all protocols. Instead of containable identifiers like cookies, your IP has become a guaranteed unique identifier.
To leverage this, it's fairly easy to detect you're on a self-hosted VPN: your IP is in an IP range assigned to a hosting/colocation provider, is not a Tor node (there is a public list of those), and doesn't belong to any remotely popular VPN (easy to enumerate for a little money; lots of lists exist).
In exchange for that, you have eliminated your ISP (or public wifi) as a threat, but instead added the hosting provider to the list of threats. And for any adversary that stands above the law, the routing infrastructure of your hosting provider is already a valuable target.
Thanks. So basically there is no way to hide the addresses I access through my VPN from the provider of my dedicated server? Would it help if I used a non-logging encrypted DNS server?
Because you're moving the "exit" from your non-anonymous local ISP to your non-anonymous colo provider. If you want to hide your traffic or at least make your adversary work a little to determine who you are, shared VPN endpoints are better.
How is that any different? ISPs don't let just anybody know who the subscriber is at a given IP (though if you do reverse lookups, many ISPs do leak a lot of locality information, so it's still a good idea to use some VPN instead of no VPN).
My Streisand hosted on AWS looks to the outside like anybody else's Streisand hosted on AWS, doesn't it?
Similarly, my f-secure egress looks like anybody else's f-secure egress, so what's the difference?
I don't really know, I don't use a VPN. Really asking.
And reputable VPN vendors resist efforts by nation states to procure information about subscribers? I would expect to have to pay a handsome fee for that.
(I'm not saying you're wrong; again I've not really thought about having to thoroughly anonymize my own traffic.)
"Its important that you dont host any domains on the VPS you run the VPN on."
Why is that, if I may ask? I have the impression that the Great Firewall blocks a lot of domains by default, and they are allowed/blocked after a review once somebody tries to access them the first time. I may be paranoid, but often I try to open an obscure site; it is blocked; I have to use a VPN; a few days later the site can be accessed. Why not use a VPN server with a nice website that makes it look harmless?
- You might forget private whois and expose your identity.
- There might be issues with the private whois that exposes your identity.
- The contents of the website might expose you.
> even better sinkhole a random domain name you register
What does this mean? I've tried to figure it out from context, the article, and a quick Google search, but it's not clear how DNS sinkholing is going to help me stay secure.
Pretty sure he means take a domain you've registered and use it for anonymous e-mail, thus making it so you can't use it for anything personal in the future, since it could be connected back to your anonymous email.
Adding the URL (or application name), or rather part of it, to a secure password also makes for a good way to not use the same password everywhere, imo. In case you don't want to use a password manager.
But it's still easily derivable. If someone buys the LinkedIn dataset and finds that your password is "123456linkedin", the next thing he will try is "123456facebook", "123456googlemail" and "123456bankofamerica". It might help against highly automated attacks that don't look at the actual password string, but anything involving a human hacker will be as bad as the same password reused.
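A toy illustration of how little work "derivable" means here:

    # given one leaked password, guessing the rest is a one-liner
    leaked = "123456linkedin"
    base = leaked[:-len("linkedin")]   # -> "123456"
    guesses = [base + site for site in ("facebook", "googlemail", "bankofamerica")]
    # -> ['123456facebook', '123456googlemail', '123456bankofamerica']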
Is it likely that someone is actually going to look at your password in a breach that may contain several million accounts? And that's only if it even gets cracked (assuming it was stored properly hashed in the first place).
If you're really personally targeted, then I agree with you. But for a casual person it's probably an easier thing to convince them of doing this instead of installing a password manager.
>Password re-use is the #1 way accounts are being compromised at the moment and there are now good password managers that are easy to use with a low barrier to entry
What accounts? At least for financial fraud this is certainly not true, phishing remains #1 by far.
I'd also hazard a guess that botnet logs result in far more hijackings than password reuse.
Actually you're right - it's malware at #1, and then password reuse somewhere down the line. The reuse issue was just fresh in my mind because I've been dealing with it the past few days with the MySpace and LinkedIn dumps.
Safe from whom? I think smartphones are some of the least safe things when trying to avoid law enforcement/three-letter organizations. I don't feel they're particularly safe at all really, but I'm sure someone can elaborate more.
> Use unique SSH keys for each service (sharing a SSH key on your GitHub/Gitlab account, network router and AWS/Azure instance is a very stupid idea); use ssh-keygen -t rsa -b 4096 to generate a 4096 bit RSA SSH key.
I tried this. It turns out to be a bad idea: SSH will walk through each private key and attempt to authenticate with it in order. That means a lot of bad login attempts, which in turn leads to getting locked out. SSH public keys are public for a reason.
What attack is this even preventing - that someone will be able to reverse ssh public keys and get the private? A better approach is to generate a unique key per client so that if you lose access to a device you can remove only its public key.
> Also, you should download the source code, compile it (using a Linux machine) and always look over the source code for rogue functions
So becoming an Underhanded C Contest judge is the price of admission to using the internet? Can anyone really be expected to do that? Can we blame anyone who gets owned because they didn't?
The solution to that is to use the IdentityFile directive in your ~/.ssh/config with username / hostname expansion.
I use:
    Host *
        # Disable SSHv1 RSA authentication
        RSAAuthentication no
        # Only use a key explicitly provided by an IdentityFile directive
        IdentitiesOnly yes
        # %h expands to the remote hostname, and %u to the local username
        IdentityFile ~/.ssh/%h/%u.key
This ensures that at most one key is used, and prevents me from having to modify my config every time I generate a key for a new host.
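Generating a key that fits that layout is then mechanical - e.g. for github.com, with a local username of alice (a made-up name for illustration):

    mkdir -p ~/.ssh/github.com
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/github.com/alice.key
    # then upload ~/.ssh/github.com/alice.key.pub to the service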
I think the thought is the security practice of compartmentalization. If you lose the private key you use for GitHub, Amazon, DigitalOcean, your home servers, etc... you've effectively given root away.
Now if my laptop is compromised, it doesn't matter if I have one key or ten, I've lost them all. But if there's something heartbleed-esque that allows individual private keys to be stolen when pushing commits to GitHub, I've at least isolated damage to my GitHub account.
But the private key is (of course) never sent to GitHub, so it's hard for me to imagine what kind of vuln this would help with. I can think of a few, but they're odd:
1. Some sort of remote memory leak that leaks the current private key, I guess.
2. Some sort of relay attack where you can impersonate the legit host.
In both of these cases, it seems like at a minimum you would need to, on the client, set up an ssh config that limits each identity to each host so as to prevent the client from trying each key in sequence (and thus potentially exposing it). That's a huge hassle!
So I guess tl;dr: I can think of a few cases where this might be useful, but if you're always SSH'ing from the same laptop, this step can probably be pretty far down your list of things to do.
> What attack is this even preventing - that someone will be able to reverse ssh public keys and get the private? A better approach is to generate a unique key per client so that if you lose access to a device you can remove only its public key.
I don't think this is about security. Just about privacy.
Some people don't like that they can be identified by their public key. E.g. (I think) GitHub allows public viewing of a specific user's public keys, and that allows other services you use with the same public key to know your GitHub account, etc.
It's not a mainstream privacy concern, but there are some privacy oriented people that worry about it.
> I highly recommend using KeepassX as a password manager, secured using a key file and not a password.
I like KeePassX as well, but prefer to unlock using a password. I have a YubiKey programmed to output a 32-character random password that I generated, and I append to that a 16-character password that's in my head. I keep the YubiKey and the SD card on which I have the password vault separate. The SD card itself is encrypted* and the version of KeePassX I run is on the card and is one I compiled myself.
Not sure I'd be getting additional protection with a key file. But perhaps I am wrong.
*I did that so that someone couldn't just copy the KeePassX database off it when I wasn't looking and run some offline attack against it. The SD card also has a kind of social engineering defence mechanism on it to dissuade the curious from playing with it... I wrote the word INFECTED on it.
I have found my YubiKey quite nice for password security, but I use it in a slightly different way.
I use password-store, which is a git repository of GPG-encrypted passwords. While I'm on my main laptop, which runs Qubes, I can access passwords using a key stored in my keys vault via Qubes split GPG. Encrypted passwords are synced through SSH to my server. On my other computers, I can decrypt passwords with my YubiKey as a GPG smart card. This is probably way overcomplicated, but it works.
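For anyone curious, the basic password-store workflow looks like this (the key ID and remote URL are placeholders):

    pass init mykey@example.com     # create a store encrypted to that GPG key
    pass git init                   # version it with git
    pass git remote add origin ssh://user@example.com/pass.git
    pass insert web/example.com     # add a password
    pass -c web/example.com         # copy it to the clipboard briefly
    pass git push -u origin master  # sync the encrypted files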
I am security conscious, but not as conscious as jgc. I am doing the same, with the database on a drive and the drive encrypted. I have a "smaller" password in my head plus a YubiKey password which is appended to it. For each website I use a randomly generated password.
What is important is that in my daily life this works perfectly well, and I do not feel the annoyance of the added security at all, compared with using the same dadada password on every website.
I really recommend a head-stored + hardware-generated password too; this works wonderfully.
Sounds like a good system. Having something easy that you will actually use is the most important thing.
There is no one-size-fits-all solution and it should clearly depend on the threat model. I can imagine why someone who could be expected to have the keys to CloudFlare's infrastructure might want to take extra care.
I thought this might be the case, but it doesn't stop people from believing you may be a high value target. So the good security practices are very prudent.
I suppose I'm even less security conscious, as I'm storing my file in ownCloud and syncing it to my Android as well. I assume this use case prevents using a YubiKey password for enhanced security?
You can always put your files in ownCloud inside a TrueCrypt container, and anything that's unlocked by a password can use a YubiKey password with head-stored additions.
I'm not aware of ready-made solutions to locally decrypt cloud-stored data on mobile phones, though - I don't think you can mount TrueCrypt volumes on your phone. Anyone know of a way to do this?
Just had a look at KeePassX, as it looks as though it has a sleeker UX, when compared with KeePass2.
It may be considered a faux pas, but I have come to like the HTTP plugin for KeePass2, which allows Firefox to reach into my database when I come to sign into an online account.
> Also, you should download the source code, compile it (using a Linux machine) and always look over the source code for rogue functions, you CANNOT afford a vulnerability inside the password manager.
I'm not sure that this is actually possible in any reasonable sense. It's not that hard to throw an obfuscated back door into source code, especially in a complex system (ignoring the build chain and the whole trusting-trust thing).
Even if there are a small number of people who have the time and expertise to audit such systems, it just doesn't scale.
Of course doing constant code reviews for every single piece of software you use is preposterous. I have trouble keeping up with my employees' code reviews.
This is why security-conscious folks prefer open source software.
No one wants to audit every line of code they use (nor is that possible).
But if one relies on relatively popular open source software, just the fact that someone else could have audited it helps a lot. Add on to that the fact that you can use a linux distribution which keeps an eye on the vulnerabilities reported in the wild and updates the packages for you, and you are much better off over someone who only uses closed-source software and hopes and prays.
This is overboard and paranoid for the average user. You are almost certainly not a target for your government and probably not a criminal and so don't need to worry about full disk encryption, your google search history, a judge compelling you to unlock your phone, etc.
Most people should just use an adblocker and strong passwords.
People get their electronics stolen all the time. I use FDE despite knowing that if law enforcement ever seized my computer, they'd probably force me to unlock it or throw me in prison for contempt for the rest of my life. I use FDE simply because I don't want to worry about some asshole stealing my stuff and getting access to all my files.
It is scary to realize that there is no realistic real-life way to be at least close to keeping information secure. We are just closing holes in a sieve.
When I was a kid, watching American media, I thought it was incredible that people in TV shows would just leave their houses unlocked and trust their neighbours.
I don't know whether there is any place where people still do this, but in a community where everyone feels they belong and aren't driven to desperation, I could imagine an "open lock" policy working really well.
Everyone locking up their own stuff, and blaming people who did not lock theirs down if they get robbed, is in itself a form of arms race, and arms races aren't usually optimal.
The idea behind locking doors is to make things slightly challenging for a casual burglar.
My parents live out in a rural area, and they never lock their doors, house or car. The odds of someone driving to their house and burglarizing it are just too low to worry about it - and if someone were specifically targeting their house, they could just break a window and get in that way.
In denser areas, however, that logic doesn't make sense; it's trivial to case dozens of houses in five minutes just by driving down a street.
Locking doors isn't just to prevent burglars, especially if you live around bars. I don't know how many times a group of aggressive drunk football hooligans has gotten off the elevator on the wrong floor, then tried to get into my apartment thinking it is theirs or a friend's. If my door wasn't locked I'd be confronted with 6-7 violent goofs in my living room @ 3am.
If you're seriously concerned that someone will break into your house and remove the screws on your laptop to mess with it, you have problems way beyond what strong passwords and ad blockers can solve.
A lot of people regularly take their laptop through American airport security, where there are multiple reported cases of laptops being messed with.
Regarding hibernation/locking: many people leave laptops unattended in riskier situations than at home or at the office. As a trivial example, imagine somebody going around a university library, infecting any unattended laptop with a virus.
I'd rather have a fingerprint to lock my phone, and always lock on screen blank, than a PIN so complex I'll hardly ever lock my phone.
If you're living as some kind of enemy of the state maybe it's just time to stop developing software. And do you really need to holiday in North Korea?
Or maybe it's just time to consider the state the enemy that it really is. States are the main target of such security measures; "hackers" are really not a concern, while state actors on the other hand have proven to be major threats.
I will not let their fear tactics get in the way of my freedom of doing what I please without fear of leaks, theft or spying, be it directed toward my person or as a simple passive measure.
Ideally you have both: Unlock the phone with the fingerprint and unlock more private data with a passphrase.
Same for password managers: Are there any that allow you to split your data into two categories: Protected by fingerprint and protected by passphrase? I'd love to see that feature.
A fingerprint cannot protect data, since it's both public and low-entropy. It can authenticate identity to someone who holds the data.
I.e., you cannot securely encrypt something with a function of your fingerprint: anyone can cycle through fingerprint representations and eventually decrypt the data (or the key to the data). You can, however, authenticate yourself to someone (or something) which holds a plaintext encryption key, and once you have been given the key, decrypt the encrypted data. This only works if you can trust the person or thing to never give the key to an unauthenticated party. That only works with hardware, since any software which holds a key in plaintext can be examined to extract the key.
A fingerprint can offer some protection for your data, many times it will be sufficient protection and sometimes it will provide better protection than a weak password.
Alternatively, you could allow fingerprint unlocking only within 10 minutes of the last unlock.
This would cover the case where you use your phone a lot and need to lock/unlock faster, while forcing a password entry when your phone gets stolen or used behind your back. You can still be forced to unlock it right after usage, but at this point you might have bigger problems.
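In Python, the policy sketched above amounts to no more than this (the 10-minute window is the example figure from above):

    import time

    FP_WINDOW = 10 * 60   # fingerprint accepted for 10 min after the last unlock
    last_unlock = 0.0

    def can_unlock_with_fingerprint():
        # outside the window, fall back to requiring the password
        return time.time() - last_unlock <= FP_WINDOW

    def on_successful_unlock():
        # called after any unlock, by fingerprint or password
        global last_unlock
        last_unlock = time.time()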
I tried it, but it doesn't like my IP (VPN) and makes me do a CAPTCHA before every single search. I have switched to DuckDuckGo; I can understand that they have reasons for doing that, but it makes it unbearable.
Hadn't heard of it before. You basically trade a bit of search speed for more privacy - a trade-off I'm willing to make; it's my new default for now.
It basically gives Google results with a better layer of privacy if I understand it correctly...which I guess means they could be cut out by Google eventually?
I'm not convinced this localization argument holds so much water. Consider the following:
Case 1: If you're using a search engine not based in the US, and you're not a US person, then the NSA probably can't use any legal tools against you (depending on country). However, the NSA is allowed to use the full range of its capabilities to collect against you (PPD28 notwithstanding). They can infiltrate that service by technical or human means and carry out espionage activity without legal restriction (Title 50/EO12333). Further, they can retain the data unredacted for a long time.*
Case 2: On the other end of the spectrum, if you're a US person and you're using a US-based search engine, surveillance activities against you are far more complex. Warrants, NSLs, and/or other legal paperwork is involved, and there are strict rules on data retention, sharing, and minimization. That's not to say that there isn't surveillance, just that it comes with substantially more overhead. Meanwhile, most of the NSA's technical exploitation approaches are off-limits, and any collection/exploitation activity must be carefully managed.
Case 3: The intermediate case, where you're a non-US person using a US service, is a bit more hairy but still is better than the first case. While the NSA/FBI can utilize a range of legal tools (again, warrants, NSLs, etc) against you, because your data is likely entangled with US-persons data, it must also deal with all the overhead of minimizing and redacting that data (same as case 2). Similarly, the use of technical means against US providers is heavily restricted, so you won't be fighting against the same capabilities as you would be in case 1.
At the end of the day, which do you think is easier for the engineers at NSA: exploiting, entering, and just taking everything (case 1) or filling out a huge amount of paperwork and carefully handling the redacted scraps of data that comes back from the provider eventually (cases 2 and 3)?
I think you can make an argument for either side, but I tend to believe that technical exploitation is easier than legal, for now.
*Caveat here is that this intelligence data is hard[er] to use in US law enforcement activity against you. It's worth noting, however, that NSLs and FISA data are also non-trivial.
I'd recommend this as well. When I did my big migration to LastPass about a year ago (i.e. logged in to every site I ever remember having used and changed the password to a randomly generated one), I thought I was all set. But the site has reminded me at least three times that I registered on a lot of BBS-style message boards and maybe only made one or two posts before abandoning them forever and forgetting I ever registered.
Those are concerning, because I'm positive that something I registered for in 2006 and never used again probably used a weak, re-used password.
It was interesting when I found my name there the first time, on a service I did not even remember using. But I did use it and had just forgotten about it. Luckily, by that time I was already using different passwords everywhere.
Originally it was glitter nail polish http://motherboard.vice.com/blog/itll-take-more-than-glitter... ... the idea wasn't just to mark the screws to show if there'd been physical access, but to make a mark that's easily verified but very hard to reproduce, so you also know if your whole laptop has been replaced by the 'evil maid'. You need to photograph it to check!
> Use unique SSH keys for each service (sharing a SSH key on your GitHub/Gitlab account, network router and AWS/Azure instance is a very stupid idea)
I don't see how this makes sense. Assuming your private keys all live on the same machine (presumably with 0600 permissions in ~/.ssh), then if your machine is stolen and your user password compromised, access to one private key is the same as access to all of them.
I suppose, then, it's for those who don't want to be tracked, and not a "very stupid idea" per se.
But then again, if you don't trust the remote to know who you are, then why do you have an identity with them? I mean, the remote service is SUPPOSED to know who you are. That's kinda the point.
Paid services necessarily require a higher level of trust (since you are handing them money) than random internet services. So we are off-topic from ssh keys and identity.
If you don't want someone knowing your personal payment details (CC #, billing address), then pay in cash and use services that don't deliver things to your home. And if you can't, then just don't use the service.
But that's living in way too much paranoia for most of us.
And this is not what I'm saying. I'm saying: imagine somebody can force you to hand over your private key for one asset. They will then have the key for all of them, unless you've maintained separate keys.
I was actually almost involved in one such case; I haven't invented it out of thin air. If you can't imagine such a scenario happening to you, you're of course lucky, and you'll prefer to use one private key for everything. But the scenario is real.
I can't imagine a situation where I would be forced to give up a private key. And if I'm forced to give up one, I guess they can force me to give up the rest of them.
The scenario is simply: you perform some action on one service, and then some entity has the right (or the might) to demand from you the private key with which that action was performed, but not to say "give us everything you have."
The equivalent when the scenario is an attack rather than a legal game: some entity manages to hack the computer with which you access service A, and on which you have only the private key for A, but not your other computer, with which you access service B using another key.
With separate keys, just your access to service A is compromised; with one key, all accesses are compromised at once.
It's not about leaking your private keys (although courts very much do work like that....think of it as keeping separate keys to each room in your house, as opposed to a single master key).
The real goal is privacy, given that your public key is available on github and sent via plaintext when authenticating.
There are some added benefits to managing separate SSH keys for each server: it forces you to use tools to manage your keys, which makes it easier to mitigate disaster when the time comes to rotate your keys due to compromise.
This is not settled. Colin Percival, for instance, is firmly on the side of using RSA with many decades of cryptanalysis until EC solutions also have many decades of cryptanalysis.
The sad thing about this and other otherwise-good privacy guides is that they can be properly applied only by a small fraction of all the people who really need this privacy in their everyday work and life. I especially like the "look over the source code for rogue functions" part.
What about mobile privacy? Which OS? Which phone? Which apps? The author forgot there is even more private info we could lose via mobile, with its built-in sensors and features.
CyanogenMod without Google apps (gapps) is good enough (for normal life, not a Snowden-style situation, obviously). microG is a great open source replacement for gapps.
If you do want Google apps, at least turn off all the creepy features like Google Now, location history, etc.
I assume everything is hacked/unsecure and any information put on the net will be able to be accessed by all sorts of bad actors.
I laugh when websites etc. ask for a phone number to "help secure" the account. My first thought is: great idea, so now when you get hacked you can give up my phone number too!
The internet has been and always will be the Mos Eisley spaceport to me.
About full disk encryption for Windows: what is the safest bet here? I mean, what if a single sector of my disk gets corrupted, will I lose my entire data because of that? What kind of encryption is less prone to data corruption?
I'm worried about this. And how about .tar.gpg backups - if I lose a single byte, do I lose the entire file?
I'd add AppArmor or SELinux or virtualization (or all at once) for untrusted closed-source crap like Skype. Well, for things with a large attack surface, like web browsers, it's important too.
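For the AppArmor route, a skeletal profile gives the flavour (the paths and rules here are illustrative assumptions, not a working profile; in practice you'd bootstrap one with aa-genprof and refine it):

    # /etc/apparmor.d/usr.bin.skype - illustrative skeleton only
    #include <tunables/global>

    /usr/bin/skype {
      #include <abstractions/base>
      #include <abstractions/audio>

      # read-only access to its own installed files
      /usr/share/skype/** r,
      # writes confined to its own dotdir
      owner @{HOME}/.Skype/** rw,
      # explicitly deny the things that actually matter
      deny @{HOME}/.ssh/** rwx,
      deny @{HOME}/.gnupg/** rwx,
    }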
This is hard to recommend to everybody, but I use SELinux and this way I am more sure that my private keys won't get stolen.
Yes I do. And yes, I know that clipboard content hijacking is a piece of cake with Xorg. For this reason I launch the original Skype only when I need it, to have a call with my chief. Fortunately, such occasions are rare now. So I just avoid copy-pasting anything sensitive while Skype is running :)
For the rest of the time, I use an XMPP-Skype transport (gateway) to stay connected with ~100 of my Skype contacts. This XMPP-Skype gateway handles 1:1 chats and groupchats, which is OK for me. I host this system as a public service, so if you are interested, feel free to check http://decent.im . It is a work in progress on deploying powerful open source stuff in a supercharged and easily reproducible way, so no Slack killer yet and things are dirty - just a handy tool for me (and a few other account owners) to aggregate all of one's messaging into one very flexible mechanism.
Does anyone know of any good hardware password managers?
I'd love to switch from a software to an offline, open source, and self maintainable solution that will work for everything, not just websites/when I have my browser open.
I'm assuming OS X's FileVault is fine for full-disk encryption? It only sends your key to Apple if you choose to, and it's completely transparent from the end-user's perspective.
Personally I'd never trust any encryption provided by the OS - absolutely no one is in a better position to be compromised by bad guys (cf. the recent Apple/FBI scare).
I would rather use a third-party solution that's not so easily coerced.
It's not about being competent enough to implement it properly. Microsoft's BitLocker is surely implemented just as competently. But because of their ubiquity, these are the most likely to have government-mandated backdoors in them that Apple/Microsoft employees are not allowed to tell you about because of gag orders.
If you're looking for a tool which has a ton of easy security guides all in one place, you might like to try the Umbrella App. It has lessons and checklists on everything from how to send a secure email to how to deal with a kidnapping. Built by the human rights and tech community, it's open source and available on Android.
Given that virus checkers only catch about 50% of malware, and that recently there have been zero-days in some household-name virus checkers anyway, it might be good advice.
Some people will click on .exes because they believe the virus checker will protect them.
Actively scanning for malware in the background doesn't sound like a good idea to me tbh. The only malware detection anyone ever needs is Google Safe Browsing. And not running random crap executables :)
> Is it even possible to use the web nowadays without JS enabled?
Yes, but results may vary. I can do 99% of my daily browsing without JavaScript enabled, and for the sites where it's needed, NoScript can be told to always allow it (one specific script, or everything on the page). This is why you constantly see NoScript being recommended, it allows you to toggle JS on and off, as needed, which is invaluable.
I've been using NoScript for years and how much is blocked never ceases to amaze me. 99% of the scripts that most sites run have nothing to do with viewing content or usability, and everything to do with tracking (there are usually multiple instances, sometimes dozens, on a single page; it's astounding).
Another nice feature in NoScript that I just picked up on is the shift+left-click option in the script list. This allows me to investigate what that particular script is for, and choose to permanently block/allow it. Very handy, and also eye-opening in regards to privacy.
Trying to save privacy is like trying to save horses for transportation, or bows and arrows for warfare.
We should figure out how to build a society that thrives on transparency instead!
Almost positive one of uMatrix or uBlock has that functionality, because I get redirected to https when available and those are two of my very few extensions.
How do you know it's your browser doing the redirect rather than the server? The distinction is important because if it's the latter, you're vulnerable to sslstrip
It can be configured to use strong TLS, making it at least as good as a regular browser. That configuration isn't particularly straightforward, unfortunately.
The browser configuration here (disable various features) seemed too complex and reminded me of another, simpler, approach: do not power on your computer ever.
I get it, you probably want to be private and rather not have someone read everything you do. But if you follow this checklist to the letter, you'll have a big fat "SUSPECT" warning on your file in no time.
Hiding non-suspect behavior is, for everyone watching, the same as hiding very suspect behavior. If you do this and make a single mistake (anything really; speeding could be enough), there could be a red flag on your file that ensures your possessions will be searched (and possibly taken), and you should be prepared to spend some time in jail.
I get it: everyone should be hiding all their activity online so that hiding your activity online isn't suspect behavior. But I really don't think that will ever happen, and I'd rather be an open book about all my behavior than try to hide as much as possible while becoming a target.
Then, if you believe in the rule of law (in Germany we say "Rechtsstaat") and the presumption of innocence, it is kind of a civic duty to follow these steps, if only to oppose this line of thinking. Or even better, to bait those who would like to control us into acting openly. Especially if you are a lawyer, have enough money to pay a lawyer, etc.
I will probably piss myself and cry if I ever really "become a target" as happens in China, cartel-controlled parts of South America, dictatorships, etc. But I will be damned if I don't make some kind of token resistance to us going down that path, if all it costs me is keeping my privacy and maybe some legal hassle plus the cost of replacement if my stuff gets seized.
Even if you think you're doing everything right, once you're targeted by LEAs they'll find something.[1] Even if they find nothing, they can make your life miserable pretty much indefinitely, or until you run out of money for lawyers.
It's not just techies or criminals who want this level of security: companies also want to keep things secure, trust me. In the oil rig industry we used TrueCrypt to secure (1) employee/vendor lists and (2) location scouting information. It is difficult to obtain, and competitors would pay big dollars for it, regardless of how it was obtained.