Linux workstation security checklist (github.com/lfit)
376 points by signa11 on Aug 28, 2015 | 160 comments



Not the soundest security advice I've read recently:

> We recommend that you use the same passphrase for your root password as you use for your LUKS encryption (unless you share your laptop with other trusted people who should be able to unlock the drives, but shouldn't be able to become root). If you are the sole user of the laptop, then having your root password be different from your LUKS password has no meaningful security advantages.

Your root password is much easier to steal than your disk encryption password. Trick the user into running a program that does 'echo "alias sudo=evil-sudo" >> ~/.bashrc', or sniff it from an unrelated X11 window, or use a microphone. A microphone is far more likely to pick up a root password typed day-to-day than a password typed once at boot. If the root password is sniffed with a microphone, the attacker might not even have root access to your system over the network yet. If it's stolen with evil-sudo or via X11, you might realize you've been compromised before all of your data is exfiltrated. Neither scenario should let the attacker then steal your disks and be able to decrypt all of your data. Unless you follow the advice.
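
A minimal sketch of that first vector (evil-sudo is hypothetical; any code running as your user can plant it):

  # line appended to ~/.bashrc by any code running as the user:
  alias sudo='~/.evil-sudo'

  # ~/.evil-sudo then captures the password and hands off to the real
  # sudo via -S, so the prompt looks and behaves normally:
  read -rsp "[sudo] password for $USER: " pw; echo
  printf '%s\n' "$pw" >> /tmp/.loot
  printf '%s\n' "$pw" | /usr/bin/sudo -S "$@"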


I'm struggling to understand any situation where you should ever be typing either your LUKS password or your root password into a fully operating/running system.

If you're using sudo correctly, you do not enter your root password. Frankly, the only time I can remember being prompted for a root password in years is when the system has failed to boot and offers entry to a recovery/maintenance shell.

If you're using LUKS correctly for your boot drive, you enter it only during startup before the system has booted.

If someone has access to your disk and has your LUKS key, the root password is merely a formality.


You are entirely correct and it is too bad I did not delete my comment in time. Here I am with an intense dislike of bogus criticism on HN and yet perpetuating some. Sleep before you post...


If you have unrestricted sudo, which is how most single-user systems are set up, your password is effectively the root password.


> If you have unrestricted sudo [...] your password is effectively the root password.

Yes, but it keeps an audit trail of whose account was compromised, etc.


With root privilege you can erase all of the audit logs. A malicious attacker probably would do so if he/she cares about that.


You can't erase remote logs, or logs written to a write-only logging device (the ones I've seen seem essentially to be a serial port/USB device that emulates a line printer).
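
For the remote case, a single rsyslog directive on the workstation is enough to get started (loghost.example.com is a placeholder; @@ selects TCP):

  # /etc/rsyslog.conf: ship every log line to a remote loghost
  *.* @@loghost.example.com:514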


True, but that seems tangential to the question of whether using the same password for both the root account and LUKS encryption is bad advice.


What I found problematic (unless I'm missing something) was the suggestion that your day-to-day user should be in the wheel group: if that's the case, once your user is compromised the attacker can edit .bashrc to change $PATH and use a patched sudo to take over the machine.

IMHO "sudo" (or "doas") should be used from a different user, by switching to a different VT. Or at least only allow certain commands for your day-to-day user.


You don't need wheel to modify your terminal environment, $PATH or .bashrc. If your user account is compromised, your user account is compromised. Anything else is lipstick.

And I agree that the typical sudoers config that allows opening terminals as root isn't a good idea.

What I prefer lately is using a yubikey neo to ssh access a root shell. You can configure sudo and authorized_keys to allow directly opening a root shell, and you never enter any passwords anywhere except the pin to unlock the yubikey's openpgp applet. The yubikey will lock the key after six unsuccessful pin attempts and you can just yank the key out when it's not being used.
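
A rough sketch of the wiring, assuming a recent GnuPG with ssh-agent support and sshd listening locally (standard OpenSSH/GnuPG option names; adapt to taste):

  # point ssh at the key held on the YubiKey's OpenPGP applet
  export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)

  # /etc/ssh/sshd_config: key-only root login, no passwords anywhere
  #   PermitRootLogin without-password
  #   PasswordAuthentication no

  ssh root@localhost   # the only secret typed is the applet PIN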

Now all I need is some sort of phone app that reaches out to me on a side-channel and pops up asking me to authorize each login (something like libpam-askmeonmyphone). (I really like the way the Microsoft Account 2FA phone app works in this regard.) If I could change one thing about the yubikey neo, it's that I'd require it to make me physically interact with it to decrypt anything (beyond entering the PIN).


You misread my post; what I meant was that your day-to-day account is vulnerable, since you run your applications with it. If that's the account belonging to wheel, once your user account is compromised the attacker is root as well. One should use a different account, hopefully harder to compromise since it is used only for sudo.

That said, I like your use of yubikeys; even better than the phone, that's the sort of stuff that should be triggered using a smartwatch.


Basically, this is a GNU vs BSD misunderstanding. On BSD, only wheel members can use su to become root after entering the root password. On GNU/Linux anyone can su to root by entering the root password, regardless of whether they belong to wheel or not.[0]

The confusion with respect to these guidelines is because some BSD-inspired GNU/Linux distributions configure their sudo to use a "wheel" group.

> once your user account is compromised the attacker is root as well

... but only if they also know the root password.

Anyway, the long and short of it is that on GNU/Linux, wheel is only relevant on distributions that use wheel in their default sudo config.

[0] https://unix.stackexchange.com/questions/4460/why-is-debian-...


Right, I kept using "wheel" to maintain the terminology in the github page, but clearly this created confusion.

But my point was that, regardless of whether you use sudo or su, on GNU/Linux or on *BSD, if the account used to elevate privileges is compromised, the next time you elevate your privileges you should expect the attacker to follow you.

This is what I meant by "the attacker is root as well": it's just a matter of waiting for the next time you use su or sudo.


sudo'ing as your day-to-day user is dangerous for another reason, too: the default 15-minute password timeout lets subprocesses in your shell use your sudo credentials, even if you didn't intend them to. e.g. you run a sudo command, then later ./some-installer.sh - you didn't want to give some-installer.sh root, but it opportunistically uses 'sudo' and succeeds.

(I disable credential caching with "Defaults timestamp_timeout=0" in /etc/sudoers.)
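
To make the window concrete (some-installer.sh stands in for any untrusted script):

  sudo true            # prompts once, caches a timestamp
  ./some-installer.sh  # any 'sudo' inside rides the cached timestamp
                       # and silently gets root for up to 15 minutes

  # with "Defaults timestamp_timeout=0" in /etc/sudoers, every sudo
  # invocation prompts again, so the installer can't piggyback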


Sure, it always bothered me that guides show whole lists of commands prefixed by sudo, instead of teaching people to disable the timeout and use "sudo -i" or "sudo -s" to get their stuff done. See also the fact that OSX ships with tty_tickets disabled.

Still, unless one is sure that the sudo being invoked really is sudo, disabling the timeout won't be enough.


> it always bothered me that guides show whole lists of commands prefixed by sudo, instead of teaching people to disable the timeout and use "sudo -i" or "sudo -s" to get their stuff done

That gets to the philosophy of what sudo is used for in the first place. Back when sudo was experiencing a big upswing in popularity, it was a way to avoid having a root shell open at all. Forcing the user to type sudo before every command reminds them they're acting as root.

If you advocate for only using sudo as a different way to authenticate for a root shell that's cool and all (I often use it like that too), but you're going to run into a disagreement around the nature of sudo.


Sure, although I never found that reason to hold much water, given that in reality most distros ship with accounts that can open a shell (instead of having a list of allowed commands) and have the timeout not set to zero, so any security benefit that would come from not using "sudo -s" is pretty much voided.

And I'm skeptical that having users type sudo a bunch of times when they add a PPA and install a program somehow results in greater awareness, but this is purely an opinion of mine.

The other big reason for using sudo is the ability to change policy wrt which users can be root without changing the root password every time, and that clearly is true.


When you have multiple commands to run, chances are that only some of them require root privilege. Prefixing only those with `sudo` instead of executing all of them in a root shell allows you to reduce damage, should the "normal" commands be typed wrong or have bugs.
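
For instance (hypothetical tarball, standard paths):

  curl -O https://example.com/tool.tar.gz   # no root needed
  tar xf tool.tar.gz                        # no root needed
  sudo cp tool /usr/local/bin/              # the only step needing root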


> That gets to the philosophy of what sudo is used for in the first place. Back when sudo was experiencing a big upswing in popularity, it was a way to avoid having a root shell open at all. Forcing the user to type sudo before every command reminds them they're acting as root.

I do not think that is accurate. As I remember it, sudo gained popularity because it provided better control over access to the "root password" for teams of administrative staff. sudo was light-years easier and provided much more fine-grained control than figuring out how to design POSIX groups.


Sudo provides:

1) A mechanism to provide better control over access to root privileges where administrative access must be shared between multiple users.

2) An auditing mechanism to record both operations completed as the root user, as well as failed attempts to perform operations as root (usually authentication failures).

3) A way to remind the user that commands are running as root, as well as a way to avoid accidentally running commands as root, since you're never (without "sudo -i" or "sudo -s" or similar) at an actual root shell.

If you're using "sudo -i"/"sudo -s" all the time, you don't lose anything for #1, and losing #3 is mostly just an inconvenience (it's an extra layer of protection, but it's not really foolproof even with sudo). Losing #2 (auditing) can be a big deal, though, because if everything's happening at a root shell (rather than via the sudo command), you lose the audit log that traces what's being done as root. Gaps like that in audit logs should be an indicator that something bad might've happened, but you can't make that assumption if you're routinely causing them yourself, too.


Well, yeah. But that's not why sudo became popular.


Sudo is more often than not a security risk. If a program is not made to run as root, it is usually because it would give too much power to a user (i.e. sudo mount is stupid).

Add that to not having a root password and you have a single (weaker) point of failure (wheel group).


Frankly, I find it suspect that they even suggest having a root password. The only situation where this is useful is when sudo is somehow broken (e.g. a typo in /etc/sudoers). This happens rarely enough that I'm OK with a more heavyweight process (like booting Knoppix) to fix it.


It's the annoying times when your system forces you into single-user mode for repairs, and you have to enter something... or else boot up a rescue disk (the proper solution) to fix it, that make having a root password at all sometimes a good idea.

Really though, that rescue disk sounds like a better idea every time I think about the situation.


Depending on the breakage (e.g. if your initrd got borked), you'll probably need the rescue disk anyway in order to have access to the necessary tools to activate RAID arrays, decrypt disks, activate LVM VGs, etc.

Even without, IIRC, Ubuntu (one of the few systems that actually gets root passwords right by defaulting to it being randomly-generated garbage that nobody knows) will prompt for the credentials of a sudo-capable user even for single-user mode.


>Trick the user into running a program that does 'echo "alias sudo=evil-sudo" >> ~/.bashrc'

That's only going to get you the user's password, not the root password.


Good point. I've toned down my comment because that root password would be getting typed in less often.

An attacker might still bring an evil-su in addition to an evil-sudo, though. And even if you're logging into that root user only in another tty, it seems like an unnecessary risk to share the password with LUKS.


You actually tend to never use the root password on a modern workstation -- just sudo. The situations where you have to use a root password are usually if something has gone wrong and you have to log in via tty.


What do you mean by microphone?

Is audio analysis of keyboard chatter a thing or are we leaking passwords when we verbally communicate them to a co-worker?

I don't think this is related, but I found this keylogging technique combining microphone, camera, and accelerometer data to approximate finger/thumb positions for PINs: http://www.cl.cam.ac.uk/~rja14/Papers/pinskimmer_spsm13.pdf


Audio analysis of keyboard chatter is a thing that had a proof of concept years ago. I have no idea how often it is done, but it is certainly plenty possible.


Maybe prepare a pull request for this? ;)


Not to mention this gem:

> it is fine to write down your passphrases and keep them in a safe place


I don't see any problem with that. Physical security is much easier to manage than digital security.


The age-old "don't write your password down!" mantra is not meaningful without clarification.

That is essentially, of course, how password managers work!

There's no reason not to write a password down as long as it's securely stored. You probably need embarrassingly little physical security to store your password more safely than many dodgy website solutions!

I _still_ frequently get my _already set_ password emailed to me in plaintext either on sign-up, or when I click "forgotten". (The site is always treated to an angry email encouraging a thorough redesign - though not of course, a 'reply', which would give them even easier access to my password!)


You left out the beginning and ending of that sentence:

> "Unless you have concerns about physical security, it is fine to write down your passphrases and keep them in a safe place away from your work desk."



Why not? Write them all on a piece of paper and lock it in a safe. Probably much safer than a password manager.


Absolutely. I use a password manager for everything except a handful of ultra-critical sites, mostly things involving money or attack vectors to get access to my email. For those sites I don't trust to store in LastPass, I write the passwords down on paper. But I also do something I haven't seen others recommend:

Have a (logical) salt for all of the passwords. Don't write down that salt.

So, if you found my piece of paper with passwords, you might see something like this:

Etrade - I1999IbmfsaaymIwIbmoi

Gmail - D9cjeawfocsIdkwhtfts4r

But my actual passwords are something like this:

Etrade - I1999ibmfsaaymIwibmoi804WMainStreet

Gmail - D9cjeawfocsIdkwhtfts4r804WMainStreet

804WMainStreet is tacked on to the end of all of them, but you wouldn't know that from looking at the sheet of paper. Only my spouse knows the salt, and it's easy for us to remember, e.g., maybe 804WMainStreet is the address of the first place we lived together. In theory, this is reducing randomness, which might make it easier to crack one knowing the others, but I'm not super concerned about that.

The two most important elements of security for regular consumers are: 1) Use different passwords for everything. 2) Use multi-factor auth when available.

Whatever you have to do to achieve that is better than not doing it.

*And I actually use initialism for these passwords so I don't have to pull out the piece of paper often, only when I forget. In this example, the Etrade password might be derived from "In 1999 I bought my first stock as a young man. I wish I bought more of it."


> *And I actually use initialism for these passwords so I don't have to pull out the piece of paper often, only when I forget. In this example, the Etrade password might be derived from "In 1999 I bought my first stock as a young man. I wish I bought more of it."

Ideally, you'd just set "In 1999 I bought my first stock as a young man. I wish I bought more of it." as your actual password :)


Can't really argue with that. I guess I got in the habit of using initialisms because a lot of sites had limits of 32 characters for passwords.

But that's probably less true these days. Since they should be hashing the password anyway, why not allow something huge, say up to 1000 characters.


You can break into a safe with sufficient physical force. You can't break into a LUKS-encrypted hard drive with that physical force (*).

(*) Unless you either turn that force into the energy required to brute-force the passphrase/keyfile or use it in a way that invokes xkcd #538.


To all those writing the critical comments: I'd love to read a rebuttal to this written by someone who is a Linux and security professional, and explaining not only what is wrong here but why in addition to best security practises. Thanks.


Encrypt the root emails instead of sending them cleartext; you can easily write a small script to do this and not give away internal security information.


FireWire is a vulnerability for Linux only because the kernel maintainers want it to be. There's a register in FireWire controllers which controls the address range for which remote memory accesses are valid. It can be set to 0, which locks out that function. The last time I looked, years ago, it was set to allow access to the first 4GB of memory, because the code pre-dated 64 bit systems.

I once proposed setting it to 0. This was rejected because there are kernel debuggers which use it.


Linux has, in fact, supported a configuration option CONFIG_FIREWIRE_OHCI_REMOTE_DMA (default n) since 2.6.26 released 13 July 2008. [0][1] In kernel 3.14, this was changed into a module parameter (default n). [2] I did not further investigate the status of firewire modules before then.
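
For anyone wanting to pin that default explicitly, a modprobe config along these lines should do it (parameter name as documented for kernels >= 3.14):

  # /etc/modprobe.d/firewire.conf
  options firewire-ohci remote_dma=0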

I was unable to locate any relevant mailing list posts referencing "firewire" or "DMA" by the author "John Nagle". [3][4]

[0] http://cateee.net/lkddb/web-lkddb/FIREWIRE_OHCI_REMOTE_DMA.h...

[1] http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.g...

[2] http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.g...

[3] http://search.gmane.org/?query=firewire&author=John+Nagle

[4] http://search.gmane.org/?query=dma&author=John+Nagle


It's been over a decade since I looked at this. I wrote a FireWire camera driver for QNX around 2003, and looked at Linux to see how they did it. At that time, the upper limit for external memory requests was a hard-coded constant. The config parameter came later. I think this was discussed on USENET, but I can't find it right now.

FireWire doesn't really have "DMA", anyway. It has some auxiliary functions to let the network controller get and set single memory words, one word per packet request/response. All bulk transfers use a different mechanism, which works more like an Ethernet controller, shipping blocks around into receiver-designated buffers. The "memory access" function is in there as a way to send commands to devices. This thing was designed by people who thought that storing bits into a control register was the proper way to talk to a dumb device. In practice, dumb devices now have some small processor, and the "memory access" function ends up being processed by a switch statement, to do "turn camera on" or some such function. There's no real reason for a PC-scale machine to need the DMA functions enabled, except for kernel debugging. That should be permanently disabled in all production systems.

Thunderbolt, on the other hand, apparently really exposes memory on an external cable as a normal function. That's how it extends the PCI bus to external devices. Not good for security.


Are these protections active before the kernel has booted?

Does DMA default to be off until enabled?

If firewire (and thunderbolt, expresscard) aren't DMA-free by default, then there's a time window before/during boot in which an attack could happen.

Full disk encryption/TPM/Secure Boot could help mitigate this though.


I am wondering myself too.

  The new module parameter "remote_dma" (default = N, enable
  unfiltered remote DMA = Y) replaces the former build-time 
  option CONFIG_FIREWIRE_OHCI_REMOTE_DMA. (This kernel 
  configuration option was located in the "Kernel hacking" 
  menu, item "Remote debugging over FireWire with firewire-
  ohci".) It is therefore now possible to switch on RDMA at 
  runtime on all kernels with firewire-ohci loaded or built-in, 
  for example for remote debugging, without the need for a 
  custom build option.
from: https://ieee1394.wiki.kernel.org/index.php/Release_Notes#Lin...

I am not an expert in these matters, but could it be that OP is wrong with regard to firewire? From what I am reading here, dma is off by default, and can only be activated at runtime.

If the OP is right about the need to disable firewire, I hope someone could explain why so…


Got a CVE or lkml link?


That's a stupid excuse, they should put the flag to flip that bit in the kernel-hacking menu of menuconfig.


Or default to that being disabled unless "debug" is passed in the kernel boot parameters.


Nice checklist, signa11. But there are a few points I should mention:

1. TPM on recent Intel hardware is controlled by Intel Management Engine (http://libreboot.org/faq/#intelme) which basically acts as a hardware backdoor which cannot be disabled or controlled in most cases.

2. About firewalling: It's good to filter out even ping from the Internet (it's almost always fine to keep it enabled for the LAN segment) to make automatic detection slightly harder (LOW); see the iptables sketch after this list. BTW, installing coreboot instead of manufacturer-provided firmware (if possible) could also be a good improvement (PARANOID).

3. As for the browser (and skype and all the rest of the Internet applications): It's a good thing to block and audit strange actions such as attempts to access ssh or pgp/gpg keys. By audit I mean setting up a quite visible and persistent notification. (MEDIUM)

4. Also, it would be great to add links to the NSA Linux Configuration guide (http://www.nsa.gov/ia/mitigation_guidance/security_configura...) and the CIS Security Benchmarks (http://benchmarks.cisecurity.org/downloads/browse/index.cfm?...).
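
A minimal iptables sketch for the ping filtering in point 2, assuming a 192.168.0.0/24 LAN segment (adjust to yours):

  iptables -A INPUT -p icmp --icmp-type echo-request -s 192.168.0.0/24 -j ACCEPT
  iptables -A INPUT -p icmp --icmp-type echo-request -j DROP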


> 1. TPM on recent Intel hardware is controlled by Intel Management Engine (http://libreboot.org/faq/#intelme) which basically acts as a hardware backdoor which cannot be disabled or controlled in most cases.

Seems kinda like this point is conceded: "plus there is a pretty high degree of certainty that state security agencies have ways to defeat it (probably by design) ..."

Other than perhaps misplaced faith, you're no worse off than you would be without TPM?


So you mean that having device with unlimited network, memory and TPM data access with encrypted firmware and separate processor should not be considered as a huge risk factor?

Due to a targeted attack or a leak from Intel, potential malware can use it to elevate privileges, hide from any type of audit, survive a complete system reinstall, and even be used to silently infect systems by remote entities.

And the lack of a TPM module allows the encryption password to be stolen by an application running with system privileges, which already has all the required access anyway.


Meh. I have encrypted /, /home and swap. I've disabled Secure Boot, and the TPM, and use legacy boot. I don't really trust my laptop manufacturer to get all this stuff right. I like to keep things simple (which is why I use syslinux instead of GRUB as a bootloader. GRUB2 is ugly as sin to configure)

On the FF extension front I'd like to add: Proxy Selector, Self-Destructing Cookies, and RefControl as recommendations.


> Meh. I have encrypted /, /home and swap. I've disabled Secure Boot, and the TPM, and use legacy boot. I don't really trust my laptop manufacturer to get all this stuff right.

Without secureboot, how do you know your kernel hasn't been modified to log all your keystrokes (including the passphrase to your encrypted partitions)?


If someone has physical access, they can remove my keyboard and install a hardware keylogger anyway. Updating my kernel image wouldn't be very effective given that it gets updated by Arch more often than it gets booted from.


And yet, the primary use-case for file system encryption is to protect data at rest, from someone with physical access to the disk. It seems odd to draw the line at physical security so arbitrarily -- that you'd be willing to encrypt your entire root file system and then throw your hands in the air when it comes to securing a tiny boot partition.


I mainly encrypt in case of theft, /boot contains nothing valuable. / contains a lot of information about my configuration, and installed packages, as well as the key to /home


Yeah, encrypting for theft or so you have peace of mind while the machine is off but in sight is a completely valid use case.

It all depends on your threat model. In the case someone is taking the machine from me while it is off (ie: most theft or legal problems), I have a chance given FDE.

In case someone has physical access to the machine without me around, I have little to no chance, no matter what I do.

A threat model which includes an attacker having potential physical access to a machine to perform an evil maid or other blackbag cryptanalysis is a threat model which is very difficult to accommodate, and indeed replaced boot files are just the start of your problems.

A threat model without this however, has no reason to necessitate secure boot.

As such, I see no gain in using UEFI or SecureBoot as this guide outlines. It worries me that the author didn't consider a realistic threat model when writing this guide.

This guide also suggests:

> Unless you have concerns about physical security, it is fine to write down your passphrases and keep them in a safe place away from your work desk.

So it's highly confusing what sort of threat model the author had envisioned this to be written for.


Self-Destructing Cookies is an absolute must. It's fantastic practice that makes sure that you don't have any cookies persisting longer than they should, and really reduces your footprint.


It certainly makes sense from a security standpoint, but is there any additional overhead to encrypting swap? This is the first time I've ever seen it mentioned as a security measure.


Modern systems have enough ram that swap is not much used. If it becomes necessary, the overhead of encrypting will be the least of your concerns. If your system is swapping, it's crawling anyway.


In theory, anything that could be in RAM could also end up in swap. So yes, it is important. And yes, there is overhead, but it isn't too bad if your system isn't RAM constrained and your CPU supports AES-NI or similar.


Ideally, you wouldn't even have a dedicated swap partition. No data is always more secure than even the most perfectly-encrypted data.

If you do have swap, though, then the encryption probably won't matter much compared to the fact that your machine is swapping in the first place.


Hibernation is a very nice feature (I use it for laptop and desktop machines) and it requires a swap partition. Also, an encrypted swap is even more important in this scenario.

In the laptop I'm typing this on, swap is one of the partitions over an LVM2 physical volume over LUKS.


That's why I said "ideally" :)


You don't encrypt /usr? I really do not understand the point of encrypting a subsection of your drive.


I'm assuming he specifically mentioned encrypting /home because it's on a different partition/disk? I'm guessing /usr is on the root partition which he said he encrypts.


I guess that /usr is probably on the same partition as his root.. But depending on the distro, and how the system is managed; it may be the case that every single file in /usr comes from a package, and can be cryptographically verified using your package manager (this is certainly possible with RedHat, Debian + derivatives at least). As long as /var or wherever your package DB lives is encrypted..


>I really do not understand the point of encrypting a subsection of your drive.

To prevent people from reading sensitive data on a stolen/lost laptop.


Everything except /home is on the root filesystem. I've never seen the point in loads of mountpoints.


It's a legacy thing from running UNIX machines with smallish hard drives. A rogue process' log file could fill the root volume and prevent logins to the system. That's why /var/log or /var was on its own mountpoint/partition. It was much more of a UNIX sysadmin thing than a Linux sysadmin thing, as ext2 reserved space for root to do things.


Historical reasons aside, some operating systems (like OpenBSD) are designed to be able to implement different security policies by filesystem. For example, you could mark a given filesystem as executable or non-executable, adding yet another layer of security (at least policy-wise) to a system. And really, with things like LVM and btrfs, there's little reason why this is a bad idea anymore, since expanding subvolumes/LVs is generally trivial.


You can do that in Linux by bind mounting a folder to itself with the more secure options. I have a couple of systems where I do this to have directories noexec, nosuid, etc. Kinda hackish but useful.
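
Roughly like this, using /srv/data as an example (the options take effect on the remount, since a plain bind mount ignores them):

  mount --bind /srv/data /srv/data
  mount -o remount,bind,noexec,nosuid,nodev /srv/data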


Have you never used more than one disk in a machine? Locked out of a machine because /var/log is full?


More than one disk -> LVM

And no, only /tmp has reached capacity for me when building large packages with Arch's makepkg.


Any security checklist should start with a description of both how the machine is to be used and the expected threat models. There are plenty of things in this list that I disagree with, but only because I am looking at different security needs.

For instance: I see no mention of Tor or VPNs. So this workstation isn't concerned with APT-style threats, or anyone else with the ability to manipulate network connections at a high level. And my quick read sees no talk of memory encryption or any of the physical measures for countering cold-boot scenarios.

This is not a workstation for international (China) travel or for protecting against surveillance. It has some good advice, but is certainly not comprehensive.


> Any security checklist should start with a description of both how the machine is to be used and the expected threat models.

It states right at the top that the target audience is Linux sysadmins and their workstations.

> It has some good advice, but is certainly not comprehensive.

"This, by no means, is an exhaustive "workstation hardening" document, but rather an attempt at a set of baseline recommendations to avoid most glaring security errors without introducing too much inconvenience."


Ya, all of a paragraph. Any security document, even something as basic as an internal policy statement about passwords, should begin with a thorough discussion of the threats that were in the minds of the drafters. Security is heavily a matter of opinion and perspective. The background on which those are based is therefore as important as the individual recommendations.


Make sure you check the regs for China before modifying machines for said travel; a few years back I was supposed to head there (didn't due to a snafu), and encrypted my normal stays-at-work laptop's drive (among other measures), but it turns out that I'm not allowed to bring an encrypted device in without advance permission. Might be able to get away with it, but what happens should it be detected?

If I was going to China and really needed to bring a machine, I'd bring a Chromebook with some means to run Linux and the minimum needed to pull critical items... and it would get binned upon return, preferably without being powered up anywhere near any means to connect out (though I'd probably want to ensure it was wiped).


Nice list, until... install a closed source product that sends backups offsite (SpiderOak). wtf?


The other backup suggestion's odd too - make a good backup disk passphrase, store it in your password manager. (where's the password manager backed up?)


I was using Wuala but they are closing it down later this year, starting from now. They recommend switching to Tresorit, Swiss based with end to end encryption and no keys held on the server (lose your password at your peril). Any recommendations for cloud storage that is secure and has a Linux GUI? Tresorit is not open source either as far as I can tell from the website.



Nice, doesn't do Windows though which I am locked into at work, and is aimed solely at backups, not sharing and managing files over multiple computers and OSes which is what I was mainly using Wuala for. But you answered the question, so you get an Internet point.


The Deja-Dup front end for duplicity is built in to Ubuntu. It does encryption, and knows how to store in all sorts of places, including locally and on multiple cloud providers.


And, last I checked, SpiderOak's protocol actually has a flaw: they could potentially store a copy of one's initial key. I emailed them about this, but no response.

Cyphertite was really interesting, but it sounds like they're going out of business.


That's why SpiderOak moved from browser-based signup to in-app signup, to eliminate this possibility. (The switch was made in April 2015.)

disclaimer: I work for SpiderOak, but the response is my own.


That's really good to know! I'll have to check it out again—other than that one flaw, it looked like a really smart set-up.


It's just mentioned in passing, alongside other recommendations.


I always install fail2ban to prevent brute force ssh attacks from getting in. It's popular enough that it's probably available in your distro's package repositories.

http://www.fail2ban.org/


Also, don't use SSH passwords. Authenticate only with SSH keys.

An SSH private key is almost impossible to brute force, compared to passwords. (unless your SSH key generator has been patched by a clueless Debian maintainer)
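
The relevant sshd_config lines (standard OpenSSH options):

  PasswordAuthentication no
  ChallengeResponseAuthentication no
  PermitRootLogin without-password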


Funny tidbit. I spun up a DO box (luckily it wasn't too important) to do some tasks on.

I set the root password (over ssh) to something random but only 8 characters. I meant to change it to use only keys, of course, but the machine was compromised in under an hour!


Better yet: it's a workstation. There's a pretty solid chance you don't actually need sshd to be even running.

(Granted, this guide specifies that it's meant for sysadmins managing groups of workstations, in which case SSH access might be necessary for remote administration, but for most users, SSHing into a workstation is unnecessary.)


Alternatively, use SSHguard if you want/need IPv6 support.

http://www.sshguard.net/

https://github.com/fail2ban/fail2ban/issues/1123


Why would you have sshd running on your workstation in the first place?


I've ssh'd into colleagues' machines to help them with system problems, and they've ssh'd into mine to retrieve their VPN keys. Not all of my colleagues have ssh access to a common server somewhere.

I've also remoted into my workstation from home to do emergency work.


Ideally, those colleagues would keep sshd disabled unless they actually do need your remote assistance, at which point they'd "service sshd start" or "systemctl start sshd.service" or "/etc/init.d/ssh start" or "/etc/rc.d/rc.ssh start" or what have you. Minimization of attack surface is an important part of a comprehensive security strategy, and a remote login system - even one with a phenomenal track record like OpenSSH - contributes pretty heavily to that attack surface.


It's irreplaceable when one occasionally works from home.


So that you can ssh to it?


To allow remote management tools to run. If there's one thing that'll make your systems insecure, it's letting everyone manage their own boxes.

SSHd allows you to restrict particular users to IP ranges, and of course you can use iptables and fail2ban too.


For my home computer, sshd is always on. It is configured to disable password authentication. On the firewall, I authorize only two things: the ssh port and wakeonlan. If I need to access another port, it is generally enough to temporarily open an ssh tunnel.
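
For example, to reach a web UI on the home machine without opening its port on the firewall:

  ssh -L 8080:localhost:80 user@home.example.org
  # then browse http://localhost:8080 locally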

I think that you lose most of the advantages of a unix computer if you can not access it remotely.


I have Linux on all my home machines, and none are configured to be accessed remotely. There are still plenty of advantages. Far fewer viruses / malware / crapware. Better performance on underpowered machines. Better development environment (for the sort of development I do). No phoning home to MS / Apple (as far as I am aware).


> I have Linux on all my home machines, and none are configured to be accessed remotely. There are still plenty of advantages. Far fewer viruses / malware / crapware.

I don't think the 2 ideas are related. I can't think of a script kiddie that would brute force your SSH to install a virus on it. Join a botnet maybe - but probably not to just install a virus.

Also if you use something like SSH keys or OTP - it's pretty much guaranteed that no one but you can access it.


I think I would need to configure my router to port forward ssh before they could do that. (Or am I being dumb in assuming that they can't get around that?)

Anyway my point was that Linux has plenty of advantages (for me) without being remotely accessed.


Today, my wife (at home) called me (at work) because she could not launch chrome anymore. The disk was full. The files /var/log/kern.log and syslog were eating gigabytes. I was able to solve her problem remotely. I have also installed ubuntu on the computer of my old father who lives far away. Regularly, I upload pictures (he is my backup for pictures) and put a shortcut on his desktop.


I suppose SSH is easier to use on machines you don't own, but the best practice would be to set up a VPN tunnel. Six of one, half a dozen of the other.


Why is a VPN better practice?

Only I've been using sshuttle to make ad-hoc tunnels and it's exceedingly easy - there's details at http://alicious.com/digitalocean-free-credit/ (if you'll forgive the rather mercenary promotional slant in that post).

In short "$ sshuttle --dns -r root@X.X.X.X 0/0 --exclude 192.168.0.0/9 # start sshuttle" tunnels everything but my local network over a server with SSH on it [I've not checked for myself with Wireshark yet, do your due diligence].

When setting up I found [Open]VPN a bit protracted in comparison.


VPN and SSH are only partially overlapping in their features. Are you suggesting that SSH only allows local connections, and your VPN is what allows you local network access?


It may be silly to ask[1] but is there a similar list for Mac OS X?

[1] Silly because, you know, closed source


There's the DISA STIG for OS X 10.10 [0].

[0]: https://web.nvd.nist.gov/view/ncp/repository/checklistDetail...



The NSA [1] has a pretty good list for the major OSs.

[1] Yes, pretty ironic, isn't it?


I mean, if anyone knows about computer security, it'd be them. Nice of them to share.


No, it is not at all ironic that NSA provides STIGs for major operating systems. NSA is responsible for SIGINT and Information Assurance.


If only the NSA knew that :)


Why can't someone simply make a security wizard for Linux? Like, I run the program and it gives me options and changes the settings based on my selections. Why must everything be so manual every time on Linux...


They have, it's called Bastille, and it's been around for quite a few years: http://bastille-linux.sourceforge.net/


Last update in 2012...is it still workable?


This was a nice presentation comparing security in general to how cars were built in the 1960s, when engineers made the car work great - but with little consideration for "user error" or the user experience in general. 50 years later cars are highly protected against dumb things the user might do, and he suggests that's how the security community should look at building stuff, too (while hopefully not taking another 50 years to achieve great security with a great user experience as well).

http://kernsec.org/files/lss2015/giant-bags-of-mostly-water....


Try YaST on openSUSE. Offers precisely what you're looking for (among other things; it's basically the Linux equivalent to Windows' Control Panel).

Relevant documentation: http://doc.opensuse.org/documentation/html/openSUSE_121/open...


Check out http://hardening.io - not quite a wizard, but a good start.


SecureBoot!? Hahahah, Linux Foundation marks this as critical?

I am sorry to laugh, but thank God the LF and others fought tooth and nail for some way to have someone other than Microsoft have the key.

But seriously, did anyone else laugh?


Why laugh? On proper UEFI implementations you can enroll your own keys. Arch Wiki (as usual) has an article with more details:

https://wiki.archlinux.org/index.php/Unified_Extensible_Firm...

It may not provide 100% security (what does?), but using this still provides much more security than just booting whatever lies on disk without any sort of verification.

So yeah. Laughing at this advice is at best uninformed.

A nice bonus-effect from using secure boot is that most UEFI implementations secure-boot faster. Why? Because when in "insecure" mode it keeps up a splash-screen saying "Booting insecure" for a few seconds before moving on.


I laugh, and get downvoted, because SecureBoot was a noble concept, and rightfully open source enthusiasts made it clear that trusting select corporate bodies and vested interests to be the sole distributor of keying material for the OS was problematic.

I deal with computer deployment for a living. I realize there is far more nuance, but I trust centralized system verification like I do centralized SSL PKI: I have to, and no one is trusting my self-signed certs unless it is not seriously work-related stuff. The majority of people, and I have a few budding Linux enthusiast friends, just turn off Secure Boot. Not to mention the numerous IT contractors I meet. It is embarrassing.

UEFI is also a mess. I have recently read some good UEFI attack papers, and it scares me to no end, as the firmware extension capabilities make the B in BIOS increasingly seem more ironic. And the potential of having a FAT16/32 partition of exploitable garbage from system vendors puts us just back where we started.

I think this shit is great in theory, but in implementation I do not trust corporations with TPM devices, let alone SecureBoot. It works in principle, but only as long as you trust the US corporate and federal government interests.

For reasons long since expounded on HN and elsewhere, I do not.


> rightfully open source enthusiasts made it clear that trusting select corporate bodies and vested interests to be the sole distributor of keying material for the OS was problematic.

And that was a good thing. It put pressure on getting the capability to add your own keys.

> I realize there is far more nuance, but I trust centralized system verification like I do centralized SSL PKI: I have to, and no one is trusting my self-signed certs

Agreed. I guess it all depends on what you refer to as "security".

For me general internet security is about making it harder to get impacted by drive-by malware, portscans and similar.

I set as a general rule that I don't believe there exists practical security-measures which can counter-act malicious activity if anyone has local access to my hardware. I don't try to guard against that.

If your definition of "security" involves guarding against possible systematic attacks from US government agencies using pre-installed (Microsoft) keys, obviously secure boot is not going to provide that for you.

My point was that laughing at secure boot as a measure of increased security because "lulz Microsoft" is a knee jerk reaction which is factually wrong.

> The majority of people, and I have a few budding Linux enthusiast friends, just turn off Secure Boot

And are you really going to tell me they are not more vulnerable than if they had been using secure boot?

Secure boot can be used to increase security and I don't think that is debatable. I think the reason you're getting downvoted is because you seemingly dismiss this point.

> UEFI is also a mess

I think I'll have to agree with this one. With BIOS being clearly insufficient for modern hardware and software needs, UEFI was a pendulum move gone too far the other direction.

It's over-engineered, supports too many crazy things, and coupled with other "messy" features like Intel System Management mode (which also allows code outside the OS), the possible attack vectors (coupled with physical access) are getting increasingly crazy.


I thought there was a significant gap between our views, but I think you and I are of very similar viewpoints. Thanks for teasing something more detailed out of me, so I do not feel like so much of a troll.


I agree with you. I had to design some concepts for securing a boot process or replacing BIOS. My first inspirations were Open Firmware, Coreboot, and U-Boot, which already do much of the job. However, all are too complicated, incomplete, or Forthy. So, I saw it from three basic angles, drawing on what was done in the past:

1. Replaceable ROM or read-only, flash chip that the system booted from with open-spec for user customization. You can trust their bootloader, yours, etc. Modifications require physical replacement to avoid software attacks. You can know what it's doing because you can flash it yourself or use your own if it's physically compatible. Can be cheap. Simple.

2. Combination of anti-fuse ROM and flash. The ROM has a highly robust firmware that test chip, initializes some stuff, pulls firmware out of flash, does crypto checks on it, and then runs it. In a maintenance or update mode, the ROM + flash combo can pull up an upgrade into memory, disable all I/O that can affect it, check it, flash it into storage, and reboot. ROM can't be modified without physical replacement, so always a root of trust to start with. Can be combined with No 1 if necessary.

3. Software-only protection necessitates combating 0-days. This means gotta use an EAL6+ development process. Abstract away hardware details into functions with interface types and checks. Code in safe subset with static analysis, testing, and simulation. Make different device handling pluggable modules to run that interact with safe interface. Extract implementation from that. Examples include using SPARK Ada, MISRA C w/ ASTREE checker, or Java via BootSafe technology. Optionally auto-gen tech like Termite for drivers.

Personally, I think we could go with No 1 on everything down to laptops and some tablets. Ultra-compact things such as smartphones can use No 2. No 3 is only for the most stubborn, cost-sensitive companies and those are unfortunately least likely to go the extra mile lol.


>On proper UEFI implementations ...

You're going to have to qualify that.


For your enjoyment, a screencap of UEFI dumping core on a fancy Dell "server class" machine during bootup:

http://i.imgur.com/YpOQXet.jpg

Work required that I restarted this machine frequently -- it would fault during boot probably 1/20 times.


It's no secret that the full UEFI spec is quite complex. Some parts are mandatory while others are optional.

It's also no secret that lots of hardware vendors are terrible at implementing software. The usual rule is that the less software they write, the better.

The result is that different vendors and OEMs have a different quality UEFI firmware implementations and different feature-set available in them. Some are clearly less proper and less complete than others.


Well, that's really the question, isn't it? Is there more security added by enabling UEFI, or keeping it disabled? I managed to brick four Lenovo Thinkpad T540p mainboards due to a UEFI bug. Fortunately I had the on-site corporate maintenance contract (that time it paid for itself), because neither I nor Lenovo could figure out why the system would end up getting so badly bricked it couldn't be booted at all. Turns out there was a bug in the UEFI implementation where if it was enabled at all (regardless of whether it was enforcing secure boot or not), and you had a Samsung SSD installed, the UEFI implementation would write garbage into its non-volatile flash storage, that would completely brick the mainboard, and nothing would fix it except for a complete mainboard replacement.

Sure, it was a bug, but if you've read the UEFI spec, it's scary how complex the thing is. It reminds me of all of the complexity NSA employees managed to insert into the IPSEC and TLS standards. Complexity kills, especially when security is concerned. You really want to keep the Trusted Code Base small and simple. And UEFI is not simple. Combine that with the competence traditionally associated with BIOS programmers, and the results are very sad....


Good questions all together. But if I were to make one technical nitpick...

> Is there more security added by enabling UEFI, or keeping it disabled?

You're not really disabling the UEFI firmware at a technical level. You're just telling the UEFI firmware to load a UEFI BIOS compatibility shim, which then proceeds to load a unverified bootloader instead.

How much security do you expect to gain from that? I don't think there's any evidence this will help make your machine more secure against software-based attacks. Of course there are no absolute answers, or otherwise we wouldn't be having this discussion.


Well, the question is kind of moot, because I haven't dared to re-enable UEFI boot since. Supposedly newer BIOSes have the bug fixed, but risking another round of motherboard replacements is just not worth it to me. I suppose if I cared about UEFI it might be a good idea to try it before the maintenance contract runs out, but as a kernel developer, I'm constantly replacing the kernel, so using UEFI is a PITA anyway.


I didn't laugh but I got a bit suspicious about their true motivations as soon as I saw that.


Not everyone has an evil agenda. And, if you read the guide, it specifically mentions the downsides of SecureBoot and offers alternatives (AntiEvilMaid).


So.. mostly use SELinux and UEFI? This is NOT the advice I wanted to hear.

Maybe I needed to hear it, but both of these things (last I tried them) were a giant messy pain in the rear end. And from what little I understand, both have parties involved in their creation/promotion (MS, NSA) that might not have stellar open source/privacy pedigrees.

Maybe worth another look.... IDK. But for now... I turn both off.


I'm working on a fully encrypted laptop (LUKS, all partitions except boot/efi) and feel quite secure.


I have a fully airgapped and TrueCrypt-encrypted computer and don't feel secure at all. I guess it all depends on the attacker you envision and the trust (or lack thereof) you have in your encryption software.


What kind of breaches do you fear with your air-gapped setup?


The Police.


Shameless plug: a software "checklist" for desktop users - https://github.com/FedericoCeratto/desktop-security-assistan...


Wasn't the Linux Foundation supposed to be just an umbrella org that provided employment to key Linux devs? So that they could be independent from corporate influence and work independently/freely without corporate interference.


IMO the Linux Foundation serves three functions:

1. It manages assets:

  a. Linux intellectual property, like the Linux trademark.

  b. It secures funding and invests the proceeds to establish fellowships for kernel developers.

  c. It manages computer hardware for kernel developers.

You remember how https://en.wikipedia.org/wiki/Kernel.org#2011_attack was a thing? It seems reasonable for the people whose job it is to operate computer hardware to publish guidelines on securing their own workstations, and those of kernel devs who randomly have root access on kernel.org.


I'm surprised that they left out Tomoyo as a MAC option. I thought it had a decent following.

https://en.wikipedia.org/wiki/TOMOYO_Linux


Setting SELinux to enforcing mode on a desktop is almost impossible.


Works for me. SELinux is enforcing by default on Fedora and I've never had to change it.


Do you add any software that you use into the SELinux policy? Otherwise SELinux is useless.


Maybe if you need this much security just use OpenBSD? :)


Can you enlighten the uninformed as to what would make OpenBSD more secure by default? It's my firm belief that the only secure system in the world is one that no one can use; the biggest security hole you can add is a person...


PaX is not a MAC framework.


This is an interesting list, and I don't see anything glaringly wrong ( a few personal preference subjects, but..), so here are my handful of extra tips on top:

1. You can encrypt grub in order to prevent single user mode et al boot attacks. It can also make FDE systems recovery a pita though.

2. They already said it, but GRSEC is where it's at. It's really the future of linux security enhancement, and while you can run it in tandem with SElinux et al, I find it's better to run GRSEC and just fine tune it. You will thank yourself for learning it.

3. These days, you need a HIDS, full stop. What good do logs do if you never know what happens or only check your logs once a week/month/year/never? After spending time trying all the main ones out, OSSEC is my HIDS of choice.

4. SSH: while fail2ban, denyh0sts, et al are all workable options along with the listed option tweaks, what I find to work the best in addition are two things. A) Obscure port. We all know security through obscurity isn't, but reducing scripties bogging stuff down and keeping your logs cleaner helps imho. (it's also the difference between a metric ton of log alert emails and only a few). B) Two factor all the things. I am using the Google pam module, "libpam-google-authenticator" (see the PAM sketch after this list). I stopped trusting tor but some friends of mine swear by ssh over tor hidden service.

5. The bottom line is that the linux kernel is out of control at >10mil loc, and 0-days/1-days are prevalent. If you have an internet facing system, it's probably going to get compromised, what you really need is the ability to find out as soon as possible when it happens. What this boils down to is you don't want to lose your data, so you need encrypted backups and verifiable checksums/hashes, so that once you've brought up a fresh system, you can restore data asap. Another thing that factors into this is configuration scripts and management stuff. I really like ansible since it works over ssh/powershell. Can really save a lot of time.
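
For 4B above, the PAM wiring is small (assuming libpam-google-authenticator is installed and google-authenticator has been run once per user to seed it):

  # /etc/pam.d/sshd
  auth required pam_google_authenticator.so

  # /etc/ssh/sshd_config
  ChallengeResponseAuthentication yes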

If you really want security, you also need to start and run minimal. I would say self compiled is the best (use flag changes often prevent sploits that otherwise work) so gentoo/slackware/arch would be the best nix distros for this. Beyond that, BSD is still king of the security world imho, especially OBSD, but please give DragonFlyBSD a look. While it's not touted as a "secure" distro, it has a ton of features that make it sexy as hell and it needs security contribs if you have the time. If I were starting a fresh ISP, I would be using DBSD.

For those of us stuck wanting to game and do more fun stuff though, who live in a debian/fedora/ubuntu world, just keeping an eye on logs is really the best you can do. Also keep in mind impact on perf that FDE may have if you are a linux gamer.

Those are my main tips/tricks, but I'm sure both the article and I are missing things, so take it all with a grain of salt.


You're leaving off two details in your otherwise nice list. First, the backups aren't going to help you in a targeted attack if the attacker can mess with them. For this reason, I always recommend write- or append-only storage for backups. Or a second copy of these made later (i.e. a batch process). I used CD/DVD-Rs.

Other thing is you've hit problem (0-days) without solution (isolation). There are numerous technologies, especially separation/MILS kernels, that can isolate damage inside one or more VM's. They can also run security-critical processes outside of them directly on the kernel or in their own VM. INTEGRITY Desktop, LynxSecure, and Turaya Desktop are commercial examples. QubesOS, GenodeOS, and Muen Separation Kernel are open-source examples.

So, there are two things. We also have all sorts of interesting tech for protecting kernels in the works in academia that might transfer to rest of us eventually. Better virtualization (esp I/O), DIFT, CPU obfuscation, tags, capabilities, automatic safety/security transforms... you name it, there's already prototypes. So, all is not lost yet. :)


Good point on the separation kernels. I really like the team behind QubesOS, who have written some of the first evil maid articles I've read, but I haven't tried it yet. Have you used any of those systems and have any insights? Are they usable or still in the works?

Of course DoD/DARPA have their nice little distros, but they tend to be proprietary and expensive, so I essentially pretend they don't exist for my purposes.


The QubesOS founder and I got into it on their mailing list, so I haven't tried it. Joanna updated her blog and FAQ to try to counter every point without mentioning my name or allowing replies, lol. Anyway, my worries were: the Dom0 code in the TCB; the Xen kernel's complexity plus bug count; no covert channel analysis; that she was unaware of all the similar research/issues in that area before Qubes; that she didn't know why user-mode drivers improve system robustness; and that she cited Mach/Darwin as evidence that microkernels like L4 aren't good foundations (?!). All troubling traits if I'm to trust what they produce against high-strength attackers. However, my friends who have tried it like it, praise the usability, and say (with backups) you could use it day-to-day. So, I recommend it in the same vein as whitelisting and anti-virus: it stops low- to mid-grade attackers along with the background radiation of the Internet.

I plan to try all three again soon. Muen is a straight separation kernel with static configuration, so it will be limited but usable for simple setups: appliances, trusted + untrusted VMs, main stuff in Linux with crypto stuff in OpenBSD or a native partition, maybe embedded on decent hardware, etc. GenodeOS is getting rapid development for a small project, with a clever resource-management architecture that needs further evaluation by pros. Unlike QubesOS, they follow the academic work producing best-of-breed components (e.g. the Nitpicker GUI, seL4) and try to integrate them; the project itself grew out of work toward a more secure architecture. Both of the prior have a tiny TCB, with GenodeOS having a microkernel's performance advantages (the Muen situation is unknown). Finally, QubesOS builds some nice architecture, excellent usability, and hardening on top of the mature Xen code-base, with its risks/rewards. So, it's not really apples to apples with any of them. QubesOS is definitely ahead in usability and features, though.

"Of course DoD/Darpa have their nice little distros but they tend to be proprietary and expensive so I essential pretend they don't exist for my purposes."

That's true. You have to pay to get the really good shit. OSS/FLOSS never does high-assurance security [1], though: it's almost always companies or academia releasing it as OSS after the fact. So, I've been investigating models that combine open source, proprietary licensing, and review. If that's a shock, it's because almost all online discussions treat it as either proprietary/closed or free/open. However, that dichotomy is barely relevant for security in practice, and it's narrow thinking that misses other options [2]. So, my idea is to make the software proprietary, optionally under a non-profit; have pros build it for money; put an upper bound on licensing cost; keep purchases perpetual; use simple contract terms that won't change; provide source; allow extensions, with re-submission requirements yet to be determined; pay pros for review, with rewards to encourage others; and contractually release it as Apache/GPL if the company tanks or the product is discontinued. This should cover the extreme sophistication and labor required to build high-security software, let people extend it, let people fix things, and be more trustworthy. Your thoughts?

[1] https://www.schneier.com/blog/archives/2014/04/reverse_heart...

[2] https://www.schneier.com/blog/archives/2014/05/friday_squid_...


You can't encrypt all of GRUB; some part of it has to load the rest.
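Right - GRUB2 can unlock a LUKS-encrypted /boot, but the core image it boots from necessarily stays in the clear (and its cryptodisk support only handles LUKS1 at this point):

    # /etc/default/grub
    GRUB_ENABLE_CRYPTODISK=y
    # then regenerate the config
    grub-mkconfig -o /boot/grub/grub.cfg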


This is very handy


This is heavily dependent on the hardware platform (namely, UEFI - and therefore Secure Boot, which is very foolishly being advocated given its limitations - are x86-only). While this might be acceptable for most users, it ignores that non-x86 workstations do still exist.

> Has a robust MAC/RBAC implementation (SELinux/AppArmor/Grsecurity) (CRITICAL)

This is too specific; there are plenty of operating systems with a much better security track record than even GNU/Linux that don't implement MAC or RBAC (namely, OpenBSD), and it misses the point of MAC/RBAC: privilege separation. Really, the goal is for any given program to have only the minimum permissions required to do its job. You can get this quite effectively by isolating running services/daemons to dedicated users with a minimal permission set (this is, in fact, the core of how Android apps are sandboxed), especially when paired with a proper sandboxing solution.
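A minimal sketch of the dedicated-user approach as a systemd unit (the service name is hypothetical; the hardening directives are ones available in systemd of this era):

    # /etc/systemd/system/mydaemon.service (excerpt)
    [Service]
    User=mydaemon
    Group=mydaemon
    NoNewPrivileges=yes
    ProtectSystem=full
    ProtectHome=yes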

The nice thing about MAC and RBAC is that they have policy implications; when used properly, they can clearly define the level of access some running program should have to a given part of the system. They also tend to go hand-in-hand with fine-grained control over resource access, but it's not correct to conflate access control mechanisms with granularity (you can have a fine-grained DAC-based system or a coarse-grained MAC-based system).

> Use full disk encryption (LUKS) with a robust passphrase (CRITICAL)

Or (inclusively) a key file (preferably one that is password-protected).
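For the key-file variant, a sketch (the device path is an example):

    # generate a key file and add it to a spare LUKS key slot
    dd if=/dev/urandom of=/root/luks.key bs=512 count=8
    chmod 0400 /root/luks.key
    cryptsetup luksAddKey /dev/sda2 /root/luks.key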

> Make sure swap is also encrypted (CRITICAL)

Or just don't use swap. Even with encryption, if data can be ephemeral, it should be.
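If you do keep swap, it can at least be keyed from /dev/urandom at every boot, so nothing on it survives a power-off (device path is an example):

    # /etc/crypttab
    cryptswap  /dev/sda3  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
    # /etc/fstab
    /dev/mapper/cryptswap  none  swap  sw  0  0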

> Set up a robust root password (can be same as LUKS) (CRITICAL)

I very strongly disagree with this. As much as I dislike Ubuntu, it does one thing right: it defaults to disallowing any sort of direct root login (by locking the root account during installation rather than giving it a password), requiring all root access to go through sudo unless the user explicitly sets a root password.

I especially very strongly disagree with the suggestion that the disk encryption password should be the same as any other password, let alone the root password, which shouldn't exist in the first place (well, more precisely, should exist but should be entirely unknown to anyone or anything, including yourself).
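One way to get that "exists but is unknown to everyone" state, assuming a sudo-capable account already exists:

    # lock direct root logins; the shadow hash gains a '!' that no password matches
    sudo passwd -l root
    sudo grep '^root:' /etc/shadow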

> Globally disable firewire and thunderbolt modules (CRITICAL)

This, along with the recommendations to not use hardware with such ports, should be marked as (PARANOID). While it's certainly a good idea if you know they won't be necessary, there are plenty of valid use cases for them (particularly on Apple hardware; while Thunderbolt display support is still sketchy on Linux, it's a very common use case), and such actions meet (PARANOID)'s criteria much more closely than they do (CRITICAL)'s.

And really, while FireWire and Thunderbolt do have specific security implications (due to them effectively being hotpluggable PCI and PCI-E, respectively), this should hold true for any port on one's machine. Any connector can be a security liability when confronted with a sufficiently-motivated attacker.
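For reference, the sort of module disabling the checklist has in mind looks like this (the file name is an example; a bare "blacklist" only stops auto-loading, while the "install ... /bin/true" form also blocks explicit loads):

    # /etc/modprobe.d/disable-dma-ports.conf
    blacklist firewire-core
    blacklist firewire-ohci
    blacklist thunderbolt
    install firewire-core /bin/true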

> Configure the screensaver to auto-lock after a period of inactivity (MODERATE)

This needs to be (HIGH), if not (CRITICAL). Why bother with some FireWire jig like the one this guide is so afraid of when the machine is already unlocked?
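A sketch of enforcing this on a GNOME desktop (the gsettings keys assume GNOME 3; other desktops have equivalents):

    # lock after 5 minutes idle, immediately on blank
    gsettings set org.gnome.desktop.session idle-delay 300
    gsettings set org.gnome.desktop.screensaver lock-enabled true
    gsettings set org.gnome.desktop.screensaver lock-delay 0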

> Installing rkhunter and an intrusion detection system (IDS) like aide or tripwire will not be that useful unless you actually understand how they work and take the necessary steps to set them up properly

None of the things in this guide will be that useful unless you actually understand how they work and take the necessary steps to set them up properly. None of them. Not MAC. Not RBAC. Not grsecurity and PaX. Not SELinux. Not LUKS. Not passwords. Nothing. This sentence is entirely meaningless.

Not to mention that rkhunter should probably be (MODERATE)...

> SSH is configured to use PGP Auth key as ssh private key (MODERATE)

What? That's a terrible idea. It's as terrible an idea as using the same password for root and LUKS: if one key is compromised, the other is too, because they're the same key.

This is really just a waste of effort and time. The normal approach is to generate two separate keys, and there's no reason to deviate from it; doing so will just make your life harder and less secure.
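The separate-keys route is a one-liner each (filenames are the defaults):

    # a dedicated SSH key, independent of any PGP material
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C "ssh login key"
    # and a separate PGP key for signing/encryption
    gpg --gen-key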




