I love the level of paranoia. The transparent message of these documents is that it's possible for The Enemy to exploit everything on your system. "Disable your laptop camera. Disable your computer's audio input."
For the NSA and other top secret uses this makes good sense.
Why not indeed? After hearing that the FBI could remotely activate microphones on mobile phones even when no call was active, after the story of that school that used the webcams in its Macs to spy on students at home with a special system that disabled the activity light, and knowing that malware can do any of that: why do I have this camera facing me all day, when I use it three hours a week?
I see, I could be wrong about the particulars of that case. I could see how someone might not notice or be concerned about the light flashing every now and then, too.
If the school wanted to disable it though, it wouldn't be impossible for them since they had access to the hardware. A thin dot of black paint might do the trick, especially if they peeled back the glass a bit, which is a pain, but quite do-able. Certainly something people in the NSA would be wary of... it's a shame that high school students should have to worry about that, too.
If people want something that won't discolor the plastic or leave residue, use the sticky portion of a yellow sticky note to cover the camera. I've been using the same little piece of sticky note for 2 years or so.
On Macs it has the added bonus of still letting you see when the camera light comes on.
DoD had a removable-device scare a while back (someone introduced a Windows virus into an internal network via one), and for a while there they were going around filling external USB ports with epoxy. Eventually they "relaxed" their position on that.
They also have a very useful guide focused on network infrastructure hardening. It's intended for Cisco IOS switches, but many of the principles can be adapted to other platforms:
If obscurity is useless, then why does the Army camouflage tanks? Why not just paint them blazing orange/pink and let them stand on their own defenses? There is a place for obscurity in security and the endless parroting of "security through obscurity is useless" should stop.
It seems like the argument against security through obscurity is similar to the argument that T(n) = 10,000n^2 is equivalent to T(n) = 10n^2, as both are O(n^2) even though they have vastly different coefficients.
At the end of the day, they have the same Big-O notation, but one of them is a thousand times better than the other.
Sometimes the coefficients matter. Obscurity in security is like the 'lost' coefficient in Big-O notation: it's not the first thing you should focus on, but it can really help.
It's not that obscurity is useless, it's that it often inconveniences the legitimate user as much as the attacker. "Often," in this case, scales with the size of the organization. Changing the ssh port on your home server might not be overly inconvenient. Changing the ssh port in a 1.4 million person organization puts you on the wrong side of the cost/benefit curve.
Obscurity isn't useless in all situations, but in this case it's important to realize that changing the port number is really only effective in preventing scanning attacks.
For a targeted attacker, finding the new port is trivial. (There are only 65k of them...)
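To put numbers on "trivial": a full-range scan with something like nmap (assuming the tool is available; the hostname is a placeholder) finds a relocated sshd in a single pass, because service detection fingerprints the daemon's banner regardless of which port it sits on.

```
# Sketch: locate an sshd that was moved off port 22.
# -p-   scan all 65535 TCP ports
# -sV   probe each open port and identify the service from its banner
nmap -p- -sV --open server.example.internal
# sshd announces itself ("SSH-2.0-OpenSSH_...") on whatever port it
# listens on, so the obscured port shows up like any other service.
```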
Comparing service discovery with maintaining a fog of war isn't really an apt comparison.
How about using "password" as a password, if your security is so good? Or do you suggest people keep track of 255 UTF-8 characters for a password?
An attacker would still need to go through all 65k ports. I would assume that after falsely scanning even 5 ports, the attacker gets immediately null-routed and still gets no response. I would also hope such programs have a paranoid enough sense of security that they deny a user/password pair when either is wrong by not providing any response at all, as if the program didn't exist.
Due to the lack of feedback, users will be inconvenienced and confused about why ssh doesn't work. Much like the passwords example, there's a trade-off between usability and security at varying levels of obscurity.
> I would assume by even false scanning 5 ports, the attacker gets immediately null-routed and still get no response.
So run the scan using a botnet. Each machine makes one attempt (there are some really big botnets out there). There's no way for the machine to prevent the attacker from finding the port being used, unless the machine notices that lots of requests are coming in from unknown machines and starts refusing all requests - of course, refusing requests from unknown machines is a good thing to do if you're being paranoid. Use a whitelist of allowed machines, not a blacklist of disallowed machines.
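A whitelist of that sort is only a couple of firewall rules; a sketch in iptables syntax (the subnet is a placeholder), using DROP so that unknown sources get no reply at all:

```
# Accept ssh only from a known admin subnet; silently ignore the rest.
iptables -A INPUT -p tcp --dport 22 -s 10.0.42.0/24 -j ACCEPT
# DROP (not REJECT): unknown machines get no response whatsoever,
# so a scanner can't even tell that the port exists.
iptables -A INPUT -p tcp --dport 22 -j DROP
```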
Is there really no way for the machine to prevent private ports from being known without a specific call? It seems to me that machines are designed to follow the standard, to be nice and respond back [0]. Even with a whitelist, I have concerns that the machine is opening itself to a SYN-flood DDoS by simply replying with rejections.
Tanks don't have camouflage for defensive protection from attacks - they have it to reduce the likelihood of detection.
This is exactly the same as how a computer-system attacker will attempt to disguise their attacks as legitimate traffic, and how honeypot systems disguise themselves as real targets to attempt to detect attackers.
Security is a subtle field. "Security through obscurity is useless" is a useful starting point, but - as with most things - you need to understand what it means.
This is true, but it has advantages. For example, system log files are much smaller. :) When I adopted this policy on my own systems, it was to help me change my worry profile from "noise from every random bot on the net" to "targeted attack".
For a long while I ran a tricky firewall chain that would allow you to connect to port 22 only 3 times before it would lock out your IP for 30 minutes. My ssh port was at something like 2022 so I (almost) never got caught in this, but an unsuspecting scan bot would get stuck on it (http://en.wikipedia.org/wiki/Tarpit_(networking)#The_origina...).
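A chain like that can be approximated with iptables' "recent" match module; a sketch, with the port and thresholds as placeholders:

```
# Record every new connection to the ssh port under the name "SSH".
iptables -A INPUT -p tcp --dport 2022 -m state --state NEW \
    -m recent --set --name SSH
# If this source has made 4 or more new connections within 1800 seconds
# (i.e. more than 3 attempts in 30 minutes), drop it silently.
iptables -A INPUT -p tcp --dport 2022 -m state --state NEW \
    -m recent --update --seconds 1800 --hitcount 4 --name SSH -j DROP
```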
> That's not security, that's obscurity. If your ssh is secure, you don't need to change your port number.
It's also a mild performance hack. Bouncing a connection to a closed port is faster than waking up the SSH daemon to bounce the same connection. Port 22 gets a lot of traffic.
There is another reason to do this as well: when I was at Lockheed, we changed all SSH ports because we were being scanned literally continuously, in real time, from Chinese IP blocks.
Port knocking can be automated. Hopefully everyone that needs ssh access is either experienced enough to install the port knocking script themselves, or under IT's supervision so they don't even have to know the port knocking script exists.
Scripting port knocking is not nice when you don't actually have a compiler available, or when nothing user-writable is executable. ssh on a standard port will just work, and users don't have to figure out how to activate it. A changed port will take some explaining, but could work for advanced people.
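For what it's worth, the client side needs neither a compiler nor anything exotic; a sketch using plain nc (the host, knock sequence, and final port are all made up):

```
#!/bin/sh
# Port-knock client sketch: hit a secret sequence of closed ports,
# then connect to the real ssh port the knock daemon opens for us.
HOST=gateway.example
for port in 7000 8000 9000; do
    # -z: probe only, send no data; -w 1: time out after 1 second.
    # The ports are closed, so nc "fails" -- the SYN is the knock.
    nc -z -w 1 "$HOST" "$port"
done
sleep 1                  # give the knock daemon time to open the port
ssh -p 2222 "$HOST"
```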
40-character passwords, along with host-based firewalls that /dev/null you after 10 failed attempts, are just as effective as keys. In case you are wondering, the passwords are pasted, not typed.
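Generating such a password is a one-liner; a sketch with openssl (the result is meant to live in a password manager and be pasted, never typed):

```shell
# Emit a 40-character alphanumeric password: take base64 output from
# the OS entropy pool, strip non-alphanumeric characters, truncate.
openssl rand -base64 60 | tr -d '/+=\n' | cut -c1-40
```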
...Until someone hacks your system and manages to grab the password. The nice thing about keys is you don't have to reveal your secret, just prove you have it.
Sadly, that secret is usually stored about as safely as the password: in plain text on the compromised server. I would any day recommend that someone use 40-character random passwords, locked down on the client and available through pasting, rather than have them fumble around with improper key exchange.
It's pretty typical to audit all your destination servers (which may number in the thousands) to ensure that they don't have private keys (the id_rsa) - in theory all they need is your id_rsa.pub.
Then, you only need to worry about securing your clients - and, it's typically difficult to wholesale (a) hack into all the clients, and then (b) yank the encrypted id_rsa off of them (as compared to hacking into a destination server, and then just wholesale waiting for people to login and grab their password). It's next to impossible in a secure environment (financial/medical/utility) where they have hardware dongles which contain their id_rsa. (Or, they move into a new class of secure, and use One Time Passwords, RSA Tokens)
Net-Net - multi-use static passwords are not used in secure environments.
Yes. Have a look at Tiger: http://savannah.nongnu.org/projects/tiger/. There are also much more active solutions. Specifically, take a look at http://en.wikipedia.org/wiki/Metasploit_Project. With Metasploit you can explore your entire network and try to break into it. I saw a great presentation on this framework and while it's complex and the learning curve is high the stuff is fascinating.
Basically, let's say you have a mixed Windows/UNIX network with a single UNIX-like web server and a strict firewall. Metasploit will automate looking for vulnerabilities on the exposed web server, ranging from known shell and SQL injection attacks to straight-up buffer overflow attacks on system services. It will then automate deploying a remote exploit, then a local exploit to get root, and opening up a shell connection to the broken server (it can tunnel traffic through a variety of methods, including DNS). It will then help you scan the rest of the network from this newly rooted machine, doing the same thing.
The takeaway is that once you get any access to any Windows machine, you've owned the entire network because there is usually at least one local root exploit on Windows and then it takes minutes to crack the admin password. After that, you better pray that each machine has a different admin password, or else it's game over.
EDIT: BTW, the presenter had some fun ideas/suggestions for what a person breaking into the machine might do:
* Take a picture of the user using the web cam and set it as the desktop background.
* Take a screenshot and set it as the background.
* Play the sounds recorded through the mic to the speakers.
"This same tool has now been resurrected and there is ongoing development in order to make it useful for newer versions of the UNIX operating systems it supported."
In all seriousness, is there a shorter "cheat sheet" of this document or something similar? I'm sure I'm not the only one here working on building a server side app but has little security experience. "Security" is tough to give top priority to at a startup and implementing a 200+ page security protocol isn't realistically going to happen anytime soon.
Lots of vendors have offerings in this area. The quality is ... variable. If you are thinking of getting into this market, then the hard part is not the scanning engine per se. The more difficult parts are:
* Keeping the policy rule set in sync with reality (changing business policy, attack and vulnerability landscape)
* Maintaining the rule set (are DSLs for non-specialists feasible? service model?)
* Intrusiveness (intrusive validation of a test platform requires careful change control -- is test identical to production? intrusive validation of production may lead to shot feet)
* Application, domain-related and physical security (as opposed to system security captured by this document). These things are harder to scan for automatically.
Also, as the guide mentions, it's possible to mount the partitions with different options, for example the following (all mentioned in the HowTo; excerpts from "man mount"):
noexec Do not allow direct execution of any binaries on the mounted filesystem. (Until recently it was possible to run binaries anyway using a command like /lib/ld*.so /mnt/binary. This trick fails since Linux 2.4.25 / 2.6.0.)
nosuid Do not allow set-user-identifier or set-group-identifier bits to take effect. (This seems safe, but is in fact rather unsafe if you have suidperl(1) installed.)
nodev Do not interpret character or block special devices on the file system.
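In /etc/fstab those options look like this (the device names and partition layout are illustrative, not taken from the guide):

```
# /etc/fstab fragment: restrictive mount options on partitions that
# should never hold device nodes, setuid binaries, or executables.
/dev/sda5   /tmp       ext4   defaults,nodev,nosuid,noexec   0 2
/dev/sda6   /var/log   ext4   defaults,nodev,nosuid,noexec   0 2
/dev/sda7   /home      ext4   defaults,nodev,nosuid          0 2
```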
Of course this is just one less (perhaps improbable) attack vector. Nevertheless, many of the mentioned partitions should never hold these kinds of files unless it's for a malicious purpose.
A common way for an attacker to get root access, after exploiting a service and gaining access to that service's user account, is to download some additional tools to /tmp. Since no one should be running programs from /tmp, turning exec off hopefully leaves an attacker with a compromised daemon account no place to download code that they can then execute.
Take /var/log as an example. If someone floods your server with traffic and you log every tiny bit (or, even worse, you have the debug level turned on), the logs will quickly take up a lot of space. If the space is full, no further writes are possible.
If it's a separate partition, you simply can't write more logs; if it's on the same partition, you won't be able to log in by ssh, for example, because your drive is full and you can't spawn the sshd daemon. ;)
This one is worth doing. I've been asked to fix a Trixbox that was blocking calls because they were logging every phone call and it filled up the harddrive. Easy solution to a possibly annoying problem.
I really hope we can get to a new generation of filesystems soon. Something closer to what ZFS is now. It frustrates me that I have an 80GB disk and I have to allocate portions of it to each of these tasks and try to correctly estimate in advance what each one will use. Eventually it will turn out I mis-estimated and it will be a pain to juggle already-filled partitions.
With a system like ZFS, you create logical filesystems essentially at a folder level. So you can say that /tmp is noexec and has a quota of 20GB but a reserved space of 2 GB. You can also make sure that your root drive has at least 10GB reserved, so even if /var fills up, you still have at least 10GB for your root space.
And this is without heavyweight partitioning, things are easily modifiable on the fly, and you still get constant-time snapshots.
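The knobs described above are one-line commands in ZFS; a sketch (the pool and dataset names are hypothetical):

```
# Per-dataset quota, reservation, and exec policy at "folder" granularity.
zfs create -o quota=20G -o reservation=2G -o exec=off tank/tmp
# Guarantee the root dataset at least 10G even if siblings fill the pool.
zfs set reservation=10G tank/ROOT
# Constant-time snapshot, independent of dataset size.
zfs snapshot tank/tmp@before-cleanup
```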
Fortunately, logical volume managers exist and do help.
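In particular, an LVM setup lets you correct a bad size estimate without repartitioning; a sketch (the volume group and LV names are placeholders):

```
# Grow the logical volume backing /var by 5G from the VG's free space,
# then grow the filesystem to match -- ext4 can do this while mounted.
lvextend --size +5G /dev/vg0/var
resize2fs /dev/vg0/var
```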
Does anyone have any other guides like this? I'm using Arch Linux and want to run nginx on it. I'd like to make sure it's as secure as I can make it before deploying the website.
The NSA's STIGs are a useful resource. If you want the higher-level policies they come from, NIST[0] is one candidate.
If you want documents around the same level of specificity as the STIGs, you'll probably need a non-governmental source. Arch Linux and Nginx are not EAL certified[1] at any level, so the US Government isn't even going to try to secure them.
Security recommendations must be understood and modified (even ignored) as the environment requires.
If you're in an environment where security is very important, then having to send someone on-site to reboot a server may be reasonable. Or maybe your data center is staffed. Or maybe you have an IP-KVM, so you can access the console remotely.
Alternatively, most BIOSes I've seen have both a supervisor and user password, or similar: one disallows access to the BIOS setup, the other restricts booting. You could set only the one that disallows setup access.
Yes, you do have to take the KVM over IP device's security into account, like any other remote service you expose. It's the same set of problems you face with securing your other encrypted remote login (SSH) with similar solutions.
There are NO BSDs on the list. I suspect this is due to a few things.
1) How many people actually use BSD in a production environment?
2) If you are the sort to actually use BSD in a production internet-facing environment, you are probably using OpenBSD, and you are probably already so paranoid about security that the NSA has nothing left to teach you.
BSDs get used a lot in production environments, even in DoD, but they're usually embedded in devices or otherwise certified and managed as a specific product, not as a general purpose operating system.
Really, NSA? So you enjoy tracking down broken package dependencies, installing software 5x a week as developers need it (thus slowing down their development time), and not having the tools to troubleshoot downed systems when they're down (and potentially without access to Yum)? Not to mention having to USE YUM, which is in itself not a fate I'd wish on anyone. If you actually audit and secure stupid stuff like excess running network services and setuid-root binaries you are left with one thing: usermode applications which cannot be used for any attacks. Thus it's not only annoying to not have software already on the box, it's stupid too.
2.1.1.1 Disk Partitioning
Are you people really stuck in 1998? Are we really still making a separate partition for /boot? Look, guys. BIOSes could access disks past the 1024th cylinder like 10 years ago. And for christ's sake, nobody has ever been saved from having a 4GB /var/log partition and a 20GB / partition. The disk space is finite. If you run out, YOU'VE RUN OUT. Just make one bigass partition and IMPLEMENT DISK SPACE MONITORING and clean up your crappy logs before the disk runs out. If /var fills up you're fucked anyway, so might as well give it as much space as possible.
2.1.1.2 Boot Loader Configuration
Oh my god, HOW could we possibly be secure without a password to BOOT OUR MACHINE. The damn disks and boot partition aren't even encrypted, guys! This is useless! If I'm at the machine trying to change the boot configuration I'm just gonna remove the hard drive or use a jump drive and get at the data myself!
2.2.1 Restrict Partition Mount Options
OK, they redeem themselves here on the partition shit. I still think /tmp should be tmpfs or a swap partition, but whatever. Mounting user-writeable partitions with nodev,nosuid,noexec is actually a really effective and easy way to prevent payloads from being dropped and executed. Of course you can still just buffer overflow and have at whatever service you want, but it makes it much more annoying for attackers as they can't just download a payload to disk and run it. Of course this also means you can't scp scripts as a normal user and run them; you'd need to make a special account that can write to / or some other directory which can execute scripts, so you can copy admin tools/scripts there on the fly for maintenance etc.
2.3.5 Protect Physical Console Access
Again, this is stupid. BIOS password? I'll just remove the CMOS battery. Boot loader password? I'll use a jump drive or remove the hard drive (or put in my own and boot to it, then access your disk).
2.5.3.1 Disable Support for IPv6 unless Needed
Too lazy to use ip6tables, huh? Yeah you're right, we'll never need IPv6.
2.5.4.1 How TCP Wrapper Protects Services
REALLY, NSA? Allow only specific IPs or hosts? Are we really talking about fucking TCP wrappers? If you rely on TCP wrappers you should probably be fired.
3.5.1 Disable OpenSSH Server if Possible
...how the hell am I supposed to maintain the system then? Use rsh? Just hope that nothing ever goes wrong, so I never have to log in to troubleshoot?
Clearly somebody just decided to list every single commonly-available-at-install "security feature" found in modern Linux distros instead of showing how to implement security best practices and a structure of limited access control on available services (combined with robust configuration management). Yes this is all very nice for beginners, but if you're really trying to secure a machine you shouldn't be giving the task to a beginner.
> 1.1.2 Minimize Software to Minimize Vulnerability
The NSA have different priorities to you. If you want to install all software possible, like in a software shop, fair enough.
There's nothing wrong with yum, so ignoring that point.
> 2.1.1.1 Disk Partitioning
It's better to keep /boot separate so that when you upgrade, say, your ext3 root filesystem to ext4, your bootloader will keep on working if the upgrade were to b0rk.
> 2.1.1.2 Boot Loader Configuration
Say the machine is in a cage in a rack. There is a BIOS password to prevent you changing the boot order and other settings. In this case a password for the boot loader could make sense.
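Setting one is cheap, too. A sketch for legacy GRUB, which is what guides of this era cover (GRUB 2 uses grub-mkpasswd-pbkdf2 and a different config syntax):

```
# Generate an MD5 hash of the chosen boot loader password.
grub-md5-crypt
# Then add the hash to /boot/grub/menu.lst (or grub.conf):
#   password --md5 <hash printed above>
# With this in place, interactively editing a boot entry (for example
# appending init=/bin/bash) requires the password first.
```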
> 2.3.5 Protect Physical Console Access
The machine is in a cage in a rack. A locked cage.
> 2.5.4.1 How TCP Wrapper Protects Services
TCP wrappers can give application level warnings that iptables can't.
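For context, the TCP wrappers configuration lives in two files, and denials are logged via syslog under the daemon's own name, which is the application-level visibility being referred to. A sketch (the subnet is a placeholder):

```
# /etc/hosts.allow -- consulted first; first match wins.
sshd: 10.0.42.0/255.255.255.0

# /etc/hosts.deny -- anything not matched above is refused and logged.
ALL: ALL
```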
> 3.5.1 Disable OpenSSH Server if Possible
If possible. You could admin using a kvm if you wanted, but yeah disabling OpenSSH is kinda silly, restricting to a management subnet is better.
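Restricting to a management subnet can be done in sshd itself, on top of any firewall rules; a sketch of the relevant sshd_config lines (the addresses are placeholders):

```
# /etc/ssh/sshd_config fragment
# Bind only to the management interface, not every address on the box.
ListenAddress 10.0.42.5
# Additionally restrict which users may log in, and from where.
AllowUsers admin@10.0.42.*
```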
I'm pretty sure the NSA still has to maintain their vast networks of computers. You don't need to be in a software shop to eventually require a new dependency. Networks of servers are hardly ever static. At some point you will need to deploy/upgrade/patch an application, troubleshoot an error or a network glitch, or support some added functionality. It is far less problematic to have the software available than deal with the potential fallout of problems of maintaining such a network and its access to installing software remotely.
Also, clearly you have never read the source to yum or dealt with its myriad bugs and functional inconsistencies. Let me just tell you that indeed, yum is horrible, and I urge you to avoid it whenever possible.
> It's better to keep /boot separate so that when you upgrade, say, your ext3 root filesystem to ext4, your bootloader will keep on working if the upgrade were to b0rk.
What are you going to do with a system with only a /boot partition? Install busybox? If you want a new filesystem you should be kickstarting via PXE and reinstalling the whole thing, not upgrading in place.
> Say the machine is in a cage in a rack. There is a BIOS password to prevent you changing the boot order and other settings. In this case a password for the boot loader could make sense.
If you've got physical access to the machine in the cage you can do anything you want, which includes bypassing BIOS and Bootloader passwords in about 5 minutes.
> The machine is in a cage in a rack. A locked cage.
You've never noticed that the cages are easy to scale? If I'm going to attack the datacenter, I'm going to have a fake badge and pick up the key at the front desk anyway. Trust me, most datacenters are not really that hard to get into. NSA datacenters on the other hand...
> TCP wrappers can give application level warnings that iptables can't.
No, iptables can give you the exact same warning tcpwrappers will, which is access control based on host or IP. There are lots of ways to replicate it with iptables. The more important thing to note is that it's fucking retarded to rely on host or IP level access control of a network service. It's somewhat useful in preventing mass brute-force network attempts, but you should also have iptables rules in place to stop that.
> If possible. You could admin using a kvm if you wanted, but yeah disabling OpenSSH is kinda silly, restricting to a management subnet is better.
A KVM won't transfer files for you, and it's usually pretty hard to script 1,000 KVMs to run automated commands in 10 minutes.
>It is far less problematic to have the software available than deal with the potential fallout of problems of maintaining such a network and its access to installing software remotely.
The purpose of this document is to make the system as secure as possible. It's not meant to make deployment or maintenance easier.
Is it problematic to have things you might need somewhere else? Sure. Can keeping those processes running on the system make it less secure? Probably. The entire reason you keep the smallest subset of executables, scripts and processes running is so you avoid any potential holes.
The NSA is choosing security over convenience. For systems that actually contain information that needs to be secure(medical/financial records, governmental data, etc.), I definitely wouldn't choose the convenient choice over the secure one.
You want a really secure system? Turn it off and unplug it, then lock it in a cave underground. With a really big lock. That's REALLY inconvenient, but REALLY secure. In real life you can't just choose security over convenience... it needs to be convenient enough for people to do their work, even if that means being potentially not as secure as possible.
And really, this document doesn't go nearly far enough to really harden your average Linux machine... jesus christ they don't even mention what services will run under what roles, reduced capabilities, device ownership... it's a really bare minimum document. But that's not why I critiqued some sections. I critiqued them because some parts are, to me, just stupid.
Like I mentioned twice, there is no potential for a security compromise from simply having excess "files" on a system. You have to actually have a security hole which can be taken advantage of by those files.
What are the files in question? Well, all sorts... icons, manuals, libraries, executables, .... ah, there's where we might see some potential for a security hole (heh, realistically even the icon files could be, but that's out of scope). And how could an executable by itself be a security hole? Usually it's got some modified permissions which allow it to run as another user (commonly root). They should be designed to be secure against an attack but nothing is bug-free. So you know what you do? You remove those permissions. Suddenly it's just a normal executable with the same capabilities as any other executable on the system (of which there are many even on a minimal system).
To accomplish this monstrously difficult task you can simply follow the rest of the NSA guide and it will secure these files for you. Hence, it was (imho) stupid for them to tell you to only install what you need, as their own guide removes any possibility that these excess files could do any damage to the system. On top of that, it's really annoying from the perspective of people who just want to get shit done.
>Usually it's got some modified permissions which allow it to run as another user (commonly root). They should be designed to be secure against an attack but nothing is bug-free. So you know what you do? You remove those permissions. Suddenly it's just a normal executable with the same capabilities as any other executable on the system (of which there are many even on a minimal system).
Or you don't even leave it on the system, which means that it can't exploit the hole. If you're aware of the hole, you don't put it on the system in the first place, and if you don't know about it, it's impossible for someone to use an exploit that doesn't exist.
That's rather simpler than leaving it on there, and hoping that permissions work. Nothing's bug free, so why take the chance?
First, there's no chance about it because you can't do anything with it once it's a normal executable. There's no "hoping" about it - remove special permissions and audit the whole FS and it can't be used for an attack (any more than any other binary on a minimal system could be). That being said, you do it to improve the efficiency of the system for the users and reduce time to troubleshoot issues.
If you've got physical access to the machine in the cage you can do anything you want, which includes bypassing BIOS and Bootloader passwords in about 5 minutes.
That assumes you have unlimited physical access. 5 minutes per machine * 80 in a rack * 1,000+ racks is a real world limitation.
PS: The NSA has a very different approach to security than the average firm. To put this in perspective, they have been known to refer to their computing power not by computer or rack, but by the acre.
True. But it would be unfeasible to access 16,000 machines even if there was no bootloader password. The best course of action is try to identify the management/admin server and take over that, which may have unrestricted access to every server on the VLAN (or as sometimes happens, every single server period).
(Also, what's your datacenter or machine profile that you can get 80 machines per rack without overheating or overloading your rack power circuits? We could only get 40 to be stable, but we were using commodity gear.)
I highly doubt that these recommendations are intended for administering developer's workstations.
Also, a number of the recommendations you are railing against here are prefaced with "if possible" and "unless needed". For example, advice is provided on how to configure OpenSSH if it is deemed necessary to be run: "If the system needs to act as an SSH server, then certain changes should be made to the OpenSSH daemon configuration file"
The point of guides like this is to lock down everything that isn't necessary. As a part of this, it helps to question what services and actions each of your machines really needs. It's not unthinkable that some Linux servers deployed in a network do not need OpenSSH server running.
This guide is written by the NSA so it is reasonable for them to be paranoid by default.
> 1.1.2 Minimize Software to Minimize Vulnerability
I agree on yum. If the attacker has root and can run yum, it is too late.
In regards to user-mode applications: if you have wget, python, and gcc on a PHP-only shared hosting server and your security depends on open_basedir (bad idea, don't do this), these user-mode applications give access to all data on the server.
> 2.1.1.1 Disk Partitioning
> nobody has ever been saved from having a 4GB /var/log partition
This is just plain wrong. If there is no disk space left, all kinds of strange errors begin to appear: e.g. your emails are not being delivered, apache fails with strange errors, users can't log in. Imagine an attacker who wants to DoS you and manages to fill your logs with excessive data.
> 2.1.1.2 Boot Loader Configuration
> Oh my god, HOW could we possibly be secure without a password to BOOT OUR MACHINE. The damn disks and boot partition aren't even encrypted, guys! This is useless!
It is not. I can boot from my USB thumb drive and my private toolbox is now part of the network (I can hijack the MAC and IP address of the computer in question, and can do ARP spoofing). If they use an old version of NFS, I can even gain access to all files on the NFS server, because older NFS versions trust the client. And I can likely do this without being noticed.
> 2.3.5 Protect Physical Console Access
Again: I'm into GRUB, I can edit the Linux boot entry and add init=/bin/bash, and voilà, I'm root on the machine. Without having to open the computer.
> 2.5.3.1 Disable Support for IPv6 unless Needed
IPv6 is still not largely deployed and it is a possible attack vector you can easily avoid unless you need it. I don't see a problem with this approach.
> 2.5.4.1 How TCP Wrapper Protects Services
It is another layer of the onion in your security scheme. You should only permit hosts that require connections to your system. It is part of a bigger picture, not the whole strategy.
> 3.5.1 Disable OpenSSH Server if Possible
Why not? Say I managed to sniff your e-mail, and you were careless enough to send plaintext account data around (happens all the time). Without access to OpenSSH, I can't easily log in to your server.
They just show a lot of possible attack vectors you can focus on. Taken alone every point mentioned here sounds kind of useless to implement. But if you combine all these ideas and implement them across your network/your server you have better security.
I can't understand why you ridicule these suggestions; they are all important depending on the context.
>> 1.1.2 Minimize Software to Minimize Vulnerability
>
>I agree on yum. If the attacker has root and can run yum, it is too late.
You missed the point. More software means a larger attack surface. Minimizing software isn't meant to keep an attacker from installing software, it is meant to keep an attacker from using unmaintained/unneeded software to compromise the box in the first place.
> I agree on yum. If the attacker has root and can run yum, it is too late.
Having yum is not the problem, it just has really annoying bugs and limitations.
> In regards to user-mode applications: if you have wget, python, and gcc on a PHP-only shared hosting server and your security depends on open_basedir (bad idea, don't do this), these user-mode applications give you access to all data on the server.
You're telling me that if you depend on an insecure method of operation, it could be insecure?
> This is just plain wrong. If there is no disk space left, all kinds of strange errors begin to appear: your e-mails are not being delivered, Apache fails with strange errors, users can't log in. Imagine an attacker who wants to DoS you and manages to fill your logs with excessive data.
Having your log partition fill up is not something that's supposed to happen anyway; that's what log rotators are for. Some software fails entirely once logs can't be written. Bottom line: if your partition is filling up, you're screwed. Be proactive and put limits, log rotation, and disk alerts in place.
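The proactive limits mentioned here usually come down to a logrotate policy with a size cap; a minimal sketch (the path and numbers are placeholders, and maxsize needs logrotate >= 3.8):

```
# /etc/logrotate.d/myapp -- rotate daily, cap size, keep a week of history
/var/log/myapp.log {
    daily
    rotate 7
    maxsize 100M
    compress
    missingok
    notifempty
}
```

Pair this with a disk-usage alert and the log-flooding DoS from the quoted comment becomes noise instead of an outage.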
> It is not. I can boot from my USB thumb drive and my private toolbox is now part of the network: I can hijack the MAC and IP address of the computer in question and do ARP spoofing. If they use an old version of NFS, I can even gain access to all files on the NFS server, because older NFS versions trust the client. And I can likely do all this without being noticed.
Spoofing network shit has nothing to do with physical security.
> Again: I'm into GRUB, I can edit the Linux boot entry and add init=/bin/bash, and voilà, I'm root on the machine. Without having to open the computer.
Yes. And it'll take me a whole 5 minutes (or less) to do the same thing with a bootloader password. Congratulations, you have been owned by security through obscurity.
> IPv6 is still not widely deployed, and it is a possible attack vector you can easily avoid unless you need it. I don't see a problem with this approach.
Like I said. Learn ip6tables. You'll need it soon.
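For reference, a "default deny" IPv6 policy is only a few ip6tables lines (a sketch; run as root, and think twice before applying it over a remote connection):

```
# Drop IPv6 by default, but keep loopback and established flows working
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```

This gets you the "avoid the attack vector" effect without disabling the stack outright, so you're ready when you do need IPv6.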
> It is another layer in your security onion. You should only permit hosts that require connections to your system. It is part of a bigger picture, not the whole strategy.
Iptables does this already and does a much better job.
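The iptables version of that per-host allow list is two rules (subnet and port are placeholders; needs root):

```
# Allow SSH only from the admin subnet, drop it from everywhere else
iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

Unlike TCP wrappers, this works for every service, not just the wrapper-aware ones, which is the "much better job" being claimed.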
> Why not? Suppose I manage to sniff your e-mail, and you are careless enough to send plaintext account data around (happens all the time). Without access to OpenSSH, I can't easily log in to your server.
What the fuck are you talking about? Sniffing my e-mail? Look. OpenSSH is a pretty secure codebase. Lock it down more and use 2-factor and it's pretty goddamn reliable. And everybody needs remote access to their boxes, and this is as good a solution as anything else.
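"Lock it down more" usually means a handful of sshd_config directives; a sketch of common hardening choices (these are widespread conventions, not anything the NSA guide mandates, and 'admin' is a placeholder account name):

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no   # keys only; layer 2-factor on top via PAM if desired
Protocol 2
AllowUsers admin
```

With password auth off, sniffed plaintext credentials from someone's e-mail are useless against the SSH port anyway.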
Yes, I agree that combining many methods to secure a system makes it much more robust/secure in general. But you need to have more than just a guide to hardening a system. It takes a certain mindset and a particular idea of just how secure you want your system. Even following everything in this guide I could probably still own a variety of machines if they were maintained and used carelessly.
I ridiculed the suggestions because it's the goddamn NSA. They should be able to come up with a better guide than this, and one which is realistic for both the individual at home and the large enterprise network admin. This shit looks like an infosec intern threw it together from other online hardening guides. And like I said originally, yes, it's a beginner's guide at the most, but it does a disservice to those that will use it and wave it as a flag to show they've hardened their machine. Now amateurish admins will tell their bosses "I just hardened our server according to the NSA's specifications!" and the boss will jump and clap with giddy moronic glee, because that's what bosses do. And they'll still get owned.
I agree with your conclusion. After re-reading your original post and my answer, it seems that I misunderstood some of your points.
After skimming through the guide again, I also found that certain security-related aspects are not included; there is no discussion of sensible resource limits, among other topics.
tl;dr: Don't take security advice from organizations whose job is spying on you.
I don't know about anyone else, but the NSA is one of the last organizations I'd take security advice from. I wouldn't put it past them to purposefully omit a pointer or two in the hope that those who follow the guide to the letter, not knowing any better, will leave a way in. Based on the other comments, the advice is banal rubbish. Perhaps that is purposeful.
Part of the NSA's mandate is protecting US interests and communications as well. My personal favourite example of this is the changes they recommended to DES while it was being established, which strengthened it against differential attacks, a class of attack that was not publicly known until years later: http://en.wikipedia.org/wiki/National_Security_Agency#Data_E...
A more secure society wouldn't have secret agencies operating in peacetime. Why? Because their mandates, budgets, and policies are also secret and subject to mission creep.
Except that this specific document comes with good advice and rationale. There is nothing here that indicates a secret agenda. It's not as if the document is telling you to install a backdoor for them.
http://www.nsa.gov/ia/guidance/security_configuration_guides...