The Six Dumbest Ideas in Computer Security (ranum.com)
44 points by lsb on March 19, 2009 | 20 comments



> Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

Isn't this line inherently contradictory? How can a person make a system hack-proof (as if that were possible) without knowing how to identify the vulnerabilities he could be introducing by doing something a particular way?

Anyway, the only good thing I got out of it was Personal Observations on the Reliability of the Space Shuttle by Richard Feynman (http://www.ranum.com/security/computer_security/editorials/d...); it's a better read than this article.


> The cure for "Enumerating Badness" is, of course, "Enumerating Goodness." Amazingly, there is virtually no support in operating systems for such software-level controls

The closest an OS has come to "enumerating goodness" is the User Account Control (UAC) introduced in Vista that, among other things, blocks any executable from running as Administrator until you say "Yes, please run this" at least twice. While well-intentioned, that frustrated most users to no end. So it's not as simple as the author may think.


Alternatively, /etc/sudoers and explicit instructions to escalate privilege.
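A minimal sketch of what enumerating goodness looks like there (usernames and paths are made up):

    # Hypothetical /etc/sudoers fragment (edit with visudo): instead of a
    # blanket grant of root, enumerate the specific commands each user may run.
    alice    ALL=(root) /usr/sbin/apachectl graceful
    backup   ALL=(root) NOPASSWD: /usr/bin/rsync -a /var/www /srv/backups/www

Anything not listed is denied by default.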


The basic issue with Security is that it is almost always in contention with Usability.

It's easy to look at something with security blinkers and go "eeks - you n00bs!" but it's ridiculously difficult to put in place any measures without immediately reducing the usability of your solution.


Absolutely not. That's only the case when you have nearly optimal usability and nearly optimal security. Then you are on a Pareto optimality frontier.

However, I almost never encounter such systems; usually they fall short on both counts.

Consider: How much usability do you lose to close a cross-site scripting attack? How much usability do you lose to close an SQL injection? How much usability do you lose to close a buffer overflow?

Now, how many security vulnerabilities fall into one of those three categories? Probably more than 90%.
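To make that concrete, here is a minimal sketch of closing an SQL injection (Python's stdlib sqlite3; the table and column names are made up) - the fix costs the user nothing:

    import sqlite3

    conn = sqlite3.connect("app.db")  # assumes a 'users' table already exists

    def find_user_unsafe(name):
        # Vulnerable: attacker-controlled input is spliced into the SQL string.
        return conn.execute(
            "SELECT id FROM users WHERE name = '%s'" % name).fetchall()

    def find_user_safe(name):
        # Parameterized query: identical feature, identical UI, injection closed.
        return conn.execute(
            "SELECT id FROM users WHERE name = ?", (name,)).fetchall()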

Until you close off the obvious, huge attacks, there's little point adopting two-factor authentication.

Coming at it from the opposite point of view, how much usability do you lose from adopting two-factor authentication on top of a secure system? That can greatly add to a system's security, but you pay the price ("enter a number from this fob") once per login, hardly something to cry about.
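For scale, here is roughly what that per-login fob check amounts to (a sketch assuming the third-party pyotp package; names are illustrative):

    import pyotp  # third-party: pip install pyotp

    secret = pyotp.random_base32()  # provisioned to the user's fob/app at enrollment
    totp = pyotp.TOTP(secret)

    # The entire extra "cost" at login is typing one short code.
    entered = input("Enter the number from your fob: ")
    print("Accepted" if totp.verify(entered) else "Rejected")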

Only once your system is very secure and very usable do you actually, truly get into tradeoffs between security and usability. But there's nothing special about security and usability here that bears any special attention... when you've optimized sufficiently any two interests start being in opposition to each other!

I mention this because the same fallacy that leads people to think that security stands in opposition to usability leads people to misinterpret decreases in usability as security increases. Like my bank, which uses "two-factor authentication" (note the scare quotes) consisting of a password and the answering of one of three questions that would normally be used to recover passwords... in other words, first it asks me for a password, then it asks me for a password. Two-factor my ass. But it is mistaken for security because it lowers usability.


>How much usability do you lose to close a cross-site scripting attack? How much usability do you lose to close an SQL injection? How much usability do you lose to close a buffer overflow?

None, of course. But these aren't philosophical security failings like the ones in the OP; they could simply be classified as "security bugs", or just "bugs".

>how much usability do you lose from adopting two-factor authentication on top of a secure system? That can greatly add to a system's security, but you pay the price ("enter a number from this fob") once per login, hardly something to cry about.

Considering normal users - a whole lot! If I put a system like this in place, I would expect a whole load of problems.

Consider: delivery of fobs, recall of fobs, damage to fobs, lost fobs, stolen fobs, etc. Not to mention that about 20% of users would just hate learning how to use it.


Seventh dumbest mistake: enumerated goodness. Because good and bad are context dependent and computers are such awesome mind readers.


I don't agree with much in this article. My thoughts on Marcus' list of "dumb ideas":

1. Default Permit - this one makes sense, getting "fail open" vs "fail closed" right is important in lots of places in engineering.

2. Enumerating Badness - isn't this just the typical method for implementing default permit?

3. Penetrate and Patch - yes, it's always better to design a system to be secure and to code it with good security practices in mind, but any system with more than a handful of lines of code will be too complicated to guarantee correctness. "Penetrate and patch" can still find errors in well-built systems. I do think that different ways of doing the "penetrate" part have different value. For example, I much prefer "white box testing" (ie testing with access to source code) to "black box testing". Extending this concept, when writing code with security in mind it can be helpful to consider common approaches an attacker might take (ie simulate the "penetrate" part of the process mentally), and use that insight to write better code (ie "patch" the code as it's being written) - see the toy sketch at the end of this list.

4. Hacking is cool - well, the truth is, finding flaws can be intellectually gratifying. Sure, maybe the "coolness" factor has an impact on how many script kiddies exist, but there will always be people who like to "hack" stuff. Some portion of those people will stray into gray and black areas.

5. Educating Users - sure it's hopeless, and designing systems and processes to minimize people's impact is a win, but the prediction that users who need education will not be in the high tech workforce in ten years... not gonna happen.

6. Action is Better Than Inaction - From the article: I know one senior IT executive - one of the "pause and thinkers" whose plan for doing a wireless roll-out for their corporate network was "wait 2 years and hire a guy who did a successful wireless deployment for a company larger than us." Not only will the technology be more sorted-out by then, it'll be much, much cheaper. What an utterly brilliant strategy! Ugh! How about looking at risk vs. reward? How much did that conservative "pause and thinker" put his company behind by waiting two years to adopt some new technology? Yes, sometimes that's the right choice, but sometimes high rewards merit taking big risks! This is as true when thinking about security as it is in any other business decision. A blanket recommendation to avoid risk does not cover all situations.
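A toy sketch of that "mental penetrate" step from point 3 (the function and payload are made up for illustration):

    import html

    def render_comment(text):
        # Written with the attacker's first move in mind: escape on output
        # rather than patching after the XSS report arrives.
        return "<p>%s</p>" % html.escape(text)

    # The mental "penetrate" step: throw an obvious payload at it.
    assert "<script>" not in render_comment("<script>alert(1)</script>")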


Ok, so you agree with 1 and 2, and as for 3, where users feel free to break things 4 teh lulz (cf HN exploit), there's a non-zero chance that a normal user will become an attacker.

4. Dissecting a slug, making slides, and examining them under a microscope is cool. Sprinkling a family of slugs with salt and watching them die is not cool. The end result is the same - the slugs die - but it's fundamentally a social problem when people feel empowered to break breakable things.

5. Growing up, why did you not talk to strangers as a kid? Why did you look both ways before crossing the street? Our parents and friends teach us convenient behaviors so that we don't have to make the same mistakes. If all of your data is now going to live in the cloud, you'll learn pretty fast, and you'll educate your kids/kid siblings about what not to do on the internet. No one who's ever been phished is going to make that mistake again (my friend in undergrad did); no one who's ever downloaded random video codecs from Eastern European nations is going to make that mistake again (my bf did). We'll learn what not to do, so that messing up really merits a techno-Darwin award.

6. Yes, the "senior IT executive" calculated the risks and rewards, and for an enormous corporate network with large institutional inertia, it was cheaper to hire a guy who learned on someone else's dime. It's like "cutting-edge accounting", or changing your house's plumbing. If your business isn't accounting, just follow industry best practices.


Yeah, he seems to think that everyone who is a pen tester or a security researcher also writes malicious viruses on the side. Here's a clue: botnet writers are not also working for the security industry, because they don't have to. They're making enough money to just sit and improve their botnets.


What people like the author of this piece don't get is that security isn't the end goal.

The job of an IT professional is to ensure the availability, usability, and safety of services for his or her users. Security is a very important part of that, but you can't let it stifle your users' ability to do their jobs.


I don't see much value in this essay. What makes sense is rather old hat, and points like #3 are downright absurd, boiling down to, "You can't make a system more secure by finding weaknesses and correcting them - you must make the system magically devoid of weaknesses to begin with."

If there were much of a way of doing that with non-trivial programs, I somehow don't think we'd have security exploits anymore.


But the author mentioned qmail, which is an application that was built "secure by design", is non-trivial, and hasn't had many bugs. This is a little paper by the author of qmail: http://cr.yp.to/qmail/qmailsec-20071101.pdf


And yet there are still qmail exploits and fixes to those.

Not many people build systems that are insecure by design, but the simple truth is that security is hard and always surmountable in some way.


I saw this some years ago and all I took away from it was that "default permit" and "enumerating badness" are really the same thing.


"There have been numerous interesting studies that indicate that a significant percentage of users will trade their password for a candy bar..."

I have a few dozen passwords if someone else out there has candy to match. I reserve the right to decide which candy bar is worth which password.


The author offers many criticisms but few real solutions.


I'm not sure that was the intent. Security is hard. Most of the time these "simple" solutions are applied in the belief that we can fix our security problems easily. I think the author was just pointing that out.


"best expressed in the BASIC programming language"?!

Come on! Do we still pay attention to this guy? Even if it were best expressed in BASIC - early 70's BASIC cannot even express 70's BASIC.

Not to dismiss the person over his language choices (or his bad taste in jokes), but the rest of the list is also pretty... shallow.

Not to mention the view that security amounts to "just don't write the flaws". Argh.


Questions I've been asked today:

1) "If we have no email encryption software (which we don't), is there any reason we would not be able to receive encrypted emails from a customer?"

2) "We think someone has stolen one of our databases. Can we have a system where people have to put a password in to copy a file or print?"



