
> No amount of security can be added without a concurrent decrease in usability, even if that usability is something you didn't expect or want to do.

It seems strange to describe it this way for something like fixing a memory corruption bug or switching from a vulnerable cryptographic algorithm to a less vulnerable one. The capability you're giving up is ... potentially breaking your own security model in a way you weren't even aware was possible?




I think I might not be conveying my point very well. Let me clarify this as succinctly as I can.

Usability doesn't just mean things users want to do. Usability means anything anyone (users, developers) can do. By definition, "securing" things means limiting the capability of certain users or developers to do (hopefully) specific things. How efficient you are at this determines whether, in removing the capabilities they don't want, you also reduce the capabilities users or developers do want to have.

To give a concrete example: adopting a cryptographic algorithm immediately impacts usability along the performance and capability axes. Previously, you could arbitrarily read and manipulate the data because it was plaintext. Afterwards, you cannot. Now you need to be careful about handling that data and spend developer time and resources implementing and maintaining the overhead that protects it and reduces its direct usability.
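The lost capability can be sketched concretely. The snippet below uses a deliberately toy XOR-keystream cipher (purely illustrative, not secure, and not any real library's API) to show that once data is encrypted you can no longer read or manipulate it in place; every access now costs a decrypt:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 of key || counter. Illustrative only -- not secure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice round-trips the data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

record = b"user=alice;balance=100"
key = b"demo-key"
ciphertext = xor_cipher(key, record)

assert b"alice" in record                     # plaintext: direct read/search works
assert ciphertext != record                   # ciphertext: not directly usable
assert xor_cipher(key, ciphertext) == record  # every read now needs a decrypt step
```

The plaintext version could be grepped, diffed, or patched in place; the encrypted version forces every such operation through the decrypt step, which is exactly the capability (and maintenance burden) trade-off described above.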

It doesn't matter whether you wanted that capability - it's gone either way. That particular trade-off is an easy decision to make, but not all of them are. Every security decision can be modeled as a trade-off.


I fondly remember the convenience advantages of plaintext password storage, both as a user and somebody supporting users.

Occasionally I wonder if there are user accounts in my life that are irrelevant enough I'd be happy to buy that convenience advantage with the necessary security risks ... but of course people's tendency towards password re-use makes that trade-off basically unofferable in any sort of ethical way.

At least bcrypt makes it moderately easy to not completely screw up the hashing part.
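bcrypt itself lives in a third-party package, but the Python standard library's PBKDF2 shows the same "hard to completely screw up" shape of salted, deliberately slow hashing (the iteration count here is a demo value; real deployments tune it higher):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # demo value; tune upward for production hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt defeats rainbow tables; many iterations slow brute force.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("hunter2")
assert check_password("hunter2", salt, digest)
assert not check_password("hunter3", salt, digest)
```

Like bcrypt, the salt and work factor are baked into the workflow, so the easy path is also the (reasonably) safe one.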


Although I'm tempted to argue against your view, it ended up reminding me of

http://www.oreilly.com/openbook/freedom/ch07.html

and somewhat relatedly https://web.archive.org/web/20131210155635/http://www.gnu.or...

which tend to support your point.


That's a good example, but a bit cherry-picked. I could just as easily point out the opposite with accessing an account. Even if it's insecure, it still requires a certain amount of information and time up front, then some login step just to identify the user. The server compares that to its local data. Given network latency and server load, this usually takes seconds.

Adding a password that the application quickly hashes before sending costs a little extra time - almost nothing, given that libraries are available and CPU cycles are cheap. If the password is remembered, the user types it in once or rarely. The hashing happens so fast that the user can't perceive it on top of the already slow network. Most of the time the user of this properly designed system will simply type the URL, the fields will auto-fill, and the exchange will take the same time. There's no loss in usability except a one-time cost whose overall effect is forgotten across many interactions with identical, high usability.
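A quick sketch of why the hashing cost vanishes into the network noise (the 50 ms round trip is an assumed, fairly conservative figure, and a single fast hash is used here only to measure cost, not as a complete password scheme):

```python
import hashlib
import time

password = b"correct horse battery staple"

start = time.perf_counter()
client_digest = hashlib.sha256(password).hexdigest()  # one fast hash on the client
hash_seconds = time.perf_counter() - start

NETWORK_RTT = 0.05  # assumed 50 ms round trip; LANs are faster, WANs often slower

# The hash is orders of magnitude cheaper than a single network round trip,
# so the user never perceives it.
assert hash_seconds < NETWORK_RTT
```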

Likewise, a user coding on a CPU like SAFE or CHERI tagged for memory safety in a language that's memory-safe will not be burdened more than someone coding in C on x86. They will be burdened less by less mental effort required in both prevention and debugging of problems. They could theoretically get performance benefits without the tagging, but only if incorrect software plus much extra work is acceptable. If the premise is that the software must be correct - which requires safety much of the time - then the more secure CPU and language are better, and they improve productivity too. Easier to read as well.
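Python stands in here for any memory-safe language (the hardware tagging of SAFE/CHERI enforces a similar property at a lower level): the out-of-bounds write that silently corrupts memory in C becomes an immediate, debuggable error:

```python
buffer = [0] * 8  # fixed-size buffer, valid indices 0..7

def write_at(index: int, value: int) -> None:
    # In C, buffer[8] = value would silently scribble on adjacent memory.
    # A memory-safe runtime turns the same mistake into an immediate error.
    buffer[index] = value

write_at(7, 42)  # in bounds: fine

try:
    write_at(8, 99)  # one past the end: caught, not corrupted
    caught = False
except IndexError:
    caught = True

assert caught and buffer[7] == 42 and len(buffer) == 8
```

The debugging burden shifts from "track down silent corruption later" to "read the error at the faulting line now", which is the reduced mental effort being claimed.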

A final example is in web development. The initial languages are whatever crap survived and got extended to do things they weren't meant to. So, people have to write multiple kinds of code, with associated frameworks, for incompatible browsers, server OSes, and databases. Many efforts to improve this failed to deliver both productivity/usability and security. Opa shows you can get both by designing a full-stack, ML-like language with strong types that makes many problems impossible by default. Easier to write and read, plus more secure. Ur/Web does something similar, though it's a research prototype in the functional style rather than a production tool.

Conclusion: usability and security aren't always at odds. Sometimes they're at odds only in a technical, philosophical sense that doesn't apply to real-world implementations. Sometimes getting one requires a small sacrifice of the other. Sometimes it requires a major sacrifice, or several.

It's not consistently a trade-off in the real world.

Note: I cheat with one final example. An air-gapped Oberon System on Wirth's RISC CPU uses far fewer transistors, cycles, joules, and seconds than a full-featured, Internet-enabled desktop for editing documents plus many terminal-style apps. Plus you can't get hacked or distracted by Hacker News! :P


> Likewise, a user coding on a CPU like SAFE or CHERI tagged for memory safety in a language that's memory-safe will not be burdened more than someone coding in C on x86. They will be burdened less by less mental effort required in both prevention and debugging of problems.

In the parent commenter's framework, I suppose the safer language still comes at a cost in terms of the ability to use unsafe programming techniques -- like type punning and self-modifying code.


Hmm. You could use those as examples. There are cases where type punning might save developer time, and cases where self-modifying code might buy you better memory or CPU efficiency. Yet self-modifying code is pretty hard to do, and do right, for most coders I've seen. Type punning happens automatically in a dynamic, safe language with decent conversion rules. You often write the conversions only once, or when you change the class/type, but you do that conversion in your head anyway if you're analyzing for correctness. The difference is that you typed it out, with the conversions being mechanically checked.
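Type punning itself survives in safe languages; it just becomes explicit and checked. Python's standard `struct` module, for example, reinterprets a float's bits while rejecting size mismatches outright:

```python
import struct

# Reinterpret the bits of a 32-bit float as an unsigned integer: explicit punning.
bits = struct.unpack("<I", struct.pack("<f", 1.0))[0]
assert bits == 0x3F800000  # the IEEE-754 single-precision encoding of 1.0

# A size mismatch is rejected up front instead of becoming undefined behavior.
try:
    struct.unpack("<Q", struct.pack("<f", 1.0))  # 8-byte read of a 4-byte buffer
    rejected = False
except struct.error:
    rejected = True
assert rejected
```

So the capability isn't gone; it's fenced behind a mechanically checked interface, which matches the "double-edged sword" framing below.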

The examples you bring up seem to be double-edged swords like the others: depending on context, giving them up can have almost no negative impact or a significant one.



