Hacker News

Tell me about it!

Gotta love those security experts your company hires: they say "your app has a security issue right here," I say "alright then, prove it, hack it, let's see if there really is a security issue," and they can't do it.

If I don't want to worry about deployment, there's Heroku. If I don't want to worry about testing, there's Circle CI. If I don't want to worry about scaling, there's AWS EC2. If I don't want to worry about security, there's... nothing. Because it's not a real product. At least not real in the way databases, deployment, testing and scaling are.

So when people say "programmers don't care about security" I honestly don't understand what they mean since I've never seen a secure app. It's like there's this mob of believers that want to convince you security is the salvation. OK, teach me by showing. Show me a bunch of secure apps and we'll learn from it. But those don't exist, so no one ever learns, but that doesn't keep "security experts" from blaming programmers building real things in the real world for not caring about their imaginary friend.

I'll believe security experts care when they create a service and sell it for money to people like me.




Security Guy: Hey, bank, it seems like your vault is accessible via some old sewage tunnels.

Bank: So what? Nobody knows about those tunnels.

Security Guy: But someone who finds them, like me, but with fewer morals, could rob you.

Bank: Prove it. Rob the vault.

Security Guy: ..... ?

Finding a vulnerability isn't the same thing as exploiting one, and a lack of exploitation doesn't imply a lack of vulnerability. You also have to consider that only a small portion of vulnerabilities are actually exploitable, and it's a very hard problem to work out which ones are and which ones aren't. Exploiting a single vulnerability is typically harder, in fact, than patching a dozen of them (for example, you can easily switch to a bounded version of strcpy(), but exploiting the unbounded one requires an attacker to smash the stack or ROP their way into full code execution).

The bottom line is that you're not only naive if you believe what you just said, but you're doing a huge disservice to anybody who uses any code that you may write.


Security Guy: Hey, bank, it seems like your vault is accessible via some old sewage tunnels. But fret not: I, as a security expert who goes around making sure places such as banks and schools can't be entered through anything but the designated entrances, have a solution for you. Just put your vault inside this chroot building. What this does is make sure only sewage goes through sewage pipes (not people). All you have to do is purchase this solution and we will guarantee that no one will come into your bank through the sewage pipes.

Why does that never happen? Why are security experts always consultants, and why do they never have a product to sell?

Naive is a person who thinks that just because they are a security expert, programmers will care. No amount of shaming will change that. If you're a security expert, your job is to make this so easy that I almost don't think about it. Like I almost don't think about databases, deployment, testing, scaling. Getting on your high horse and begging programmers changes nothing.

Just look at RSpec. All of a sudden everyone wants to write tests because it's fun and easy and looks sort of like English. Now we don't have to care much about tests, we just write them and RSpec runs them, collects and reports errors, formats them nicely, tells me the path and the line number where each error occurred, etc. Now imagine you're a "testing expert" and there's no RSpec and you keep yelling at programmers to change their ways, to write and maintain tests, and so on. No one would do it (like few did before the recent craze). So please, learn from that lesson, round up some peers, and contribute to your damn field by letting me forget about it.


So a structural engineer shouldn't worry about the structural integrity of his buildings, only that they stand up under ideal conditions? A car manufacturer shouldn't worry about crash-testing or other safety concerns, only that their car moves?

HOW DOES THAT MAKE ANY SENSE?!?!

Like it or not, we're stuck on Von Neumann architecture, and as a result, data can be treated as code and vice-versa. The consequence of this is that, under certain circumstances, data can be carefully crafted to act as code, and can be executed in an unforeseen context. As a software engineer, it is your job to take precautions when developing software, precautions that prevent this kind of execution. Security people do the best they can to make it easy to develop safely, but all of that is useless if the developers ignore it. And, because security vulnerabilities are a manipulation of context-and-program-specific control flow, there's no way to encapsulate all security measures in a way that is transparent. It's just not possible. Only developers know the specifics of their software, and only developers can protect certain edge cases. If you assert otherwise, you have a fundamental misunderstanding of the systems that you work with, and you need to re-evaluate your education before continuing to work in the industry (assuming you do). This isn't an opinion. This is a fact.

Lastly, we "security experts" do contribute to our field. Security is one of the hard problems in computer science - far harder than whatever you're doing that lets you "not think about databases, deployment, testing, scaling" - and there are a lot of solutions that have been engineered to deal with software that has been created by people like you. There are static code analysis tools, which can detect bugs in code before it is even compiled. There are memory analyzers that can detect dozens of different classes of memory-related bugs just by watching your software run. There are memory allocators and garbage collectors that can prevent use-after-free and other heap-related exploitation bugs at run-time. There's data execution prevention and buffer execution prevention that, at run-time, help prevent code from being executed from data pages. There's EMET and other real-time exploit detection tools that exist outside of your software and can still prevent exploitation. That's not even an exhaustive list. There are literally hundreds of tools out there that make finding and fixing security bugs easy, but those tools can't patch your code for you. That's why there are consultants, code auditors, and penetration-testers who can give advice on how to fix bugs, find bugs where automated tools fail, and even coach developers into writing more secure code; because having smart, security-aware developers is one of the major ways to defend against security bugs.


> As a software engineer, it is your job to take precautions when developing software.

On other people's software as well? Why was it not PostgreSQL's (random example) job to make sure their software rejects invalid input? All it would take is for them to use a typed language (given that the type system in Haskell, for instance, is enough to prevent SQL injection). So tell me, when does it become my job to patch whatever database code I choose because no database ever has concerned itself (it seems) with solving this for everyone else in one fell swoop (so we didn't have to think about it anymore for all these decades of dealing with SQL injection in every language that implements a database driver)?

Before the first million programmers had to write the same damn code to clean the input they give to these databases, the database coder should have fixed it themselves. But you weren't there to chastise them, so we never got it.

Maybe the "mere mortal" programmers like me would be more excited about security if the industry standard software was also secure (we would want to mimic it, and keep it all secure, and not introduce security problems). No security expert has fixed the SQL injection problem where it should be fixed, but they do charge by the hour to fix it in every company that uses a database.


That's a horrible example. SQL injection IS the fault of the programmer, not SQL itself. SQL injection is achieved by adding extra code to a query, which is only possible when a programmer allows inputs that can contain code to be concatenated directly into a query. Here's an example:

    query = "SELECT * FROM USERS WHERE NAME = '" + userinput + "'";
    exec(query)
This input can be given:

    ' OR 1=1--
To make the application show the entire list of users. If this programmer used parameter binding, which is supported by PostgreSQL, MySQL, SQLite, and any other SQL platform you can think of, then SQL injection wouldn't be an issue. They could simply do something like this:

    query = "SELECT * FROM USERS WHERE NAME = :user";
    statement = prepare(query, "user", userinput)
    exec(statement)
Just because you don't know the right way to do something securely doesn't mean it doesn't exist. But you're right, no security expert fixed this problem. It was fixed by the library designers of these SQL platforms. Security experts just charge you by the hour to teach you that you're unfamiliar with the existing security mechanisms inside of these platforms.
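To make the difference concrete, here's a minimal runnable sketch of both queries above, using Python's stdlib sqlite3 and an in-memory database (the table and data are made up for illustration; sqlite3 uses ? placeholders rather than named ones):

```python
import sqlite3

# Set up an in-memory database with a couple of users (illustrative data).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

userinput = "' OR 1=1--"  # the attacker-controlled value from above

# Vulnerable: concatenating the input lets the quote close the string
# literal, OR 1=1 match every row, and -- comment out the trailing quote.
leaked = db.execute(
    "SELECT * FROM users WHERE name = '" + userinput + "'"
).fetchall()
print(len(leaked))  # 2: the entire table leaks

# Safe: parameter binding passes the value separately from the SQL text,
# so it is only ever treated as data, never as code.
safe = db.execute("SELECT * FROM users WHERE name = ?", (userinput,)).fetchall()
print(len(safe))  # 0: no user is literally named "' OR 1=1--"
```

The bound version returns nothing because the whole attack string is compared, verbatim, against the name column.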

Also, just to be pedantic, I'll point out that a type system wouldn't change how SQL injects currently work, lol, no clue how you think that's the case, but I wouldn't put it past you at this point.


I've programmed for a while now. I think I've heard vaguely of parameterized statements. :)

Just to be pedantic, I'll point out that maybe your C and C++ "type" system wouldn't change how SQL injects currently work, lol, but the one I use can avoid not just SQL injection but XSS attacks: http://www.yesodweb.com/page/about

I'll say it again, you're wasting your time staying in that small rickety photocopy room called C/C++. But I wouldn't put it past you at this point. Whatever that means, hahah.


I'm sorry, I thought we were talking about security? Are you leaking the other thread into here just so you can feel like you won both, instead of neither?

And I never said anything about any C/C++ type system doing anything? But okay.

Back to the topic: if you've heard of them, why did you insist that SQL is inherently insecure? Did you forget they existed, or did you just think I wouldn't notice? Are you that cocky?

I really hope your employer one day recognizes your incompetence and fires you, because the software world is plagued with enough bugs without people like you purposely and gladly laying out a red carpet for them to walk in on. I can't continue to argue with what is either a relentless geyser of misinformation or a brilliant troll, so I'm done. Maybe one day you'll come to your senses, but I doubt it.


The way a strong type system solves this SQL injection problem (despite your saying it's impossible, and ignoring that I've shown you're wrong) is by automatically escaping arguments before binding them to parameters.

Well guess what, you don't need pre-compiled statements to benefit from this feature - all you need is the hoisting aspect of it. In other words, if SQL drivers did not offer the unsafe function exec_query that takes the whole query as a string and returns a result, and instead they only exposed a hoisted version of that function that takes a list of arguments and a placeholder query as a string...

  exec_query ["john", 12] "SELECT ... WHERE... = $1 AND ... = $2"
Then there is no SQL injection problem, as the SQL database driver would always automatically escape the arguments before binding the parameters.

So if only SQL database drivers did not offer exec_query but instead forced the user to provide the whole query string in one go with placeholders, then the driver would be able to enforce security at the proper software layer - which is not everyone's program that interacts with a database.



