
> The point was that runtime checking of critical aspects of safety isn't necessarily some high-overhead operation.

The runtime check doesn't make the operation any less unsafe - it only makes the consequences of an error less disastrous.
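
To make that concrete, here's a minimal Rust sketch (the example is mine; Rust comes up later in the thread). The bounds check doesn't make an out-of-bounds index meaningful - it just converts what would otherwise be silent corruption into a visible, handleable failure:

    fn main() {
        let data = vec![10, 20, 30];
        let i = 7; // deliberately out of bounds

        // Checked access: the runtime check turns a meaningless access
        // into a recoverable None instead of a read of arbitrary memory.
        match data.get(i) {
            Some(v) => println!("data[{}] = {}", i, v),
            None => eprintln!("index {} out of bounds; flagged, nothing corrupted", i),
        }

        // Plain indexing, data[i], is also checked, but it panics
        // (an exception-like abort) rather than corrupting anything.
    }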




"The runtime check doesn't make the operation any less unsafe - it only makes the consequences of an error less disastrous."

I think we have different definitions of unsafe. If you mean the potential to cause problems, then I agree it doesn't make it less unsafe. Alternatively, it does make it less unsafe if unsafe means the ability to invisibly corrupt data, hack your system, or crash your system. That's what I'm claiming. Even Rust or Ada might have a hard time meeting that definition once their pristine code is converted to assembly.


> I think we have different definitions of unsafe.

“Potentially without useful meaning”, e.g., indexing an array without knowing whether the index is within bounds.

> Alternatively, it does make it less unsafe if that means it reduces the ability to invisibly corrupt data, hack your system, or crash your system.

Your kid (your data) attends the same school as a bully (your program). One day, the bully threatens to punch your kid in the face (raises an exception). “But it's less wrong because he never punched your kid! He only threatened it!”

> Even Rust or Ada might have a hard time meeting that definition once their pristine code is converted to assembly.

As long as the assembly code has the same meaning as the safe program from which it was generated, it's safe.


"Your kid (your data) attends the same school as a bully (your program). One day, the bully threatens to punch your kid in the face (raises an exception). “But it's less wrong because he never punched your kid! He only threatened it!”"

You're going out on a limb here. I'm not even going to counter that example, since it's rigged against a counter. Instead, I'll point out that these unsafe things don't happen in a vacuum: something in the language sets the risk in motion, and then something else determines what happens. Compile-time safety stops the risk from being set in motion at all. Run-time safety renders whatever is set in motion meaningless, since it just becomes an exception or an alert for the admins.

Back to your analogy, it would be more like a prison learning environment where inmates learned in cells behind bulletproof glass while one shouted he was "gonna cut someone" for not sharing answers. He can try every unsafe thing he wants, but the glass makes it meaningless. The prisoner being targeted can literally pretend all that unsafety doesn't exist, with no ill effect.

Ideally, we have both prevention and detection. The reason for prevention is obvious. The reason for detection, even under a perfect language scheme, is that compiler or hardware errors (especially SEUs, i.e. single-event upsets) will eventually make something act badly, with runtime measures as the last-ditch effort. If they're going to be there anyway, we might as well lean on them some more when there's no performance hit, eh? ;)
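
As a minimal sketch of that last-ditch detection (my own illustration, assuming a value stored twice so that a single upset in either copy is caught at read time - a software stand-in for the hardware checks being discussed):

    /// A value kept in two copies; the names here are illustrative.
    struct Duplicated<T: Copy + PartialEq> {
        primary: T,
        shadow: T,
    }

    impl<T: Copy + PartialEq> Duplicated<T> {
        fn new(v: T) -> Self {
            Duplicated { primary: v, shadow: v }
        }

        /// Returns the value, or None if the copies disagree
        /// (e.g., after an SEU flipped a bit in one of them).
        fn read(&self) -> Option<T> {
            if self.primary == self.shadow {
                Some(self.primary)
            } else {
                None // detect and alert rather than compute on garbage
            }
        }
    }

    fn main() {
        let x = Duplicated::new(42u32);
        match x.read() {
            Some(v) => println!("value ok: {}", v),
            None => eprintln!("copies disagree; raising an alert"),
        }
    }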


If my analogy is flawed (which it almost certainly is), yours is even more so: the computing equivalent of your analogy is preventing runtime errors by simply not allowing software components to interact - yes, of course, without interaction there are no errors, because there is no computation!


I was thinking about that as I submitted it, but I was multitasking. You called it, so I gotta correct it. So, let's drop the analogy and go back to what I originally claimed: CPUs modified to protect pointers, arrays, stacks, and so on. The primitives are forced to be used in acceptable ways. The programmer does the rest, expressing the problem in a type-safe HLL.

Now, almost every hack on a system that I can think of requires forcing a pointer to go out of bounds or something like that. Most of the rest send in data that becomes code. One check, from Burroughs, is the CPU looking for a code tag bit before executing, a bit that can only be set by the OS or an isolated service on a microkernel. So that attack wouldn't work.
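
Here's a toy software model of that tag check (the Tag/Word names and functions are mine; the real Burroughs machines did this per memory word in hardware). Only the trusted loader can mint code-tagged words, so injected data can never pass the execute check:

    #[derive(Clone, Copy)]
    enum Tag { Code, Data }

    #[derive(Clone, Copy)]
    struct Word {
        tag: Tag,
        bits: u32,
    }

    // Stand-in for the trusted loader: the only path that mints Code words.
    fn load_trusted(bits: u32) -> Word {
        Word { tag: Tag::Code, bits }
    }

    // Untrusted input always arrives tagged as data.
    fn receive_input(bits: u32) -> Word {
        Word { tag: Tag::Data, bits }
    }

    // The "CPU": checks the tag bit before executing anything.
    fn execute(w: Word) -> Result<u32, &'static str> {
        match w.tag {
            Tag::Code => Ok(w.bits), // pretend the bits ran
            Tag::Data => Err("trap: attempt to execute a data word"),
        }
    }

    fn main() {
        let program = load_trusted(0xDEAD_BEEF);
        let payload = receive_input(0xDEAD_BEEF); // same bits, wrong tag

        assert!(execute(program).is_ok());
        assert!(execute(payload).is_err()); // injected data can't become code
    }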

What remains, with little performance hit and no static checking, is a system where hackers (a) have to con the admin into installing malware, (b) have to break the minimal, trusted loader/installer without abusing the above components, or (c) are left with denial-of-service attacks. Forget analogies: the reality is much more impressive, given there are almost no high-severity CVEs left. You can also do great static checks and such, as I always recommend. Yet you rarely, if ever, need them in practice if the goal is integrity or confidentiality rather than availability.

"Check and incidentally... mate" (Sherlock Holmes, Game of Shadows)


> Yet you rarely, if ever, need them in practice if the goal is integrity or confidentiality rather than availability.

My goal is correctness. A program that fails with a runtime exception is just as wrong as another that silently corrupts your data. Dijkstra put it very nicely:

“We could, for instance, begin with cleaning up our language by no longer calling a bug a bug but by calling it an error. (...) The nice thing of this simple change of vocabulary is that it has such a profound effect: while, before, a program with only one bug used to be "almost correct", afterwards a program with an error is just "wrong" (because in error).”

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/E...


Cute quote by Dijkstra that shows he hadn't quite figured out reality yet. So, correctness is your goal. That means you'll have to specify the behavior and the safety/security policy, then prove the two equivalent. The implementation, both source and binary, will have to be shown equivalent to the spec and proven free of language-level issues. Finally, you have to run all of that on triple modular redundant hardware [1] that's rad-hard, with similar rigor in its lifecycle. Or use run-time checks [2] for each algorithm that can correct errors, probably also on a TMR or rad-hard board. Let's not forget the custom circuitry for RAM that actually works [3].
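
For the TMR part, a minimal software sketch of the majority vote that triple modular redundant hardware [1] performs (illustrative only; real TMR votes in logic at the circuit level, not in software):

    // Run three replicas of a computation, then vote: a fault in any
    // single replica is masked, and total disagreement is detected.
    fn tmr_vote<T: PartialEq + Copy>(a: T, b: T, c: T) -> Result<T, &'static str> {
        if a == b || a == c {
            Ok(a) // a agrees with at least one other replica
        } else if b == c {
            Ok(b) // a is the odd one out; its fault is masked
        } else {
            Err("all three replicas disagree; uncorrectable")
        }
    }

    fn main() {
        assert_eq!(tmr_vote(7u32, 7, 99), Ok(7)); // single upset masked
        assert_eq!(tmr_vote(99u32, 7, 7), Ok(7));
        assert!(tmr_vote(1u32, 2, 3).is_err()); // detected, not corrected
    }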

Darn, Dijkstra or not, you still need some kind of runtime checks and protection for correctness. :)

[1] http://www.rockwellcollins.com/~/media/Files/Unsecure/Produc...

[2] https://galois.com/project/copilot/

[3] https://research.google.com/pubs/pub35162.html



