
I disagree completely. Devs who use C are the least complacent about security in my experience. The problems are from previous eras before they knew about many of these things. A ton of people in modern languages couldn't name a single dangerous function, though they do exist in every language. You'd be amazed at how many race condition vulns result from TOCTOU errors just in authentication, or checking for the existence of a file before opening it, etc.
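
To make the TOCTOU point concrete, the check-then-open pattern bites in memory-safe languages too. A minimal Rust sketch (the function names are mine):

    use std::fs::File;
    use std::io;
    use std::path::Path;

    // Racy: another process can delete the file, or swap it for a symlink,
    // in the window between the exists() check and the open() call.
    fn open_config_racy(path: &Path) -> io::Result<File> {
        if !path.exists() {
            return Err(io::Error::new(io::ErrorKind::NotFound, "config missing"));
        }
        File::open(path)
    }

    // No check-then-use window: just attempt the open and handle the error.
    fn open_config_atomic(path: &Path) -> io::Result<File> {
        File::open(path)
    }

Memory safety doesn't save you here; the race lives in the logic.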

It's absolutely true that decades ago the C community was complacent, but it's not true now. Source: I taught secure coding in C/C++ in the 00s.




What you said. Nobody is complacent. Anyone who thinks the Linux or OpenBSD (etc.) kernel developers take the lazy way out is talking about a thing they know little about. I do think better languages than C exist and maybe could even be used as a basis for new systems. But I have yet to see a mature OS that’s as secure and as performant as these. Closest might be the chips I’ve seen that have an embedded Java bytecode interpreter.


I agree in principle, but I think these security-focused C developers are missing the forest for the trees. Every developer having the responsibility of cultivating their own pet list of banned functions is, frankly, NOT the way to achieve security. Those things need to be enforced at the widest level possible (OS, or language) to have the needed effect.


Two ways to look at it. If I told you that your computer or phone could run the same OS and all the same programs but at 1/10th of the speed so that certain classes of bugs could be eliminated, would you make that switch right here and now? I don’t mean theoretically; I mean the device you are looking at right now. If not, would you do that in a year?

On the other hand, Moore’s law and all that. Computers will get faster over time, so at some point we might not care. And the opposite of my question is also true: if you could switch to a faster OS written in assembly, would you (assuming all functionality stays the same), knowing certain classes of bugs are more likely?

It seems to me that the cost of these kinds of bugs is amortized such that it is cheaper to use C than to switch. Expressed in those terms, we will only switch to a different language for all our systems stuff when the cost of the rewrite and the cost of the performance penalty are clearly and significantly less than the cost of the bugs we are likely to experience.


You're ignoring the fact that computer science has advanced since the 1970s, when C was created. C is not full of footguns because that's the only way to build a fast language; it's unsafe because it's old and full of legacy baggage. Modern systems languages (primarily Rust, and to a lesser extent Zig) are on par with C in terms of performance, yet eliminate entire classes of potential safety bugs. Rewriting of course has a (major) cost, but I don't think the argument that using C is somehow inevitable in order to get fast code holds any water.
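
To pick one concrete class: the dangling reference. The snippet below is deliberately invalid Rust, and rustc refuses to build it, while the equivalent C compiles cleanly and is undefined behavior (toy example of my own):

    fn main() {
        let r: &i32;
        {
            let x = 5;
            r = &x;        // rejected: error[E0597]: `x` does not live long enough
        }                  // x is dropped here while still borrowed
        println!("{}", r); // the would-be dangling read never gets to run
    }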


Any runtime security measure produces overhead (array bounds checking, dynamically checked borrow rules like Rust's RefCell, etc.), at least in computational cycles. There is no magic formula.
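
For instance, both of the checks in this toy Rust example are paid for while the program runs (my own sketch, not from any benchmark):

    use std::cell::RefCell;

    fn main() {
        let cell = RefCell::new(vec![1, 2, 3]);

        // borrow_mut() updates a runtime borrow flag; a conflicting borrow
        // is only caught when the program runs (it panics), not at compile time.
        let mut v = cell.borrow_mut();
        v.push(4);

        // Safe indexing compiles to a compare-and-branch before the load.
        let i = v.len() - 1;
        println!("last element: {}", v[i]);
    }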

Calculating Mandelbrot fractals to measure speed might be a nice exercise in which Rust or Zig can compete with C. But in a real piece of software, when you need to open a file you still have to reach the C library's fopen() or the underlying OS call eventually. Whatever File::open (Rust) is doing before making that call is overhead.

How can you avoid that overhead? Write in C (at your own risk).


Compile-time security measures have absolutely no runtime overhead. Also, I don’t see what you mean by File::open — there is a kernel call somewhere there. But if you are writing a new OS you are free to implement the fopen call as you wish - C has no advantage here.
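
Rough illustration of the contrast with the RefCell case: the same exclusive-access rule, but enforced statically, so no borrow flag or borrow check survives into the compiled code (my own toy example):

    // Exclusive access is proven by the compiler from the &mut type alone.
    fn push_four(v: &mut Vec<i32>) {
        v.push(4); // no runtime borrow counter, no borrow check emitted here
    }

    fn main() {
        let mut v = vec![1, 2, 3];
        push_four(&mut v);
        // Taking a second simultaneous &mut v would be error[E0499],
        // rejected at compile time, so it costs nothing at run time.
        println!("{:?}", v);
    }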


Not all bounds checking can be done at compile time, can it? You can’t check if a file exists on a target system before it is opened at compile time, can you?
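
For example, an index that only arrives while the program is running can only be checked then; a trivial sketch of what I mean:

    fn main() {
        // The index only exists at run time, so no compiler can prove
        // ahead of time that it is in range.
        let idx: usize = std::env::args()
            .nth(1)
            .and_then(|s| s.parse().ok())
            .unwrap_or(0);

        let data = [10, 20, 30];
        match data.get(idx) {
            Some(v) => println!("data[{}] = {}", idx, v),
            None => eprintln!("index {} is out of bounds", idx), // checked at run time
        }
    }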


Benchmarks or it didn’t happen. The last OS project I saw was written in Rust and was twice as slow as Linux. It also required that all your software be written in Rust.

This is why we keep seeing the “X is faster than C” articles: if you compare the standard C library used in a not-so-great way (sscanf) against a more intelligent version of the code in another language, you will get faster-than-C results. But on the whole, doing less work is always faster. Not doing bounds checking on an array will always be faster than doing bounds checking. How could it not be? No amount of computer science can make bounds checking take negative time.

I am not saying C is magically faster. I am saying that by letting you skip critical safety checks it will be faster. Rust has a similar capability for some things, but if your goal is to write unsafe Rust for the sake of performance, then is it worth the switch?
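
Concretely, the opt-out I mean looks roughly like this (a sketch with made-up function names):

    // Every xs[i] here compiles to a bounds check plus the load.
    fn sum_checked(xs: &[u64], idxs: &[usize]) -> u64 {
        idxs.iter().map(|&i| xs[i]).sum()
    }

    // Caller must guarantee every index is in range, or this is undefined
    // behavior; the check is simply gone, like plain C indexing.
    unsafe fn sum_unchecked(xs: &[u64], idxs: &[usize]) -> u64 {
        idxs.iter().map(|&i| unsafe { *xs.get_unchecked(i) }).sum()
    }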


Reminds me of something I once read on the blog of Evan Martin (creator of the Ninja build system) [1].

"Underspecifying and overspecifying.

Ninja executes commands in parallel, so it requires the user to provide enough information to get that correct. But at the other extreme, it also doesn't mandate that it has a complete picture of the build. You can see one discussion of this dynamic in this bug in particular (search for "evmar" to see my comments). You must often compromise between correctness and convenience or performance and you should be intentional when you choose a point along that continuum. I find some programmers are inflexible when considering this dynamic, where it's somehow obvious that one of those concerns dominates, but in my experience the interplay is pretty subtle; for example, a tool that trades off correctness for convenience might overall produce a more correct ecosystem than a more correct but less convenient alternative, if programmers end up avoiding the latter. (That could be one reason Haskell isn't more successful. Now that I work in programming languages I see this dynamic play out regularly.)"

[1] http://neugierig.org/software/blog/2020/05/ninja.html


I’m sorry, but you’re comparing apples to oranges. A serious OS project is a tremendous multi-decade undertaking; it does not have much to do with the implementation language.


With the recent push from the Rust community to implement OS kernels in Rust, I think it is apt.



