
As a user, you may not be aware that C makes it relatively easy to create buffer overflows (https://en.m.wikipedia.org/wiki/Buffer_overflow), which are a major source of security vulnerabilities.

This is one of the best reasons to rewrite software in Rust or any other language that is safer by default.
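For illustration, a minimal Rust sketch (buffer size and index are made up) of what "safe by default" buys you here: the same out-of-bounds read that C would silently perform becomes a recoverable `None`, never silent memory corruption.

```rust
fn main() {
    // A 4-byte buffer; in C, reading buf[7] would silently touch
    // whatever memory happens to sit past the array.
    let buf = [0u8; 4];

    // The checked accessor turns the same mistake into a recoverable
    // `None` instead of undefined behavior. (Direct indexing would
    // panic with a defined error rather than corrupt memory.)
    assert_eq!(buf.get(3), Some(&0));
    assert_eq!(buf.get(7), None);
}
```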





Everyone on Hacker News is well aware that C makes it relatively easy to create buffer overflows, and what buffer overflows are. You're still not responding to the GP's question.

I'm not involved in the initiative, so I can't answer the question definitively. I provided one of the major reasons that projects get switched from C; I think it's likely to be a major part of the motivation.

I didn't know that C makes it easy.

Right. I never mentioned that I am a decently experienced C developer, so of course I've had my fair share of buffer overflows and race conditions :)

I have also learned some Rust recently; I find it a nice language and quite pleasant to work with. I understand its benefits.

But still, Git is already a mature tool (one might say "finished"). Lots of bugs have been found and fixed. And if more are found, surely it will be easier to fix them in the C code than to rewrite in Rust? Unless the end goal is to rewrite the whole thing in Rust piece by piece, solving hidden memory bugs along the way.


https://access.redhat.com/articles/2201201 and https://github.com/git/git/security/advisories/GHSA-4v56-3xv... are interesting examples to consider (though I'm curious whether Rust's integer overflow behavior in release builds would have definitely fared better?).

> Unless the end goal is to rewrite the whole thing in Rust piece by piece, solving hidden memory bugs along the way.

I would assume that's the case.


> though I'm curious whether Rust's integer overflow behavior in release builds would have definitely fared better?

Based on the descriptions, it's not the integer overflows that are the issue themselves; it's that the overflows can lead to later buffer overflows. Rust's default release behavior is indeed to wrap on overflow, but the buffer-overflow checks remain by default, so barring the use of unsafe I don't think there would have been corresponding vulnerabilities in Rust.
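To sketch the point (with made-up sizes, not the actual Git code paths): even when a length calculation wraps in a release build, the later buffer access is still bounds-checked, so the overflow can't be parlayed into an out-of-bounds read.

```rust
fn main() {
    let data = vec![0u8; 16];

    // Attacker-influenced sizes whose product overflows usize,
    // yielding a small bogus total -- the classic pattern behind
    // overflow-then-buffer-overflow advisories.
    let count: usize = usize::MAX / 4 + 1;
    let size: usize = 8;
    let total = count.wrapping_mul(size); // wraps to a small value
    assert!(total < data.len()); // the overflow itself goes unnoticed...

    // ...but the buffer access is still bounds-checked, so a read
    // extending past `data` yields None instead of adjacent memory.
    assert_eq!(data.get(total..total + 100), None);
}
```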


This doesn't matter at all for programs like Git. Any non-freestanding program running on a modern OS on modern hardware that tries to access memory it's not supposed to will be killed by the OS. This seems a more reasonable security boundary than relying on the language implementation to simply not emit code that does illegal things.

Yeah, sure, memory safety is nice for debuggability and for being more confident in the program's correctness, but it is not more than that. It is neither security nor proven correctness.


Not quite the best example: since Git usually has unrestricted file access and network access through HTTP/SSH, any kind of RCE would be disastrous if used for data exfiltration, for instance.

If you want a better example, take distributed database software: it sits behind a DMZ, and the interesting code paths require auth.


Git already runs "foreign" code, e.g. in filters. The ability to write code that reacts unexpectedly to crafted user input isn't restricted to languages providing unchecked array/pointer access.

Unintentional bugs that cause data destruction would also be disastrous for a tool like Git.

Which are more likely to be introduced by a full rewrite.

> Any non-freestanding program running on a modern OS on modern hardware that tries to access memory it's not supposed to will be killed by the OS.

This seems like a rather strong statement to me. Do you mind elaborating further?


I think bugs in the MMU hardware, or the kernel accidentally configuring the MMU to allow cross-process access that isn't supposed to happen, are quite rare.

Sure, but I think illegal inter-process memory access is a fairly narrow reading of "access[ing] memory it's not supposed to". Plenty of undesirable memory accesses are possible without crossing process boundaries, and I don't think the OS does much to prevent those outside of currently niche hardware.

It might be undesirable to you, but you haven't specified this to the computer. Process boundaries are one way we specify what is allowed to be touched and what is not.

OK, sure, but there's no reason you can't extend that argument to improper in-process memory accesses either. free() is you specifying that a particular bit of memory isn't supposed to be touched any more, malloc() is you specifying that some amount of memory is legal to access, and so on. Language runtimes, inserted compile-time checks, etc. would be analogous to the OS/MMU here.
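As a small illustration of the free()/malloc() analogy (with a hypothetical buffer): Rust moves this particular contract check to compile time rather than leaving it to the OS/MMU, which typically never notices an in-process use-after-free at all.

```rust
fn main() {
    let buf = vec![1u8, 2, 3]; // analogous to malloc(): memory we may touch
    let view = &buf[..];
    let len = view.len();      // fine: `buf` is still alive here
    drop(buf);                 // analogous to free(): the allocation is gone

    // In C, reading through `view` now would be a use-after-free that
    // the OS/MMU almost never catches; in Rust, any use of `view`
    // after this point is rejected at compile time.
    assert_eq!(len, 3);
}
```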

Yes, but this is not across a trust boundary, since these are in the same process/program. Rust "only" applies checks at compile time; it doesn't enforce security.

Not sure if I'm being clear. Rust is like cooperative multitasking: nice, but not guaranteed. My claim here is that what we actually want is preemptive multitasking.


I'm not quite sure I'm understanding your analogy here, but would that effectively mean each allocation lives in its own process?

Maybe, if we tried to backport it to current hardware/software. It would be an improvement to configure the MMU to enforce boundaries below the process level.

However, my point was that not every allocation is a trust boundary. What a program does in its own memory doesn't matter at all; it is gone in an instant. Everything that matters is I/O, and that goes through syscalls, so security can be enforced there.

Why do you care about corrupting process memory? The memory state itself is totally irrelevant. What annoys you is when the program e.g. deletes a file it is not supposed to. Would you rejoice if the file still got deleted but the process memory was totally fine? Of course not. The only thing that matters is the deletion of the file; you don't actually care about memory safety. Thus, what you actually want is for the computer to know that the file is not supposed to be deleted; once you have that, the memory can be trashed as much as the program likes.


> Everything that matters is I/O, and that goes through syscalls, so security can be enforced there.

I don't think "good" I/O and "bad" I/O are necessarily distinguishable by the OS ahead of time and/or in general. The OS isn't going to know whether the program wrote out a proper file or complete gibberish, or whether the numbers you're displaying were derived from uninitialized values, or whether what you're sending over the wire is what you intended (e.g., Heartbleed), etc., but those are very much things one should care about!
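A toy sketch of the Heartbleed shape (hypothetical 4-byte payload, made-up length field): the OS would happily let the over-read leave the machine as ordinary I/O, but a bounds check at the language level refuses it before it becomes I/O at all.

```rust
fn main() {
    // Heartbleed in miniature: the peer claims a payload length
    // larger than what it actually sent.
    let payload = b"bird"; // 4 bytes actually received
    let claimed_len = 64;  // attacker-supplied length field

    // Echoing `claimed_len` bytes back is exactly the bug; a checked
    // slice refuses rather than leaking adjacent memory over the wire.
    assert_eq!(payload.get(..claimed_len), None);
    assert_eq!(payload.get(..payload.len()), Some(&payload[..]));
}
```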

> Why do you care about corrupting process memory? The memory state itself is totally irrelevant.

Strong disagree here. If memory is corrupted all bets are off, especially if you know your program is actually supposed to perform some I/O.

> The only thing that matters is the deletion of the file; you don't actually care about memory safety.

You would care if memory safety issues directly led to file deletion!

> Thus, what you actually want is for the computer to know that the file is not supposed to be deleted; once you have that, the memory can be trashed as much as the program likes.

So what happens if you know a file is supposed to be deleted but memory corruption led to the wrong one being deleted?





