
You can have safe manual memory management. The main cases of bugs are:

1. Null pointer deref. Can be fixed by having optional types and requiring that possibly null pointers have to be wrapped in them.

2. Out of bounds references. Can be fixed by making the type system track how big all objects are, and having the compiler insert bounds checking.

3. Use after free. Can be fixed by having the free function zero heap objects smaller than a page (e.g. 4KB) and unmap larger ones, so that future accesses are a seg fault. The heap also needs to not create new objects at the same address as deleted ones, but we have 64-bit address spaces, so maybe that's fine.

These all have costs, but so do all solutions to these problems.

I can't think of any reasons that a language with manual memory management has to be less safe than one with a Garbage Collector / ARC.



You are correct and Odin supports all of this.

`Maybe(^T)` exists in Odin.

Bounds checking is on by default for all array-like accesses. Odin has fixed-length arrays, slices, dynamic arrays, maps, and #soa arrays, all of which support bounds checking. Odin has neither pointer arithmetic nor implicit array-to-pointer decay, which removes most of the unsafety that languages like C have.

Odin also has built-in support for custom allocators, which lets you layer extra safety features on top of the default allocator. Use-after-free is usually a symptom of an underlying value-responsibility and lifetime problem rather than a problem in itself; it is fundamentally an architectural issue. Ownership semantics in a language like Rust do deal with this issue, BUT at a huge cost in terms of architecting the code itself to accommodate that specific way of programming.

There is a common assumption amongst many of the comments that if you have manual memory management, you are defaulting to memory unsafety. This is untrue: memory management and memory safety are largely orthogonal in the grand scheme of things. You could have C with GC/ARC and still keep all of its memory-unsafe semantics.


Odin and Zig solve almost all the memory safety aspects, except for use after free. Having bound checks, union types, etc. is good and a real improvement over C, but I think that use after free is a memory unsafety issue. It's the one that's harder to tackle because it's more dynamic ("temporal memory safety"?).


"Use after free" is a symptom of other problems. You can "solve" it by making it very difficult to do in the first place with something like ownership semantics, but there are usually better ways of dealing with it.

One really good approach is to not use pointers in the first place and use handles. I highly recommend this post for more information: https://floooh.github.io/2018/06/17/handles-vs-pointers.html

Because use-after-free is a responsibility problem, handles are a way to make sure that a subsystem has responsibility over that memory directly rather than having it spread out across the program.

This is why neither Odin nor Zig "solves" this problem: solving it at the language level is not necessarily the best option.


That’s essentially what Rust does: it makes using handles to get temporary access to memory that is owned in a single place very attractive, because otherwise the borrow checker will yell at you.


the last thing is safe concurrency. this is the greatest achievement of the rust borrow checker imo. the ownership model means data races are detected at compile time, eliminating a huge class of concurrent programming bugs without forcing a message-passing architecture. deadlocks are still possible if you have two or more mutexes, but that’s a much harder problem


By "manual memory management", I think of explicitly allocating and freeing memory. Assuming you're thinking of the same thing, are you suggesting that the compiler (or other static analysis) could catch all of the issues you listed? If so, the natural question is, why is it necessary to manually insert the allocations and frees, if the compiler knows where they are supposed to go?

This path leads you to something like Rust, which is safe without garbage collection or ARC, but I also wouldn't call it "manual memory management". The trade off they took for this is complexity in the language.


GP's solutions to problems 1 and 2 are the same as Rust's; the difference is problem 3 ("temporal memory safety"). Rust solves this problem with statically analyzed lifetimes, which, in addition to the safety and correctness advantages of compile-time checking, also permit RAII (which answers your "natural question" with "no, it's not necessary to do that"), which makes programming more pleasant. The disadvantage is, as you said, language-level complexity, since it's not enough for the programmer to be personally satisfied that the lifetimes are correct; they have to be represented in the code in a way that satisfies the borrow checker.

GP's proposal is to drop lifetimes and RAII (thereby simplifying the language semantics), make the programmer responsible for allocations and frees (as in C), and solve the temporal memory safety problem by doing additional work at runtime to ensure that use-after-frees reliably crash the process instead of overwriting return addresses or doing other arbitrarily bad things. Whether this is more fun to program in than RAII depends on whether you think it's better to suffer from too much abstraction or too little; people have sharply diverging intuitions on this and it's been a holy war since forever and it probably always will be.

The clearer-cut problem is that such a language would be slower than C or Rust, both because freeing memory involves extra work that C and Rust programs don't have to do, and because the requirement that use-after-frees must reliably behave a specific way inhibits optimization, since the compiler can't assume that use-after-frees don't occur. Also, using memory from a small allocation that's been zeroed out doesn't reliably crash the process unless a pointer in that allocation is dereferenced, and even then, this (contra point 1) would require the compiler to assume that null pointer dereferences can happen and must segfault, which, again, inhibits optimization.


I agree with all of that. How theoretical are the optimization gains from the extra constraints guaranteed by Rust's borrow checker?


Well put!


> Assuming you're thinking of the same thing

Yes.

> are you suggesting that the compiler (or other static analysis) could catch all of the issues you listed?

Yes, the compiler for the first two and the standard library's heap implementation for the third.

> This path leads you to something like Rust

There are stops along this path before you get to Rust. If you just add the 3 things I mention above to a C-like language, it would still be perfectly possible to leak memory. But that isn't a safety problem.

The 3 things don't include a borrow checker. You could still make doubly linked lists and graph data structures.


That makes sense! This isn't the set of tradeoffs that most appeals to me, but I can see your point that it's a different set of tradeoffs that may be a good one.


An additional tradeoff is that many safe data structures and algorithms are impossible to implement in safe Rust, doubly linked lists being the canonical example. However, I feel the juice is very much worth the squeeze, and most Rust developers won't run into these limitations regularly.


Yeah, but I think this is also captured under the complexity tradeoff. It is possible to create a correct safe interface over an unsafe data structure, but it is complex to do so.



