Rust's memory management is both (relatively) deterministic and automatic, in that it's easy to figure out exactly when objects are being destroyed if you care, but you don't have to do anything yourself to ensure that they're destroyed properly. This is in contrast to C or C++, where you have deterministic destruction but you have to clean up things on the heap yourself, or Go or Java, where you can't be sure at all when the garbage collector is going to harvest something.
> This is in contrast to C or C++, where you have deterministic destruction but you have to clean up things on the heap yourself, or Go or Java, where you can't be sure at all when the garbage collector is going to harvest something.
This is a false belief many C and C++ developers have.
If you use the standard malloc()/free() or new/delete pairs, you only know the point at which the memory is marked as released by the C or C++ runtime library.
At that point the memory can still be marked as in use by the OS and may only be released back at a later point, just like with a GC.
This is one of the reasons why HPC makes use of special allocators instead of relying on the standard implementations.
In the end, this is no different from doing performance measurements to optimize the GC behaviour in GC-enabled systems languages.
Besides pure memory, there are other types of resources as well, which are deterministically destroyed/closed/freed (automatically, in C++ RAII usage).
Also, in C/C++ your performance-critical loop/execution is not interrupted by a GC pass happening nearby.
AFAIK, one of the main reasons people come up with custom allocators is that the system new/delete (malloc/free) are expensive to call. E.g. it is much faster to pseudo-allocate memory, inside your app, from a statically pre-allocated block.
> Besides pure memory, there are other types of resources as well, which are deterministically destroyed/closed/freed (automatically, in C++ RAII usage).
In reference-counting languages, the destroy method, callback, or whatever it is named, takes care of this.
In GC languages, there is usually scope, defer, try, with, using, or whatever it might be called.
> Also, in C/C++ your performance critical loop/execution is not interrupted by harvesting GC passing nearby.
Code in a way that no GC is triggered in those sections; this is quite easy to track down with a profiler.
Not able to do that? Just surround the code block with gc.disable()/gc.enable() or similar.
> AFAIK, one of the main reasons people come up with custom allocators is that the system new/delete (malloc/free) are expensive to call. E.g. it is much faster to pseudo-allocate memory, inside your app, from a statically pre-allocated block.
Which, funnily enough, is slower than in languages with automatic memory management, because there the memory runtime just does a pointer increment when allocating.