> And deterministic memory management, which is quite rare these days... :)
Memory management should be automatic, either by reference counting or GC.
I think it is a generational thing until mainstream OSs adopt such system programming languages.
There are a few OSs written in such system programming languages, but it only counts when the likes of Apple, Microsoft and Google adopt such languages.
Objective-C with ARC, C++11, C++/CX are already steps in that direction.
>But to be honest, skimming through Rust samples, I find its syntax somewhat noisy. It feels ad-hoc. Is there any document about justification of its syntax elements?
My only issue is the Perl-like prefixes for pointer types. I think it pollutes the ML influence a bit.
On the other hand, I am starting to get used to it.
Rust's memory management is both (relatively) deterministic and automatic, in that it's easy to figure out exactly when objects are being destroyed if you care, but you don't have to do anything yourself to ensure that they're destroyed properly. This is in contrast to C or C++, where you have deterministic destruction but have to clean up things on the heap yourself, or Go or Java, where you can't be sure at all when the garbage collector is going to harvest something.
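A minimal sketch of what that looks like (a hypothetical example, written in current Rust syntax rather than anything taken from the article):

    struct Resource {
        name: String,
    }

    // Drop is Rust's destructor hook; it runs automatically, at a
    // point you can predict just by reading the scopes.
    impl Drop for Resource {
        fn drop(&mut self) {
            println!("dropping {}", self.name);
        }
    }

    fn main() {
        let outer = Resource { name: String::from("outer") };
        {
            let inner = Resource { name: String::from("inner") };
            println!("using {}", inner.name);
            // `inner` is dropped right here, at the end of this block.
        }
        println!("using {}", outer.name);
        // `outer` is dropped here, at the end of main -- no GC pass,
        // and no manual free()/delete either.
    }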
> This is in contrast to C or C++, where you have deterministic destruction but have to clean up things on the heap yourself, or Go or Java, where you can't be sure at all when the garbage collector is going to harvest something.
This is a false belief many C and C++ developers have.
If you use the standard malloc()/free() or new/delete pairs, you are only certain of the point at which the memory is marked as released by the C or C++ runtime library.
At that point the memory can still be marked as in use as far as the OS is concerned, and only be released later, just like with a GC.
This is one of the reasons why HPC makes use of special allocators instead of relying on the standard implementations.
In the end, this is no different than doing performance measurements to optimize the GC behaviour in GC enabled systems languages.
Besides pure memory, there are other types of resources as well, which are deterministically destroyed/closed/freed (automatically, in C++ RAII usage).
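For example, a file handle closed when its owner goes out of scope; the sketch below is in Rust to match the rest of the thread, but it's the same pattern C++ RAII gives you (hypothetical code, path and names made up):

    use std::fs::File;
    use std::io::Write;

    fn write_log(path: &str) -> std::io::Result<()> {
        let mut file = File::create(path)?;
        file.write_all(b"request handled\n")?;
        Ok(())
        // `file` goes out of scope on every exit path of this function,
        // so the OS handle is closed deterministically -- no finalizer,
        // no waiting for a collector.
    }

    fn main() -> std::io::Result<()> {
        write_log("example.log")
    }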
Also, in C/C++ your performance-critical loop/execution is not interrupted by a GC pass harvesting nearby.
AFAIK, one of the main reasons people come up with custom allocators is that the system new/delete (malloc/free) are expensive to call. E.g. it is much faster to pseudo-allocate memory from a pre-allocated static block inside your app.
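Something along these lines, where "allocating" from the pre-allocated block is just a bounds check and an index bump (a rough sketch with a made-up Bump type, in Rust for consistency with the thread):

    // Pre-allocated pool: each "allocation" is a bounds check plus
    // an index increment, with no per-object malloc()/free() round trip.
    struct Bump {
        buf: Vec<u8>,
        next: usize,
    }

    impl Bump {
        fn with_capacity(cap: usize) -> Self {
            Bump { buf: vec![0; cap], next: 0 }
        }

        // Hands out `len` bytes from the pool, or None when exhausted.
        fn alloc(&mut self, len: usize) -> Option<&mut [u8]> {
            if self.next + len > self.buf.len() {
                return None;
            }
            let start = self.next;
            self.next += len;
            Some(&mut self.buf[start..self.next])
        }
    }

    fn main() {
        let mut pool = Bump::with_capacity(1024);
        let a = pool.alloc(16).unwrap();
        a[0] = 42;
        // Everything comes out of the one block allocated up front.
        let b = pool.alloc(32).unwrap();
        b[0] = 7;
    }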
> Besides pure memory, there are other types of resources as well, which are deterministically destroyed/closed/freed (automatically, in C++ RAII usage).
In reference counting languages, the destroy method, callback, whatever it is named, takes care of this.
In GC languages, there is usually scope, defer, try, with, using, or whatever it might be called.
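For the reference-counting case, a sketch using Rust's Rc (hypothetical example; the Connection type is made up): the cleanup code runs exactly when the last reference goes away.

    use std::rc::Rc;

    struct Connection;

    impl Drop for Connection {
        fn drop(&mut self) {
            // Runs exactly when the reference count hits zero.
            println!("connection closed");
        }
    }

    fn main() {
        let first = Rc::new(Connection);
        let second = Rc::clone(&first);   // count: 2
        drop(first);                      // count: 1, nothing printed yet
        println!("still open");
        drop(second);                     // count: 0 -> "connection closed"
    }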
> Also, in C/C++ your performance-critical loop/execution is not interrupted by a GC pass harvesting nearby.
Code in a way that no GC is triggered in those sections; that's quite easy to track down with profilers.
Not able to do that? Just surround the code block with a gc.disable()/gc.enable() pair or similar.
> AFAIK, one of the main reasons people come up with custom allocators is that the system new/delete (malloc/free) are expensive to call. E.g. it is much faster to pseudo-allocate memory from a pre-allocated static block inside your app.
Which, funnily enough, is still slower than in languages with automatic memory management, because there the memory runtime just does a pointer increment when allocating.
>There are a few OSs written in such system programming languages, but it only counts when the likes of Apple, Microsoft and Google adopt such languages.
There are OSs written in just about everything. It only counts when pragmatic, useful OSs are written in a language. Most of those OSs are unusable, slow proofs of concept.
Selection, yes. Natural? Do you somehow imply that technology that "wins" mass adoption is "better"? Or that it gains mass adoption based on rational analysis of its "value"? There are quite a few unnatural forces at play, here...
>Selection, yes. Natural? Do you somehow imply that technology that "wins" mass adoption is "better"?
No, just that it's more fit.
Which is exactly what natural selection in nature implies. An animal that spreads is not "better" -- it's just more fit for its environment.
For an OS, "more fit" can mean: faster, consuming less memory and giving more control over it, usable in more situations and on more hardware, cheaper to run, leveraging existing libraries, etc. It doesn't have to be "better" as in "less prone to crash", "safer", etc.
The parent mentioned Sun experimenting with a Java OS. What a joke that would be, given that Sun's experiments with a similarly demanding application (a Java web browser) ended in utter failure, with a slow-as-molasses outcome.
Sure, it would run better with the resources we have now. But a C OS like Linux also runs much better with the resources we have now -- so the gap remains.
It's not like we have exhausted the need for speed in an OS (on the contrary). It's also not like we have many problems with the core OS crashing or having security issues, apart from MS Windows of old.
In fact, I haven't seen a kernel panic on OS X for like 3-4 years. And my Linux machines don't seem to have any kernel problems either -- except sometimes with the graphics drivers.
So, no, I don't think we're at the point where a GC OS would make sense for actual use.
ARC, an example the parent gives, fine as it might be, doesn't cover all cases in Objective-C. Tons of performance-critical, lower-level stuff still happens in C-land and needs manual resource management; it's just transparent at the Cocoa level.
No, it just means that so far Apple and Microsoft were not that interested in doing that. I don't count the commercial UNIX vendors, except Sun, because those mainly care only about C and C++.
But this is going to change slowly anyway.
Sun did experiment with using Java in the Solaris kernel; they just didn't get that far, due to what happened to them.
Apple first played with GC in Objective-C, and now you have ARC both in Mac OS X and iOS.
Microsoft played with Singularity, only allowed .NET applications on Windows Phone 7.x, and Windows (Phone) 8 uses C++/CX, which relies on a reference-counting runtime (WinRT). Meanwhile, C is officially considered legacy by Microsoft.
C++11 has reference counting libraries and a GC API for compiler implementers.
Android has Dalvik, and native applications are actually shared objects loaded into Dalvik. Contrary to what some people think, the NDK does not allow full application development in C and C++; it only exposes a restricted set of system APIs.
On Sailfish and BlackBerry, Qt makes use of C++ with reference counting, plus JavaScript.
Firefox OS and Chrome OS are anyway only browser-based.
If there is natural selection in this, it might just be that manual memory management is going to be restricted to the same niche as Assembly: interacting with hardware, or special cases of code optimization.
So a systems programming language using either GC or reference counting, with the ability to do manual memory management inside unsafe regions, might eventually be the future.
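Rust already has roughly that shape: memory is managed automatically through ownership (or Rc where needed), while raw pointer work has to sit inside explicitly marked unsafe blocks. A minimal, hypothetical illustration:

    fn main() {
        // Safe by default: the Vec's memory is managed automatically.
        let values = vec![1u32, 2, 3, 4];

        // Manual pointer work is still possible, but only inside an
        // explicitly marked unsafe region.
        let sum = unsafe {
            let mut p = values.as_ptr();
            let mut total = 0u32;
            for _ in 0..values.len() {
                total += *p;
                p = p.add(1);
            }
            total
        };

        println!("sum = {}", sum);
    }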