I'm really excited by Rust because it seems to be one of the most sensible language designs I've seen in a while. Go is nice but has some pretty odd syntax in places. I'm just waiting on Rust to hit 1.0 before I start playing with it more seriously.
Eh, the only real surprise in Go's syntax is the odd ordering of type and variable names. That is, I think, a welcome change for those of us who sometimes have a hard time remembering whether `*a[]` is an array of pointers or a pointer to an array. On the basic syntax side, there wasn't anything I couldn't puzzle out without looking it up.
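For reference, the C declarator confusion being alluded to, next to Go's left-to-right reading (a quick sketch; the variable names are arbitrary):

```cpp
int *a[10];   // a is an array of 10 pointers to int
int (*b)[10]; // b is a pointer to an array of 10 ints

// Go reads left to right instead:
//   var a [10]*int  // array of 10 pointers to int
//   var b *[10]int  // pointer to an array of 10 ints
```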
Rust, by contrast, introduces more new syntax for things like its various sorts of pointers. And there are other things that look new to me too, though that might be due to me not knowing ML. They're there for a good reason, but they add to the learning curve.
And deterministic memory management, which is quite rare these days... :)
But to be honest, skimming through Rust samples, I find its syntax somewhat noisy. It feels ad hoc. Is there any document explaining the rationale behind its syntax choices?
We actually had long, long debates about the syntax, and the current syntax is one that everyone seemed to be OK with, modulo a few compromises here and there.
Usually people who say Rust looks like line noise are concerned about `@` and `~`. These are there for a reason: they're the Rust versions of the smart pointers `shared_ptr` and `unique_ptr` in C++. Unlike in C++, you have to use them: there is no other way to allocate memory. So `shared_ptr` and `unique_ptr` would be all over the codebase in Rust code if we didn't abbreviate them, and making them one character was the easiest way to do that.
I actually think that one of the reasons few people write exception-safe code in C++ is that `shared_ptr` and `unique_ptr` (especially if you don't use `using namespace std;`) are so long that `new` and `delete` end up being more convenient…
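To make the verbosity argument concrete, here is roughly what the two sigils abbreviate, in C++11 terms (the `Node` type is a stand-in, and the analogy is loose: Rust's `@` boxes are task-local rather than atomically reference-counted like `shared_ptr`):

```cpp
#include <memory>

struct Node { int value; };

int main() {
    // Roughly what Rust spells `~Node`: a uniquely owned heap box,
    // freed deterministically when `owned` goes out of scope.
    std::unique_ptr<Node> owned(new Node{42});

    // Roughly what Rust spells `@Node`: a shared, reference-counted box.
    std::shared_ptr<Node> shared = std::make_shared<Node>();
    shared->value = 7;

    // With the long spellings repeated at every allocation site, the
    // pointer types dominate the code; hence the one-character sigils.
    return 0;
}
```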
Unlike most other language designers today, Rust's designers were more concerned with getting the concepts right than the syntax. So the syntax came late in the process, and as far as I know they drew inspiration from various sources. Possibly that is what bugs you.
I don't think there is a single document about syntax choices. It's evolved a lot in the past three years.
> And deterministic memory management, which is quite rare these days... :)
Memory management should be automatic, either by reference counting or GC.
I think it is a generational thing that will last until mainstream OSs adopt that kind of systems programming language.
There are a few OSs written in such systems programming languages, but it only counts when the likes of Apple, Microsoft, and Google adopt them.
Objective-C with ARC, C++11, C++/CX are already steps in that direction.
>But to be honest, skimming through Rust samples, I find its syntax somewhat noisy. It feels ad hoc. Is there any document explaining the rationale behind its syntax choices?
My only issue is the Perl-like prefixes for pointer types. I think they pollute the ML influence a bit.
On the other hand, I am starting to get used to it.
Rust's memory management is both (relatively) deterministic and automatic, in that it's easy to figure out exactly when objects are being destroyed if you care, but you don't have to do anything yourself to ensure that they're destroyed properly. This is in contrast to C or C++, where you have deterministic destruction but have to clean up things on the heap yourself, or Go or Java, where you can't be sure at all when the garbage collector is going to harvest something.
> This is in contrast to C or C++, where you have deterministic destruction but have to clean up things on the heap yourself, or Go or Java, where you can't be sure at all when the garbage collector is going to harvest something.
This is a false belief many C and C++ developers have.
If you use the standard malloc()/free() or new/delete pairs, you are only certain of the point at which the memory is marked as released by the C or C++ runtime library.
At that point the memory can still be marked as in use by the OS and only be released later, just as with a GC.
This is one of the reasons why HPC makes use of special allocators instead of relying on the standard implementations.
In the end, this is no different from doing performance measurements to optimize GC behaviour in GC-enabled systems languages.
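To see the claim above in action, a glibc-specific sketch (non-portable: `malloc_trim` is a glibc extension, and whether pages stay resident after `free()` depends on allocator tuning):

```cpp
#include <cstdlib>
#include <malloc.h>  // glibc-specific: malloc_trim

int main() {
    // Allocate and free lots of small blocks. After the frees, the
    // pages often remain mapped in the process (watch RSS in top):
    // the allocator caches them for future requests.
    enum { N = 1 << 20 };
    static void* blocks[N];
    for (int i = 0; i < N; ++i) blocks[i] = std::malloc(64);
    for (int i = 0; i < N; ++i) std::free(blocks[i]);

    // Explicitly ask glibc to hand free heap pages back to the OS.
    // Portable code cannot assume this function exists.
    malloc_trim(0);
    return 0;
}
```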
Besides pure memory, there are other types of resources as well, which are deterministically destroyed/closed/freed (automatically, in C++ RAII usage).
Also, in C/C++ your performance-critical loop is not interrupted by a GC pass happening nearby.
AFAIK, one of the main reasons people come up with custom allocators is that the system new/delete (malloc/free) are expensive to call. E.g. it is much faster to pseudo-allocate memory from a pre-allocated block inside your app, as the sketch below shows.
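A minimal sketch of that kind of pseudo-allocation: a toy bump/arena allocator (the names `Arena` and `alloc` are made up for illustration; a real one would handle per-allocation alignment, growth, and thread safety). Note that each allocation is essentially a pointer increment, which is the same trick the reply below credits to GC runtimes:

```cpp
#include <cstddef>

// Toy bump allocator: carve allocations out of one pre-allocated
// buffer by advancing an offset. Allocation is a pointer increment;
// everything is "freed" at once by resetting the offset.
class Arena {
    alignas(std::max_align_t) unsigned char buf_[1 << 20]; // 1 MiB pool
    std::size_t off_ = 0;
public:
    void* alloc(std::size_t n) {
        // Round the size up so every allocation stays maximally aligned.
        const std::size_t a = alignof(std::max_align_t);
        n = (n + a - 1) & ~(a - 1);
        if (off_ + n > sizeof(buf_)) return nullptr; // pool exhausted
        void* p = buf_ + off_;
        off_ += n;
        return p;
    }
    void reset() { off_ = 0; } // "frees" everything in O(1)
};

int main() {
    static Arena arena; // static so the 1 MiB pool isn't on the stack
    int* xs = static_cast<int*>(arena.alloc(100 * sizeof(int)));
    for (int i = 0; xs && i < 100; ++i) xs[i] = i;
    arena.reset();      // all arena memory reusable again
    return 0;
}
```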
> Besides pure memory, there are other types of resources as well, which are deterministically destroyed/closed/freed (automatically, in C++ RAII usage).
In reference counting languages, the destroy method, callback, whatever it is named, takes care of this.
In GC languages, there is usually scope, defer, try, with, using, or whatever it might be called.
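To make the reference-counting point concrete in C++ terms: `shared_ptr` can carry a custom deleter, so a non-memory resource is released deterministically the moment the last reference drops (a small sketch; the file name is arbitrary):

```cpp
#include <cstdio>
#include <memory>

int main() {
    {
        // The deleter runs the instant the last shared_ptr copy dies,
        // so the file is closed deterministically, no GC involved.
        std::shared_ptr<std::FILE> f(
            std::fopen("out.txt", "w"),
            [](std::FILE* p) { if (p) std::fclose(p); });
        if (f) std::fprintf(f.get(), "hello\n");
    } // <- last reference dropped here; fclose has already run
    return 0;
}
```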
> Also, in C/C++ your performance-critical loop is not interrupted by a GC pass happening nearby.
Code in a way that no GC is triggered in those sections; that's quite easy to track down with profilers.
Not able to do that? Just surround the code block with a gc.disable()/gc.enable() or similar.
> AFAIK, one of the main reasons people come up with custom allocators is that the system new/delete (malloc/free) are expensive to call. E.g. it is much faster to pseudo-allocate memory from a pre-allocated block inside your app.
Which, funnily enough, is still slower than in languages with automatic memory management, because there the memory runtime just does a pointer increment when allocating.
>There are a few OSs written in such systems programming languages, but it only counts when the likes of Apple, Microsoft, and Google adopt them.
There are a few OSs written in everything. It only counts when pragmatic, useful OSs are written in a language. Most of those OSs are unusable, slow proofs of concept.
Selection, yes. Natural? Do you somehow imply that technology that "wins" mass adoption is "better"? Or that it gains mass adoption based on rational analysis of its "value"? There are quite a few unnatural forces at play here...
>Selection, yes. Natural? Do you somehow imply that technology that "wins" mass adoption is "better"?
No, just that it's more fit.
Which is the exact same thing natural selection in nature implies. An animal that spreads is not "better" -- it's just more fit for its environment.
For an OS "more fit" can mean: faster, consuming less memory and with more control over it, usable in more situations and more hardware, cheaper to run, leveraging existing libraries, etc. It doesn't have to be "better" as in "less prone to crash", "safer" etc.
The parent mentioned Sun experimenting with a Java OS. What a joke that would be, given that Sun's experiments with a similarly demanding application (a Java web browser) ended in utter failure, with a slow-as-molasses outcome.
Sure, it would run better with the resources we have now. But a C OS like Linux also runs much better with the resources we have now -- so the gap remains.
It's not like we have exhausted the need for speed in an OS (on the contrary). It's also not like, apart from MS Windows of old, we have many problems with the core OS crashing or having security issues.
In fact, I haven't seen a kernel panic on OS X for like 3-4 years. And my Linux machines don't seem to have any kernel problems either -- except sometimes with the graphics drivers.
So, no, I don't think we're at the point where a GC OS would make sense for actual use.
ARC, an example the parent gives, fine as it might be, doesn't cover all cases in Objective-C. Tons of performance-critical, lower-level stuff still happens in C-land and needs manual resource management; it's just transparent at the Cocoa level.
No, it just means that so far Apple and Microsoft were not that interested in doing that. I don't count the commercial UNIX vendors, except Sun, because those mainly care about C and C++.
But this is going to change slowly anyway.
Sun did experiment with using Java in the Solaris kernel; they just didn't get that far due to what happened to them.
Apple first played with GC in Objective-C, and now you have ARC both in Mac OS X and iOS.
Microsoft played with Singularity, only allowed .NET applications on Windows Phone 7.x, and Windows (Phone) 8 uses C++/CX, which relies on a reference-counting runtime (WinRT). Meanwhile, C is officially considered legacy by Microsoft.
C++11 has reference counting libraries and a GC API for compiler implementers.
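A tiny sketch of both pieces (the refcount comments assume no other copies exist):

```cpp
#include <memory>

int main() {
    // Reference counting in the C++11 standard library.
    auto sp = std::make_shared<int>(1); // refcount: 1
    {
        auto sp2 = sp;                  // refcount: 2
        std::weak_ptr<int> wp = sp;     // non-owning; useful to break cycles
        if (auto locked = wp.lock()) {  // refcount: 3 while `locked` lives
            *locked += 1;
        }
    }                                   // sp2 gone: refcount back to 1

    // The GC hooks for implementers also live in <memory>:
    // std::declare_reachable / std::undeclare_reachable and
    // std::get_pointer_safety(). Few implementations do anything with
    // them; they exist so that a conforming collector could be added.
    return 0;
}
```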
Android has Dalvik, and native applications are actually shared objects loaded into Dalvik. Contrary to what some people think, the NDK does not allow full application development in C and C++; it only exposes a few restricted system APIs.
On Sailfish and BlackBerry, Qt makes use of C++ with reference counting, plus JavaScript.
FirefoxOS and ChromeOS are anyway only browser-based.
If there is natural selection in this, it might just be that manual memory management is going to be restricted to the same niche as Assembly: interacting with hardware, or special cases of code optimization.
So a systems programming language using either GC or reference counting, with the ability of doing manual memory management inside unsafe regions, might eventually be the future.
>You're complaining about Go syntax, which is one of the most readable languages out there but you like rust which looks like line noise? Sometimes I have a hard time convincing myself people don't post troll comments on HN.
And then, as an example of "bad Rust syntax" you link to the "lexer.rs"?
As if a lexer in any language is a good example of its everyday syntax?
Because the top part of the source he linked to is just struct definitions, which is how CSS is structured anyway. Would struct definitions in most common languages, like modern C or Go, look any different? (Including an `{}` object definition in JS.)
That file is very ugly (it's one of the oldest files in the repository) and does not reflect the current programming style. The indentation is all over the place, the type names are not CamelCased, there isn't enough use of methods—`str::len()` is ugly compared to `"foo".len()`—and so on.
(Of course, this isn't an excuse: we sorely need to refactor the compiler.)