Four Years of Rust (rust-lang.org)
271 points by steveklabnik on May 15, 2019 | 194 comments



I love watching Rust progress. But the #1 thing I'm watching is the RLS and vscode plugin. Maybe I'm weird, but I work with so many languages that having to manage multiple editors is a non-starter. So any time I have a little personal project that could be done in Rust (for learning), I end up using Go or Python instead, because the vscode support is still quite buggy.

Naturally others will say you don't need IDE-like support and that's fine. It's just the way I enjoy learning. Autocomplete is an amazing learning tool.


We're embarking on a major project to re-architect the compiler and write a next-generation RLS. It's on this year's roadmap. See here for more: https://ferrous-systems.com/blog/rust-analyzer-2019/


> It has high latency: because build is costly, it is invoked at most once per x seconds, which means you either have to wait to get the results, or get stale results.

I find this is the worst part right now. Latency is everything if it is tied to actual typing in of code.

Excited to see this get fixed, or at least improved!


Maybe we can expand acronyms the first time they are used? I am guessing RLS is Rust Language Server? The URL you cite also does not have this spelled out.


Since no one else has answered you yet: RLS is the Rust Language Server.


I can recommend https://tabnine.com/ with https://github.com/rust-analyzer/rust-analyzer .

While rust-analyzer doesn't always resolve types (unlike the JetBrains Rust plugin, which always does, but sometimes incorrectly), it still makes writing code faster, and it doesn't have RLS's quirks (which come from the fact that RLS has to compile the code before offering suggestions, so the code has to be correct at some point, which is not good if you are prototyping).


It's a lot smoother than stable RLS, but type resolution is practically nonexistent for futures-based code because trait support is still lacking. Definitely excited to check back once that's resolved, though.


Too bad TabNine is freemium :-/


When working with Rust code you get all paid features for free: https://tabnine.com/faq#language


I do not understand what people like about vscode. Even if I ignore the insane resource consumption, it is at most an average IDE (the only real benefit over the Sublime "text editor" is the integrated debugger). It is the preferred editor at work, so I am forced to work with it, and these are the most serious issues I have:

1. There is only one side panel, so I can't see the outline, test results, and project files at the same time, as I am used to seeing on widescreen monitors in other IDEs.

2. Language servers are still not comparable to the solutions offered by Java IDEs, and in the case of C++ they are worse than anything else I have used.

3. The Python extension constantly forgets and re-finds unit tests. There is little support for unit tests in other languages.

4. The official C++ extension, despite being completely useless, consumes several gigabytes of space for "indexed" files (I wonder if it is deliberately this bad so as not to hurt sales of Visual Studio). I also tried clangd, which is better, but there is still a lot of work to be done before it is useful.

I like Sublime's Rust support for really small projects and Eclipse's support for larger projects (which is not ideal), but I have not coded anything serious in Rust yet, so I do not know if there is a good IDE.


For C++ try the cquery extension. It's pretty good. It provided IDE support (full code completion and navigation) for a project that wasn't supported by Visual Studio or any other IDE (an embedded project based on gcc and a hacky makefile).


It works well enough for the 6 languages I use weekly. I'm honestly too busy doing all the other engineering tasks that don't happen inside vscode, and I don't live in it that much each week, to really care that it's not perfect.


> 4. Official C++ extension despite being completely useless consumes several gigabytes of space for "indexed" files (I wonder if it is so bad to not hurt sales of Visual Studio).

I chuckled at this, because the implementation is actually shared between the official C++ extension and Visual Studio. That extra space usage is likely the recent addition of automatic PCH generation (to VS Code, VS has done that for a long time).


I bought and use JetBrains CLion (C++ IDE) precisely for these reasons. I only use VSCode for HTML/JS/CSS and as a markdown editor. For plain editing I just use VIM. For medium-scale projects, VSCode tends to take enormous amounts of resources.


Vscode has great TypeScript and JS support, arguably the best of any editor. I still use JetBrains for other languages; it's really hit or miss.


Before Vim I had to look for an editor every time I learned a new language. Now I just use Vim. For everything.


Try :q!

On a more serious note: I have a long-term goal to learn either Vim or Emacs for various reasons, the above mentioned being one of them.

It's hard to feel so spectacularly unproductive as I do when I try them out though, so it's easy to give up.

What's the best way to go about learning Vim or Emacs?


I don't have any reference material/tutorials, but my major breakthrough with Vim was when I understood that normal mode is the norm. Anytime you're not entering text, be in normal mode. Remap jk or jj to Escape. Get in the habit of typing it whenever you're done adding text. Use hjkl for movement instead of the arrow keys. Learn the f, t, and o commands.

http://cloudhead.io/2010/04/24/staying-the-hell-out-of-inser...


“Normal mode” it’s right there in the name :)


This is going further & further OT, but hey - I didn't start it!

What you need is a motivating factor. Vim is almost guaranteed to be there on any Linux/BSD/Mac shell environment, which motivates people in those environments to learn it even if they learn no other editor.

Beyond such environments, like on any desktop GUI, what'd motivate you to try GUI Vim? For Emacs, I can say the integration between any set of tools you can imagine could be a motivation.

The reason Emacs users don't particularly appreciate IDEs is that Emacs' integration of the code editor, compiler output, and debugger control makes the development cycle extremely fast. Add any one more use to that. File explorer? Emacs dired. Process monitor? Emacs proced. Interactive shells? Emacs integrates the Unix shell, Python sessions, anything interactive actually.

You will get all your work done in Emacs, just using editor operations! Go to the source code for a compiler error? Hit "Enter" or click on the error in the compile window. Go to the source code being debugged? Stepping in the debugger automatically opens the file at the right line number. Found the bug? The code is already there ready to be fixed; switch to the compile window & compile. Find a file in a directory? Do a text search in its dired window. Rename all files beginning with foo to begin with bar instead? Do a search-replace.

Emacs also scales way better than any desktop editor or Vim, whether it is opening GB-sized log files, or binary files viewed/edited in hexadecimal. Before long, you start managing your to-do's using org-mode, and doing all your Git operations in the insanely featureful Magit interface.

The list gets longer and longer every release. And many Linux shell environments support the same basic keyboard short-cuts (Ctrl-A to go to beginning of line, Ctrl-E to go to the end, Ctrl-R to search in the shell history), and also provide light-weight Emacs clones like 'zile', 'mg' for editing-only use in the terminal.


I would like to see a UI designer or UML layout written in elisp, including support for third party components.


Emacs is physically destructive. The chording will destroy your hands. This is why emacs enthusiasts need special keyboards or foot pedals.

https://www.emacswiki.org/emacs/FootSwitches http://ergoemacs.org/emacs/emacs_best_keyboard.html http://ergoemacs.org/emacs/emacs_pinky.html

Vim is great, but after 20 years of using it, I'm moving mostly to Jetbrains stack since flying around the code is faster and syntastic (a vim plugin) is slow as shit on large projects. Thankfully the vim bindings in Intellij are pretty darn faithful.


Maybe try out my vi quickstart tutorial:

https://gumroad.com/l/vi_quick/

vi is the predecessor of, and a subset of, vim, and is probably available on even more platforms than vim is. So knowledge of that single editor (vi) enables you to work on any of those platforms. And you can always progress to (and through - it is big) vim later, at your own pace.

I wrote it at the request of two Windows system administrator friends who were given additional charge of some Unix systems. They later told me that it helped them to quickly start using vi to edit text files on Unix.


Another nice thing about learning vi first: no plugins. There are so many plugins that make Vim _worse_. In my experience most novice vimmers end up with a huge vimrc, which over time is pared down to something pretty small as they learn how Vim actually works.


Yes, good point. That's partly why I mentioned above that vim is big. And there is a tendency of some devs (not just beginners) to load many plugins (for any tool, not just vim) before learning the base tool well. Counterproductive, IMO, and it leads to cognitive overload. And at least in the case of vi/vim you can compose commands, so even without using things like macros or plugins you can achieve a lot. So master that first. It will take you a long way.


Clojure for the Brave and True has a really nice Emacs primer


Not sure about emacs but vim basically requires a period of fighting through how horribly unproductive you will initially be, give it a week or two and you'll be ok.


Doesn't have to be so. See my other comment in this thread:

https://news.ycombinator.com/item?id=19923720


Nice tips, thanks!


Welcome :)


I learned by playing vim adventures, but that might be too kiddie for you.

Check out this old HN post[1]

[1] https://news.ycombinator.com/item?id=213054


> What's there best way to go about learning Vim or Emacs?

Sitting at a Unix terminal at the university and being forced (you can't install anything) to learn one of the two to get anything done. ;)


`vimtutor` It's installed everywhere vim is.


Well, there are two ways to interpret that :)


If the only thing you do when programming is type text, sure; my IDEs do a little more than that.


If you're open to a different IDE, IntelliJ IDEA or CLion with the Rust plugin is a really great combo.

IDEA + Rust is free, minus debugging.


> Rust+WASM went from an experiment to a usable product, making rustc the first compiler with focus on supporting WASM

This may be the most interesting use case of them all. WebAssembly is fast, but it's also not fun to write. There are languages like AssemblyScript and Lys that will let you write WebAssembly in what appears to be a higher-level language, but you're still schlepping bytes around and must build the entire universe yourself.

Rust offers an alternative: a high-level language with readily accessible tooling to easily switch compile targets from native to wasm.

Given that so many JavaScript frameworks now require a compile step, the fact that Rust is compiled is barely a disadvantage.

Not only that, but any general-purpose library code you write to support your web application can be redistributed as native binaries for use in C/C++/Python/Ruby/etc. projects.

Although the original idea for such a universal web/native target, Java, died from self-inflicted wounds, the underlying need for a way to deploy code across all platforms, native and web, never went away. Rust+wasm is a solution that works today.

There's something else as well. WebAssembly is a sandboxed runtime, meaning it runs in its own little environment. Anything it knows about the outside is an opt-in add-on.

This makes wasm an interesting target for systems in which the JVM is currently used. For example, database extension environments and web servers.


Java wasn't the original idea; UCSD Pascal's P-Code was, among several others.


Curious: if you rip out the js runtime in favor of pure webassembly, does that have a significant impact on electron viability?

Obviously only once webassembly can do the full job


WebAssembly can't touch the DOM without JavaScript. Does that change your question?


... for now. We've built the Rust/Wasm story such that once wasm gains that ability, your code gets upgraded internally, with no API changes. You're already ready for the future!


> WebAssembly can't touch the DOM without JavaScript.

There's still hope for ad-blocking, then.


Not really, there is Canvas & WebGL, and DOM support is coming anyway.


I'm curious what you think those wounds were.


I wonder how many Rust users really want linear types and borrow checking, instead of the other stuff Rust brings to the table: modern language design, large and friendly community, nice type system, native compilation, good package ecosystem.

Personally I would prefer a well-designed GC'd language with a strong type system and native compilation over Rust, unless I was doing something with specific demands on parallelism or embedded software.


There are already several decent GC languages. If Rust was just another GC language, I don't think it would have attracted such a strong community required to develop all of these things and break out of obscurity. It'd be another Go with generics, or Elixir-native, or Swift with uglier syntax.

In the no-GC no-runtime niche for a very long time there was nothing viable besides C and C++. For programmers who want C-like level of control, performance and low-level compatibility there are very few alternatives, and if you want non-crash-prone parallelism on top of that, there's nothing but Rust.


I don't really understand why people would want a GC over deterministic destruction like in Rust. GC is neither necessary nor sufficient (https://ruudvanasseldonk.com/2015/10/06/neither-necessary-no...). The ownership model is in practice much more convenient, precisely because objects can act like resources and implement their own cleanup logic, and the business logic can rely on it. It's a great improvement in any imperative PL.

I have (re-)implemented a couple of real-life projects, and each time Rust was a much better high-level language than e.g. Java, Python or JS. Borrow checker fights are a non-issue in practice: just avoid references, pass messages as copies, and when you have to share data, use `Arc` and copy-on-write (or just add a `Mutex`).

There's a reason why Rust has been the most loved PL for 4 years in a row. Once you try it and get past the initial adjustments (mostly: what-owns-what relationships), everything clicks, and I guarantee that you won't miss the GC at all.
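
To make the "use `Arc` and add a `Mutex`" advice concrete, here is a minimal sketch (the function name and numbers are made up for illustration) of sharing mutable state across threads without any borrow-checker fights:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Several worker threads increment a shared counter through Arc<Mutex<_>>.
fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter); // each thread owns its own handle
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1; // guard unlocks when dropped
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000); // no data race possible
}
```

Note that the mutex guard is itself an example of deterministic destruction: it unlocks when it goes out of scope, so there is no way to forget to release the lock.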


Because having a GC doesn't mean throwing away deterministic destruction.

Many GC enabled system languages do offer both mechanisms.

It is a matter of enjoying the productivity it offers, while having the tools to fine-tune performance when it actually matters.


> Many GC enabled system languages do offer both mechanisms.

?

Can you give me an example? Java's `finalize` is not deterministic destruction. D's scope guards (or Go's `defer`) are also not like destructors, because the calling code has to take care of them.

> It is a matter of enjoying productivity it offers,

There's hardly any productivity gain, and it is offset by the productivity gained from reliable and hassle-free resource management.


Not exactly a systems language, but Python has deterministic destruction (due to the use of reference counting) of non-cyclic data structures.

Edit: Oh, and if you don’t think that’s enough, note that Rust doesn’t guarantee destruction ever occurs in the case of reference cycles (“considered safe”): https://doc.rust-lang.org/book/ch15-06-reference-cycles.html
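
A minimal sketch of that caveat (the `Node` type here is invented for illustration): two `Rc`s pointing at each other keep both reference counts above zero, so neither destructor ever runs:

```rust
use std::cell::RefCell;
use std::rc::Rc;
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many Node destructors actually ran.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

impl Drop for Node {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn leak_cycle() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b)); // a -> b -> a: a cycle
    // a and b go out of scope here, but each node still has refcount 1,
    // so Drop never runs and the memory leaks (which Rust considers safe).
}

fn main() {
    leak_cycle();
    assert_eq!(DROPS.load(Ordering::SeqCst), 0); // no destructor ran
}
```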


> Python has deterministic destruction

Not quite.

https://docs.python.org/3/reference/datamodel.html#object.__...

> It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.


Oh, interesting. But from what I read it's only an implementation detail of CPython?

With `with` I can't e.g. pass an open file to another function to e.g. be closed there, etc.

Example:

https://play.rust-lang.org/?version=stable&mode=debug&editio...

closing `File` will happen at the end of `foo` or `foo2`.
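
Since the playground link is truncated, here is a minimal stand-in sketch of the same idea (the `Resource` type is a hypothetical substitute for `File`, with an event log so the drop order is visible): ownership moves into the callee, so the destructor runs at the end of that function, not in the caller.

```rust
use std::cell::RefCell;
use std::rc::Rc;

type Log = Rc<RefCell<Vec<String>>>;

// Stand-in for a resource like std::fs::File: records when it is dropped.
struct Resource {
    name: &'static str,
    log: Log,
}

impl Drop for Resource {
    fn drop(&mut self) {
        self.log.borrow_mut().push(format!("closed {}", self.name));
    }
}

// Ownership moves in, so the "file" is closed at the end of this function.
fn consume(r: Resource) {
    r.log.borrow_mut().push(format!("used {}", r.name));
} // `r` dropped here, deterministically

fn demo() -> Vec<String> {
    let log: Log = Rc::new(RefCell::new(Vec::new()));
    let file = Resource { name: "data.txt", log: Rc::clone(&log) };
    consume(file); // caller gives up ownership; cannot use `file` afterwards
    log.borrow_mut().push("back in caller".to_string());
    Rc::try_unwrap(log).unwrap().into_inner()
}

fn main() {
    // "used" and "closed" both happen inside consume, before control returns.
    assert_eq!(demo(), vec!["used data.txt", "closed data.txt", "back in caller"]);
}
```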


That's probably a tricky question, since CPython is also essentially the spec. PyPy talks a bit about this: http://doc.pypy.org/en/latest/cpython_differences.html#diffe...

That's true, since Python has reflection it can't easily optimize that case, whereas that's a powerful benefit of having a linear type system for tracking ownership like Swift or Rust. But early freeing (which Rust has and Python does not) is slightly different from deterministic freeing (which both have) is slightly different from guaranteed freeing (which CPython has and Rust does not).


D structs have destructors, for example.

Dealing with the borrow checker in cases that are still being worked on (NLL 2, GUI callbacks), using unsafe for graphs or dealing with use-after-free array indexes as the alternative workaround, unsafe Drop implementations: it doesn't look hassle-free to me.


D's destructors are not deterministic, IIRC.

> using unsafe for graphs or dealing with use-after-free array indexes for the alternative workaround, unsafe Drop implementations, doesn't look hassle free to me.

Valid points, but these are rare problems in idiomatic Rust.

Specifically, avoiding graphs by structuring code into a tree of ownership has greatly improved the architecture of my programs, to the point that I now do it in all the programming languages I use.

Rust made me realize how many problems are created by objects carelessly cross-referencing each other (especially in OOP), "because there's a GC, so why not".

It's similar with arrays of indexes: a great improvement in both usability and reliability. At least every use-after-free in that case is checked and fails deterministically. It makes my data with graph-like or relational properties resemble a relational database.
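
A tiny sketch of the indexes-instead-of-pointers pattern (the `Graph` type here is made up for illustration): nodes live in a `Vec`, edges are plain `usize` indices, so there are no lifetime annotations anywhere, and a stale index fails with a checked panic instead of becoming a dangling pointer.

```rust
// Nodes are stored by value; edges refer to nodes by index, not by reference.
struct Graph {
    nodes: Vec<&'static str>,
    edges: Vec<(usize, usize)>,
}

impl Graph {
    fn neighbors(&self, from: usize) -> Vec<&'static str> {
        self.edges
            .iter()
            .filter(|&&(a, _)| a == from)
            .map(|&(_, b)| self.nodes[b]) // out-of-range index panics, checked
            .collect()
    }
}

fn demo() -> Vec<&'static str> {
    let g = Graph {
        nodes: vec!["a", "b", "c"],
        edges: vec![(0, 1), (0, 2), (1, 2)],
    };
    g.neighbors(0)
}

fn main() {
    assert_eq!(demo(), vec!["b", "c"]);
}
```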

For problems with lifetimes in destruction, all I have needed so far is https://crates.io/crates/dangerous_option . I actually wish it were part of the language (a nullable type, panicking at runtime).


> D's destructors are not deterministic, IIRC.

Yes they are, they get called at the end of scope.

> Valid points, but these are rare problems in idiomatic Rust.

Being rare doesn't mean they aren't there, and anyone trying to write UI-related code will deal with them on a regular basis.


https://forum.dlang.org/thread/tsfgbmakzcrxwqreheiq@forum.dl...

Can you clarify what I am misunderstanding here? In a couple of places online I've found that you can't depend on the dtor being called.


As pjmlp said below, GC doesn't preclude deterministic destruction, so it isn't about resources in general; it is largely about memory. Also, typical programs in any language have quite a bit of stack-allocatable data, and C/C++/D/Go/Rust do support stack allocation, so the memory heap isn't involved everywhere.

Wherever the memory heap is involved, if the system isn't memory-starved, it can be argued that deterministic free-ing is a step too far; maybe you don't want time to be spent putting that long-ish list of heap-allocated values on the free list when exiting a function on the critical path. There is something to be said for lazy operations in such contexts, which the GC can provide (off the critical path - using GC.disable() in D) - a lot easier than a "region-based" memory management solution in C/C++/Rust. I don't know of a synthetic benchmark result to tilt the argument either way.

At a system level, it is slightly dismaying that the "back-pressure" required to trigger lazy operations is only present for memory, and only within a Unix process; when it comes to limits on open file handles or sockets or overall OS memory usage, there is no back-pressure to reclaim them - indeed no mechanism to lazily schedule files/sockets for closure.


> indeed no mechanism to lazily schedule files/sockets for closure.

It's not only about files and sockets. It's about your core abstractions: thread pools, channels, flushing streams, events and other stuff, unlocking mutexes, auto-validating guards, etc.

I have production or semi-production experience with code written in C, D, C++, Python, Go, Java, Node, Scala, Rust and others (please excuse argument from authority, but that's all I have right now) so I can tell the difference and I think ...

... people that haven't worked with Rust long enough greatly under-appreciate the number of things that benefit from deterministic, resource-like management: not only on the system resource usage / performance side, but simply in day-to-day reliability and writing simple bug-free code with ease.


I wasn't talking about necessarily temporal operations like mutex ops; of course, they should be deterministically done - it is a beautiful implementation detail that it falls out of the principles of Rust, without explicit compiler support and/or "checked-delete" as is required in most other languages.

I meant lazy operations for avoiding unnecessarily making things like memory deallocation temporal. Unless you have a strict memory budget (and you might additionally need to have mitigation for memory fragmentation), freeing whenever GC deems it fit, or streams flushing whenever the OS/library deems it fit, can't be worse - but could potentially be better - for performance than being done only at points decided arbitrarily by the language and the program structure. It is simply easier to disable GC on critical paths.

But as you say, you are not (only) talking about performance. You are talking about determinism for predictability. I hope you are not implying predictability across various threads/processes that make up the system, only within 1 thread/process where that predictability leads to reliability.

Reliability across threads/processes needs strategy because failures are inevitable (yes, I have drunk the Erlang kool-aid too, among other ones). While I have no doubt that Rust's design principles would support developing such strategies, I do wonder as to what scale this would work up to ...

... as you say, I don't know the size of projects Rust has scaled to, I just know enough about Rust itself to balk at its complexity (and I am a C++ person!). Maybe we are just talking from different viewpoints. You, having done a variety of non-trivial production code in a plethora of languages, plump for Rust. But I honestly wonder about the size & longevity of your C/C++ semi-production code - semi, because it is C/C++ ;-) - I have worked on large C++ codebases for long-lived products, and I somehow can't see another complex language solving more problems net-net.


> I meant lazy operations for avoiding unnecessarily making things like memory deallocation temporal.

Yes, I wasn't talking about real-time/performance considerations. The root post was along the lines of "Rust is nice, but I like the productivity of a GC". I'm saying deterministic destruction is more productive for the developer, and Rust makes a great high-level language.

Regarding things like non-blocking deallocation: you can still do it in Rust if you so desire. In a destructor, enforced by the type, you can use nice abstractions and e.g. defer deallocations or draw memory from an arena. I think eventually Rust will just have a more or less standardized type for GCs and graphs with explicit roots, etc.

But sure, it is all still very new here, so if one is writing a real-time trading system or an OS, maybe they should stick to C/C++.

> But I honestly wonder about the size & longevity of your C/C++ semi-production code - semi, because it is C/C++

In C I worked on code powering embedded chips (mostly radio communication, kernel modules) and e.g. a real-time hypervisor powering some high-end cars. In C++, e.g. some high-perf data management stuff (data deduplication, encryption, etc.) at a SV unicorn. I would actually describe myself as a C (as opposed to C++) person, but I know my way around modern C++ quite well; I just really don't like working with it.

> I somehow can't see another complex language solving more problems net-net.

I don't think Rust is actually that complex. It is definitely bigger than C, but I think it's much smaller than C++. And it is sane. Things play well together in it. I think I could get a new dev productive with Rust in a week, or up to a month, and as long as they avoid `unsafe` they will produce decent code with ease. In C++ you're either an "expert" or you're debugging segfaults. ;)


I truly was curious about your C/C++ code - good for you that it was less C++ and more C! I don't know any masochists, so I don't know anybody who likes to work in C++ (even with just composition and generics), or even any reluctant "expert". I get away from segfaults because I am on a server and can get away with more static copying of data (it pales in comparison to what the alternatives offer).

I can see why you think Rust is tractable; you have worked on fairly complex stuff like automobile base software (AutoSAR, was it?). I am probably not as good at it as you are, so the cognitive load of designing at that scale with borrow-checking seems prohibitive. I hope there is a way to slice the problem which makes for less cognitive load.


> AutoSAR, was it?

Just a real-time hypervisor for Tegra underlying Nvidia Automotive platform. MISRA C, ISO 26262. Bleh. :D

> so the cognitive load of designing at that scale with borrow-checking seems prohibitive

I have multiple 1k - 10k line Rust projects on github, and I hardly ever deal or think about lifetimes. I just randomly opened a project of mine on github, opening some major files and there's literally 0 explicit lifetime annotations anywhere.

People hang up on lifetimes because they are unfamiliar, but for someone who gains some understanding and accepts the "ways of Rust" (mostly avoiding cyclical graphs, using IDs instead of pointers, etc.), they are not an issue. Only when designing some weird zero-cost abstractions in performance-critical APIs, trying to avoid any copying, does one have to annotate lifetimes in non-trivial ways. Usually you can just ask someone on IRC and they will give you an answer. :D

The mental overhead is actually way lower than in C (or most languages for that matter). After a while you get used to relying on compiler to check the mundane stuff and focus only on the higher level problems. It's quite relieving actually.


MISRA C? Cool!

About the design model for programming in Rust, I guess only making the time to try writing something in Rust can answer the question. It is interesting how the steepness of the Rust learning curve could actually be a hook to get people to try it, which is all any well-designed language would need to gain adherents.

Back to critical systems programming, does MISRA C have a "certified" compiler like CompCert C or something? If so, Rust usage in such a niche must be a long way away - even if one could "certify" the Rust compiler code, how would one "certify" the million+ lines of C++ in LLVM!


I think mainly because mastering new concepts increases the barrier to entry, overhead and scope, which in turn affects creativity. There are many brilliant programming languages, used by brilliant programmers, yet most really useful software tends to be produced with whatever is at hand. It would be awesome if Rust could overcome that, but I don't currently see how.


In the no-GC no-runtime niche, you probably need brilliant programmers anyway. Most of them are banging their heads against the C++ wall right now; they would love to get back some of their lost creativity via mastering of new concepts (Rust borrow checker, or Pony's deny capabilities - https://pony-lang.io ).


In my experience C++ programmers aren't necessarily great programmers as such. Because low levels things usually requires domain knowledge and experience more so than programming brilliance.


The topic was the no-GC no-runtime niche, which necessarily requires brilliant programmers, and they are stuck with C++. They would welcome tools to represent their resource control well, borrow checking or deny capabilities.

As a current C++ programmer who remembers what GCC 2.95's error messages looked like for even simple templates, I second you - it is less about the intellect, and more about experience and enough domain knowledge. But quite a few like me work on large C++ software that is not in the no-GC no-runtime niche; modules, good abstractions and native compilation would be enough to match or likely beat whatever we are putting together in C++, which doesn't even have modules yet - and the GC-based conveniences would be a big icing on the cake, so D, Nim, Pony all look interesting. (side note ... https://ponylang.io and not https://pony-lang.io as stated earlier)


I definitely think it is needed. I just don't think Rust has solved the paradox that the more you need something specialized the less you can afford the overhead.

Finding hundreds of web developers with some spare time who don't mind learning Rust, can port some of their tooling and use it in some part of their stack isn't going to be much of a problem. The same isn't true if you, say, need an embedded programmer who has worked with zigbee and needs to be in your lab for testing and verification.

Pretty much all major programming languages are where they are as a result of being some kind of "lowest common denominator" for their application. Something like Bash is pretty horrible as a programming language, but extremely accessible for sysadmins. Only now is that changing slightly, where things like Python and Go are becoming more popular, but mostly because they are easier than the alternatives.


Taking nothing away from the Rust community that has managed such a complex language so well so long, I have to say that there never was anything viable in the no-GC no-runtime niche (that is why it is a niche - otherwise why would anybody be using GC'd languages). We were/are just gritting and bearing C++.

If it is a typical program, i.e. not a device driver or OS kernel running on low-memory hardware, having a GC + runtime doesn't preclude performance, for a long-running program. Of course, for short-running programs, just malloc, don't free, is the fastest.


The use case for non-GC languages goes significantly beyond low-level OS kernel type code. Most of our backend infrastructure software and servers (e.g. database engines) are also poor fits for GC languages, and this is what Rust is targeting. For this type of software, a GC has a large negative performance impact, so it isn't really an option. Operational cost efficiency is often a key factor in platform selection -- being able to reduce hardware requirements by an integer factor is compelling.

To me, Rust is a cleaner and stricter but also more narrow systems programming language than C++, and can be used as a sensible replacement for many C++ code bases, particularly as Rust adds features. Rust still has some challenges. For example, the DMA-driven, schedule-based memory safety models that have become fashionable in high-performance database kernels are not compatible with Rust's memory model. You'd have to make most of the code "unsafe" (in a Rust sense, it is actually an alternative memory safety model designed around a different set of assumptions).


Off-hand, Mirage OS comes to mind - it completely upends how you would "build" and "run" back-end services, and it is written solely in a GC'ed language - OCaml. And all GC'ed languages are not equal; see https://roscidus.com/blog about a Linux package manager ported from Python to OCaml for performance reasons (followed by an OCaml reimplementation of a component in a virtualized OS).

Some infrastructure, e.g. database engines, is ubiquitous, but all instances are of the same few software products written in C/C++/Java. Those ubiquitous instances have banged most bugs out of those codebases, so it is an uphill battle there to convince incumbents and upstarts alike, of the value of a new implementation. But Rust & Pony are marvelously positioned to march up that hill, especially if you throw in multi-threading.

My impression is that only a minority of widely-programmed back-end infrastructure that suits Rust is written in C/C++ (say, map-reduce kernels). The code-base size & the data flow complexity inside those components is pretty limited, and Rust should be tractable at that scale.

Most widely-programmed infrastructure software - in the back-end and on cell phone OS'es - have been merrily using Java/Python/Ruby/Erlang. OCaml is quoted as being used in the management component of VMware. These are much larger applications; is there any evidence or hint that Rust isn't onerous to develop larger systems in? Without that evidence, I feel (not think) that a disciplined GC'ed language (D? Pony?) has a better chance there.


Having a GC also kind of makes it nasty to write libraries to other GC'd languages. In Rust all of this is very easy, and you can get the speed bump to your Ruby/Python/JavaScript libraries using a modern and safe language.
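To make that concrete, here is a hedged sketch of what this usually looks like: a Rust function exported over the C ABI, which is the bridge that Python (ctypes), Ruby (FFI), and Node bindings typically sit on. The function name is made up for illustration; a real library would set `crate-type = "cdylib"` in Cargo.toml.

```rust
// Hypothetical function exported over the C ABI. Built as a cdylib,
// it could be loaded from Python/Ruby/Node with no GC or runtime on
// the Rust side.
#[no_mangle]
pub extern "C" fn sum_squares(n: u64) -> u64 {
    (1..=n).map(|i| i * i).sum()
}

fn main() {
    // Callable from Rust as well; 1 + 4 + 9 = 14.
    assert_eq!(sum_squares(3), 14);
    println!("sum_squares(3) = {}", sum_squares(3));
}
```

Because there is no runtime to initialize and no GC to coordinate with, the host language's garbage collector never has to know the Rust code exists.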


Is the speed bump for other-language libraries or for other-language applications?

For speeding up other-language libraries, D has a -betterC mode, which prevents you from using the subset of the language and the libraries (standard & user-defined) that rely on GC. The remaining language is a very clean C that simply works on the other language's GC'd memory (using the other language's C interface), and can use stack allocation or any heap allocation strategy of your choice for its working set (reference counting à la C++ shared_ptr could be the obvious choice, but it is your party).

For other language applications, it is a valid option to speed up the entire application by writing it in D, as it has "all" the features of those other languages + all the convenience that is afforded by a GC + threads if you don't want a multi-process design. I quote "all" because I mean useful things like blocks/closures, generic data structures, etc. - of course, neither Rust nor D give you runtime devices like monkey-patching/meta-class hackery/prototype changes.


We've been rewriting a big Scala code base to Rust since January, and the team is advancing at a nice pace.

Our plan was:

- Take one part of the system to rewrite over JNA

- Run a massive amount of Scala tests against the new code

- When green, take another part

- Meanwhile the other part of the team writes the user-facing code base, connecting it to the new Rust crates

- The test suite also works over an integration layer, so the other team can test their code and the backend with the same test suite

We're quite far already. Originally I was the only person with Rust experience. Now there are five of us.


Why are you migrating from Scala?


The company wants much lower resource usage per node and easy integration with other languages.


> Of course, for short-running programs, just malloc, don't free, is the fastest.

A custom malloc that knew free would never be called would be faster still. I wonder if any short-running utilities do this?
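One common shape for such an allocator is a bump (arena) allocator: allocation is just a pointer increment, and per-object free simply doesn't exist. A toy sketch (the `Arena` type here is illustrative, not a real crate):

```rust
// Toy bump arena: carve slices out of one big buffer; there is no
// per-object free. The whole region goes away at once -- or, for a
// short-running process, simply when the process exits.
struct Arena {
    buf: Vec<u8>,
    used: usize,
}

impl Arena {
    fn new(cap: usize) -> Arena {
        Arena { buf: vec![0u8; cap], used: 0 }
    }

    // Allocation is just bumping an offset into the buffer.
    fn alloc(&mut self, len: usize) -> &mut [u8] {
        let start = self.used;
        self.used += len;
        &mut self.buf[start..self.used]
    }
}

fn main() {
    let mut arena = Arena::new(1024);
    let a = arena.alloc(16);
    a[0] = 42;
    let b = arena.alloc(100);
    b[0] = 7;
    // 116 bytes handed out, zero calls to free.
    assert_eq!(arena.used, 116);
    println!("used: {} bytes", arena.used);
}
```

This is roughly what "just malloc, don't free" buys you, minus even the bookkeeping that a general-purpose malloc does per allocation.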


AFAIK the D compiler is basically this.


And it is a short-running utility :-)


> It'd be another Go with generics, or Elixir-native, or Swift with uglier syntax.

I just want to point out that the Swift package management situation is currently bad. SPM (https://github.com/apple/swift-package-manager) can't be used within Xcode, and is basically not used for anything outside of server-side swift / Linux.

It also doesn't look like SPM will be integrated with Xcode this WWDC, so you're stuck with CocoaPods or Carthage.

I'm really jealous of cargo. And RLS (Swift LSP doesn't really work well because it doesn't know about your dependencies - back to the package manager issue).


Are you describing the D language? https://dlang.org C-like syntax and execution speed with high-level scripting-language-like conveniences, close to Lisp-level ability to generate code at compile time, an active user-base (https://forum.dlang.org) that continuously strives to get the language improved. Its (thread-local) memory heap is GC'd by default. They also have an LLVM back-end if that is pertinent. Recently, it even became another supported front-end in GCC, alongside Go, Ada, etc.


D is a weird one, because it has been lurking around for at least a decade and a half without really picking up the kind of steam and hype that later entrants have. I remember dabbling with D for game development way back in high school, as a friendlier alternative to C++.


Well, Standard ML is a weird one too in that sense (lurking around for a couple of decades without picking up steam (outside academia?)). Languages having type inference and modules designed into them seems to have a good chance to live long.


D excluded itself from being C/C++ killer by having a GC. I know they've backtracked on that, and the -betterC subset of D looks interesting, but it's a bit late for that.


Only to the GC hating crowd.

Unreal and UWP/COM developers are perfectly fine with writing C++ code with a GC around.


If GC & runtime is on the table, then there are many nicer languages you can use (Go, D, Swift, Kotlin). I assume that the C and C++ users who haven't switched yet are largely the "GC-hating" crowd that can't.

COM and other refcounted ones get a pass. But I'm surprised that Unreal gets away with a mark-and-sweep GC. Perhaps because it's only required for UObjects, and the rest of the codebase can still easily avoid using the GC? You can even cause use-after-free bugs on UObjects, if you want to.


Isn't Swift also refcounted?


Reference counting is a GC algorithm, as per CS curriculum.


Try to broaden your horizons... I work in telecoms, only one step away from an FPGA; a 1ms pause would be a critical bug report.


My horizons are already broad.

https://atlas.cern/discover/detector/trigger-daq

You can also try to broaden yours.

https://www.ptc.com/en/products/developer-tools/perc

Will this work for you? I guess most likely not, and you really cannot afford any kind of delay, where even a C++ virtual call would be considered a bug, given the 1ms delay.

The point being that only a very tiny population has such requirements, just like barely anyone writes applications 100% fully in Assembly.


I'll look at your links, but to answer your question: yes, virtual functions are not accepted in the code.


I think that part of the reason it didn't pick up is because it was made during an era when talking about the problems of C++ was brushed aside with "learn to program kiddo".


I suspect we are at least partially still in that era.


Success is often a matter of time. A good idea too early is the wrong idea kind of thinking.


What do you mean by:

> Its (thread-local) memory heap is GC'd by default.

What happens to objects that are allocated in one thread, and then have their reference passed to another thread?


Do you mean "thread local passed to another thread by mistake"? There is an option to make data global instead. Unless it is a Unix pipeline kind of design, independent threads can populate their working set (from file/socket/whatever) into thread-local memory.

It depends on whether shared-memory is a design requirement or an implementation artifact (Erlang does just fine with a largely shared-nothing model).

It also depends on whether a program runs for a short time, or for a long time. If you are running for a short time, why not just avoid the GC entirely; "manage" memory manually by malloc-ing and never free-ing (free-ing too takes time, and doesn't make memory available to other processes anyway).


The typical approach here as far as I'm aware is to use a deallocation queue - frees of an object that was allocated on another thread are freed by putting that object in the original thread's deallocation queue. When that thread gets an opportunity, it frees everything in its deallocation queue to reclaim that memory.
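A minimal sketch of that pattern in Rust, using a channel as the deallocation queue (the names and buffer size are made up for illustration):

```rust
use std::sync::mpsc;
use std::thread;

// Cross-thread deallocation queue: the worker sends buffers back to
// the owning thread instead of dropping them locally; the owner
// reclaims everything in its queue when convenient.
fn roundtrip() -> usize {
    let (free_tx, free_rx) = mpsc::channel::<Box<[u8; 1024]>>();

    // "Owner" allocates and hands the buffer to a worker thread.
    let buf: Box<[u8; 1024]> = Box::new([0u8; 1024]);

    let worker = thread::spawn(move || {
        // Worker uses the buffer...
        let sum: u64 = buf.iter().map(|&b| u64::from(b)).sum();
        // ...then queues it for the owner to free, rather than
        // dropping (freeing) it on this thread.
        free_tx.send(buf).unwrap();
        sum
    });

    assert_eq!(worker.join().unwrap(), 0);

    // Owner drains its free queue; each drop reclaims a buffer.
    free_rx.into_iter().count()
}

fn main() {
    println!("reclaimed {} buffer(s)", roundtrip());
}
```

In a real allocator the queue would hold raw allocations rather than `Box`es, and the owner would drain it opportunistically on its next allocation call, but the ownership flow is the same.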


Manually, or automatically?


In D the GC heap is shared across registered threads, so it supports values allocated in one thread and passed to another. This particular passage from one thread to the other is guarded by a "shared" type constructor, so as to have a type system guarantee.


I'm definitely in this category. I think I want "Go with generics and enums and Cargo" (and no, OCaml folks, I am not describing OCaml). I want to write more Rust, but I rarely can afford to trade off on Go's productivity for Rust's performance and safety.


What is it you don't like about OCaml, or similar ML-family languages (e.g. F#)? Most of "modern language design" seems to amount to the ML featureset.


As someone who really enjoys OCaml, it has a variety of issues that prevent it from becoming popular (though I'm hopeful for ReasonML).

- No multicore

- Standard library has... issues

- No community consensus on a common base setup; questions about which library to use for a task often get answers like "well, do you want to use functors or monads? because that will change the library we recommend", which is really not what you want to hear when you're just trying to set up your first server.

- Documentation is, in general, bad. Most packages just give a list of function signatures.

- The community is small, academic, and often French.


Modern OCaml (4.06+) is not that bad, but yeah, the lack of proper parallelism and bad Windows support (without hacks like MinGW or Cygwin) is what still hurts. These days there's one de facto standard library - Base - and one build system - Dune; Camlp4 is dead, and the compiler has improved a lot recently (including speed, if you enable flambda).


You guys are welcome to D, you may enjoy unboxed floats, guaranteed monomorphization and native threads. Also, 3 backends.


The reference compiler is as free-standing as OCaml's because it too has its own back-end. Its front-end is shared with the other two back-ends: it feeds the ubiquitous LLVM in LDC, and has been included as a supported front-end in GCC since version 9.1.

Anyway, there is no way you'd give up ML-tradition pattern-matching; D doesn't have it.


Well, engineering decisions are tradeoffs. Using D I've never missed pattern matching, and I have used OCaml before.


Honest question: what is so bad about having to create one process per core? If you have lots of data you can use shared memory, so the performance should be similar no?


For shops that really care about highly performant parallel tasks, the difference matters. For people who just want more performance for free it's too much work versus what other languages provide.

If OCaml were already popular I don't think the lack of parallelism would be a huge issue, but it's niche, which is a huge downside to any language in a business context. The upsides the language provides in terms of performance, ergonomics, and maintainability need to overcome that downside. All the issues I listed mean that OCaml can't generally pass that bar.


I don't know. Maybe because it is GC-only? It doesn't have custom allocators, so there may be no easy way to map an object to some location in shared memory.


Not the grandparent, but...

OCaml: multicore, standard library situation (which one?) is a mess, adoption. F#: Have to deal with null and lack of ADTs when you interact with .NET (or JS for Fable) standard lib. The tooling situation in F# has been a mess since .NET Core, especially in Linux. Treated as a 2nd class citizen by MS.


Multicore OCaml is actually making promising progress, but I agree that the standard library situation is a little bit unfortunate.


Sadly, that's what I've been hearing for at least 5 years.


I'm using F# tooling on Linux, and both the VS Code and JetBrains Rider IDEs outclass most functional language IDEs at present, IMO. It's greatly improved within the last year or so. If you code most of your app in F#, the null factor doesn't really bite you all too often, and it's not too hard to handle when it does.


I love OCaml, but my biggest pain point is not having something like Cargo. I haven't spent a lot of time with opam since 2.0 came out, so maybe it's better now, but when I used it it felt very much designed around installing packages globally, instead of defining dependencies per-project.

The other issue with the package ecosystem is cross-platform support. While OCaml itself works on windows, opam doesn't (or at least didn't) without a lot of extra work, and it seemed like most packages were designed only for unixish OS's.

There are projects where I've used Rust instead of OCaml, even though I'd have preferred to use OCaml, simply because the infrastructure is so much better and easier to use for Rust.


Esy (https://esy.sh/) has completely solved the first problem, fwiw.


This is a huge step in making OCaml more approachable. I get the feeling that there are quite a few of these modernizations around, but you have to know about them up front in order to hit the happy path. Would be great if there was some sort of "OCaml Happy Path" repo that laid out all of the right tools to use to be successful right off the bat.

Sort of on the subject, Reason would be great, but last I checked it utterly neglected its native compilation story--it advertised native compilation, but every time I ran into an issue the Reason community would tell me that OCaml support was broken and I should use Node. Excited for the improvements to be sure!


Thank you! I'll try it out.


Indeed! For example, it is amazing how small Poly/ML is for a type-inferred multi-threaded language.


> "Go with generics and enums and Cargo"

Isn't that d-lang?


From the looks of things you'll probably get the Go you want first. The only thing missing from the roadmap is enums, and there are a few Go 2 proposals for them.


Maybe, though I'm concerned that the Go team might ship some broken version of generics or enums that we'll be stuck with for years and years. I think that would be strictly worse than no generics or enums. I hope they think holistically about the problem.


> I wonder how many Rust users really want linear types and borrow checking, instead of the other stuff Rust brings to the table: modern language design, large and friendly community, nice type system, native compilation, good package ecosystem.

I think there are probably more users that would love this but they are not users of the language to begin with. The community that actually forms around Rust is the type of community that does not want GCs, wants the borrow checker and the constraints that form the language.


In that case, would ReasonML[1] suit you? ReasonML is based on OCaml, just with a more C-like syntax and nicer tooling, etc. OCaml's semantics are very similar in a lot of ways to Rust, perhaps even better in certain ways. Although it's pitched as a compile-to-JS solution for web development, it's perfectly usable as a natively compiled PL too, using `jbuilder` (dune).

[1]: https://reasonml.github.io/


That's what I love about Rust, you get all of those things without having to sacrifice performance. i.e. I get a modern language, get to write code at a relatively high level, and still don't even have to use a GC. You learn to love the borrow checker, it's an amazing tool to work with.


Having a GC doesn't "sacrifice performance" in itself.


Personally I started looking at Rust precisely because borrow checking is a massive improvement over the C/ASM wild west.

If you like GC, I feel like there are plenty of other good options, like D, Ocaml, CLR languages (C#/F#).


This is exactly how I feel. I'd be all about Rust if it was GC'd. Go is close enough that it's what I reach for for personal projects, but not having ADTs and pattern matching is super frustrating for modeling data.


Not to be the naysayer here, but garbage collection kind of goes against the premise of Rust, as stated by its mission statement. I'm a fan of Rust, and I also like Python, but they are different tools for different problems. Go seems like the tool for your specific problems. I'm not trying to be offensive towards you or anything, but Rust's mission statement was written pretty clearly, and the reasons against a garbage collector were laid out reasonably. It's a systems language, meant to be along the lines of C or C++, used for close-to-the-metal projects like operating systems or drivers (ostensibly), where a garbage collector is going to have a debilitating effect, as cycles and efficiency are very necessary. The compiler is the checker, taking the place of the garbage collector (sort of).


You're missing the point. Go is not the tool for their specific problems because it's a crappy language that lacks generics, algebraic data types, etc.

In contrast, a language very much like Rust, but with GC, would be the right tool for their problems.


How about Scala?


If this is true then Rust is the wrong tool for many projects in which it is currently used. Most of those projects are not an OS or driver and are far from the metal.

Even the compiler itself would not fit your description.


It doesn't HAVE to be used for bare metal, but that's what it's best at. It is bad for quick prototyping, that's for sure. So if you had a project that needed quick turnaround and rapid prototyping, and someone wanted to use Rust because it's the "new hotness" or whatever, then absolutely not: it's the wrong tool for the job. The same as C or C++ would be the wrong tool. Python is great for something like that.

Just like if you wanted heavy number crunching and data manipulation, I'd push someone towards R and Scala, as they have the libraries to handle it (and I'd also say Python, but I'm starting to show my biases here). Go is great for back-end systems, as that's what it was designed for, and it's now getting libraries built out to fill in gaps elsewhere, just as Rust is filling in other spaces, but there's only so far a language can go from its original design doc. The GC makes Go, inherently, not a systems language. That doesn't mean it isn't great at other things. Rust is great as a systems language; that doesn't mean it isn't great at other things either, but it's terrible as a rapid prototyping language. It lacks that capability. It wasn't designed for that =/


What would stop you from using Rc/Arc/RefCell to have reference counting and internal mutability where you need it?


It's really, really awkward to use them everywhere.


They are, and it is one of the places where I'd see us working towards improving their ergonomics in the next couple of years. In the meantime, I'm convinced that most code falls either into "small enough that it can be Copy" or big balls of state that also need internal mutability. For people just arriving at the language I always recommend "don't be afraid of .clone(), .clone() until you understand the rest of the language, then jump deeper into borrowing/lifetimes". I would love to have some affordances to avoid having to give that advice.


It's funny, I give almost the exact opposite advice when I'm helping Rust newbies. My diktat is "clone is banned unless you can explain why you must have it", and then I teach people about what a move is and how to think about lifetimes.

w.r.t. Arc/Rc, I'd say that it might feel verbose at first, but it makes explicit what a GC does implicitly, and you can get all the benefits of Rust's ownership model at the same time. I can pick and choose when to rely on the GC and when to explicitly manage lifetimes myself. That's really cool! I may be a bit weird in this, but I like complexity to get surfaced so I can't pretend it's not there.
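A tiny illustration of the kind of thing this teaching moment covers (function names invented): both calls below compute the same answer, but the borrowing version avoids the allocation the clone pays for.

```rust
// Cloning hands the callee its own copy; borrowing just lends access.
fn len_by_clone(s: String) -> usize {
    s.len() // takes ownership; the caller cloned to keep its copy
}

fn len_by_borrow(s: &str) -> usize {
    s.len() // borrows; no allocation, caller keeps ownership
}

fn main() {
    let name = String::from("ferris");

    // Newcomer style: .clone() to satisfy the borrow checker.
    let n1 = len_by_clone(name.clone());

    // Idiomatic style: lend a reference instead.
    let n2 = len_by_borrow(&name);

    assert_eq!(n1, n2);
    println!("{} is {} bytes", name, n2);
}
```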


> My diktat is "clone is banned unless you can explain why you must have it", and then I teach people about what a move is and how to think about lifetimes.

That is good advice for people who are getting familiar with the borrow checker, as it makes them think about allocations and ownership. But for newcomers who are still getting familiar with the entire language, in some cases coming from very different paradigms, it can be very demoralizing, and the reason they stop or become convinced that "Rust is too hard for them", when what is actually happening is that they are trying to learn too many concepts at the same time.

The way I see it, the learning curve for most of Rust is a fairly mild slope, with a climbing wall around lifetimes. The further you progress learning the rest of the language, the shorter the wall will feel.

That being said, this is born of my experience helping a few people learn Rust, but I could be completely off-base for the general case.

> w.r.t. Arc/Rc, I'd say that it might feel verbose at first, but it makes explicit what a GC does implicitly, and you can get all the benefits of Rust's ownership model at the same time.

But it is verbose. I think this is part of the problem with Rust's learnability: because Rust makes inefficient code evident (think unsafe, clone and Rc), experienced programmers want to remove the inefficiency before they are proficient enough with the language to do so, so they encounter the hardest edges of the language early.

I appreciate that these markers make it better for me when reading the code, and I wouldn't want them to disappear, but it does make for a more verbose experience where the compiler sometimes feels pedantic. I think that better refactoring tools could make this kind of pain (and related ones, like adding lifetimes to a struct) go away almost entirely.

> I can pick and choose when to rely on the GC and when to explicitly manage lifetimes myself. That's really cool! I may be a bit weird in this, but I like complexity to get surfaced so I can't pretend it's not there.

I'm in the same boat. I just wish it was easier for rustc to detect early when you're trying to apply a pattern from a language that doesn't have memory ownership or thread safety or relies on internal mutability and provide appropriate advice beyond "you can't do that".


> But it is verbose. I think this is part of the problem with Rust's learnability: because Rust makes inefficient code evident (think unsafe, clone and Rc), experienced programmers want to remove the inefficiency before they are proficient enough with the language to do so, so they encounter the hardest edges of the language early.

That's a really, really insightful way to frame it. I don't think I agree that it's a problem, though. Indeed, it's probably the very thing I like most. There's a fine line between syntactic sugar and obfuscating the underlying principles (e.g., I think async syntax is toeing that line). I feel (and I've read others saying the same) that one of the reasons Rust has made me a better programmer is because it makes thinking about this complexity second-nature, and now I do it even when other languages permit (or encourage!) me to forget it.

I'm big on pedagogical rigor and I think there's a particular way to come at learning Rust that results in things like Arc/Rc/RefCell et al feeling good and natural (also, dealing with Result::Err, for similar reasons). I think both of our experiences are equally valid and probably entirely situational. I'll have to think a bit more about it. I have an opportunity to teach Rust to a large-ish group within my company, so it's top of mind for me right now.


> I have an opportunity to teach Rust to a large-ish group within my company, so it's top of mind for me right now.

If you could share an experience report when you're done, that would be most helpful. :-)


Have you written about this (your diktat about clone()) somewhere? I'd be interested to read more.



I haven't, but maybe I will today.


curious, does Swift potentially fill this gap for you?


Yes, Swift is very close. I know I'm moving the goal posts here, but...

My personal itches are usually web projects. Once Swift gets async/await and Vapor/Kitura switch to it, I'll probably jump to Swift. I really dislike dealing with futures/promises. The Swift web frameworks also don't generate standalone binaries like Go/Rust do... they tend to push you towards cloud/docker solutions. Swift is still pretty Apple-platform-centric, but that feels like it's changing even now.


You can AOT-compile C#/F#/.NET.

It has value types, Span, P/Invoke, etc. that make low-level interop simple, a GC and higher-level semantics, and better ecosystem/tooling than most alternatives.

Runtime size and the GC limit some use cases.


AFAIK, F# has still got some wrinkles for AOT support. https://github.com/dotnet/corert/issues/6055


If you don't want the borrow checker, why are you even using Rust? The borrow checker is the foundation of the zero-cost safety guarantees that make Rust such an incredible language to work with.


Swift is the language you're looking for. Unfortunately, it's pretty much only useful for iOS dev.


Thank you, so much, to all the people that have put so many hours into making Rust what it is today.

Rust, for me, took what was, prior to learning Rust, a vague undefined notion and really helped me see it formalized into what I can now really call a "mental model" of ownership/lifetimes. It has made my code better in every language I work in.


@steveklabnik what would you say are the core tenets of Rust's success? From Python's side (as a Python prosumer), I'd say Python's core tenets may be:

- Building an explicitly and observably welcoming open source community

- Separating corporate money from technical direction, even at the cost of faster execution

- Upholding pragmatism over all else (e.g. keeping around a C/C++ API)

I'm not sure whether these principles carry over to all languages, or if there's anything you want to add or subtract. Would be cool to have a broadly applied philosophy endorsed by many language stakeholders.


I think this answer would be a bit too long, and I don't have time at the moment. I'll get back to you at some point :)

A quick response to your points: I'm not sure "upholding pragmatism over all else" applies to Rust. It really depends on the details. Open source community is absolutely important, and while you can argue that money is separate from technical direction, that's a very complex topic.


For me - from the language standpoint - it's a sweet spot between correctness, pragmatism, high level and low level.

I might be a strange guy liking Haskell and C++. ;)


As a CS major in school who is mostly self taught using the internet, I'd say Rust's big win is how it takes basic concepts from low to high level without bothering me or killing performance entirely.

Just the presence of well integrated Algebraic Data Types (ADTs) makes an incredible amount of difference. They are used to represent errors in a meaningful and easy to understand way (`Result<T>`), are used to show that a function may or may not return a meaningful value without needing a garbage value (`Option<T>`), and the optional case can even be used to wrap a null pointer scenario in a safe way (`Option<Ref<T>>` being the closest to a literal translation, I think).

That's just one small feature that permeates the language. Whatever the opposite of a death-of-a-thousand-cuts is, Rust has it.
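A small sketch of that "errors and absence as ordinary values" point; the error type and functions here are invented for illustration.

```rust
// Errors are just values of an enum the compiler forces you to handle.
#[derive(Debug, PartialEq)]
enum ParseError {
    Empty,
    NotANumber,
}

fn parse_count(input: &str) -> Result<u32, ParseError> {
    if input.is_empty() {
        return Err(ParseError::Empty);
    }
    input.parse::<u32>().map_err(|_| ParseError::NotANumber)
}

// Absence is a value too: no null, no garbage sentinel.
fn first_even(nums: &[u32]) -> Option<u32> {
    nums.iter().copied().find(|n| n % 2 == 0)
}

fn main() {
    assert_eq!(parse_count("42"), Ok(42));
    assert_eq!(parse_count(""), Err(ParseError::Empty));
    assert_eq!(parse_count("x"), Err(ParseError::NotANumber));
    assert_eq!(first_even(&[1, 3, 4]), Some(4));
    assert_eq!(first_even(&[1, 3]), None);
    println!("ok");
}
```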


My journey with Rust has been a single application serving a couple of endpoints from an SQLite database. It never goes over 50MB of memory. It has been rock solid for the past year, no matter the hiccups in traffic!


Kudos!

When Rust 1.0 was announced I had a look. I was very surprised because that language was very different from my previous look, when Rust still had GC and green threads (and a syntax with mystic sigils).

Rust 1.0 looked like it could be a good language, but coming from mainstream languages with a richer ecosystem -- such as Java/Scala or C# -- Rust lacked many common libs. For instance, a high-level approach to non-blocking IO, which is something I need when writing servers. There were other issues -- lexical lifetimes often meant one needed to jump through hoops to please the borrow checker. I thought Rust would need a lot more time to mature enough to be a serious contender in my PL list. The next year, the Zero-Cost Futures post[1] by Aaron Turon showed up in my news feed. It seemed interesting, but it was only a plan. A few months later Tokio was born and, again, it seemed interesting. But there's no way it was ready, was there? I went back to business as usual. I kept hearing about Rust, but I had no intention to use it in the immediate future.

Last year, I was offered a full-time Rust job out of the blue (because of my experience in building distributed systems; I had to learn Rust as a part of the job). I was interested because, in general, I was looking for a change -- but I also wanted to make sure I wouldn't spend energy on a language that wouldn't be relevant in 10 years (at least, according to my intuition). I've had job offers in other emerging languages (or old but trendy FP languages) in the past, but I don't think they will become mainstream, and I want a language where 3rd party libraries for every (modern) protocol, API, database, etc., are complete and safe (and that's more likely to happen with mainstream languages).

I evaluated Rust again to see how mature it was. I was very surprised at the speed of development of the language and of its ecosystem, especially compared to other PL communities I had been part of. The energy in Rust reminds me of Java's better days (but with a modern language). There are libraries for everything, and the language and compiler are surprisingly stable and mature (YMMV). I believe this is a testament to Rust's productivity and reuse. Rust is not perfect, but it's trying to reach an interesting trade-off. It is already very usable considering how recent it is (obviously taking advantage of LLVM).

In my experience, projects with such hype are often just good at marketing. You try them, and they have a lot of issues that they (sometimes) fix later; I'd say it was the case for Cassandra, Docker, Kubernetes, TypeScript (and more). With Rust, I started almost immediately on edition 2018 and found the tooling, ecosystem, language and compiler to live up to the hype. Sure, compile times could be faster, and the RLS/IDE support is still a work-in-progress (though usable), but I'd happily trade compile times for fewer bugs at runtime; most of all, the trajectory looks great. Coming soon: async/await, const generics, GATs, and better IDE support.

So -- congratulations to everyone who had a part in making Rust happen, and happy birthday!

[1] https://aturon.github.io/blog/2016/08/11/futures/


rust is getting HOTT? :p


I'm itching to get my hands dirty with Rust, but, approaching this from a web developer background (hence minimal low level language experience), I have no idea where to start. I have a Raspberry Pi3 sitting on my desk, perhaps I should start there.

Congrats to the Rust Core Team!


No need for a Pi -- you can play with Rust on your main laptop/desktop, installed in your home directory, and it might be a bit easier and more accessible that way.

A few caveats:

* Rust is getting really big and complicated. The basic borrow checking was pretty neat and accessible (except for hard cases), but a ton of other stuff has been added. Appreciate that getting through it all is difficult, and don't be discouraged. (I think discouragement is one of the biggest barriers to learning a programming topic, for no good reason.)

* Consider starting by writing command-line utilities you would like to have. If you instead decide to start with a GUI or full-screen terminal UI crate (package), you might find that they tend to use a lot of Rust features in their APIs (perhaps necessarily), and you might also have to work around complicated ownership & lifetimes because of the model of the crate. This is an unusual barrier.

* Rust is really a systems programming language. To fully appreciate Rust, you have to need that performance, and know how much more difficult it is to write correct code in C (it's even harder than most C programmers think). But, for web development, Rust might still come in handy for high-performance backend work, and possibly later for full-stack (where maybe you won't have to write any JS bits on the frontend, because of WASM).


Thanks!

A lot of web dev people start off by writing command-line tools. Later this year, Rust will also be significantly easier to write server-side web stuff in, it’s a little tough to get into at the moment. You could also try front-end web stuff with WebAssembly! The PI stuff is also a good choice.

There’s links to learn more about these topics on https://www.rust-lang.org/ that should help you get started!


I actually started writing Rust using Rocket (https://rocket.rs). It felt a lot like using nodejs/express which helped me get going and the docs are superb. I think using it quickly helped me learn the language and now I’m writing just about everything in Rust.


I'm interested in Rust and have dabbled but don't follow the community too closely. Why do you say later this year, specifically?


It's not 100% for sure, but it's 99% for sure that async/await will be stable in August. The decision to stabilize is being made in seven days, and if it's approved, it will take until then for the feature to ride the release train into its first stable release.


We have the whole team waiting for this to land. We're probably going with nightly already due to the problems fitting the Futures 0.1 model to our codebase.

Very nice to hear.


The proposed syntax has already landed in nightly, so you could use it right now if you wanted, by the way.
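For the curious, a rough sketch of what that looks like (a hypothetical example: on a 2019 nightly you'd also add `#![feature(async_await)]`, and in practice you'd use an executor crate rather than hand-rolling one; the tiny executor below exists only to keep the snippet dependency-free):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// An async fn: the body compiles into a state machine implementing Future.
async fn double(x: i32) -> i32 {
    x * 2
}

// A do-nothing waker is enough for futures that never return Pending.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Minimal executor: poll the future in a loop until it's ready.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: fut lives on this stack frame and is never moved after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    println!("{}", block_on(double(21))); // prints 42
}
```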


I've been trying a couple of approaches already, but first we're aiming for correctness, and after that async/await.

Many libraries use tokio::spawn behind the scenes, so it seems we need compat from 0.1 to 0.3 and back to 0.1 to get old futures to work with async/await and the resulting futures to work in 0.1 tokio.

Last time I checked the async/await feature in tokio had some weird crashes and tokio::spawn crashes when used inside romio...


I'm so excited to read this!


I think Rust is very close to being "Flask-like" (i.e. good for building REST APIs and small-to-medium sites) but not close to being "Django-like".


Were the major security vulnerabilities fixed?

I remember all the talking while back about the major bugs regarding the language.


Short answer: yes

Long answer: it depends whether you mean actual vulnerabilities, or soundness bugs

Known bugs that could affect the security of programs written in Rust get fixed ASAP. There was one serious bug in std's VecDeque that caused memory corruption. There was a more recent issue where, if you overrode a type_id method that wasn't supposed to be overridden and then used another method that relies on type_id being correct, you got crashy garbage. In C or C++ that'd be called garbage-in, garbage-out, and a bad programmer shooting themselves in the foot. In Rust it was considered a vulnerability.
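For context, a sketch of the mechanism involved (this is just the standard `Any` API, not the exploit itself): `downcast_ref` compares `TypeId`s before reinterpreting the value, so a type that could lie about its `type_id` would let the check pass for the wrong type:

```rust
use std::any::Any;

fn main() {
    // Type-erased value; the concrete type is only recoverable via TypeId.
    let x: Box<dyn Any> = Box::new(42i32);

    // downcast_ref checks type_id before casting. If type_id could be
    // overridden to lie (the since-fixed bug above), this check could
    // succeed for the wrong type and reinterpret memory unsoundly.
    assert_eq!(x.downcast_ref::<i32>(), Some(&42));
    assert!(x.downcast_ref::<String>().is_none());
    println!("ok");
}
```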

Apart from that, there are known soundness bugs in the language/compiler/LLVM that could lead to undefined behavior, miscompilation, or otherwise weasel out of things that the language is meant to guarantee:

https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Ais...

At this point these are mostly edge cases that you're unlikely to hit in real code, but if you really really want to make your program crash, Rust can't stop you.


I really want to like Rust, and I've made a few attempts in the past to get involved - I wrote a database driver for a time-series database (KDB) in pure Rust, among a few other things. I've even done a couple of deep dives into rustc to figure out the cause of some iterator slowness (it wasn't giving proper hinting in the LLVM IR).

However, I keep having to step back. And in this short time Rust has progressed a lot (and that isn't entirely a good thing either).

Rust might be the most liked language, but that is mostly a measure of how vocal its supporters are. It is also very underused for how liked it is. It has become a C-like Haskell, in that people like it a lot but don't use it for much that's important. Like Haskell, it might turn out to be a great research language, but not be used much in practice. I was hoping to see a language that was much more practical.

I'm about to take another shot at Rust, and I decided this time I should just ignore all the dislike of `unsafe` and use it as much as I need to. I think I was making life more difficult for myself by trying to avoid it so much, but when you have a lot of pointers with multiple references between them, you kind of have to. Doubly linked and mutually linked structures - where any pointer chase can lead to a mutation - are Rust's Achilles heel. Unfortunately, you see that a lot in systems-level programming. I write a lot of high-performance code and low-latency networking. My main reasons to use Rust are performance and having fewer footguns than C++, but in something that doesn't completely remove my ability to produce good code.

However, I think Rust has moved too fast in the language department. It has changed, and continues to change, at a pace that prevents people who mainly use it as a tool for other things from becoming idiomatic writers of the language. The idioms change too quickly.

The networking stack for efficient non-blocking, event-driven code still isn't where it needs to be. In Java, for example, the NIO abstractions are much better. C++ just turns you over to the C system API. Rust is a disaster in the middle: it sort of has a socket abstraction, then tells you to just use libc for the rest. This has been an issue for years. Tokio is the wrong abstraction at the wrong level for me, and the underlying mio library is kind of a mess with its token architecture and how it handles user-defined events. Hopefully it has improved in the last couple of years; it used to be slower than it needed to be.

"Slower than it needs to be" can describe a number of things in Rust - the default hash implementation, for example. Why would you use a secure hash as your default? If you really need that, you know you do, and 99.999% of use cases don't. I often need small-key performance, which is exactly the trade-off they decided against.
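For what it's worth, std does let you opt out per-map. A minimal sketch with a hand-rolled FNV-1a hasher (illustrative only; in practice you'd reach for a crate like fnv or fxhash):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// FNV-1a: fast for small keys, but NOT DoS-resistant like the
// SipHash-based hasher std uses by default.
struct Fnv1a(u64);

impl Default for Fnv1a {
    fn default() -> Self {
        Fnv1a(0xcbf2_9ce4_8422_2325) // FNV-1a 64-bit offset basis
    }
}

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 ^= u64::from(b);
            self.0 = self.0.wrapping_mul(0x100_0000_01b3); // FNV-1a 64-bit prime
        }
    }
}

fn main() {
    // Swap the hasher for this one map; the rest of the API is unchanged.
    let mut map: HashMap<&str, i32, BuildHasherDefault<Fnv1a>> = HashMap::default();
    map.insert("small", 1);
    map.insert("keys", 2);
    assert_eq!(map.get("small"), Some(&1));
    println!("{}", map.len()); // prints 2
}
```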

C++ has blown its complexity budget 100x over. Rust: hold my beer. In just a few years, Rust has caught up to C++ by that measure (again, not a good thing). I'd like the language to stabilize a bit and work on what is currently there, instead of running off to add support for yet another programming paradigm (async and userland threads - it feels like a trip back in time to 20 years ago).

Side note on docs formatting: reading the documentation is still incredibly difficult for me. It doesn't flow well, and the top of the page has no index - you basically just have to scroll down. The page width is fixed to 960px, so there are interior scroll bars on the code segments. There's no use of background color - it's mostly white, so it's a big mess of black on white with only small indentation differences. Look at cppreference and cplusplus.com: both have a broken-down table of contents at the top, making it easier to find what you're looking for, and both use color effectively. And everything on the Rust docs starts off fully expanded: you have to fold what you don't want to see instead of expanding what you do want to see (so the first thing I do on every page is fold everything, then unfold the often-too-long context section). Seems backwards to me. Navigation on the other two sites is also much better once you get deep into the docs. You can't go to a Rust doc page and skim down a page or two looking for the call you want (e.g. I need to find the mutating calls on a tree). On the C++ docs on both sites that's very easy; on Rust it's tough. There's no "here are all the mutating methods" and "here are all the methods to look things up" grouping.


There's a lot here, and I don't have a ton of time, and the important thing is that this is about your experience, so I won't say you're wrong. But to add some context on a few things:

> I'm about to take another shot at Rust, and I decided this time I should just ignore all the dislike of `unsafe` and just use it as much as I need to.

What it sounds like to me is that you struggled to learn Rust, and instead tried to write C or C++ in Rust. While they're similar languages, they're also very different. While it's true that these structures are used often in C and C++, they're used way less in Rust, and arguably, those kinds of structures are often bad for performance, not good. As always, it depends. It can be tough to communicate idiom, and to learn the "Rustic way" of doing things. This is true of every language, of course :)

> However, I think Rust has moved too fast in the language department. ... I'd like the language to stabilize a bit and work on what is currently there

This is actually the major theme of this year! https://blog.rust-lang.org/2019/04/23/roadmap.html

I disagree about overall complexity, but again, that's fairly subjective.

> Side note on docs formatting:

It is really, really hard to please everyone here. Some people love it, some people hate it. Sorry you hate it :/


> > However, I think Rust has moved too fast in the language department. ... I'd like the language to stabilize a bit and work on what is currently there

> This is actually the major theme of this year! https://blog.rust-lang.org/2019/04/23/roadmap.html

Major theme of the year, but not major theme of the language. One year of stability followed by one year of working on the next breaking idiom change followed by one year where that change is executed doesn't give you a stable language.

Also consider the pace of compiler updates. With C++, I can be very productive with my distro-provided compiler. With Rust, the ecosystem jumps onto new compiler versions so fast that you basically have to use rustup or a rolling-release distro once you have a project with a medium number of crate dependencies (hundreds). Cargo further has no way of recognizing the MSRV (minimum supported Rust version) of crates, and it updates the index without asking, so you can't even safely add or update dependencies without increasing the MSRV of your Cargo.lock.

All of this can change, and maybe the rate of modifications will slow down. But right now it rather points towards the opposite direction, with the recent introduction of editions.


> Major theme of the year, but not major theme of the language. One year of stability followed by one year of working on the next breaking idiom change followed by one year where that change is executed doesn't give you a stable language.

Doesn't C++ have exactly this problem? It keeps adding new features, which change the language idioms. It actually seems much worse than Rust in this area, as Rust is mostly re-enforcing the existing idioms, while C++ is almost like two different languages now (well, probably even more than two).


> C++ is almost like two different languages now (well, probably even more than two).

C++ is a different language in every company I worked for. Ha. Even a different language per team/project sometimes.

Tab vs spaces, different capitalization, different indentation style, exceptions vs no-exceptions, 0 is failure vs 1 is failure and then we like featureA, we don't like featureB, and so on.


The C++ approach of adding new idioms by making the language fatter is of course not as nice as the Rust approach of having breaking changes and disallowing the old idioms.

But then again, the people behind Rust made usage of the try! macro harder for a questionably useful keyword that could just as well have been named differently. Now it's part of the 2018 edition, and you need two more characters (r#) to use the try! macro, even though there are still cases where it's much better than ? - and nothing is being done about it.


What are the cases where the `try!` macro is better than the `?` operator? I'm a beginner to Rust so I don't know about that.


Often when you get error messages, you'd like to find the place they originate. The failure crate provides backtraces, but not everyone uses failure, and tbh it's quite a heavy dependency, using serde and so on. With try!, you can easily shadow the macro to e.g. emit a panic. This can be done to basically any codebase using try!, even if it's not yours. Migrating it to use failure is much harder :).

It would be cool if you just had a compiler flag that invokes a user-settable handler function every time Err() gets constructed, or something like that.

To give a concrete example: I was developing a library to generate certificates [1]. Often you generate a wrong one, though, and the parser library you use for testing just gives you an Err(BadDer). If that's all you're getting, it can be a bit tough to find its origin. As the code was using ?, I had to grep for all the places returning a BadDer error and replace them with a panic. Shadowing try! would have been much easier.

[1]: https://github.com/est31/rcgen
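A sketch of the shadowing trick described above. In a 2015-edition crate the macro could literally be named `try` to shadow std's; here it's given the hypothetical name `trace_try!` so the snippet also compiles on editions where `try` is a reserved keyword:

```rust
// Debugging variant of try!: instead of early-returning the error, it
// panics with the error's origin, so RUST_BACKTRACE=1 points straight
// at the failure site.
macro_rules! trace_try {
    ($e:expr) => {
        match $e {
            Ok(v) => v,
            Err(e) => panic!("error at {}:{}: {:?}", file!(), line!(), e),
        }
    };
}

fn parse_doubled(input: &str) -> Result<i32, std::num::ParseIntError> {
    let n = trace_try!(input.parse::<i32>());
    Ok(n * 2)
}

fn main() {
    println!("{:?}", parse_doubled("21")); // prints Ok(42)
    // parse_doubled("oops") would panic here with the file and line of
    // the failing trace_try!, rather than silently returning Err.
}
```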


Thanks, though I still don't get it. I only know that you can get backtraces of errors by setting `RUST_BACKTRACE=1`.


That only gives you backtraces for panics, but not for errors based on the Result<,> type.


I am not serious. Often in these cases I would use printf debugging to find out where the errors came from.

(I am still a Rust beginner).


Well, async I/O is important to get early. Otherwise we're going to have an entire ecosystem of synchronous networking code, and there's no going back from that. You can look at languages like Ruby for what happens if you don't get an async I/O story before the package ecosystem develops.


Java seems to have done pretty well, and it didn't have broad stdlib support for asynchronous I/O until a few years ago.


NIO was introduced in Java 1.4.


That's non blocking, not async as in the async that rust is developing.


Non blocking code is asynchronous by definition, regardless of the syntax sugar.


On practicality: I have written a ~30KLOC system for provisioning, configuration and management of a medium-size (500 switches) event network, with an ASP.net-like web frontend GUI, all entirely in Rust with PostgreSQL backend. The system was used for the past two years at an event for 20K people.

So it is very practical stuff for me :)

The biggest difficulty was absence of something analogous to ASP.net, so I had to invent my own... but even there it proved an excellent and practical choice, the macro system is very expressive and the strong typing gives a lot of confidence when refactoring.


You speak of low latency code and doubly linked lists in the same breath.

Personally, I have moved from linked data structures to more cache friendly data structures, which means contiguous spaces. No jumping around. It's very liberating.
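As a tiny illustrative sketch (not a benchmark): the same logical sequence in two layouts, where only one is contiguous:

```rust
use std::collections::LinkedList;

fn main() {
    // LinkedList: each element is a separate heap allocation, so
    // iteration chases pointers. Vec: one contiguous buffer, so
    // iteration is cache-friendly with no per-element indirection.
    let list: LinkedList<u64> = (0..1000).collect();
    let vec: Vec<u64> = (0..1000).collect();

    let a: u64 = list.iter().sum();
    let b: u64 = vec.iter().sum();
    assert_eq!(a, b);
    println!("{}", b); // sum of 0..1000 = 499500
}
```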


Doubly linked lists are used all over in low-level programming. For large, dynamic, infrequently accessed data they're a pretty good solution AFAIK.


You are absolutely right. They are used everywhere in the Linux, OpenBSD, and FreeBSD kernels, and likely many other kernels.

Replacing large linked lists with arrays is rarely an actual win. With an array, insertion and deletion become far more expensive, virtual memory is more likely to become fragmented or grow monotonically, and the cache misses avoided are almost certainly irrelevant to total performance.


I wasn't speaking of low-level programming. I was referring to very low-latency code.

Highly performant code tends to get complex. One way to ensure high performance and reduce complexity is to deal with values, not pointers; it is good for concurrency (no aliasing), excellent for cache locality, and the value is right there. Multi-versioned stable values work excellently, like RCU-locks in the Linux kernel.


Doubly linked lists are perhaps not the best example, but there are lots of practical data structures that aren't compatible with the single ownership restriction. It's possible to implement them in safe Rust using various techniques, but it's not ideal.

Then there is the fact that mutation makes things even harder. So for example, you can easily implement a tree using Box to hold pointers to child nodes, but you'll then need a little bit of unsafe code to write a mutable iterator over the tree. (Even mutable iterators for slices require unsafe code under the hood.)
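To make the Box-tree part concrete, a minimal sketch (hypothetical types): ownership via Box is straightforward in safe Rust, and a callback-based mutable traversal sidesteps the iterator problem, since the borrow checker can verify the recursion. It's handing out a separate mutable *iterator object* that pushes you toward unsafe:

```rust
// Single-ownership binary tree: each child is owned by its parent.
struct Node {
    value: i32,
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

impl Node {
    fn leaf(value: i32) -> Node {
        Node { value, left: None, right: None }
    }

    // In-order mutable traversal via callback: no unsafe needed,
    // because each &mut borrow is scoped to one recursive call.
    fn for_each_mut(&mut self, f: &mut impl FnMut(&mut i32)) {
        if let Some(l) = &mut self.left {
            l.for_each_mut(&mut *f); // explicit reborrow of f
        }
        f(&mut self.value);
        if let Some(r) = &mut self.right {
            r.for_each_mut(&mut *f);
        }
    }
}

fn main() {
    let mut root = Node::leaf(2);
    root.left = Some(Box::new(Node::leaf(1)));
    root.right = Some(Box::new(Node::leaf(3)));

    root.for_each_mut(&mut |v| *v *= 10);

    let mut out = Vec::new();
    root.for_each_mut(&mut |v| out.push(*v));
    println!("{:?}", out); // in-order: [10, 20, 30]
}
```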


And that should be fine. unsafe is not something to be forbidden, it is a marker for "here-be-dragons, as the compiler can't check that all invariants are correct when you dereference a raw pointer or modify a mutable static variable". In reality I've found the need for unsafe to be minimal, either when doing something that is actually forbidden for good reason or interacting with C libraries.


Yes, the OP said explicitly that he decided to use unsafe to deal with these cases.

>I decided this time I should just ignore all the dislike of `unsafe` and just use it as much as I need to. I think I was making life more difficult for myself by trying to avoid it so much, but when you have a lot of pointers with multiple references between them, you kind of have to. Doubly linked and mutually linked structures - where any pointer chase can lead to a mutation - are Rust's Achilles heel.

Sriram_malhar pointed out that doubly linked lists are usually not a good data structure to use anyway. I was pointing out that there are lots of more useful data structures that raise similar issues in Rust.

I don't think that using unsafe is "fine", necessarily. It is much better to implement a data structure on top of safe Rust if at all possible.



