I tried Rust a few months ago and I was a little dumbfounded at how little 'modernity' I felt (regarding the IDE experience) compared to what several articles told me I should expect when using Rust with, for example, Visual Studio Code.
Last month a project that requires the development of a high-performance event collector came in, and I decided it was time for us to go over to the Rust side. Again I redid my dev setup with VSCode, but this time I used rust-analyzer, and oh boy.
"What, I now have practically instant inlay typings as I type a line of code??"
e.g: "let x<: type> = ..." type inlay changing dynamically as I instantiated the object on the right hand side.
So if you, just like me, got a little winded at the sheer complexity of the whole Rust thingy and decided to wait a little longer to dive in, let me tell you: rust-analyzer makes the whole thing way more palatable.
Fast forward a WEEK later and I have a multithreaded, message-passing, PostgreSQL-consuming backend prototype to show the guys. It is very performant, consuming about 2 megabytes of RAM while running and providing a continuous streaming WebSocket info feed for the to-be-developed dashboard we are working on.
Rust is awesome, and better tooling is going to make it soar even higher.
Yes. And the thing is, because Rust is such a complex-to-compile language, and because there's all this discussion around its slow compilation times that they're working to improve, I just assumed the whole time that "well I guess this is the cost of having such a powerful type system". I didn't question how terrible the editor experience was. But now I see how much better it could've been!
I know the Rust team is hard at work, and I know they have a lot on their plate, and I'm very grateful for the work that they do. But I think it was a mistake to de-prioritize the editor experience for so long. It may have permanently turned off lots of people to Rust because they tried it and assumed the language was just too hard, when in reality they were flying blind. The compiler messages may be known for being great, but many people, especially newer devs figuring out a language, don't sit there with a compiler open while they explore what is and isn't allowed. There's much more to the dev experience than the compilation experience.
I'm not a compiler writer myself, but my impression is that designing a compiler "the traditional way" and designing one "the IDE way" look really different. Most people who learned to write compilers 10 or more years ago learned the traditional way. As a new language, Rust benefited from a lot of obvious-in-hindsight things, but "design your compiler for an IDE first" just hadn't reached quite the same level of obviousness by 2010.
You are right. Even today, my impression is that most CS students taking compilers and PL courses are still taught in the "batch mode compile the world" way. Most textbooks still assume that too.
It's somewhat understandable because the way you architect a compiler for an IDE is a lot more complex. It's basically everything that's hard about a compiler plus everything that's hard about data validation, and everything that's hard about caching and cache invalidation.
But, yeah, there's a big gap between industry and academia with regards to how to architect an IDE/compiler.
I think IDEs make things look more complicated than they actually are. Especially new students have trouble understanding what the difference between a programming language and a programming environment is, because they are exposed to them as one unit.
Keep in mind that Rust was still trying to figure out what the language itself was supposed to be, and whether the whole lifetime analysis would even work once they landed on it as a possible solution. Making a compiler suitable for an IDE from the beginning is much harder (but can inform language design). If you compare the output from rustc 1.0 to any recent version you will see how much of the work needed for a production compiler is at best tangentially related to what people consider the raison d'être of a compiler.
> As a new language, Rust benefited from a lot of obvious-in-hindsight things, but "design your compiler for an IDE first" just hadn't reached quite the same level of obviousness by 2010.
The whole idea of a "language server protocol" is relatively new. And it's why VSCode is absolutely killing it in a lot of spaces.
A "language server protocol" is ... kind of obvious? But it requires a lot of power behind it. It also requires languages whose grammars are optimized in such a way that they don't have to compile the universe to figure things out (see: C and C++)
How did IntelliJ handle things? Did they do something analogous?
Anders Hejlsberg explains how "the IDE way" of building a compiler works. The first ~10 minutes are traditional compiler background, then it's roughly another ~8 minutes before he gets into how an IDE-focused compiler works: https://www.youtube.com/watch?v=wSdV1M7n4gQ
As someone who knows very little about compiler implementation but has some interest in it, I'm curious what makes the difference between the two in terms of architecture
Basically the IDE one needs to take into account that your program is broken all the time, yet you want code completion for everything else that is actually correct.
Also it needs to respond immediately when you ask for completion, as anything beyond ~2s makes for a frustrating development experience.
You also want real-time errors and warnings, just for the parts that are actually broken, not a wall of text from a batch compiler that fails to understand the rest of the file.
Also you want to be able to do code refactorings, regardless of the compilation state.
So basically you want a Smalltalk/Lisp Machines like experience.
It not only informs the architecture but also the language design. For IDEs you want a quick response even if it is not perfect; for a compiler you want correctness over everything else. Syntactic redundancy in the language can help an IDE parser recover gracefully from a typo or missing tokens. Semantic negative space can let an IDE recognize the intent behind code that looks plausible but is not actually semantically correct, and suggest fixes.
rustc already tries to do all of these strategies, but blows the latency budget because it still prioritizes correctness over everything else.
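A toy sketch of the recovery idea (nothing like rustc's or rust-analyzer's actual parser; it just shows how a redundant token like `;` gives the parser a place to resynchronize, so one broken statement doesn't poison analysis of the rest):

    #[derive(Debug)]
    enum Stmt {
        Let(String),   // a statement we understood: `let <name> = <expr>;`
        Error(String), // text we could not parse and skipped over
    }

    fn parse_block(src: &str) -> Vec<Stmt> {
        // `;` acts as a synchronization point: after a parse error we skip
        // ahead to the next one and keep analyzing the rest of the input.
        src.split(';')
            .map(str::trim)
            .filter(|s| !s.is_empty())
            .map(|s| {
                if let Some(rest) = s.strip_prefix("let ") {
                    if let Some((name, _expr)) = rest.split_once('=') {
                        return Stmt::Let(name.trim().to_string());
                    }
                }
                Stmt::Error(s.to_string())
            })
            .collect()
    }

    fn main() {
        // The second "statement" is garbage, but the third is still understood.
        println!("{:#?}", parse_block("let a = 1; lte b == ; let c = a + 1;"));
    }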
My understanding is that the "traditional way" is based on a number of "passes". In my head it looks like:
1) Parse all the code.
2) Assemble the set of all types.
3) Typecheck all the code.
4) Translate the parse tree into unoptimized machine code.
5) Optimize all that machine code.
This is oversimplifying things, as in practice there are intermediate representations between the parse tree and the machine code. And optimization itself usually involves multiple passes of different kinds.
But anyway, the key point here is that, if you change any of the code, you have to run all of these steps all over again. (I'm oversimplifying again. Maybe you only have to rerun them for a given "compilation unit", but that's bad enough.) This is the opposite of what you want for an IDE. There, you want to say "I just changed this function. Please recompile the absolute minimum necessary to tell me whether my change works." To answer a question like that efficiently, you have to rearchitect the whole compiler from being pass-based to being query-based, so you can give it instructions like "please update the type of just this expression".
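To make that concrete, here's a deliberately tiny sketch of the query-based idea in Rust (the real thing, e.g. the salsa framework that rust-analyzer builds on, tracks dependencies between queries automatically; the "count the fn items in a file" query here is made up for illustration):

    use std::collections::HashMap;

    struct Db {
        // Inputs: file name -> source text.
        sources: HashMap<String, String>,
        // A derived query, cached per file: how many `fn ` items the file has.
        fn_counts: HashMap<String, usize>,
    }

    impl Db {
        fn set_source(&mut self, file: &str, text: &str) {
            self.sources.insert(file.to_string(), text.to_string());
            // Invalidate only the cached results that depend on this input.
            self.fn_counts.remove(file);
        }

        fn fn_count(&mut self, file: &str) -> usize {
            if let Some(&n) = self.fn_counts.get(file) {
                return n; // cache hit: nothing gets recomputed
            }
            let n = self.sources.get(file).map_or(0, |s| s.matches("fn ").count());
            self.fn_counts.insert(file.to_string(), n);
            n
        }
    }

    fn main() {
        let mut db = Db { sources: HashMap::new(), fn_counts: HashMap::new() };
        db.set_source("a.rs", "fn main() {} fn helper() {}");
        db.set_source("b.rs", "fn other() {}");
        assert_eq!(db.fn_count("a.rs"), 2);

        // Editing b.rs does not invalidate the cached answer for a.rs.
        db.set_source("b.rs", "fn other() {} fn extra() {}");
        assert_eq!(db.fn_count("a.rs"), 2);
        assert_eq!(db.fn_count("b.rs"), 2);
    }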
There was no conscious choice to de-prioritize IDE integration. Folks have been working on it for a long time; the RLS has been in the stable distribution since September 2018, for example. There is always more work to do than there are hands to do it.
It is non-trivial to re-architect a near-million-LOC compiler while also still doing all of the other things that the project needs.
Apologies if I came off as ungrateful; that was not my intent. I don't know the inner workings of the project's prioritization; I just, from the outside, haven't seen much movement on the language server in the time I've been using Rust and assumed it was because some of the other (many!) things being worked on had taken attention away from it.
It's all good; I think in some cases it's a distinction without a difference. Like, it is true regardless of the why, we have had a less than stellar IDE experience. Saying "we do care about this" only does so much to help; it gives you hope for the future, but doesn't change the facts on the ground.