Announcing Rust Language Server Alpha Release (jonathanturner.org)
266 points by steveklabnik on Jan 17, 2017 | hide | past | favorite | 33 comments



Rust's IDE story is starting the year off with a bang. :) It's taken me until now to realize that the protocol is semi-standardized: https://github.com/Microsoft/language-server-protocol . Are Rust and Typescript the only two languages with implementations of this protocol so far? (EDIT: nevermind, found http://langserver.org/ , which implies there are quite a lot of these.)

I'm also fascinated to hear that it's using both Racer and rustc to provide autocomplete. Is there any long-term plan to provide "quick and dirty" info from the compiler itself rather than from Racer? EDIT 2: Ah, the final paragraph addresses this. That's what I get for commenting before I'm done reading. :P
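For the curious, the protocol is JSON-RPC over stdio or a socket; a hover request looks roughly like this (the URI and position here are made up for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/hover",
  "params": {
    "textDocument": { "uri": "file:///src/main.rs" },
    "position": { "line": 10, "character": 4 }
  }
}
```

The server replies with the type/documentation to render, which is why any editor that can speak this wire format gets hover, completion, etc. more or less for free.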



I wonder why C++ isn't on this list.


It's the second entry in the table here. Links to https://marketplace.visualstudio.com/items?itemName=ms-vscod...


Eclipse is working on one for Java I think: https://projects.eclipse.org/proposals/jdt-language-server


Are there any implementations of the language server protocol for vim or emacs?


nvim-langserver-shim internally uses an older version of https://github.com/prabirshrestha/vim-lsp . I'm hoping vim-lsp will be stable in a couple of weeks so that I can start contributing to nvim-langserver-shim. Currently I'm developing vim-lsp independently.

It already works asynchronously in Neovim and Vim on Windows, Mac and Linux. You can follow the discussion at https://github.com/neovim/neovim/issues/5522 (the thread has some GIFs showing the progress).



As someone who has worked on language support in IDEs for the past 5 years, it's really great that we have finally arrived at the point of having common protocols. IDEs have historically been very self-contained, each with its own ecosystem, resulting in a lot of unnecessary duplication of effort. Now, at last, we can do neat things that are immediately usable across the entire language ecosystem at once.

Better yet, we can have language designers implement support themselves, ideally using the same code that powers their compiler. Historically, tooling support has been the single biggest stumbling block for new languages, no matter how promising. This should significantly reduce the barrier to entry for that, and make new languages more viable as a result.

The fact that it can also be used to "light up" hardcore editors like Vim and Emacs is also a nice bonus!


> it's really great that we have finally arrived to the point of getting common protocol

This. Common protocols are what have helped us independently, incrementally and exponentially make the internet more useful.

If we can (finally?) get this lesson learned down to the application level, computing may finally start advancing conceptually once again.

Right now we've been in a rinse/repeat standstill iteration for god knows how many years.

But I guess everyone is too busy trying to get rich building the next big closed service to consider fundamental issues like that...


https://github.com/Valloric/YouCompleteMe

YouCompleteMe for vim, although it relies on Python so it may not be portable. I took a quick stab at building one in pure vimscript with an external process using vim 8's async jobs, and it seemed very doable.


I really like this approach from the dotnet world, but with one caveat: the language server assumes a project layout and doesn't allow for any variation from it. This is a problem when you're generating code, say a web service interface or some lex/yacc output. I like these to go in my build output (because they're build artifacts, not source material), but AFAIK you can't tell the language server to also look at these files.

Another example is when you want to share code between projects without a separate dll, like with a client/server model. Easy enough to do with make et al, but not with a language server.

Neither problem is insurmountable though.


While yes, it can be a pain, I love that Rust makes it hard to be a special snowflake. It promotes everyone following set conventions and makes it easier to grok other people's work.

Rust's powerful macros are an exception to this, I guess.


Is there a convention for the placement of generated code?


Yes. Cargo supports build.rs scripts that generate code into a special directory reserved for that purpose, IIRC. The Cargo people generally think this sort of thing through.
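A minimal sketch of that pattern (the file name and generated contents here are hypothetical; under Cargo, the OUT_DIR environment variable points at the reserved per-crate directory):

```rust
// build.rs sketch: Cargo runs this before compiling the crate and sets
// OUT_DIR to a directory reserved for generated code.
use std::env;
use std::fs;
use std::path::{Path, PathBuf};

// Write a tiny generated source file; a real script might run a
// lex/yacc equivalent or an interface generator here instead.
fn generate(out_dir: &Path) -> PathBuf {
    let dest = out_dir.join("generated.rs");
    fs::write(&dest, "pub const GREETING: &str = \"hello\";\n").unwrap();
    dest
}

fn main() {
    // Fall back to a temp directory so the sketch also runs outside Cargo.
    let out_dir = env::var_os("OUT_DIR")
        .map(PathBuf::from)
        .unwrap_or_else(env::temp_dir);
    let dest = generate(&out_dir);
    println!("generated {}", dest.display());
    // Tell Cargo when to rerun this script.
    println!("cargo:rerun-if-changed=build.rs");
}
```

The crate then pulls the generated file in with `include!(concat!(env!("OUT_DIR"), "/generated.rs"));`, so the artifact never has to live in the source tree.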


Ok, I checked it out (http://doc.crates.io/build-script.html)

Would have been much better if you could tell it the dependencies instead of having to use yet another make clone (and a bad one at that) though.


> Would have been much better if you could tell it the dependencies instead of having to use yet another make clone (and a bad one at that) though.

It's not intended to be used like make, with lots of shell scripts in a Makefile.

Instead, you write all the actual build code for a given task once, and package it in a Rust crate, which can then be pulled as a build-time dependency.

So, for example, there's a cmake crate (https://docs.rs/cmake/0.1.20/cmake/) that handles any project using cmake. If you just have one or two glue files written in C, you use the gcc crate (http://alexcrichton.com/gcc-rs/gcc/index.html), which—despite the name—can also handle other C compilers on other platforms. And if you need to generate code for a perfect hash function, you use phf (https://docs.rs/phf_codegen/0.7.20/phf_codegen/). And so on. If you run into some other kind of common pattern across many projects, just write and publish another crate, and call it from build.rs.

These libraries typically do a lot of work to handle things like cross-platform compatibility.

If you already assume the programmer (1) knows Rust, and (2) might be running on either Linux, MacOS or Windows, this sort of interface is much more convenient than requiring them to get Makefiles and shell scripts working, and to handle per-platform compiler invocation issues, etc.
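As a sketch, the manifest side of that pattern is just a build script entry plus a build-dependency (the crate name and versions here are illustrative):

```toml
[package]
name = "myservice"   # hypothetical crate
version = "0.1.0"
build = "build.rs"   # compiled and run before the crate itself

[build-dependencies]
gcc = "0.3"          # used from build.rs to compile the C glue files
```

Because the build logic is an ordinary Rust dependency, it gets version-resolved and cached by Cargo like everything else, instead of living in per-project shell scripts.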


RLS plugs into the compiler so I assume that it can pick up build output files that are to be compiled in.


AFAIK (unless the Rust one is very different to the .net one) it maintains its own state by looking at the source, not at the compiled output. Adding a class in a new file should make it visible in intellisense before the code is recompiled.


The server-side of the Rust one will be completely different from the .net one. It just shares a protocol.


> types on hover - get the type of a symbol

Does it get me the type of closure arguments in the middle of chained method calls? That's what I'm currently missing from other tools.


It should, yes; if it doesn't, then it's a bug that I should fix...


That's what I'm looking forward to! I didn't realise how much I relied on that kind of information being easily accessible when using implicit typing until I started writing Rust and didn't have it like I do for C# at work.

But this, this is magnificent progress. Looking forward to trying it out!


Seconded. This would have saved me literal hours already. I can never, ever remember the type signature of the procedural parameter to Rust's higher-order functions.
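A concrete (made-up) example of the situation being described: in a chain like the one below, hovering on the closure parameter should report its inferred type, which is exactly the information that's hard to recover by eye.

```rust
fn main() {
    // Hovering on `n` should show its inferred type: &i32
    // (filter borrows each item from the iterator).
    let total: i32 = (1..6)
        .filter(|n| n % 2 == 1) // keep the odd numbers: 1, 3, 5
        .sum();
    assert_eq!(total, 1 + 3 + 5);
}
```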


With the general direction languages are taking towards openly accessible compiler and language services, I wonder if RMS will ever reconsider the service-hostile approach he has forced onto GCC.

When every language and every editor supports this protocol, who's going to want to use GCC for anything, when it's as closed (and thus about as useful) as a brick?


I'm not sure how the GPL affects anything here. This is a local service, not an internet service. And the GPL doesn't cover the output of GCC at all. This also doesn't actually produce the end binaries.


It's not about the GPL. Historically, RMS has opposed any effort to make intermediate outputs of GCC available (e.g. the AST, type information, or even a stable plugin API), on the basis that they would be used to build closed-source products that use GCC as a service, rather than contributing to GCC. It might have been prompted by earlier efforts by Apple to build an Objective-C compiler on top of GCC that wasn't GPL-licensed, which obviously ran afoul of the GPL. Ironically, that's why Apple invested in clang, and why today we have the rich clang/llvm ecosystem.


Is there anything in particular it's missing? Some quick searching shows that GCC produces a number of intermediate outputs.


As I understand it, it's not as bad as it once was, due to the existence of LLVM & Clang.


How does this compare to YCMD[0]?

[0]: https://github.com/Valloric/ycmd


RLS plugs into the compiler, so it can fetch more info than YCM, which mostly uses racer (which sort of emulates the compiler but isn't perfect).

Moving forward I'd say RLS will end up being better. As much as I like YCM/racer, it will be very hard to make the type info complete without reimplementing a full typechecker.


That uses "racer" for Rust, I think. It looks like the Rust Language Server also uses racer + some other stuff.


The other stuff is rustc's metadata.



