> Tooling is so important to the developer experience for any language, and a language server is one of the most important kinds of tooling there is (because the user interacts with it on virtually every keystroke in their editor).
Here’s how languages backed by large corporations win. They have the resources and the perspective to know why these non-fun features are important.
The irony is that the popularization of the Language Server Protocol made it easier for small languages to get good tooling. Before, you had to either write your own IDE, or pick an existing one (each with its own upsides and downsides, so you'd better pick wisely) and write an extension for it, often learning a completely new language and framework to do so. Now you can write a straightforward out-of-process server in the language itself (likely reusing the parser and other machinery you already have) against a standard protocol, and have several popular IDEs and editors light up immediately.
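To make "a straightforward out-of-proc server against a standard protocol" concrete, here is a minimal sketch of the LSP wire format: JSON-RPC 2.0 messages framed with a `Content-Length` header, the same framing every client and server in the protocol speaks. This is an illustration of the framing only, not a full server, and the `initialize` payload shown is pared down to a few fields.

```python
import json

def encode(msg: dict) -> bytes:
    """Frame a JSON-RPC message with the LSP Content-Length header."""
    body = json.dumps(msg).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def decode(data: bytes) -> dict:
    """Parse one framed message back into a dict."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])
    return json.loads(body[:length])

# The first request any client sends is "initialize"; the server answers
# with its capabilities, and the editor enables the features it advertises.
request = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {"processId": None, "rootUri": None, "capabilities": {}},
}
wire = encode(request)
assert decode(wire)["method"] == "initialize"
```

Because the transport is just framed JSON over stdio (or a socket), a language can expose its existing parser and type checker through one such server and get hover, completion, and diagnostics in every LSP-aware editor at once.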
To some extent; but there's solid reasoning behind this. By coming up with LSP, Microsoft achieved two important goals.
First, and most importantly, its languages (esp. TypeScript) can be used by people who already have an established workflow around an existing editor or IDE. It's much easier to sell a new language that plays well with an existing ecosystem than it is to sell a whole new ecosystem (and Microsoft knows that full well from experience with .NET!). And higher adoption of TypeScript across the industry means that more people eventually come through it into the larger Microsoft ecosystem as well, even if it's a small percentage of the larger crowd.
At the same time, it's also a way to recruit the open source community to extend Microsoft IDEs. If you're making a new PL or templating framework or something else that needs editing support, and your audience is primarily people on Vim and Sublime, then LSP is an obvious choice to minimize code duplication. But once you have it, you might as well package it for the VS Code extension gallery as well (and if you don't, someone else probably will).
So I think it's a win-win for everyone, except for companies that sell integrated all-in-one tooling solutions where the main benefit to the user is derived from only one part of it (because users can now mix and match as they see fit, and bundling becomes harder).
You can generalize this by saying that as production values go up and user expectations rise, it takes more work and a larger organization with more people to succeed.
This is true of programming languages, but just as true of movies, games, mobile apps, and so on. It's the result of competition in a creative endeavor where it's possible to scale by adding more people.
One way to avoid this is to come up with a different "indie" aesthetic where traditional production values aren't so important. Another way is to figure out how to build on the work of others instead of starting from scratch.
I don't see how it stands to reason that this is a function of large corporations. Rust has been further along this path given the two languages' relative ages, and Mozilla is nowhere near the size or scope of Google. Likewise with a number of other languages that have official or unofficial language servers. Same story with Cargo out of the gate, vs. Go modules after nearly 5 years of pain without them, etc.
Edit: I sound more negative than I meant. I am excited about this. It was cool that Sourcegraph and the VS Code extension pioneered here, but it's exciting to have official support and a place for everyone to rally for improvements. I look forward to seeing this used in things like Theia, VS Code, etc.
The reason Go took so long with modules is another function of large corporations. Google doesn’t use modules internally (they have the monorepo) so having a module system wasn’t important.
Rust may have been further along on this path, but in the grand scheme of things Go 1.1 feels far more “complete” than Rust 2015 - and a large part of that is Go’s huge standard library that Google could fund to develop.
There's a thread on rust internals where planning is taking place for a long-term strategy for RLS. It looks like end of 2019 might be a realistic time frame for a compiler-backed RLS. The good news is that it looks like this work might have some significant benefits for incremental compile times too :)
I agree Cargo is good, but there are some issues that bug me (e.g. no `cargo build --dependencies-only` for caching Docker layers, Xargo is not merged in, and is there support for custom registries yet?).
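For context on the Docker-layer complaint: since Cargo has no dependencies-only build, the usual workaround is to copy the manifests first and build against a dummy `main.rs`, so the dependency layer caches independently of your sources. A hedged sketch (base image, paths, and the `--release` flag are assumptions about a typical binary crate):

```dockerfile
FROM rust:latest
WORKDIR /app
# Copy only the manifests so this layer is cached until dependencies change.
COPY Cargo.toml Cargo.lock ./
# Build with a dummy main to compile dependencies into their own layer.
RUN mkdir src && echo "fn main() {}" > src/main.rs && cargo build --release
# Now copy the real sources; only the layers below are rebuilt on edits.
COPY src ./src
RUN touch src/main.rs && cargo build --release
```

A native `--dependencies-only` flag would make the dummy-file step unnecessary; until then, this pattern keeps incremental image builds fast.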
Microsoft has always been very supportive of developers across all the technologies they put on the market (remember the "Developers!" chant from Ballmer?). They have the expertise in tools and the long-term requirements of a large company. Now that they are combining all that with a more open approach, we get Visual Studio Code, the Language Server Protocol, the GitHub acquisition, etc. Interesting times for developers!
This is ironic, in that tooling is exactly what large mainframe systems had early on. With extensive tooling can come lack of agility, which can be its own death sentence.
That said, an advantage of being backed by a large corporation is mind share. Having a corporation spend on what is effectively propaganda helps. A lot.
>With extensive tooling can come lack of agility, which can be its own death sentence.
What do you mean by this? Without tooling you can get stuck in the muck doing everything by hand. Tools enable you to do more with less, so I don't really understand what you are getting at here.
Exactly. Hell, I consider tooling to be some of the most fun (and often difficult) things I work on in my free time.
If I can make something easier for my normal dev environment I see huge gains ~40 hours a week. That's pretty damn awesome and flashy to me. It's definitely not easy, but imo it's well worth my time.