> I still don't get it. We could create a parser framework that allowed a client to get some sort of generic AST and reference graph - which would enable all the use-cases of language servers but would also allow more use-cases in the future (e.g. IntelliJ-style inspections or language-aware diffs).
The point is to move the use-cases to the server/protocol to avoid the "(m languages) x (n IDEs)" problem. The protocol is at https://github.com/Microsoft/language-server-protocol with contribution guidelines, so it should "allow more use-cases in the future".
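For the curious, a request in that protocol is plain JSON-RPC. A minimal sketch based on the spec linked above (the `textDocument/definition` method is real; the file path is made up):

```typescript
// The editor asks "where is the symbol at line 4, column 12 defined?"
// without knowing anything about the language being edited.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/definition",
  params: {
    textDocument: { uri: "file:///home/user/project/main.ts" }, // made-up path
    position: { line: 4, character: 12 }, // zero-based, per the spec
  },
};

// On the wire, each message is preceded by a Content-Length header.
const body = JSON.stringify(request);
const wireMessage = `Content-Length: ${Buffer.byteLength(body)}\r\n\r\n${body}`;
```

Any editor that can produce messages like this works with any server that can answer them, which is exactly how the m x n matrix collapses to m + n.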
> Instead, the standard requires running half a dozen processes with an API that doesn't provide anything except the exact data required for a number of handpicked use-cases.
Why half a dozen? Two seem enough (IDE + language server). It's also the minimum, given that IDEs are written in different languages/runtimes and that a lot of languages have most of their tooling written in the target language.
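To make the two-process setup concrete, here's a minimal sketch of the IDE side, assuming a hypothetical `mylang-server` binary that speaks the protocol over stdio:

```typescript
import { spawn } from "node:child_process";

// Start the (hypothetical) language server as a child process and
// talk JSON-RPC over its stdin/stdout.
const server = spawn("mylang-server", ["--stdio"]);

function send(message: object): void {
  const body = JSON.stringify(message);
  server.stdin.write(`Content-Length: ${Buffer.byteLength(body)}\r\n\r\n${body}`);
}

// The first request every client sends: initialize the session.
send({
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: { processId: process.pid, rootUri: "file:///home/user/project", capabilities: {} },
});

server.stdout.on("data", (chunk: Buffer) => {
  console.log(chunk.toString()); // responses arrive with the same framing
});
```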
> Why half a dozen? Two seem enough (IDE + language server). It's also the minimum, given that IDEs are written in different languages/runtimes and that a lot of languages have most of their tooling written in the target language.
If you only work in a single language, yes. However, it's quite common for the same project to include multiple languages, linked or nested into each other. E.g., a web project might have HTML, CSS, and (possibly nested) JS for the frontend (substitute CoffeeScript, React, Sass, etc. as needed) and PHP, Java, or yet another configuration of JS for the backend - not counting build scripts and config files. So if you switch between them reasonably often, you'd have a language server process for each of them running in the background.
Alternatively, you could have more coarse-grained servers that each handle several languages (e.g. HTML+JS+CSS, node.js+npm, Java+Maven, etc.), but that would seem to make adoption even harder.
Even if it's 10 - is that a problem? Classical Unix shell tools and build tools spawn hundreds of processes and communicate between them with pipes, and most people don't mind, because the overhead isn't noticeable. The overhead of a few long-lived language-server processes should be far lower than that.
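And those long-lived processes are cheap to manage. A sketch of how an editor might handle the multi-language case: spawn each server lazily on the first file of that language and reuse it afterwards (the server commands below are illustrative; the real ones depend on what you have installed):

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Illustrative language-id -> server-command mapping.
const serverCommands: Record<string, string[]> = {
  typescript: ["typescript-language-server", "--stdio"],
  css: ["vscode-css-language-server", "--stdio"],
  html: ["vscode-html-language-server", "--stdio"],
  php: ["intelephense", "--stdio"],
};

const running = new Map<string, ChildProcess>();

// Start a server only when the first file of that language is opened,
// then keep the long-lived process around for later requests.
function serverFor(languageId: string): ChildProcess {
  let proc = running.get(languageId);
  if (!proc) {
    const command = serverCommands[languageId];
    if (!command) throw new Error(`no server configured for ${languageId}`);
    const [cmd, ...args] = command;
    proc = spawn(cmd, args);
    running.set(languageId, proc);
  }
  return proc;
}
```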
I just wanted to second Matthias247's comment: why is this a problem? I'm sitting in front of a 4-year-old iMac. I currently have VSCode open with 3 windows containing 3 projects and code in 5 languages. The total CPU load for the aggregate 22-process group is less than 1%, and while it isn't using zero memory, it's not using more than, say, Chrome or Slack.
Multiple language servers supporting multiple languages, each able to communicate over something more than the C ABI.
Meaning if your language's compiler and tooling are self-hosted, you can write the plug-in in that language and leverage the very tooling used at build/runtime.
In other words, you can choose the right tool for the job - one that already exists and is tested, stable, and full-featured. Or you could rewrite everything in C (or something that speaks it) and run it all in the same process, meaning one bad plug-in brings the whole thing down.
And if you have to reimplement, that raises the chance that your version will behave subtly differently from the real thing.
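For what it's worth, the server side is little more than a framed read-dispatch-write loop. A bare-bones sketch (in TypeScript here, but the whole point is that you'd write it in whatever language the compiler and tooling already live in, and delegate to them):

```typescript
// Read Content-Length framed JSON-RPC from stdin, dispatch, and write
// framed responses to stdout. A real server would hand each request to
// the language's existing compiler front end instead of stubbing it.
let buffer = Buffer.alloc(0);

process.stdin.on("data", (chunk: Buffer) => {
  buffer = Buffer.concat([buffer, chunk]);
  for (;;) {
    const headerEnd = buffer.indexOf("\r\n\r\n");
    if (headerEnd < 0) return;
    const match = /Content-Length: (\d+)/.exec(buffer.subarray(0, headerEnd).toString());
    if (!match) return;
    const length = Number(match[1]);
    if (buffer.length < headerEnd + 4 + length) return; // wait for the rest
    const message = JSON.parse(buffer.subarray(headerEnd + 4, headerEnd + 4 + length).toString());
    buffer = buffer.subarray(headerEnd + 4 + length);
    handle(message);
  }
});

function handle(message: { id?: number; method: string }): void {
  if (message.method === "initialize") {
    respond(message.id!, { capabilities: { definitionProvider: true } });
  }
  // ...other methods would call into the existing tooling.
}

function respond(id: number, result: unknown): void {
  const body = JSON.stringify({ jsonrpc: "2.0", id, result });
  process.stdout.write(`Content-Length: ${Buffer.byteLength(body)}\r\n\r\n${body}`);
}
```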
Not sure if you're agreeing or disagreeing, but I had in mind (for example) a language server supporting multiple languages that run on the JVM and a separate language server supporting languages that compile to JavaScript, etc.
(This is assuming that reducing the number of processes improves performance; otherwise, a process per language might be simpler.)