I kinda wish that the people behind the Language Server Protocol understood this. The whole idea of that makes no sense to me at all. None. I mean, I get how it works, and why people think it's a good thing, but libraries are the better way to go in every situation that I can imagine. There is no need for the Language Server Protocol or language servers in general, and its existence only makes things more complicated than they need to be.
1. There are N editors in the world (vim, emacs, vscode, ...) and there are M programming languages (c, c++, java, python, ...).
2. Most programming language developers write tooling to make their language ecosystem nice. (gofmt, cargo, ...)
3. Most programming language developers like writing in their programming language.
4. Not all N editors or M languages are written in the same language. In addition, the programming languages that our editors/tools are written in cannot always interface with each other cleanly (C FFI is not supported in every language).
5. "All" programming languages that people use today have some networking stack that can be used to open TCP sockets and send data.
Given these facts, it is easy to conclude that:

If you want editor developers to focus on writing text editors, and tooling developers to focus on writing tooling, but you also want your text editors to support advanced functionality that is already implemented in that tooling, then the easiest way to share that information is over TCP.
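As a toy sketch of that lowest common denominator (not real LSP traffic; the request/response shape is invented for illustration), here is a "tooling" process answering a hover query from an "editor" over a localhost TCP socket — the only assumptions are a networking stack and a serialization format, which every mainstream language has:

```python
import json
import socket
import threading

# Toy "tooling" process: answers one hover request over localhost TCP.
def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        request = json.loads(conn.recv(4096).decode("utf-8"))
        reply = {"id": request["id"],
                 "result": f"docs for {request['params']['symbol']}"}
        conn.sendall(json.dumps(reply).encode("utf-8"))

listener = socket.create_server(("127.0.0.1", 0))  # bind any free port
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# The "editor" side: connect, ask, read the answer.
with socket.create_connection(("127.0.0.1", port)) as editor:
    editor.sendall(json.dumps({"id": 1, "params": {"symbol": "open"}})
                   .encode("utf-8"))
    response = json.loads(editor.recv(4096).decode("utf-8"))

print(response["result"])  # docs for open
```

Neither side needs to know what language the other is written in — that is the whole pitch.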
The other options are:
1. Don't have advanced language support for "All" languages.
2. Force everyone in the world to use one programming language.
3. Force all tooling in the world to be written in one programming language.
4. Force everyone to implement another cross-language communication system (ex: cffi) in "All" languages.
Unfortunately, these options are all more complex than just defining an API and sending messages back and forth.
So why can't you just write your language support library in whatever language you like, wrap that in something that supports the C ABI if it doesn't already, then call that from your editor? If you're going to use a language server, you still have to write code that speaks LSP. Why not just call a library?
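For what the library route looks like in practice: a host language calls a C-ABI symbol directly via its FFI. As a stand-in for a hypothetical language-support library, this sketch loads the symbols already linked into the current process (libc, on POSIX systems) and calls plain C `abs`:

```python
import ctypes

# Calling a C-ABI function from a high-level host language via ctypes.
# CDLL(None) returns a handle to the current process's own symbol table,
# which on POSIX includes libc -- no separate .so needed for this demo.
libc = ctypes.CDLL(None)
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42
```

A real language-support library would export richer entry points, but the calling mechanics are exactly this.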
Why does there need to be a server involved? Why do there need to be pipes or network traffic involved? LSP defines that JSON-RPC is to be used as the communications layer. Why does JSON have to be involved?
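For concreteness, this is what's on the wire: the LSP base protocol frames each JSON-RPC message with an HTTP-style `Content-Length` header followed by `\r\n\r\n` and the JSON body. A minimal encoder/decoder pair:

```python
import json

# Encode/decode one LSP base-protocol message (Content-Length framing).
def encode(msg: dict) -> bytes:
    body = json.dumps(msg).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def decode(data: bytes) -> dict:
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])
    return json.loads(body[:length].decode("utf-8"))

wire = encode({"jsonrpc": "2.0", "id": 1,
               "method": "textDocument/hover",
               "params": {"position": {"line": 3, "character": 7}}})
print(decode(wire)["method"])  # textDocument/hover
```

This is the layer the comment is objecting to: every client and server pair re-implements this framing plus JSON on both ends.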
A library naming convention could just as easily have been written that allows all the things LSP allows. Just name the methods in your language support library the same way everyone else does, and then anyone can call your library and gain support for your language. That's the same contract language servers provide, except much cleaner and more straightforward.
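A rough sketch of that idea, with invented names (this is a hypothetical convention, not a real standard): every language-support library agrees to export the same entry points, and the editor codes only against those names, never against any particular library:

```python
# Hypothetical naming convention: every conforming library exports
# hover() and completions() with these signatures. Names are invented
# here for illustration.
class GoSupport:
    def hover(self, path: str, line: int, col: int) -> str:
        return f"hover info for {path}:{line}:{col}"

    def completions(self, path: str, line: int, col: int) -> list:
        return ["Println", "Printf"]

def editor_show_hover(support, path, line, col):
    # The editor depends only on the agreed method names.
    return support.hover(path, line, col)

print(editor_show_hover(GoSupport(), "main.go", 10, 4))
```

In a C-ABI world the "methods" would be exported symbols (`hover`, `completions`, ...) resolved at load time, but the contract is the same shape.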
What people are doing is writing their language support library in whatever language they like, then wrapping that in a JSON-RPC wrapper with the proper LSP machinery on top. Now the editors have to implement an LSP client even though they already had the ability to call libraries exposed via the C ABI.

Sure, editors only need one piece of code to call any LSP server, but they wouldn't have needed any more code to call a library either.

And if those editors don't implement the LSP client code themselves and rely on a library for it, they still end up calling a library — the very thing the LSP was supposed to let them avoid.
Nothing about LSPs enables anything that wasn't possible before. Nothing about LSPs makes anything that was possible before any simpler. It all just adds crap to the chain, and everyone is calling it a "win." It's not a win. It's a loss. It's adding complexity and layers where they don't need to be, for no discernible benefit. Language support people still have to write language support code. Editor people still have to write editor support code. Except now they do it with new protocols they can add to their resume. This is resume-oriented development, that's all.
Language support is tricky. Libraries and services implementing it will have bugs, memory leaks, etc., ultimately leading to crashes. I'd prefer that my text editor not crash and erase my unsaved changes. So language support should probably be isolated in a process separate from the editor.
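The isolation argument in miniature (a sketch, assuming a POSIX-ish environment with `sys.executable` available): the flaky language-support code runs in a child process, crashes, and the "editor" process merely observes the non-zero exit status and carries on:

```python
import subprocess
import sys

# The "language support" child process crashes with an unhandled error...
crashy = subprocess.run(
    [sys.executable, "-c", "raise RuntimeError('language support bug')"],
    capture_output=True,
)

# ...but the "editor" (this process) survives and can restart it, warn
# the user, etc. No unsaved buffers were harmed.
print(crashy.returncode != 0)   # True: the helper died
print("editor still running")
```

An in-process `.so` with the same bug would have taken the editor (and its unsaved buffers) down with it.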
Of course, an LSP client will also have bugs. When I tried LSP support in the Kate editor last year, it crashed together with the language server. Emacs at some point (possibly also after a language server crash) locked up its UI. However, the client code has a limited scope; the editor developers can and will fix bugs there, in contrast with dozens of exotic language support libraries in dozens of different languages.
Probably we could think of better editor designs (isolate the core and let everything else crash, like in the Xi editor, or something Acme-like with most tooling being external), but we already have a lot of editors. It is also definitely possible to design a better IPC protocol than JSON-RPC. But weighed against the benefit of isolating plugins (for me, this is one of the primary advantages of VSCode over any other editor with plugins I've used: it almost never crashes, and when it does, it preserves its state), and considering the resource intensity of language support itself, I think the JSON-RPC overhead is not as large as it looks.
Finally, I'd like to agree with the sibling comment: unfortunately, a C FFI interface to a library is probably no easier to implement nowadays than a JSON-RPC interface to a service in most languages, excluding C/C++, assembly, and so Forth :)
> So why can't you just write your language support library in whatever language you like, wrap that in something that supports the C ABI if it doesn't already, then call that from your editor?
You can! This is called "defining an API" and that is basically what LSP is. The downside of using the C ABI, as I said, is that not all programming languages speak the C ABI. To work around this, you have suggested writing, in each language you want to support, a wrapper that transforms to/from an API following C ABI calling conventions. To see how fun an endeavor this is, look at projects like libgit2 [0], which spend a lot of time maintaining bindings for each language. While these bindings absolutely work, they are:
1. Difficult to maintain (look at some issues [1, 2, 3])
If you instead separate things into services, the LSP team could maintain a whole test suite that your service could be run against. You would provide an LSP server plus a corpus exercising certain features, and the test framework could run a set of operations against that system.
If you use a DSL to describe the protocol, you can automatically generate the server/client libraries used to talk to or implement an LSP server (and other services!). I'm a huge fan of gRPC for this reason: write a single DSL and now everyone can talk to/implement your service.
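A toy version of that idea (invented mini-IDL for illustration; real gRPC uses `.proto` files and a compiler): describe the service once as data, and generate a client whose methods all serialize calls in the same uniform way:

```python
import json

# Invented mini-IDL describing a service in data form.
IDL = {"service": "LanguageSupport", "methods": ["hover", "completions"]}

def make_client(idl, transport):
    """Generate a client class with one method per IDL entry.

    Each generated method serializes {method, params} and hands it to
    the supplied transport callable (in real life: a socket or pipe).
    """
    class Client:
        pass
    for name in idl["methods"]:
        def method(self, _name=name, **params):
            return transport(json.dumps({"method": _name, "params": params}))
        setattr(Client, name, method)
    return Client()

# A fake transport that just parses the request back, so we can see
# exactly what a generated method would put on the wire.
client = make_client(IDL, transport=lambda raw: json.loads(raw))
print(client.hover(line=3)["method"])  # hover
```

The point is that neither side hand-writes serialization code per method; the DSL is the single source of truth.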
You can define shared debugging tools. For example, eBPF can be used to debug any networked application on Linux regardless of what it's implemented in. Similar tools can now be developed at the application-protocol level for all LSPs to make development easier without tying the infrastructure to a single language or ABI.
The crux of the issue is: service boundaries solve the exact same thing that the C ABI/FFI solves, with the following benefits:
1. No dependency on any language-specific implementation of a protocol or API.
2. TCP is supported everywhere and you can get a bunch of free monitoring features from using it. It's also pretty darn fast now especially to localhost.
3. Easy to plug into a TCP server regardless of your runtime environment. Do you need to host your source code on a Linux system while your dev environment runs Windows and Visual Studio? No problem!
> Language support people still have to write language support code. Editor people still have to write editor support code.
Correct! Except it's a question of which code gets duplicated. Could LSPs have been implemented as .so and .dll files following C ABI calling conventions, passing HSOURCE_FILE* back and forth in-process? Yes! Would it have been easy to implement that for all languages in a safe and secure way that can run in an adversarial environment, while allowing different people to manage and debug different implementations and share standardized tooling? No, not easily.
This still doesn't seem superior to library calls from a complexity point of view.
On one hand you have a library to secure. On the other hand you have the same library (or at least the same logic and methods) now with a JSON-RPC server wrapping it, and both need to be secured.
I would feel a lot better if it weren't JSON, I guess. Binary protocols are just SO MUCH FASTER and require so much less memory. Parsing JSON is fast, sure. Reading and writing binary is probably 3 orders of magnitude faster, and it's easier to read and write binary, in my experience. (I also don't understand why protobuf exists.)
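As a rough size comparison (this illustrates the footprint difference only, not any particular speed claim): the same position record encoded as JSON text versus a fixed binary layout of two 32-bit integers:

```python
import json
import struct

# One cursor position (line, character) as JSON vs. packed binary.
as_json = json.dumps({"line": 1200, "character": 37}).encode("utf-8")
as_binary = struct.pack("<II", 1200, 37)  # two little-endian uint32s

print(len(as_json), len(as_binary))  # JSON is several times larger

line, character = struct.unpack("<II", as_binary)
print(line, character)  # 1200 37
```

The binary form also skips the tokenize/parse step entirely on read, which is where most of the speed difference comes from.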