
In my experience, it's the worst way to do remote work. There are so many better solutions.

If TRAMP is too slow, just mount the remote filesystem locally using FUSE somehow. Use SSH to run processes on the remote system, such as compiling and running the program. No need to run the text editor on the remote system.
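
A minimal sketch of that workflow, assuming sshfs is installed; "devbox", the paths, and the build command are placeholders:

  # Mount the remote project tree locally over SSH (FUSE).
  mkdir -p ~/mnt/devbox
  sshfs devbox:/home/me/project ~/mnt/devbox -o reconnect,ServerAliveInterval=15

  # Edit locally in ~/mnt/devbox, then build and run on the remote machine.
  ssh devbox 'cd ~/project && make && ./myprog'

  # Unmount when done (Linux; on macOS use umount).
  fusermount -u ~/mnt/devbox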

You can also do it the other way around: have your remote system load your local data. I developed a small bare metal OS this way. Ran the cross compiler locally, had the output go to some NFS mount which was also available via TFTP. Booted the target system with PXE.
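
A rough sketch of that kind of setup (details are placeholders, not my exact config; assumes the target hangs off a dedicated interface where dnsmasq can be the DHCP server):

  # Export the build output directory over NFS.
  echo '/srv/build 192.168.1.0/24(ro,no_subtree_check)' | sudo tee -a /etc/exports
  sudo exportfs -ra

  # Answer DHCP/PXE on the interface the target is plugged into and
  # serve the boot files from the same directory over TFTP.
  sudo dnsmasq --interface=eth1 --dhcp-range=192.168.1.100,192.168.1.150 \
      --dhcp-boot=pxelinux.0 --enable-tftp --tftp-root=/srv/build

  # Build locally with the cross compiler and drop the image into the
  # exported directory; the target PXE-boots it on the next power cycle.
  make && cp kernel.bin /srv/build/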

Running a text editor on a remote system is good for one off things and maybe as a last resort, but that's it.




> just mount the remote filesystem locally using FUSE somehow

This is the step that never works consistently for me. There is always some unpredictable extra latency that makes this workflow painful. I work with some extremely large data files, so random access to them is the primary issue.

In general, the idea is that it is often better to do compute where the data already is. My experience is that you should also do the programming closer to where the data is as well. This tends to make an iterative development loop tighter.

But this is highly dependent upon what you’re doing.


That's a different thing, though. You don't edit the data in a text editor interactively, do you? I would do any interactive editing with a local editor and then fire off remote processes to operate on the data.

It's funny because my reasons against using a text editor remotely are exactly the same: to make the development loop tighter. I am very upset by latency and always try to remove it where possible. I think this is the kind of thing where we'd need to look over each other's shoulders to understand our respective workflows.


> You don't edit the data in a text editor interactively, do you?

That’s exactly what I’m doing. The code is written on the remote server. VSCode’s remote setup is actually very good at this, mainly because it is really a web editor hosted remotely, and you use a local browser (Electron) to interact with it. The processing loop then happens entirely remotely.

But really, I’m talking more about data analysis, exploration, or visualization work. This is when I need good (random) access to hundreds of GB of data (genomics data, not ML). For these programs, having the full dataset present during development is very important.

If I’m working on more traditional programming projects, I can work locally and then sync, but recently I’ve been using more Docker-based devcontainers. These are great for setting up projects to run anywhere, and even in this case the Docker containers could be hosted remotely or locally (or, more accurately, in a VM).
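
One way to point the same setup at another machine is a remote Docker context; a rough sketch (host name and image are placeholders, and note that bind mounts then resolve on the remote host, so the source tree has to live or be synced there):

  # Create a Docker context that talks to a remote daemon over SSH;
  # subsequent docker commands run against that host.
  docker context create devbox --docker "host=ssh://me@devbox"
  docker context use devbox

  # Same containerized dev loop, now executing on the remote machine.
  docker run --rm -it -v /home/me/project:/work -w /work \
      mcr.microsoft.com/devcontainers/base:ubuntu bash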


Yeah I used to work with genomics data and never did I think I needed to have part of my text editor running on the high performance cluster.

I think people are just talking about different things and confusing each other. The original comment I replied to was arguing against SSHing in (or VNC or something) and running the text editor there. VSCode isn't doing that; it runs the interactive part locally. It's hard for me to understand why it needs a server part, though. If you want to edit something locally, the file has to come across the network either way; there's no way around it. It seems like six of one and half a dozen of the other.


VSCode remote has almost no visible latency, period.


Because it's running the editor locally...


Is there an efficient way to do "Find in files" from a vim or vscode instance running locally and editing+compiling remote files via ssh? Preferably something that runs instantly for 1 GiB repos?


Haven't tried it on exceptionally large repos, but since the actual find logic in VSCode runs on the server, it should work fine. If I remember correctly, even on vscode.dev (in the browser, with no server), your browser downloads a search index and then search and navigation are fast, though it may struggle with very large repos.


I’m not sure what you mean by vscode running locally with editing via ssh. I’m fairly certain that when you do a remote connection in vscode, it literally runs the vscode program remotely and you are just connecting to a tunneled web interface. The only thing running locally is the Electron browser shell. So remote “find in files” runs remotely and should be as efficient as it would be from that side.

That said, you can also open a terminal in vscode and use grep. If you’re running remotely, the terminal is also remote. That’s what I normally do.
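
A terminal version of that, with the host and paths as placeholders:

  # Run the search where the repo lives; only matching lines cross the network.
  ssh devbox "cd ~/repo && rg -n 'pattern' src/"

  # Plain grep fallback if ripgrep isn't installed on the remote side.
  ssh devbox "grep -rn 'pattern' ~/repo/src"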


VS Code uses ripgrep under the hood (locally and remote).


Have you actually used vscode remote? If not, you should. If you have, all I can say is that I’ve personally used all the solutions you are mentioning, and for me vscode remote is the top, bar none, even for very large repos.



