SSH alternatives for mobile, low-latency or unreliable connections (console.dev)
132 points by nextaccountic on Nov 2, 2021 | 101 comments



Is there any answer other than "mosh, duh"?

Keithw (who's now a prof at Stanford) wrote mosh for that use case (high loss, high latency), if memory serves correctly. (Which it might not! Seeing as an entire decade has apparently passed since "what feels like last month".)

Also, there's some terminal cleverness, which I admit, to great embarrassment, I still haven't bothered to deeply understand (my "sensai", who offered to help me grok it if I ever got stuck, left this world too soon).

Mosh's ability to preserve interactive responsiveness, even through the most high-latency, low-bandwidth, high-packet-loss environments, is unparalleled! You know, for all those times when you're on a cross country flight, your only network access is an IP-over-DNS tunnel, and you simply MUST ~zephyr~ chat with your friends. Uh, I mean, do mission critical work... ;)


I agree.

I'm not sure what this author is talking about with tmux + mosh and adding the latency back in. Maybe because I run the tmux on the remote server and I mosh in. I don't miss native windowing or mouse controls at all, and still have scrollback. I modified the keybindings so it is easier for me (as a vim user) to remember -- including window splits, layout changes, and adjusting pane sizes.

I don't just use mosh on high-latency, low-bandwidth links. Mosh will reconnect even after I close the laptop (suspending it) or move from a wifi network to cell and back; it's all fairly seamless. That makes it practical as my primary dev and ops environment on a remote server. I have had this setup for about 5 years now and use it every day.
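For anyone who wants to try the same setup, a sketch of the invocation (host and session name are placeholders):

    mosh user@devbox -- tmux new-session -A -s main

mosh runs everything after "--" as the remote command, and tmux's -A flag attaches to the named session if it already exists instead of creating a duplicate.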

I used to use something like autossh, and it did not work as well.


I was excited to learn about `mosh` through your comment because I'm dealing with X11 connections dropping out when IP addresses change.

Turns out that it has been a github issue since 7 March 2012 with no progress or assignees: https://github.com/mobile-shell/mosh/issues/41

Anyone have a robust way of forwarding X11 from a Docker container on a remote host via ssh, even when an IP address changes or a connection drops?

(Right now I'm typing this in a Firefox container running on Docker on a minimal virtual machine, ssh'd from my workstation, so I can use any version of anything without having to `trash` my workstation.)


Probably just a typo but you likely meant "sensei" which is mentor/teacher/senior where "sensai" is detergent.


Thanks :) I'm embarrassed, I can't edit the comment anymore.

Making it more amusing, I know I suck at spelling so I even googled "sensai" first, before posting, quickly saw that it didn't have the "did you mean...?" prompt, and assumed it was the right spelling. Oops. :)


Probably just a typo but you likely meant "senzai" which is soap/detergent where "sensai" is "delicate" or "sensitive".


I remember picking up Mosh when it was shown on HN as an MIT student project. Apparently that was back in 2012 [1]. Time flies.

I've used it on and off over the years, mostly when having to work on a fast train and a flaky 4G connection.

[1]: https://news.ycombinator.com/item?id=3819382


SSH is problematic for several use cases.

It is easy to future-proof the crypto for anything other than a quantum computer:

    Ciphers chacha20-poly1305@openssh.com
    KexAlgorithms curve25519-sha256@libssh.org
That is also the fastest cipher for any hardware lacking AES acceleration, but clients configured this way will never talk to older servers (or vice versa).
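If you want to check what a given build supports before pinning algorithms like this, OpenSSH can list them (these query flags exist in any reasonably recent OpenSSH):

    ssh -Q cipher    # ciphers this client supports
    ssh -Q kex       # key exchange algorithms this client supports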

File transfer is a real problem for SSH. The scp command is being slowly excised because of its security problems.

https://lwn.net/Articles/835962/

The replacement sftp protocol simply does not perform well, and the performance problems are explicitly not a development priority.

https://daniel.haxx.se/blog/2010/12/08/making-sftp-transfers...

Redesigns are on the table:

https://www.psc.edu/hpn-ssh-home

At this point, if someone baked the chacha20 cipher into vsftpd (replacing TLS), I would take it for internal use.

Wireguard is also an option, allowing us to simply use telnet and cleartext ftp (promiscuous mode on localhost would be a potential path to abuse).

In any case, people where I work have standardized on scp and sftp for communication between many disparate architectures, and few realized the problems this would bring.


rsync over ssh works perfectly (i.e. with the rsync option --rsh="ssh").

Not only does it work much faster than scp or sftp, it also does not lose any file metadata.

I have not verified if newer ssh versions have corrected this, but a few years ago scp and sftp failed to copy extended attributes and on some file systems they truncated timestamps.
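For reference, a minimal sketch of the invocation (paths and host are placeholders; -X and -A are needed because -a alone does not copy extended attributes or ACLs, and support for them varies by platform):

    rsync -aXA -e ssh /src/dir/ user@host:/dest/dir/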


I'm probably living in my own little bubble, but I didn't know anyone used sftp for ANYTHING, at least not in the last decade.

Looking at my terminal history: scp for a single file, rsync (which you don't need to pass that --rsh flag to.. during the 21st century ;P) for anything that requires the ability to be interrupted and resume gracefully (which is most cases), and, in the rare case when I need speed above all, the "tar lzop ssh bash pipe" trick (I don't know if it has a canonical name) is the answer.
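For anyone who hasn't seen it, one common spelling of that trick (host and paths are placeholders; zstd shown, lzop works the same way):

    tar cf - ./project | zstd -c | ssh user@host 'zstd -dc | tar -C /dest -xf -'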

Isn't this what we all do?!? If there's a better way, I don't see how, but would like to know.


I use sftp every now and then, when I need to pull something and I don't know exactly where it is. A shortcut for finding the path via ssh and then pulling the file via scp, if you will.


That makes sense!

In that rare situation, I typically use tab completion to find the remote file. However, I bet that would SUCK on a high latency or low bandwidth link, where sftp would be just fine.


FWIW sftp vs scp is more about the protocol. Apparently new OpenSSH has an “scp” command that uses the sftp protocol.
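For the curious, recent OpenSSH exposes the choice as flags (availability depends on version: -s appeared around 8.7, and 9.0 made the sftp protocol the default):

    scp -s file user@host:/path/   # use the sftp protocol
    scp -O file user@host:/path/   # force the legacy scp protocol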

But yeah Rsync is a good choice.


Note that sftp != ftps; sftp runs over SSH, like scp. Rsync is better if you can get it, but I'm pretty sure that whenever I transfer stuff using WinSCP and File Explorer (Android) it's using sftp.


sftp is still used extensively in banking, payments, and transferring financial data, or datasets like student information from school districts or information on lawyers from bar associations.

Back in 1999, in one of my first professional gigs, people were still using ftp for sensitive data for automated data transfers.

Rsync+ssh would be great, but enterprise IT is slow to change.

AWS has hosted sftp backed by S3, and it's going to stick around for a while.


Sadly, in 2021 ftp is still used extensively in financial services for stuff like clearing stock trades. I wish more institutions would use sftp/scp for this and stop relying on allowlisting specific IP addresses for security.


> I didn't know anyone used sftp for ANYTHING, at least not in the last decade.

We're moving customers from FTP to SFTP. Most recent one was a well-known Fortune 500 company, which uses it to exchange some ASCII flat files with our software.

Mostly it's for XML files or PDFs.

It's simple, it works and it's easy for support to poke and prod.


You raise a good point, and I was incorrect. In my original comment I was thinking of any of us, not muggles.

Come to think of it, I once even set it up at a job in 2009. We needed to digest huge amounts of flat files from clients with little tech savvy. I put FTP on a server, put that behind OpenVPN, then sent the customers a pre-configured OpenVPN client installer. Never broke.

Tangentially, all this mention of FTP being used by banks and financial institutions is inspiring some choice nmap scans to run... ;)


Like someone else said, sftp is huge for B2B and is probably part of how you get paid. *ftp is blunt but nice when dealing with other parties since it interoperates well, everyone has it, and (most of the time) it just works whether you are talking to a Linux box, IBM MQ, or something else altogether.


> sftp for ANYTHING

Solid Explorer (Android app) connects via sftp to Linux machines just fine. While I use Termux and rsync to do backups and bulk transfers, Solid Explorer allows me to rapidly browse remote files.


> rsync over ssh works perfectly

I use rsync+ssh a lot, and it doesn't exactly work perfectly on long-distance 10-100 gigabit networks: ssh has a buffering problem at high bandwidths which the maintainers won't accept patches to fix.


That is a limitation of ssh.

Using scp or sftp would limit the transfer speed to a value several times lower than rsync in any case.

On short distance 10 Gb/s Ethernet I did not see problems, but it does not surprise me that on long-distance links they appear.

In such cases, if full speed is desired, it is likely that ssh must be replaced with either IPsec or a tunnel over TLS.


scp has exactly the same limitations as ssh, it's the same code.

I don't know anyone in the HPC world who uses "IPsec or a tunnel over TLS".


Tried rclone yet? Does it all, parallel and fast.


What back-end do you use? Also, on top of that back-end, do you use type = crypt, directory_name_encryption = true and so on? I tried various options and found rclone was slow for me.


> ssh has a buffering problem at high bandwidths

This is interesting. Could this be the reason why playing 720p video in Firefox run over ssh with X11 forwarding is fine bandwidth-wise (easily 1 Gb/s) but causes responsiveness to mouse/keyboard actions to drop into abysmal 5-30s territory?

Any references to the ssh buffering problem out there?


tar over ssh also does well with file metadata.


I understand and sympathize with rsync advocacy, but there are two major problems here.

The first is the license. Because rsync is under the GPL, it will never be bundled in the OpenSSH.com distribution. It is even missing from OpenBSD base (just confirmed, and loaded it with pkg_add - there are two versions, one with iconv).

So let's take Microsoft's implementation, which will forever lack rsync (as they haven't touched it in well over a year).

The file transfer agent needs to be under the same license.


rsync is a different program from ssh, installed as a different package, so it does not matter what license it has. In most cases, ssh will run a bash at the remote end, which also has the GPL license, so it should be obvious that the licenses of the programs launched by ssh are irrelevant.

When you run rsync over ssh, the local rsync starts the local ssh client, which connects to the ssh server, which runs the remote rsync, while the data is piped through the ssh connection.

rsync is available for most operating systems, but it is not usually installed by default. OpenBSD might be the only exception without an rsync package, but it has openrsync. I assume that it uses the same protocol, so rsync to/from OpenBSD should work as well as to/from Linux, Windows, macOS or any *BSD.

If you have the right to transfer files to a computer, then normally it is very simple to install rsync, even just for your own user if you do not have admin rights.

In most cases, you can initially just copy the rsync executable file to the target computer using scp, and then do any other file transfers with rsync over ssh.
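Concretely, a sketch of that bootstrap (paths are placeholders; --rsync-path tells the local rsync where to find the remote binary):

    scp /usr/bin/rsync user@host:bin/rsync
    rsync -av --rsync-path=bin/rsync -e ssh src/ user@host:dest/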

Since abandoning both scp and sftp many years ago, I have had no problems with the performance or data integrity of file transfers over ssh.


Android will never, ever install rsync into base.

Nor will OpenBSD.

The rationale for this decision is not technical, it is ideological.

A technical solution to this problem is insufficient.

To be honest, I was unaware of this:

https://www.openrsync.org/

For maximum utility, it should be bundled with OpenSSH.


I have already written that OpenBSD has openrsync, so there are no problems with OpenBSD.

Because openrsync has a BSD license, it can be used on any other system that completely avoids GPL programs.


> if someone baked the chacha20 cipher into vsftpd (replacing TLS), I would take it for internal use.

Um. TLS can do chacha20 just fine if that's what you want. In particular in TLS 1.3 you can insist on only doing TLS_CHACHA20_POLY1305_SHA256 if for some reason you really don't like AES or maybe block ciphers in general.
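For example, with OpenSSL 1.1.1 or later you can check that a server will negotiate it (hostname is a placeholder):

    openssl s_client -tls1_3 -ciphersuites TLS_CHACHA20_POLY1305_SHA256 -connect host:443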


The overriding goal was simplicity for an embedded solution.

Yes, throwing up an stunnel in front of port 23 enables telnet-ssl, but this is a bit heavy for a small machine.

I actually have an emulated VAX running VMS 7.3, and the C compiler on this machine does not support 64-bit "long long" so AES-CTR is the best that it can do (and chacha20 won't compile here).

In this case, a version of chacha20 for K&R C would be very handy. As you can imagine, sftp for this machine has been a unique challenge.


sftp is also not that great. To transfer a folder with children, you have to create an empty folder of the same name on the server; only then can you transfer the client folder, but it doesn't go into the server folder of the same name. Perhaps stack overflow was just entirely incorrect, but that's how I've had to do it with limited systems in the past.


You can also use lftp to transfer a directory tree using the sftp protocol:

  $ lftp sftp://hostname
  lftp ~> cd somedir
  lftp ~> mirror .
  lftp ~> quit
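If memory serves, mirror -R reverses the direction, for the upload case the parent found painful:

  lftp ~> mirror -R localdir remotedir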


$ tar cf - foo bar | ssh user@host tar -C baz -xvf -


high five

It truly is the best, no? My pipelines look a bit different than yours (they usually have `lzop`/`zstd`, `cd`, and occasionally `pv`), but tar truly is the best file copy utility.

I rarely use `cp -a` even when copying locally. `tar` is so much faster.


Then why isn't cp -a an alias for tar?


Because that's not how you use it. :) But you raise a decent point. I'll write a short script you could alias to cp in most cases, put it somewhere, and edit this comment in a day with a link.


That can bite you if the remote tar allows absolute paths.


I use this locally as well, simply because I know and prefer tar's behavior to that of cp. For example tar cf - . | (cd /some/where/else; tar xvf -). It's possible to do this using cpio as well but it always seemed like cpio needed so many more switches to do the same thing.


rsync has almost the same syntax as scp, and is really what scp should have been. Use it instead.


The two alternatives in the article were previously discussed in HN as well: mosh (https://news.ycombinator.com/item?id=28150287) and eternal terminal (https://news.ycombinator.com/item?id=21640200)


I think whoever wrote their title messed up -- "low latency" sticks out as being the one non-problem in the title (and the article deals with high latency).


This jumped out at me too!

It's the author's mistake, not whoever posted it here.

Which makes me question how much consideration should be given to one who doesn't know what latency is. :) Assuming the best, it's a typo, we're all human. On the other hand, it's been more than a month (article was updated 2021-09-22) and he still hasn't corrected it...


Yeah -- in the article, the author seems to pretty consistently get it right. I almost wonder if it is something where the title is written by somebody else. For example, IIRC this sort of thing has led to a couple of head-scratcher titles at Ars Technica, a place with pretty good writing in general...


Speaking of SSH, I thought of vim/neovim. I frequently use neovim over SSH to my company's computer for development, but the experience is horrible if the connection is unstable. If I remember correctly, other editors, such as VSCode with its SSH plugin, use a local buffer for editing, which is much better when the network is slightly unstable or high latency. Wondering if there is a way to do this for vim/neovim...


For the same snappy "local buffer" experience with ssh/vi, you can mount remote repo using sshfs, then open with vi from that local mount:

  sshfs user@computer.company.com:/home/repo ~/repo
  vi ~/repo/file.py


'vim user@computer.company.com:/home/repo' also works afaik :-). Vim has support for this built in.
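If the plain host:path form doesn't work in your build, netrw's scp:// form should (note the double slash for an absolute path):

    vim scp://user@computer.company.com//home/repo/file.py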


Thanks, I'd heard of this before. However, I use ssh because I don't want to install development packages on my local computer, and I need the language server to work (which requires the development environment to function properly). sshfs is not an option here, as the problem is the development environment rather than the files.


tramp-mode does this on emacs. It's nice that it's just opening a file with only the `/ssh:user@host:` prefix, but a few things need tweaks (project tree, shell). BTW `/sudo:user@machine:` works too.

It still feels like there's value in making this work simply outside of emacs so more programs can make use of the cached copies. Is there a more resilient alternative to `sshfs`?


Tramp mode is great. My only pain point with it is that it operates purely in serial. When I save a buffer I have to wait for it to complete on the remote server before I can switch to another. It's almost enough for me to want to open a separate emacs instance for each remote file I'm editing. (Which in tmux might even be feasible ... hmm ...)


There's a neovim plugin in development which aims to solve this: https://github.com/chipsenkbeil/distant.nvim.


mosh + {emacs,vim} work pretty well on less-than-perfect links.

One thing that I might miss with a local buffer is good syntax checking support (I use "flycheck" in emacs) -- it requires the server's software, and so composing local editing with server-based syntax checking sounds pretty hard to get working well.


> The main downside with Mosh is that it emulates the terminal remotely and only sends the visible state of the terminal back to the client. This means if you have a lot of output from a command, you will only be able to see the result that would have been visible in the terminal. This is known as the scrollback buffer. It’s a known limitation in Mosh because it focuses on efficiency and solving issues with poor connectivity, so it makes sense to only send the visible portion of the terminal.

This is indeed one of the biggest downsides to mosh, and is a very often requested feature. But I am pleased that this article framed the lack of scrollback as an explicit design decision, and didn't simply call it a "missing feature".


One person's "bug" is another person's feature.

In my usage, and opinion of mosh, the remote terminal emulation is at least half the point. It lets you be functional with minuscule amounts of bandwidth, and makes your sessions more responsive when you have plenty of it. It's efficient (enlightened even...) to solve an engineering problem using the appropriate distribution of resources (bandwidth, cpu, memory).

A remote terminal that wasted resources sending things that would be invisible anyway strikes me as not just less useful, but really mediocre/lazy design.

(IMHO, of course!)


It's also a feature since if you accidentally send a lot of output to your screen, you don't need to wait for the buffer of incoming data to clear before your Ctrl-C will stop the data. You can Ctrl-C and stop it immediately.


This can also be solved with programs like screen and tmux. Requires some work though if you want the seamless scroll function.


Hadn't heard of Eternal Terminal before, but I use mosh quite regularly - for portability I remote into my home network via Wireguard on an iPad Pro, using mosh inside Blink shell (https://blink.sh) to connect to my machines. Over 5G and 4G it works very smoothly, but does lag a bit if I drop down to 3G speeds. I very much do not regret getting the 5G version of the iPad as it's super useful, and I prefer this setup to carrying around a hefty workstation laptop. With the magic keyboard attached I basically have a tiny but comfortable laptop, albeit quite top-heavy.

Having to use tmux isn't a hassle for me as I would be using it anyway, and it has the added benefit of being able to connect to my active sessions on another machine and pick up where I left off.
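For anyone curious, the incantation for picking up a session like that (session name is a placeholder; -A attaches if it already exists, creates it otherwise):

    tmux new-session -A -s main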


I've been using Termius https://termius.com/ios paired with WireGuard for a long while on my iPhone, and generally I am happy with Termius: it has very pleasant touch interactions for selecting text, scrolling, and arrow-key up/down.

However, the big drawback of Termius is that in the free version it will terminate the connection after quite a short while as soon as you put the app in the background. They offer a subscription to have it not behave like this but I don't want to pay a subscription for that feature. (The subscription includes additional features as well, but I don't want any of those features.)

When I read about Blink shell now, I was kind of hoping that it might be a good replacement. I tried it out on my phone, but I am a bit bewildered about how to select text using Blink shell.

I am wondering if maybe Blink shell is mainly suitable for use on iPad, and not as great to use on iPhone. At least, my initial impression is that it is probably fine for use with a keyboard but that it is not made for touch interactions to the same degree that Termius is.

Edit: I figured out how to select text with Blink shell; double tap and drag [1]. Unfortunately it feels quite clumsy and awkward to me compared to Termius. Not just the motion itself, but also how the resulting selection is handled. I will keep trying to use Blink shell for a while though. After all, Blink shell is open source and so if I like it a lot I could maybe rewrite the touch interactions to be more in line with what I'd like them to be.

[1]: https://github.com/blinksh/blink/issues/1116


Perhaps a tangent, but I found this exchange between Brad Fitzpatrick & DeWitt Clinton fascinating. Tailscale manages to not drop ssh connections even while the remote machine is being updated, with vanilla TCP.

https://twitter.com/dewitt/status/1329104663975587840

I too remember the era of ssh connections feeling strangely robust. I don't know enough about networking to fully understand how this works though.

But if that's possible at a networking level and all we need is a stable IP, then could an appropriately configured VPN solve these issues?


The idea behind Mosh is almost perfect. Unfortunately, it's just a proof-of-concept project that has not been maintained for years. Although it has met my daily needs, I'm still looking for a better alternative because of some known bugs.

I had hopes for draft-bider-ssh-quic [1], but unfortunately it was marked as "no longer active".

[1]: https://datatracker.ietf.org/doc/draft-bider-ssh-quic/


Mosh isn't proof of concept. And it's not not maintained either. It's just "done" (in my opinion).

Not all software needs to continually change. There is the concept of software that does one thing, and does it well.

THAT BEING SAID... if your complaint is regarding the SSH agent forwarding, I might be convinced to maintain a fork with that, and only that problem being addressed. :) That one's personal to me, and I've been meaning to poke at it since... well before it was released to the outside MIT community.


What bugs? Mosh isn’t a proof of concept. It’s a stable project.

https://github.com/mobile-shell/mosh


It's stable, but that doesn't mean feature complete: no release in over four years, in spite of various fixes / GitHub commit activity. Additionally, support for jumpbox/bastion pass-through has been languishing even longer than that (https://github.com/mobile-shell/mosh/pull/696)


Email the author? It does seem like ssh-over-quic would be a reasonable way forward.


I'm curious: what bugs are keeping mosh from meeting your daily needs?


> The client will also disconnect if your IP address changes, or there is a temporary loss of connectivity.

Not to mention CGNAT doesn't have enough memory to respect TCP keepalive; it will drop your connection in a heartbeat if you even look at the session funny. I only have mobile internet at home, and it was a pain until I wrapped everything in WireGuard permanently.

Using SSH is now bliss... TCP keepalive is respected, and an idle prompt will actually stay alive for the full 2 hours, which is basically impossible on any consumer internet connection today. I can switch physical connections without worrying about it dropping too... even when packet loss is crazy high, WireGuard ensures SSH doesn't know enough to drop the connection. I worry about losing connections so little that I usually don't bother with tmux until I actually need multiplexing.

My internet experience in general has improved ever since I started using WireGuard.
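The one setting that matters most for CGNAT, for anyone trying this (a sketch; keys, endpoint, and addresses are placeholders):

    # client-side wg0.conf
    [Peer]
    PublicKey = <server public key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.0.0.0/24
    PersistentKeepalive = 25   # re-send every 25s so the NAT mapping never expires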


I have a related question to the OP.

What security would you recommend to run on an unreliable connection?

To be specific, I have IoT devices which connect to the cloud using MQTT-SN. We use MQTT-SN because the underlying connection is NB-IoT (required for high penetration into difficult-to-reach areas). The latency on NB-IoT is quite high, which can lead to problems with TCP timeouts, so the network operator has recommended we use UDP, not TCP. That in turn leads to MQTT-SN, as it supports QoS 0 (unacknowledged).

But we are also very aware of security issues - so how would you put something like TLS onto a UDP datastream?


I am not the OP, but openvpn in its default UDP mode, with long timeouts and frequent keepalives set on both client and server side, works quite well on unreliable connections. All of the crypto available in whatever is the latest version of openssl is available for use. If setting up a new config these days, you would want to limit it to TLS 1.3 and the associated cipher suites only.
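A sketch of the relevant server-side directives (values are illustrative; keepalive pushes matching settings to clients):

    proto udp
    keepalive 10 120     # ping every 10s; assume the peer is down after 120s of silence
    tls-version-min 1.3  # requires OpenSSL 1.1.1+ on both ends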


For TLS over UDP there's DTLS [0] which is probably still problematic for your use case since it still requires the typical TLS handshake.
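If you want to experiment, OpenSSL's test client can speak it (host and port are placeholders):

    openssl s_client -dtls1_2 -connect device.example.com:5684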

[0] https://datatracker.ietf.org/doc/html/rfc6347


In addition to what others have said, NNCP [1] is an interesting way forward also. Though NNCP may not be stable enough for your purposes.

[1]: http://www.nncpgo.org/


wireguard might be a good solution for you. https://www.wireguard.com/protocol/


Is there a way to reconcile the duality of tmux and i3wm fighting over terminal/window management?

I am very frequently logged into remote systems, running tmux, and multiple terminals. At the same time I have multiple window manager windows running multiple of these tmux sessions. Different types of window arrangement. Different types of clipboard. Different keybindings. Different support for focus follows mouse. It’s janky af.

This article alludes to an iterm2 specific solution to the problem. I would dearly love to figure out a general one for my window manager of choice.

Help.


The only thing missing in Mosh is SSH tunnel support.


For me, it's agent forwarding.


Basically why I stay with SSH, it's my poor man's VPN.


I've been looking for something like eternal terminal for a while. I really like using my terminal's scrollback compared to tmux, and I don't cancel/rejoin from other systems often. I really just want to 1. allow roaming (close laptop at one place, reopen at another) and 2. allow persistence (sleep laptop for the night). I get the Mosh advantages, but they aren't worth killing the scrollback buffer for me.


If you connect to Linux servers from a Mac, you should try out iTerm2's tmux integration. iTerm2 knows how to manage tmux transparently, so you don't have to even think about it. You just get resumable SSH sessions that save your windows and tabs, and use normal scrolling.

It's about as transparent/native feeling as you can get, and it works really well.
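For anyone wanting to try it, the integration is driven by tmux's control mode; something like this (session name is a placeholder):

    ssh -t user@host 'tmux -CC new-session -A -s work'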


It's shocking how exactly zero terminal emulators on Linux support tmux -CC.


> However, tmux waits for the remote server to send the new screen contents which somewhat defeats the purpose of Mosh. Mosh removes any latency by not waiting for the remote server, but using scrollback through tmux adds that latency back in.

I don’t follow. Is this user running mosh inside a tmux session rather than running tmux inside the remote shell?

I don’t see how tmux adds latency back in.


I've been using this script for years, and it works great: https://mazzo.li/posts/autoscreen.html

It's very much like eternal terminal, but requires no upfront setup apart from having screen installed on the server.


I love Mosh and use it every day. One area where it's failed me, however: I always thought it was supposed to self-recover from connection failures.

In practice, it means that every time my laptop goes to sleep, or loses wifi for an extended period of time, I need to do an interrupt sequence and re-start it. Not the biggest deal, but things would be so much better if there was a way to configure it to re-try every so often instead.

> mosh: Last contact 9:45:19 ago. [To quit: Ctrl-^ .]

I'm not sure what the use of this message is -- surely you would think after 9 hours, trying again would be worth it.


Hm, strange! I've had it recover from my laptop being asleep for hours at a time. I've seen the last contact message tick up until it suddenly manages to make a connection and spring back to life!

I do just use default settings (just always set up mosh server on my remote servers and forget about it), but is it possible that there's some timeout on the other end you could be experiencing? If for any reason you kill the session on the other side, then of course it doesn't work, but for me it's always magically connected again.

I've used Mosh for multiple years, always loved it, until recently when I started using tmux more. For some reason I had loads of problems with colours and the only solution I could find was to manually build it from source with some flags, or even potentially it was a fork or something like that. Just started SSH-ing again (usually have a stable and fast connection) and if I lose the connection, I'll just re-attach the tmux session anyway.


Interesting, maybe there's something up with my config. I also run a default setup but the backend is a macintosh, so that may be related. It would be really nice to have it magically reconnect!

I actually use tmux in combo and don't see any color issues that I notice. I wonder if it's something about the colors in your tmux config?


I run ssh/mosh on a Linux laptop to my Mac desktop all day every day. I suspend the laptop every night, and when I open it in the morning, it immediately reconnects.

I run tmux (locally, nesting another session on the Mac) but I don't think that would make a difference here.

I'm not aware of any special config, but I've been using this setup for several years with no major adjustments.


I wonder if there's an issue with my firewall (little snitch). I'll try disabling that and see if that helps any.


Sounds plausible. You definitely need UDP ports 60000-61000 (?) open incoming to the Mac.


I ended up deleting a few extra rules I found in little snitch, and things seem to be working much better. I guess maybe there was some contention in the multiple rules.


You can auto re-attach the tmux session, or create a new one, with something like this in your bash_aliases or bashrc:

    # auto attach to tmux, or create a new session; exit the login shell on detach
    [ -z "$TMUX" ] && { tmux attach && exit; exec tmux new-session; }


Is it possible that your server IP address is changing? Or that there is some stateful firewall between the client and the server that is getting in the way?


IP is the same, but it may be Little Snitch messing things up. Let me see if disabling that gets me any further..


If only mosh had ssh key forwarding, I'd use it more :/


no mention of gnu screen or termux to maintain a persistent session when your ssh link drops over a very flaky connection?

those are, in my opinion, essential tools to know how to use.


The multiplexer is called "tmux", "termux" is a terminal emulator for android.


Using WireGuard is also a good option to prevent the connection from being dropped when switching networks.


Or solve it at a lower layer: it's possible to configure an IPsec VPN so that it can reconnect without dropping TCP connections, even if you change IP address. I had great success doing this using FreeS/WAN to manage IPsec on a Linux server, and with the native macOS IPsec client.


> Guardian Agent: SSH forwarding means you can remotely connect on to other systems over SSH without needing to install Mosh on every one of them.

What? Isn't guardian just an agent? You can use ssh forwarding without it.

Guardian does look nice but seems abandoned.


Can you replace ssh with http/3 and websockets?


Does ssh require TCP transport? Wonder if you could do MPTCP... or, if QUIC actually offered an unencrypted mode, that could be an option too.


I'd love to see something that used TCP so I could route it over Tor but offered local prediction like Mosh.



