One reason is that mosh itself adds latency that is at least comparable to (and frankly somewhat heavier than) screen's (if you don't believe me, I can find a quote from the primary author), and that latency is itself noticeable. Once you layer screen into mosh, things get unreasonable enough that even on a perfect connection the jarring latency effect gets worse: when a mis-predict occurs, you are now going through two layers, mosh and screen; for a correct prediction, of course, you are still dealing with just the one layer.
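
To make the two-layer point concrete, here's a toy model in Python; all the numbers are illustrative assumptions, not measurements:

    # A correct local prediction echoes instantly, but a mis-predict is
    # only corrected after the keystroke round-trips the network and
    # passes through screen's extra layer. Numbers are assumed.
    network_rtt  = 80.0  # ms, mosh client <-> mosh server (assumed)
    screen_delay =  5.0  # ms, extra hop through screen's terminal (assumed)

    predicted_echo   = 0.0                         # drawn locally by mosh
    mispredict_fixup = network_rtt + screen_delay  # full round trip + screen

    print("correct prediction: %5.1f ms" % predicted_echo)
    print("mis-predict fixup:  %5.1f ms" % mispredict_fixup)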

(Then, once you are using screen to solve problem #3, my argument is that unless you need mosh's packet-loss handling, you can start whittling away at the other things that were less than perfect about mosh, since you might simply not need it anymore.)

As for your contention that #1 and #2 are "anecdotes": it isn't like I'm just saying "engh, didn't work". I'm explaining exactly why these things are issues, from first principles, so you can evaluate whether they will affect you. It isn't as if whether mosh requires weird UDP ports is somehow subjective.

I mean, let's say I had said "one downside of deploying a mobile application instead of a website is that if a user doesn't have a smartphone your content might be inaccessible to them"... is that an "anecdote"? Maybe all your customers have smartphones, and if so you can ignore that, but at least you are aware of it.

(BTW, if you want another example of where the prediction goes insane: if you mosh to a server and then SSH from that server to a second server, the latency of that second hop can't be measured by mosh's protocol, so the prediction system gets very confused and mostly ends up turning off. I had a long conversation with the mosh developer about this issue when it affected me. However, that's even rarer for people to run into.)


I'm not sure I've ever seen screen add latency.

(And yes, I do regularly use mosh to deal with packet loss over a 3G/4G connection; that's its primary value for me.)

"Pervasive modern firewalls" is an anecdote. Even at cafes and such, mosh works, anecdotally, for me.


I would consider you very lucky never to have run into a firewall that blocks all UDP. Both my high school and my first employer blocked UDP altogether. (The former went as far as white-listing only outbound TCP connections on ports 21, 80, and 443, with traffic on the latter two transparently redirected through a very restrictive filter.)
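
If you want to check a network for this yourself, here's a minimal sketch; "example.com" is a placeholder, 60001 is a port in mosh's default UDP range (60000-61000), and you'd need something on the far end that actually replies:

    # Probe whether UDP reaches a given host/port at all. No reply can
    # mean a firewall drop *or* simply nothing listening, so run a
    # responder (or have an active mosh session) on the far end first.
    import socket

    def udp_probe(host, port, timeout=2.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        try:
            s.sendto(b"probe", (host, port))
            s.recvfrom(1024)   # any reply at all means UDP got through
            return True
        except socket.timeout:
            return False
        finally:
            s.close()

    print(udp_probe("example.com", 60001))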


Funny, my school's network is very restrictive about TCP (of the ports I've tested, only 80 and 443 have worked properly), but it doesn't do any deeper filtering, and has no restrictions at all on UDP.


> "Pervasive modern firewalls" is an anecdote. Even at cafes and such, mosh works, anecdotally, for me.

We have a firewall rule set up to allow inbound ssh on port 443 to one of our office servers, because several of our clients have networks so locked down that when one of us needs to work from their offices, it's the easiest way of ensuring we'll have ssh access...
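
Roughly, the setup looks like this (hostnames are placeholders, and this assumes nothing else, like an HTTPS server, is already bound to 443 on that box):

    # /etc/ssh/sshd_config on the office server: listen on 443 as well
    Port 22
    Port 443

    # then, from the locked-down client network:
    ssh -p 443 user@office-server.example.com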

Cafes are not a good example. Cafe-run public wifi tends to be some of the least secured internet access you can find.


In theory, screen will absolutely add latency, because it's another layer of terminal indirection (your input has to traverse SSH, then SSH's terminal, then screen's terminal, then your process, and back). Also in theory, that addition should be fairly small.

In practice, and merely anecdotally on my part, I can attest that screen "feels" slower than a shell when I'm using it via SSH.
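
One way to put a rough number on that feeling is to time keystroke-to-echo round trips under a pseudo-terminal; a sketch, assuming a POSIX system with screen installed (the startup sleep and buffer sizes are crude guesses):

    # Median keystroke-to-echo latency for a program run under a pty.
    # Comparing a bare shell against the same shell inside screen shows
    # the cost of screen's extra terminal indirection.
    import os, pty, time, statistics

    def echo_latency(argv, samples=50):
        pid, fd = pty.fork()
        if pid == 0:                # child: run the program under test
            os.execvp(argv[0], argv)
        time.sleep(1.0)             # crude wait for startup
        os.read(fd, 65536)          # drain the prompt / init output
        times = []
        for _ in range(samples):
            t0 = time.monotonic()
            os.write(fd, b"a")
            os.read(fd, 1024)       # block until the echo comes back
            times.append(time.monotonic() - t0)
            time.sleep(0.05)
        os.write(fd, b"exit\n")
        os.waitpid(pid, 0)
        return statistics.median(times)

    print("sh alone:  %.2f ms" % (echo_latency(["sh"]) * 1e3))
    print("screen sh: %.2f ms" % (echo_latency(["screen", "sh"]) * 1e3))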


Too bad Mosh cannot forward X displays. That would have been rather handy.


The X protocol is not particularly well-designed for this. (The X protocol is not particularly well-designed for lots of things.)

One of mosh's advantages is that it does screen rendering on the server, and then synchronizes the screen with the client. So, if there have been ten screenfuls of output, and those packets were lost or delayed or the connection couldn't keep up, the client only needs to receive what's currently on the screen. (This doesn't work in the other direction, but there's typically a lot less traffic going that way.)
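
A toy sketch of that idea: synchronize the latest screen state rather than the output stream (the 24-row list here is obviously a stand-in for a real terminal emulator):

    # The server keeps only the *current* screen and sends a diff against
    # the last state the client acknowledged; the intermediate screenfuls
    # are simply never transmitted.
    def diff(acked, current):
        return {i: new for i, (old, new) in enumerate(zip(acked, current))
                if old != new}

    acked   = ["row %d" % i for i in range(24)]  # client's last-known screen
    current = list(acked)
    for burst in range(10):                      # ten screenfuls fly by...
        current = ["burst %d / row %d" % (burst, i) for i in range(24)]

    update = diff(acked, current)                # ...one diff catches up
    print("send %d rows, not %d" % (len(update), 10 * 24))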

You can't quite do that with X, since the X protocol itself provides no way to get / diff / synchronize the contents of the screen -- it consists of a bunch of rendering operations, so if there's a delay, those operations all need to be re-sent.

Still, there's an open ticket: https://github.com/keithw/mosh/issues/41