Hacker News

I very often see "reconnect loops" in various codebases and I wonder whether they are necessary. Wouldn't the same effect be achieved by, for example, increasing timeouts or tuning some other connection parameter?



They’re a bit of a feature of the connection-oriented nature of TCP, as the other reply mentions. If the server process crashes and restarts, for example, the client will be told that its previous connection is no longer valid. Basically, TCP lets client and server assume that all bytes written to the socket after connect()/accept() will arrive at the other side in that same order. Whenever an error violates that assumption, the connection must be explicitly “reset”.
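In practice that explicit reset is what a reconnect loop does. A minimal sketch in Python (the host/port arguments and the backoff parameters are illustrative, not from the thread):

```python
import socket
import time

def connect_with_retry(host, port, max_delay=30.0):
    """Keep attempting a fresh TCP connection until the server accepts.

    Once the old connection is invalidated, the only remedy is a new
    connect(); timeouts cannot revive the previous one.
    """
    delay = 0.5
    while True:
        try:
            # A successful connect() re-establishes TCP state on both sides.
            return socket.create_connection((host, port), timeout=5)
        except OSError:
            # Server may have crashed or restarted; back off and retry.
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # exponential backoff, capped
```

The backoff keeps a crashed server from being hammered with connection attempts while it restarts.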


For TCP, the kernel state required to maintain the socket is invalidated on error and must be rebuilt. The only way to do this is to explicitly perform the connection setup again. An extended timeout only delays this process, since the remote side will have invalidated its state as well.
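This can be demonstrated locally. In the sketch below (an assumed setup, not from the thread), the server resets the connection with an RST, which is roughly what a crash looks like to the peer; the client's generous timeout does not help, and its next writes fail:

```python
import socket
import struct
import time

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.create_connection(server.getsockname())
client.settimeout(60)  # a long timeout changes nothing below
conn, _ = server.accept()

# SO_LINGER with a zero timeout makes close() send an RST, simulating
# an abrupt loss of the server's connection state.
conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
conn.close()
time.sleep(0.1)  # let the RST reach the client

reset = False
try:
    client.send(b"x")
    client.send(b"y")  # the kernel has marked the socket dead by now
except (ConnectionResetError, BrokenPipeError):
    reset = True  # only remedy: a brand-new connect()

server.close()
```

Once the RST arrives, the client's kernel state for that socket is gone for good; no parameter tuning on the old socket can recover it.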

UDP packets require no connection, but you may still see some sort of re-synchronization code that resets application state, which could also be called a "reconnect".
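A sketch of what such an application-level "reconnect" over UDP might look like, using loopback sockets in one process; the HELLO/ACK exchange is invented here for illustration:

```python
import socket

# No kernel connection exists for UDP, so "reconnecting" just means the
# application re-announcing itself and resetting its own shared state.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
server.settimeout(1.0)

def resync():
    """Application-level 'reconnect': re-announce and await an ACK."""
    client.sendto(b"HELLO", addr)
    msg, client_addr = server.recvfrom(64)  # server sees the HELLO
    server.sendto(b"ACK", client_addr)      # and acknowledges it
    reply, _ = client.recvfrom(64)
    return reply
```

A real client would typically call something like resync() whenever replies stop arriving for long enough, which is why the code can look just like a TCP reconnect loop even though no connection exists.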



