
I like them; they're surprisingly easy to use.

One example where I found it to be not the perfect solution was a turn-based web game.

SSE was perfect for pushing game state updates to all clients, but to keep latency good from the player's point of view, whenever the player had to do something it went out as a normal AJAX HTTP call.

Eventually I had to switch to uglier websockets and keep the connection open.

HTTP keep-alive just wasn't that reliable.



With HTTP/2, the browser holds a TCP connection open that has various streams multiplexed on top. One of those streams would be your SSE stream. When the client makes an AJAX call to the server, it would be sent through the already-open HTTP/2 connection, so the latency is very comparable to websocket — no new connection is needed, no costly handshakes.
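Roughly, the client side of that setup looks like the sketch below (the /events and /move paths are made-up examples): the EventSource stream and the fetch calls ride the same HTTP/2 connection whenever the server speaks HTTP/2.

  // Sketch only; "/events" and "/move" are hypothetical endpoints.
  // Over HTTP/2, both of these share the one underlying TCP connection.

  // Receive game state pushed by the server over SSE.
  const events = new EventSource("/events");
  events.onmessage = (ev: MessageEvent) => {
    const state = JSON.parse(ev.data);
    console.log("new game state", state);
  };

  // Send a player action back as a normal request on the same connection.
  async function sendMove(move: unknown): Promise<void> {
    await fetch("/move", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(move),
    });
  }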

Given the downsides of using SSE over HTTP/1.1, websockets actually made a lot of sense, but in many ways they were a kludge that was only needed until HTTP/2 came along. As you said, communicating back to the server in response to SSE wasn’t great with HTTP/1.1. That’s before mentioning the limited number of TCP connections a browser will allow for any one site, so you couldn’t have SSE open in too many tabs without running out of connections altogether, breaking things.


>the browser holds a TCP connection open that has various streams multiplexed on top. One of those streams would be your SSE stream. When the client makes an AJAX call to the server, it would be sent through the already-open HTTP/2 connection

Very interesting! I honestly didn't know that, or even think about it like that! #EveryDayYouLearn :)


I wonder if websockets "accidentally" circumvented HTTP2's head-of-line blocking problem and therefore appeared to have better latency:

SSE streams are multiplexed into a single HTTP2 connection, so they can suffer from congestion issues caused by unrelated requests.

In contrast, websockets generally aren't carried over HTTP2, so each websocket connection has its own TCP connection. Wasteful, but ensures that no head-of-line blocking can occur.

So it might be that switching from SSE to websockets gave better latency behaviour, even though it had nothing to do with the actual technologies.

Of course, this issue should be solved anyway with HTTP3.


> Wasteful, but ensures that no head-of-line blocking can occur.

That’s not how head-of-line blocking works. Just having a single stream doesn’t guarantee no blocking. It’s not really about unrelated requests getting in the way and sucking up bandwidth (that’s a separate issue, and arguably applies regardless of how many TCP connections you have); head-of-line blocking is about how TCP handles retransmission of lost packets. Websocket suffers from head-of-line blocking too, which is a reason that WebTransport is being developed.

Certainly, if you have other requests in flight, you could have head-of-line blocking because of a packet that was dropped in a response stream that isn’t related to your SSE stream, but this only applies if there’s packet loss, and the packets that were lost could just as easily be SSE’s or websocket’s.

I agree that HTTP/3 should solve the issue of head-of-line blocking being caused by packets lost from an unrelated stream, but it doesn’t prevent it from occurring entirely.

My understanding (which could be wrong) is that WebTransport is supposed to offer the ability to send and receive datagrams with closer to UDP-level guarantees, allowing the application to continue receiving packets even when some go missing, and then the application can decide how to handle those missing packets, such as asking for retransmission. Getting an incomplete stream at the application level is what it takes to entirely avoid head-of-line blocking.
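For what it's worth, the datagram side of the WebTransport API looks roughly like the sketch below (the URL is made up, and support is still maturing): a lost datagram simply never arrives, and nothing behind it gets stalled, so the application decides whether to ask for a resend.

  // Sketch; https://game.example/wt is a hypothetical WebTransport endpoint.
  const transport = new WebTransport("https://game.example/wt");
  await transport.ready;

  // Datagrams are unreliable and unordered: a dropped one is just gone,
  // so later datagrams are delivered without waiting for a retransmission.
  const writer = transport.datagrams.writable.getWriter();
  await writer.write(new TextEncoder().encode("move:e2e4"));

  const reader = transport.datagrams.readable.getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    console.log("datagram:", new TextDecoder().decode(value));
  }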

As alluded to earlier, there is zero head-of-line blocking if there is no packet loss. Outside of congested networks or the lossy fringes of cell service, I really wonder how much of an issue this is. I’m skeptical that it adds any latency for SSE vs websocket in the average benchmark or use case. The latency should be nearly identical. Your comment seems predicated on it definitely being worse, but based on what numbers? I admit it’s been a couple of years since I measured this myself, but I came away with the conclusion that websockets are massively overrated. There are definitely a handful of use cases for websockets, but… it shouldn’t be the tool everyone reaches for.

HTTP/3 is really meant to be an improvement for a small percentage of connections, which is a huge number of people at the scale that Google operates at. I don’t think there are really any big downsides to HTTP/3, so I look forward to seeing it mature, become more common, and become easier to find production grade libraries for.


> Certainly, if you have other requests in flight, you could have head-of-line blocking because of a packet that was dropped in a response stream that isn’t related to your SSE stream, but this only applies if there’s packet loss, and the packets that were lost could just as easily be SSE’s or websocket’s.

That was what I meant. Yes, head-of-line blocking can occur anywhere there is TCP, but with HTTP2 the impact is larger due to the (otherwise very reasonable) multiplexing: when an HTTP2 packet is lost, it stalls all requests multiplexed into that connection, whereas with websocket it only stalls the websocket connection itself.


Yep, makes sense


>no new connection is needed, no costly handshakes.

No new connection and no low-level (TCP, TLS) handshakes, but the server still has to parse and validate the HTTP headers, route the request, and you'd probably still have to authenticate each request somehow (an auth cookie, probably). That may start using a non-trivial amount of compute when you have tons of client->server messages per client and tons of clients.
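To make that concrete, here's a rough Node sketch of the work repeated per message (validateSession and the /move route are stand-ins, not a real API): even with the connection already open, every message goes through header parsing, routing, and a cookie/session check, whereas a websocket would typically authenticate once when the socket is opened.

  // Sketch: per-request work the server repeats for every AJAX message.
  import { createServer } from "node:http";

  function validateSession(cookieHeader: string | undefined): boolean {
    // Stand-in check; a real app might hit a session store or verify a signature.
    return cookieHeader?.includes("session=") ?? false;
  }

  createServer((req, res) => {
    // Routing and auth run again for every single message, however small.
    if (req.method === "POST" && req.url === "/move") {
      if (!validateSession(req.headers.cookie)) {
        res.writeHead(401).end();
        return;
      }
      res.writeHead(204).end();
      return;
    }
    res.writeHead(404).end();
  }).listen(8080);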


You just needed to send a "noop" (no operation) message at regular intervals.
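For SSE there's even a spec-level way to do that: lines starting with ":" are comments that EventSource silently ignores, so a periodic comment works as the noop. A minimal Node sketch (port and interval are arbitrary):

  // Sketch of an SSE endpoint sending a no-op keep-alive every 15 seconds.
  import { createServer } from "node:http";

  createServer((req, res) => {
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
    });

    // ":" lines are SSE comments: ignored by the browser, but they keep
    // the idle connection from being torn down.
    const keepAlive = setInterval(() => res.write(": noop\n\n"), 15_000);

    req.on("close", () => clearInterval(keepAlive));
  }).listen(8080);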


that puts it instantly in the "fired if you ever use it" bin


Fired for using a keep-alive message???


Why's that?


I think it comes down to whether your communication is more oriented towards sending or receiving. If the clients receive way more than they send, then SSE is probably fine, but if it's truly bidirectional then it might not work as well.



