
Now I wonder: do we have any actual streaming use of TCP, with a purely streaming protocol?



Yes. When your messages need to be received and processed in the same order they are sent, then a stream is a good (or good enough) abstraction.

For most applications some re-orderings of messages don't matter, and others would need special handling. So as a one-size-fits-all-(but-badly) abstraction you can use a stream.

> Do we have any actual streaming use of TCP, with a purely streaming protocol?

But to give you a proper answer: the stream of keyboard inputs from the user to the server in Telnet (or SSH).


That opens up the question: if you have "messages", are you a stream anymore? Can your stream of messages start mid-message, for example? Surely a stream can have this happen. Or are you instead messages sent over a stream? In which case the abstraction of a stream, instead of reliable message transport, is a bit weird.

Wouldn't each input be a single, albeit very short, message? But this level of granularity really makes little sense...


If you have a reliable-byte-stream abstraction, you only need to add a way to serialise your messages into bytes, and you get a reliable-messages-in-order abstraction for 'free'. That's convenient! And you don't need to teach your lower level protocols about where your higher level messages start and end.
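
A minimal sketch of that serialisation step, assuming a connected TCP socket and a 4-byte length prefix (the framing scheme here is just one illustrative choice, not the only way to do it):

    import socket
    import struct

    def send_message(sock: socket.socket, payload: bytes) -> None:
        # Prefix each message with its length so the receiver knows where it ends.
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        # A stream delivers bytes in order, but not in message-sized pieces,
        # so keep reading until the requested number of bytes has arrived.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("stream closed mid-message")
            buf += chunk
        return buf

    def recv_message(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)

Because TCP already guarantees ordering and reliability, the framing layer stays this small; the lower layers never need to know where messages begin or end.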

You can also stick your higher level messages into a structure that's more complicated than a stream, eg you can stick them into a tree. Anything you can serialise, you can send.

Of course, the downside to this is that when you don't need these strong guarantees, you are paying for stuff you don't need.


Think about what the feature (potentially) buys you:

(1) zero-copy, zero-allocation request processing

(2) up to a 2x latency reduction by intermingling networking and actual work

(3) more cache friendliness

(4) better performance characteristics on composition (multiple stages which all have to batch their requests and responses will balloon perceived latency)

If you have a simple system (only a few layers of networking), low QPS (under a million), small requests (average under 1KB, max under 1MB), and reasonable latency requirements (no human user can tell a microsecond from a millisecond), just batch everything and be done with it. It's not worth the engineering costs to do anything fancy when a mid-tier laptop can run your service with the dumb implementation.

As soon as those features start to matter, streaming starts to make more sense. I normally see it being used for cost reasons in very popular services, when every latency improvement matters for a given application, when you don't have room to buffer the whole request, or to create very complicated networked systems.
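
A toy sketch of what points (1) and (2) are getting at, with SHA-256 hashing standing in for whatever the real work is, and conn assumed to be a connected TCP socket:

    import hashlib

    def handle_batched(conn) -> str:
        # Buffer the entire request, then do the work: network time and
        # compute time add up back to back, and the whole payload sits in memory.
        data = b""
        while chunk := conn.recv(65536):
            data += chunk
        return hashlib.sha256(data).hexdigest()

    def handle_streaming(conn) -> str:
        # Do the work on each chunk as it arrives: compute overlaps with the
        # remaining network transfer, so total latency tends toward
        # max(network, compute) rather than their sum, and nothing is buffered.
        h = hashlib.sha256()
        while chunk := conn.recv(65536):
            h.update(chunk)
        return h.hexdigest()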


High-quality VOD (i.e. streaming a 4K movie). HTTP block file systems, where each block needs to be streamed reliably to fulfill the read() call plus read-ahead.
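
For the block file system case, a rough sketch of what a single block read can look like, assuming a server that honours HTTP Range requests (the URL handling and 4 MiB block size are made up for illustration):

    import urllib.request

    BLOCK_SIZE = 4 * 1024 * 1024  # hypothetical 4 MiB block

    def read_block(url: str, block_index: int) -> bytes:
        # Map the block index to a byte range; TCP delivers those bytes
        # reliably and in order, which is exactly what read() needs.
        start = block_index * BLOCK_SIZE
        end = start + BLOCK_SIZE - 1
        req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()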


For VOD, can I just open a connection, send a single message, and then the stream continues forever? And HTTP is a message-oriented protocol: I can't just send an infinite-length HTTP message that gets processed as it arrives, or can I? Meaning, can I upload something not that small, like a terabyte of video data, over HTTP?


Yes, to all of it. In HTTP/1.1 it's a chunked response; in H2+ it's just a bunch of DATA frames. This is how low-latency HLS video works.
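
A minimal sketch of the HTTP/1.1 chunked case, writing the wire format directly so the framing is visible (sock is assumed to be a connected TCP socket, chunks any iterable of bytes): each chunk is prefixed with its size in hex, and a zero-length chunk ends the response, so the total length never has to be known up front.

    def write_chunked_response(sock, chunks) -> None:
        # The headers promise chunked transfer encoding, so no Content-Length
        # is needed and chunks can keep being produced for as long as you like.
        sock.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: video/mp4\r\n"
            b"Transfer-Encoding: chunked\r\n"
            b"\r\n"
        )
        for chunk in chunks:
            # Each chunk on the wire: <size in hex>\r\n<data>\r\n
            sock.sendall(f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n")
        # A zero-length chunk terminates the response.
        sock.sendall(b"0\r\n\r\n")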



