Plain Text Protocols (blainsmith.com)
194 points by tate on Feb 25, 2021 | 171 comments



HTTP/2 is a good example of how to handle a textual-to-binary transition.

The original HTTP/1 was textual (if a bit convoluted), and that helped it to become a lingua franca; helped cooperation and data interchange between applications proliferate; everybody was able to proverbially "scratch his itch". Gradually tooling grew up around it too.

HTTP/2 is binary, and also quite complex - however, most of that is hidden behind the already established tooling, and from a developer's perspective, the protocol is seen as "improved HTTP/1 with optional extras". The protocol appears textual for all practical development purposes - because the transition was handled (mostly) smoothly in the tooling. The key APIs remained the same for quick, mundane use cases - and got extended for advanced use cases.

There's a lesson in that success. Unpopular opinion warning: contrast the success of HTTP/2 with the failure of IPv6 to maintain backward compatibility at the API level - which hampered its ability to be seamlessly employed in applications.


>however most of that is hidden behind the already established tooling

...and so everyone without Google-level funding is stuck with this tooling. Thus, control over the protocol that millions of people could use to communicate with one another directly (by making websites and possibly servers) is ceded to a handful of centralized authorities that can handle the complexity and also happen to benefit from the new features.

I remember how Node, when it was just rising in popularity, was usually demonstrated by writing a primitive HTTP server that served a "hello world" HTTP page. There were no special libraries involved, so it was super-easy to understand what's going on. We're moving away from being able to do things of this sort without special tooling, and almost no one seems to notice or care.


> I remember how Node when it was just rising in popularity was usually demonstrated by writing a primitive HTTP server that served a "hello world" HTTP page.

That is still possible in the exact same way.

But a toy is just a toy. All websites should encrypt their content with TLS. In fact, all protocols should encrypt their communications. The result? Sure, it is a binary stream of random-looking bits.

Yet to me, what matters about text protocols is not the ASCII encoding. It is the ability to read and edit the raw representation.

As long as your protocol has an unambiguous one-to-one textual representation with two-way conversion, I can inspect it and modify it with no headache.

An outstanding example of that is WASM, which converts to and from WAT: https://en.wikipedia.org/wiki/WebAssembly#Code_representatio...


>All websites should encrypt their content with TLS. In fact, all protocols should encrypt their communications.

I reject the notion that encryption should be mandatory for all websites. It should be best practice, especially for a "modern" website with millions of users, but we don't need every single website encrypted.


Strongly disagree. At the very least, all sites should serve HTTPS. I don't want to get ads and spyware injected from my ISP, nor do I want everyone tracking what I read, on news sites for example. Provide HTTP if you want, but only for backcompat.


> I don't want to get ads and spyware injected from my ISP

Honestly this is a notion that I do not understand. Why do you accept your ISP doing this? Why aren't people banding together and complaining about it in an organized manner? If there aren't alternative ISPs in your area it means there will be a lot of people affected by this, so more voices to be heard. Why are you just accepting ISPs adding ads and spyware to your content as some force of nature that everyone else must work to keep at bay?


In many places there are no alternatives, and all available vendors have a history of hijinks. It's quite hard to band many small voices together, and even when we do, government agencies pick the wrong thing.

It's not really accepting. It's more like picking the least-shitty of a shitty set of options.


But as I wrote, if there are no alternatives then it means there is a larger pool of people to band together. And you can start by making noise towards the company as one voice, not with the government. Honestly this sounds like you've given up without trying and are then blaming the sites for not trying to work around your ISP being shitty. Here the blame lies with your ISP, not the sites.


These are brilliant ideas. It's amazing it doesn't actually play out this way.


> Why aren't people banding together and complain about it in an organized manner?

Because the notion of collective organizing has been completely eroded and squeezed out of modern society at every level and replaced with "consumer choice". Is this supposed to be a difficult question or a rhetorical one? I'm not being facetious, that's the actual answer and it's very easy to observe if you look around. (If you check your watch I'm sure it's only a matter of time until someone here literally replies telling you to get a new ISP, in fact.)

That said, even beyond the need for collective organizing of regulation for cases like ISPs, it's been like a decade since Firesheep made waves over the internet because it turns out just being able to snoop passwords at Starbucks was, in fact, not good, and actually was quite bad. So it's not like ISPs are the only unscrupulous actors out there, and unless you want to get into the realms of "this software is illegal to possess" (normally a pretty hot-button topic here), then someone has to deal, in this case. The whole pathway between a user and their destination is, by design, insecure; combine that with the fact that the internet is practically a wild west, and you have a somewhat different problem.

Making system administrators adopt TLS en masse was probably the right course of action anyway, all things considered, and happens to help neutralize an array of problems here, even if you regulated ISPs and punished them excessively for hijinks like content manipulation (which I would wholeheartedly love to see, honestly.)

(The other histrionics about "simplicity" of HTTP/2 or text vs binary whatever are all masturbatory red herrings IMO so I'm just ignoring them)


This isn't about having a shitty ISP specifically; it's the fact that the network path between your machine and the server is by definition untrusted. The much harder problem is securing the entire internet so that you don't need encryption. Or you could just encrypt the content and be sure you have a clean connection.


> Why do you accept your ISP doing this?

You say this as if ISPs don't exist as natural monopolies that can unilaterally ignore customer complaints because "Who cares what you think? You're stuck with us.".


Indeed. When it comes to technology, I think resiliency and robustness in general should trump almost all other concerns.

It would be nice if HTTP were extended to accommodate the inverse of the Upgrade header. Something to signal to the server something like, "Please, I insist. I really need you to just serve me the content in clear text. I have my reasons." The server would of course be free to sign the response.


While I agree with you, it is best to be on the safe side. The damage from having the wrong website unencrypted could be massive vs. the cost of simply encrypting everything. Demanding 100% encryption is an extra layer to protect against human mistakes.


Demanding 100% encryption also locks out some retrocomputing hardware that had existing browsers in the early Internet days. Not all sites need encryption. Where it's appropriate, most certainly. HTTPS should be the overwhelming standard. But there is a place for HTTP, and there should always be. Same for other unencrypted protocols. Unencrypted FTP still has a place.


HTTP/FTP certainly have their place, but that is not on the open internet. For retrocomputing and other special cases, a proxy on the local network can do HTTP->HTTPS conversion.


>HTTP/FTP certainly have their place, but that is not on the open internet.

Then it ceases to be the "open" internet.


It's unfortunate that there doesn't seem to be a turn-key solution for this at the moment. I'm currently using Squid so I can use web-enabled applications on an older version of OS X, and it's great, but figuring out how to set it up took a solid day of work (partly because their documentation isn't very good), and the result will only work on macOS.

Mitmproxy is much easier to set up, but too heavy for 24/7 use.

Ideally this would be a DDWRT package, or maybe a Raspberry Pi image, all preconfigured and ready to go...


You can always use a MITM proxy that presents an unencrypted view of the web. As long as you keep to HTML+CSS, that should be enough. Some simple js also, but you can't generate https URLs on the client side. Which, for retrocomputing, is probably fine.

You wouldn't want to expose these "retro" machines to the Internet anyways.


> ...and so everyone without Google-level funding is stuck with this tooling.

...no? It's actually pretty easy to write an HTTP/2 client library yourself. There are tons and tons of implementations of HTTP/2; at this point, nearly as many as there are of HTTP/1. Presuming you're familiar with the spec, you can code one yourself in an evening or two.

(I'm in the Elixir ecosystem myself. We have https://hex.pm/packages/kadabra — which is, apparently, 2408LOC at present. That's without stripping comments/trivial closing-token lines/etc, because Elixir doesn't have a good sloccount utility.)

The only thing that could potentially get in the way of HTTP/2 implementation, is a language not having good support for parsing/generating binary data. Languages like Javascript or Python — i.e. languages where you have to deal with binary data as if it were a type of string, where the tools for dealing with bytes and with codepoints are all mushed together and confused — struggle with binary protocols of all kinds. People do nevertheless write binary protocols for these languages.

But that's why people don't tend to think of these as "backend server languages", and instead use languages like Go or Erlang or even C—as in these languages, there are first-class "array/slice of bytes" types, and highly-efficient operations for manipulating them (with strict, predictable "bits are bits" semantics), which make writing binary-protocol libraries a breeze.
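To make the "breeze" concrete: the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1) comes apart in a few lines even in Python, whose struct module papers over the bytes-vs-codepoints problem described above. A minimal sketch (function name mine):

  import struct

  def parse_frame_header(buf):
      # RFC 7540 section 4.1: 24-bit payload length, 8-bit type, 8-bit flags,
      # then 1 reserved bit + 31-bit stream identifier, all big-endian.
      if len(buf) < 9:
          raise ValueError("incomplete frame header")
      hi, lo, ftype, flags, stream = struct.unpack(">BHBBI", buf[:9])
      return (hi << 16) | lo, ftype, flags, stream & 0x7FFFFFFF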


> The only thing that could potentially get in the way of HTTP/2 implementation, is a language not having good support for parsing/generating binary data. Languages like Javascript or Python — i.e. languages where you have to deal with binary data as if it were a type of string, where the tools for dealing with bytes and with codepoints are all mushed together and confused — struggle with binary protocols of all kinds. People do nevertheless write binary protocols for these languages.

Python explicitly changed that between Python 2 and Python 3 (not without considerable pain for its users -- the fact that open("/dev/urandom").read(10) makes Python 3 crash¹ was what put me off of learning it for some years, for example).

This isn't to say that Python users or documentation have all made the switch, but modern Python is very capable of distinguishing bytes and character set codepoints, and typically insists that programmers do so in most contexts.

¹ This crash turns out to be extremely easy to fix, by adding the file mode "rb" to the open() call, but Python 2 programmers wouldn't have expected to have to do that.
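A minimal illustration of the split ("notes.txt" is a hypothetical text file; the first line is exactly the footnote's fix):

  data = open("/dev/urandom", "rb").read(10)          # bytes: raw octets
  text = open("notes.txt", encoding="utf-8").read()   # str: decoded codepoints
  data + text  # TypeError - Python 3 refuses to mix bytes and str silently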


> There are tons and tons of implementations of HTTP/2; at this point, nearly as many as there are of HTTP/1.

This is a ridiculous claim.


You’re right, unqualified that was kind of ridiculous.

What I meant/believe is specifically that there are nearly as many production-grade, actively-maintained, general-purpose HTTP/2 client and/or server libraries at this point, as there are production-grade, actively-maintained, general-purpose HTTP/1.1 client and/or server libraries.

The long tail of HTTP/1.1 client "libraries" consists of either 1. dashed-off single-purpose implementations (i.e. things that couldn't be reused in any other app than the one they're in), or 2. long-dead projects for long-dead platforms.

Those dashed-off impls and long-dead platforms are the reason we will never (and should never) get rid of HTTP/1.1 support; and why every web server should continue to support speaking HTTP/1.1 to clients. But just because they exist, doesn’t mean they contribute to the “developer ecosystem” of HTTP/1.1. You can’t just scoop out and reuse the single-purpose HTTP/1.1 impl from someone else’s client. Nor can you make much use of an HTTP/1.1 library written for the original Macintosh.

Ignoring that pile of “impractical to use in greenfield projects” libraries—i.e. cutting off the long tail—you’re left with a set of libs (~6-8 per runtime, x N popular runtimes) that is pretty closely matched by the set of HTTP/2 libs (~1-2 per runtime, x N popular runtimes.) “Within an order of magnitude” is “nearly” in a programmer’s eyes :)

(Also, a fun fact to keep in mind: implementations of HTTP/1.1 clients and servers require very different architectures about connection establishment, flow-control, etc. But as far as HTTP/2 is concerned, every peer is both a client and a server—against an open HTTP/2 connection, either peer can initiate a new flow in which it is the client and the other peer is the server. [Browsers and servers deny the possibility of being an HTTP/2 server or client, respectively, by policy, not by mechanism.] As such, when you’re writing a production-grade HTTP/2 library, that library must be both a client and a server library; or rather, once you’ve implemented it to serve one role, you’ve done enough of the common work toward implementing for the other role that it is effectively trivial to extend it to also serve the other role. So every HTTP/2 library “matches up”, in some sense, with two HTTP/1.1 libraries—one HTTP/1.1 client library, and one HTTP/1.1 server library.)


Node has had a built-in HTTP server since v0.1.17; are you sure those examples didn't use that? Because if they did, then it was the same in those examples as it is now.

Source: https://nodejs.org/api/http.html#http_class_http_server


If you care about protocol simplicity and its attendant implementation costs, then the continuously creeping Web platform is a few orders of magnitude worse in this respect.


> contrast the success of HTTP/2 with the failure of IPv6 to maintain backward compatibility at the API level

Unfortunately, it was not possible for IPv6 to maintain backward compatibility with IPv4 at the API level. That's because the IPv4 API was not textual; it was binary, with fixed-size 32-bit fields everywhere. What they did was the next best thing: they made the IPv6 API able to also use IPv4 addresses, so that new programs can use a single API for both IPv4 and IPv6.


The API compatibility is pretty far down the list of bottlenecks with IPv6. There was some churn related to it 20 years ago.


> Unfortunately, it was not possible for IPv6...

Given hindsight, I think there is a ton of coulda-woulda-shoulda in bumpy transitions like ipv4 -> ipv6 or python 2->3


> Unpopular opinion warning: contrast the success of HTTP/2 with the failure of IPv6 to maintain backward compatibility at the API level - which hampered its ability to be seamlessly employed in applications.

Unpopular?

The gratuitous breaking of backwards compatibility by IPv6, inflicting staggering inefficiencies felt directly or indirectly by all internet users, should be a canonical case study by now. It should be taught to all engineering students as a cautionary tale: never, ever do this.


I would like to read about a better design that should be implemented instead - can you share some links? I cannot imagine how compatibility can be achieved given that address fields in IPv4 are fixed 32-bit.


I strongly believe IPv6 would be more fully deployed if they had only extended the address fields, and left everything else the same.

Things like Neighbor Discovery Protocol don't have to exist; ARP would work fine; the protocol already contemplates different length addresses, it would just need a constant assigned for IPv6.


I'm fairly sympathetic here - except part of the blame should really be on the socket layer and resolver interface. If they had been a bit better at modelling multiprotocol networks, this kind of transition would have been easier.


After all I've read on HTTP/2 I'm still not entirely sure what problem it is trying to solve.


The main benefit is multiplexing - being able to use the same connection for multiple transactions at the same time. This can help find and keep the congestion window at its calculated maximum size, reduce connection-related start-up costs, and overcome waiting for a currently-used connection to become free if you have a max-connections-per-server model.

The other potential benefits were priorities and server-initiated push, but both I’d say largely went unused and/or were too much trouble to use. Priorities were redesigned in HTTP 3 - more at https://blog.cloudflare.com/adopting-a-new-approach-to-http-... - and Chrome recently decided push in HTTP 2 wasn’t worth keeping around - https://www.ctrl.blog/entry/http2-push-chromium-deprecation....

HTTP 2's main problem is head-of-line blocking in TCP - basically, if you lose a packet, you wait until that packet is retransmitted, and can only acknowledge a limited number of packets in the meantime - slowing the connection down. With multiplexing, this means that a bunch of in-flight transactions, as well as potentially future ones, are blocked at the same time. With multiple TCP connections, you don't have this problem of a dropped packet affecting multiple transactions.

HTTP 3 has many more benefits - basically, all the benefits of multiplexing without the head of line blocking (instead, only that stream is affected), as well as ability to negotiate alternative congestion control algorithms when client TCP stacks don’t support newer ones - or come with bad defaults. And the future is bright for non-HTTP and non-reliable streams as well over QUIC, the transport HTTP 3 is built on.


Right, all this kind of feels as if HTTP/2 is trying to solve transport layer problems in the application layer. Especially if you leave out the server initiated push. I can't really pretend to know much about this but I can't say I'm surprised that this causes problems when the underlying transport-layer protocol is trying to solve the same problem.

So is it correct to view HTTP/3 as basically taking a step back and just running HTTP over a different transport-layer protocol (QUIC)? (If so I think the name is a bit confusing, HTTP over QUIC would be much clearer)


That's true, but the transport layer has ossified, and the application layer is the only place we can still innovate. RIP SCTP.


Still sad; it would have been much nicer to just keep HTTP as-is and just put in a different transport layer. Or maybe extend HTTP a little; but right now we've got a protocol-independent HTTP/1.1 and a new HTTP/3 which, rather than being more general, strictly relies on a single transport protocol.


In some ways, HTTP 3 is the same HTTP messages, just over QUIC.

But as you get into other features, there are differences. And your clients and servers need to both have fallbacks to HTTP 2, since UDP connectivity might not be available, and fallback is expected.

So, you have to build support for having working push, or not. Or having working priorities, or not. Or having long-lived non-HTTP sockets, or using fallbacks like web sockets or even long polling. There’s even more fun on the horizon, and I’m not looking forward to my colleagues thinking of a fallback strategy for some of those...


It was originally called HTTP over QUIC, and got renamed to HTTP/3 in order to avoid some other confusion.

https://en.wikipedia.org/wiki/HTTP/3#History


HTTP/2 is what you do if you're confined to using TCP. HTTP/3 is what you get if you use UDP to solve the same problems (and new problems discovered by trying it over TCP).


HTTP/2 does solve one issue very well. The server's goodbye (GOAWAY) message indicates the last request seen, which enables the client to know whether a request sent before closing can be retried or not.

In HTTP/1.1, if you send a non-idempotent request on a keep-alive socket and the server closes it 'simultaneously', you won't know if the server didn't see it (safe to retry), or if the server crashed while processing it (not intrinsically safe to retry).

TCP-in-TCP offers predictably bad worst cases, but does offer some nice effects in good cases.


HTTP/2.0 has TCP head-of-line issues; in practice that nullifies its usefulness!

HTTP/1.1 is much more balanced and simple, and as I said all over this topic the bottleneck is elsewhere!


The obvious risk with plain-text protocols is that you don't write a rigorous spec, and don't write a strict parser, but hack up some least-effort thing with a few string.split()s and whatever. This means there is a lot of slack in what is actually accepted, and unless you are in full control of both ends of the protocol, that slack will be taken advantage of, and unless you are more powerful than whoever is at the other end (which you aren't, if they're your clients and you are not Google or Facebook), you have to support it forever. So write plain-text protocols if you like, but make sure to have a rigorous spec, and a parser with the persnicketiness of... I don't know.
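As a tiny Python sketch of that slack (header line invented): the least-effort split() hack chokes on legal input that a spec-driven parser handles without ceremony:

  line = "Host: example.com:8080"
  # The least-effort `name, value = line.split(":")` raises ValueError here,
  # because the value itself contains a colon - legal, and common.
  name, _, value = line.partition(":")   # split on the first colon only
  name, value = name.strip(), value.strip()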


I'm thinking here of "standards" like csv and mbox that are almost impossible to handle with 100% reliability if you don't control all the programs that are producing them. It can get even worse with some niche products. I used to work with a piece of legal software that defined its own text format, and had a nasty habit of exporting files that it couldn't import. There was a defined spec, but it was riddled with ambiguities.

I'm coming to think that, when it comes to text formats, it's LL(k) | GTFO.


Don't you have the same problem with ill-specified binary protocols?


To a certain extent, yes. However, with a binary protocol, at least the syntactic level tends to be fairly solidly locked down, simply because you have to. Two bytes big-endian unsigned integer of that, four bytes big-endian signed integer of that, etc. The confusion happens at the semantic level. With text protocols, the confusion starts in the parser (see other comments about continuation lines in HTTP, for example).

And I forgot to say in my first comment: If you design a text-based protocol that can carry textual data, make sure it can handle any text you throw at it (and preferably any byte sequence), i.e. decide on and implement a good quoting convention from the start. I have seen a text format, initially used to store config data, but later extended to other things in the system because it was there, that used <<< and >>> as string delimiters. Then some client named something <<<foo>>>. (This was the early '90s, so JSON wasn't invented yet!) And yesterday I spent a large chunk of my working day sorting out problems originally caused by a system exporting semicolon-separated almost-CSV, and an enterprising tester putting a semicolon in a text field.
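For contrast, a quoting convention decided up front makes the semicolon a non-event; a sketch with Python's csv module (values invented):

  import csv, io

  buf = io.StringIO()
  w = csv.writer(buf, delimiter=";", quoting=csv.QUOTE_MINIMAL)
  w.writerow(["id", "note"])
  w.writerow([1, "tester; strikes again"])   # quoted automatically
  print(buf.getvalue())
  # id;note
  # 1;"tester; strikes again"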


That works until next time someone considers the strictness of the existing protocol too overwhelming or complicated and invents a new "simpler" one.


TCP is strict. If you want something rich, you just run another protocol on top of the strict one. There can be many layers: TCP -> TLS -> HTTP -> JSON -> Base64 -> AES -> PNG -> Deflate -> Picture.


"Plaintext" is ASCII binary that is overwhelmingly English. The reason people like plaintext is that we have the tooling to see bits of the protocol as it comes over the wire. If we had good tooling for other protocols, then the barrier to entry would be lower as well.


Yes, exactly. I love binary protocols/formats. Plain text formats are wasteful, and difficult (or at least annoying) to implement with any consistency. But you really do need a translation layer to make binary formats reasonable to work with as a developer. There are very good reasons why we prefer to work with text: we have a text input device on our computers, and our brains are chock full of words with firmly associated meanings. We don't have a binary input device, nor do we come preloaded with associations between, say, the number 4 and the concepts of "end" and "headers." (0x4 is END_HEADERS in HTTP/2.)

Once you have the tools in place, working with binary formats is as easy as working with plaintext ones.

Of course making these tools takes work. Not much work, but work. And it's the kind of work most people are allergic to: up-front investment for long-term gains. With text you get the instant gratification of all your tools working out of the box.

I don't think I'd go so far as to say that plain text is junk food, but it's close. It definitely clogs arteries. :)


> "Plaintext" is ASCII binary that is overwhelmingly English.

I don't see any reason why "plaintext" must be limited to ASCII. Many "plaintext" protocols support Unicode, including the ones listed in this article. Some protocols use human language (as you said, overwhelmingly English), but many do not. There is nothing inherent about plaintext which necessitates the use of English.

> The reason people like plaintext is that we have the tooling to see bits of the protocol as it comes over the wire. If we had good tooling for other protocols, then the barrier to entry would be lower as well.

I disagree.

Humans have used text as the most ubiquitous protocol for storing and transferring arbitrary information since ancient times. Some other protocols have been developed for specific purposes (eg traffic symbols, hazard icons, charts, or whatever it is IKEA does in their assembly instructions), but none of them have matched text in terms of accessibility or practicality for conveying arbitrary information.

I think your statement misrepresents the relationship between tool quality and the ubiquity of the protocol. Text has, throughout most of recorded human history, been the most useful and effective mechanism for transferring arbitrary information from one human to another. Text isn't so ubiquitous because our tooling for it is good; our tooling for text is good because it is so ubiquitous.

Text is accessible to anyone who can see and is supplemented by other protocols for those who can't (eg braille, spoken language, morse code). It is relatively compact and precise compared to other media like pictures, audio, or video. It is easily extended with additional glyphs and adapted for various languages. There's just nothing that holds a candle to text when it comes to encoding arbitrary information.


There is nothing inherently human readable about plain text. It's still unreadable bits, just like any other binary protocol. The benefits of plain text are the ubiquitous tools that allow us to interact with the format.

It would be interesting to think about what set of tools gives 80% of the plain text benefit. Is it cat? grep? wc? An API? Most programming languages I know of can read a text file and turn it into a string, which is nice. The benefit of this analysis would be that, when developing a binary protocol, it would be evident which support tools need to be developed to provide plenty of value.

I'm not afraid of binary protocols as long as there is tooling to interact with the data. And if those tools are available, I prefer binary protocols for their efficiency.


> There is nothing inherently human readable about plain text. It's still unreadable bits, just like any other binary protocol. The benefits of plain text are the ubiquitous tools that allow us to interact with the format.

You seem to have glossed over my whole point about how the ubiquity of text is what drives good tooling for it, not the other way around. Text is not a technology created for computers. It has been a ubiquitous information protocol for millennia.

> I'm not afraid of binary protocols as long as there is tooling to interact with the data. And if those tools are available, I prefer binary protocols for their efficiency.

I'm not afraid of binary protocols either and there are good reasons to use them. The most common reason is that they can be purpose-built to support much greater information density. However, purpose-built protocols require purpose-built tools and are, by their very nature, limited in application. Therefore, purpose-built protocols will never be as well supported as general-purpose protocols like text.

That isn't to say that purpose-built protocols are never supported well enough to be preferable over text. Images, audio, video, databases, programs, and many other types of information are usually stored in well-supported, purpose-built, binary protocols.


> I'm not afraid of binary protocols as long as there is tooling to interact with the data.

I agree with this premise but would also note how long it takes for such tooling to become widespread. Even UTF-8 took a while to become universal — I recall fiddling with it on the command line as recently as Windows 7 (code page 1252 and the like).


> It would be interesting to think about what set of tools gives 80% of the plain text benefit.

My experience with binary protocols is that one of the first tools you write is one that converts it to a text format, and you then receive nearly 100% of the plain text benefit, as long as you can use that tool.
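In the simplest case that first tool is little more than a hex-plus-ASCII dump; a generic Python sketch (a protocol-specific tool would decode fields into names, but even this recovers much of the benefit):

  def dump(buf, width=16):
      # hexdump-style view: offset, hex bytes, printable ASCII
      for off in range(0, len(buf), width):
          chunk = buf[off:off + width]
          hexed = " ".join(f"{b:02x}" for b in chunk)
          shown = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
          print(f"{off:08x}  {hexed:<{width * 3}} {shown}")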


ASCII has a built-in markup language and a processing control protocol that most people aren't even aware of and most tools out there don't support. This is significant. Look at the parts that are used and parts that aren't. What is the difference between them?


I think the big reason the ASCII C0 characters never took off is that you can't see or type them.[a] If I'm writing a spreadsheet by hand (like CSV/TSV), I have dedicated keys for the separators (comma and tab keys). I don't have those for the C0 ones. I don't even think there are Alt-### codes for them.

[a]: Regarding “seeing” them, Notepad++ has a nifty feature where it’ll show the control characters’ names in a black box[0]

[0]: https://superuser.com/questions/942074/what-does-stx-soh-and...


> Notepad++ has a nifty feature

Most physical terminals had the ability to show hex or control characters instead of/in addition to text.


Heh. I used those control characters to embed a full text editor within AutoCAD on MS-DOS. Back in the day. Mostly because someone bet me it couldn't be done.


I don’t know. Can you tell me? ;)


I assume the parent is referring to the various control characters like "START OF HEADING", "START OF TEXT", "RECORD SEPARATOR", etc... I haven't seen most of these used for their original control purpose but they date back a long way:

https://ascii.cl/control-characters.htm


I've seen them in some vendor-specific data formats in the financial space.

They seem to be from an era when the formatting models were either fixed-width fields, or a serial set of variable-width fields delineated by FIELD SEPARATOR, GROUP SEPARATOR, etc.

What both models lacked was a good way to handle optional/sparse fields. If you have a data structure with 40 sub-fields, a JSON, XML or YAML notation can encode "just subfield 26" pretty efficiently, but the FIELD SEPARATOR model usually involves having dozens of empty fields to space out the one you want, and a lot of delicacy if the field layout changes.
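A sketch of the difference in Python (field names hypothetical; 40 sub-fields with only number 26 populated):

  import json

  # JSON-style notation names the one field it carries:
  print(json.dumps({"field26": "x"}))          # {"field26": "x"}

  # A positional UNIT SEPARATOR layout pads out the other 39 slots:
  US = "\x1f"
  print(US.join([""] * 25 + ["x"] + [""] * 14))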


The bits that aren't used don't correspond to printable characters :).


The “C0” block (U+0000 through U+001F) https://en.wikipedia.org/wiki/C0_and_C1_control_codes

They’re almost never used in practice however.


I disagree that tooling makes up for the lack of human readability in a binary protocol. One of the reasons text-based protocols are so convenient to debug is that you can generally still read them when one side is screwing up the protocol. tcpdump: “oh, there’s my problem” Custom analyzer: “protocol error”


Pretend for a moment that HTTP used https://en.wikipedia.org/wiki/Esperanto instead of English. You would need tooling to translate Esperanto to English.


Yes. Please feel free to assume that everywhere I say “plain text,” I mean “plain text that is not intentionally obfuscated.” I apologize for not being clear.


Would that really cause a problem in determining if the text being sent is well-formed?

Having GET, POST, PUT, etc. and header names be in another language wouldn't prevent you from determining the well-formedness of the text.


I don't think we necessarily even need good tooling for other protocols, we just need good binary analysis tooling that visualizes any binary buffer.

I don't know of a single good app that exists for that.


Wireshark is somewhat useful if the protocols in the binary blob are supported.

First, the buffer must be converted to ASCII hex, then the following procedure is used to import it: https://www.wireshark.org/docs/wsug_html_chunked/ChIOImportS...


That's because you need to know the format to know how to interpret it. Otherwise the best you can really do is use a hex editor.

Or are you suggesting a tool that lets you easily specify the binary format? I'm pretty sure there are some that exist.


Not really. But you can, for example, guess at it.

Let's say I have a struct I'm looking for, and I know it has a UTF-8 string and a length, presumably an unsigned int.

Using a hex editor alone to visualize blocks of those structs is painful. Enough that it's not pleasant to do. I can do it, but man, in 2021 I really want a tool to help visualize that for me.

I can provide the app hints about what I'm looking for so it doesn't have to attempt to coalesce bytes into various data types. Some apps actually do this! But none of them do it very well or in an attractive way.


Yet plaintext - again and again - stands the test of time.

I think the tooling analogy is like SLR cameras. I thought the camera was the important part of the equation, but it turns out the camera body is tossed out and replaced every couple of years. The lenses are the part that survive.


They are self-documenting. Looking at a binary format that starts with 41 41 41 41: is that a string, unsigned int, signed int, float, struct? Who knows?


Don't forget Gopher!

Original: https://tools.ietf.org/html/rfc1436

Gopher+: https://github.com/gopher-protocol/gopher-plus

I feel like there is a lot of potential to "rejuvenate" Gopher somewhat in today's internet. No javascript, no cookies, no ads, no auto-play videos etc.

There are some nice new modern GUIs like https://gophie.org that are cross platform and modern.

Fun fact: Redis (yes - that Redis) just added Gopher protocol support (https://redis.io/topics/gopher)


I hope you are aware of Gemini, which aims to do exactly that?


I sometimes wonder if the web would have been a better place if CSS and everything but the most basic HTML tags had never existed.


yes


But also no hyperlinks, no mixing text with images, and no unicode support in the menus.


I think there is probably a lot of mileage that can be gained serving markdown over gopher, with embedded gopher:// hyperlinks and images and utf8 and everything else markdown supports via gopher itself. Gopher0 already has sort-of support for HTML file types, so this would not be such a wild divergence from the original design. Not serving HTML provides some basic guarantees (no JavaScript, no tracking pixels, etc.)

Gopher+ allows for quite flexible (albeit clunky) attributes for menu items, so I can imagine an attribute on a menu directing compatible browsers to the markdown version of the menu, while allowing old clients to just view the traditional menu. This kinda relegates the gopher menu to the sort of old-school directory listing type thing we used to see in HTTP, but there is room for some fanciness via gopher+ to style menus themselves if browsers support that too!

All of this is possible in Gopher+ if clients support it (...and there is an appetite for it). Perhaps we need some sort of "agreement"/Python-PIP-style thing to define sets of common Gopher+ attributes for all of this sort of thing.


You might be interested in Gemini!

https://gemini.circumlunar.space/


Plain text protocols are:

- human readable

- good for quick prototyping

- good for inspection while debugging

They are also:

- complicated and slow to parse

- more bloated than binary

We benefited from text protocols because we had plenty of headroom: memory was cheap enough, storage was cheap enough, network was cheap enough, power was cheap enough for our use cases. But that's not quite so true anymore when you have to scale to support many connected systems and handle lots of data. The honeymoon's almost over.

These are some of the reasons why I'm building https://concise-encoding.org


> We benefited from text protocols because we had plenty of headroom: memory was cheap enough, storage was cheap enough, network was cheap enough, power was cheap enough for our use cases. But that's not quite so true anymore when you have to scale to support many connected systems and handle lots of data. The honeymoon's almost over.

Sorry, but memory, storage and network are all orders of magnitude cheaper today than when most of these text protocols were originally developed.

We have significantly more capacity today than we did back then. Thats why we waste all that headroom on reinventing everything in javascript.


They also tend to become interop nightmares due to having an incomplete/ambiguous/non-existent grammar. Different implementations end up implementing slightly different rules which ends up requiring an accumulation of hacks to parse all the variations that end up in the wild.


Found the healthcare IT guy.

This is the HL7 spec's issue. Everyone interprets the spec slightly differently. It's given rise to the interface engine, a type of very powerful software that sits between systems and makes things work properly, which is why I love them.


> They are also:

- Get funny when you need to pass binary data.


They also all eventually encode control data as text, which then causes errors with parsing some data that coincidentally has those same control characters in it.

Just look at the garbled URLs you see sometimes, more percent signs than a Macy's sale.


Memory, CPU, and bandwidth are cheap. The good reason to optimize a protocol for machines is when you spend more on computers parsing plain text than on humans reading binary dumps. Most companies are not near that kind of transition.


- complicated and slow to parse

- more bloated than binary

Not necessarily, you can write fast, compact protocols with text... sending integer and float data as text is not the bottleneck in any system:

See my root comment!


Floating point numbers are INCREDIBLY complicated to scan (and print).

https://code.woboq.org/userspace/glibc/stdio-common/vfscanf-...

Compare that with reading 4 bytes directly into an ieee754 float32.

If your messages are short, the benefits of fast codecs are outweighed by the inefficiencies in the communication system (most of your time, processing power and bandwidth are taken up by setting up and tearing down the communication medium). If it takes 7 "administration" packets to send 1 data packet, your codec won't be your bottleneck (in which case you probably don't care about efficiency anyway, and this discussion is not for you).
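A sketch of the contrast in Python (the four bytes below are just float32 pi, little-endian):

  import struct

  x = float("3.14159e0")   # text: a general scanner handles signs, exponents, ...

  (y,) = struct.unpack("<f", b"\xdb\x0f\x49\x40")   # binary: four fixed bytes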


It's not that bad, maybe 10x of nothing.

There are much bigger fish to fry when building a large network solution, most prominently getting the thing to be debuggable on live machines!


>large network solution

You basically restrict networking to big monopolists, like Google, but Google likes binary protocols, like grpc, http2 and quic. And if you have a bug in a complex parser, having text won't help debuggability, because the bug is not in text.


I make my own open-source systems; Google is going down a very wrong path recently with defaulting to HTTPS and deprecating HTTP.

GRPC, HTTP2, HTTPS, WebSockets, QUIC (HTTP3) are all desperate attempts for job safety. HTTP/1.1 is good enough for 99% of human network requirements.

If you are depending on google, make sure you have alternatives because they are VERY unreliable long term.


My favorite plain text protocol is HTML's server-sent events, within HTTP. It's really trivial to make a server produce these -- it's just some simple newline-delimited printf()s to the socket -- and they manifest client-side as simple event objects.

https://html.spec.whatwg.org/multipage/server-sent-events.ht...
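A minimal sketch of the server side over a raw socket, in Python rather than C (port and payload invented; a real server would at least parse the request):

  import socket

  srv = socket.create_server(("127.0.0.1", 8000))
  conn, _ = srv.accept()
  conn.recv(4096)  # discard the request for this sketch
  conn.sendall(b"HTTP/1.1 200 OK\r\n"
               b"Content-Type: text/event-stream\r\n"
               b"Cache-Control: no-cache\r\n\r\n")
  for i in range(3):
      # each event is newline-delimited text ending with a blank line
      conn.sendall(f"data: tick {i}\n\n".encode())

On the client, new EventSource("http://127.0.0.1:8000") surfaces each of those writes as a message event.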


Don't you have to deal with the problem of having a double newline in the field? Any time the value has a newline you have to restart with the field name, colon, and space. So it's not quite trivial to produce.


I thought server sent events were dead, or am I thinking about something else?


They are simple, but very slow, in my experience.


Slower than websockets or WebRTC?


The usage I've seen is to send log lines. In that case it's a lot slower than bundling stuff up into single requests/responses over HTTP.


You mean, using server-sent events for things that are not events but data, one row at a time?

Of course that'd be slow, but that's equivalent to sending one websocket frame for each row, or one file per row over HTTP multipart. Hardly speaks to the speed of either protocol.


I disagree. SSE makes one design easier. If that's great, great. If not, not. But per-unit, SSE ships more slowly because it doesn't allow for easy batching.


HTTP is unbeatable when you remove optional headers - not because of bandwidth, but because there are robust servers that can multi-thread joint-memory access with non-blocking IO and atomic concurrency.

I use comet-stream for real-time 3D action MMO data, so I have my own text-based protocol wrapped in 2x HTTP sockets:

F.ex. message = "move|<session>|<x>,<y>,<z>|<x>,<y>,<z>,<w>|walk":

### push (client -> server):

  "GET /push?data=" + message + " HTTP/1.1\r\nHost: my.host.name",
then you get back

  "200 OK\r\nContent-Length: <length>\r\n\r\n<content>". In this case "0\r\n\r\n".
### pull (server -> client):

Just one request pulls infinite response chunks:

  "GET /pull HTTP/1.1\r\nHost: my.host.name\r\nAccept: text/event-stream"
then you get back

  "200 OK\r\nTransfer-Encoding: chunked\r\n\r\n".

  while(true) {
    <hex_length> + "\r\n"
    "data:" + message + "\n\n\r\n\r\n"
  }
Simple and efficient!

Text- + Event- based protocols over HTTP way outscale Binary- + Tick- based ones for compressed (as in averaged, not zipped) real-time data.


These look easy, but I don’t think they always are.

Think about CSV for example. It looks simple to create and parse. In reality these simple implementations will give you a lot of headaches when they don't handle edge cases (escaping, linefeeds, etc.).


If only they had used the ASCII record separator instead of a comma.


You would still need to be able to handle escaping, right? Otherwise you couldn't have a string with a record separator within a CSV column.


I always preferred tabs to CSV, but how I wish our industry had made use of the ASCII record separator character. How many hours would I and my teams have saved in the last 20 years?


Then you couldn't type it in a text editor.


Standard unix-style input accepts ctrl-^ as ASCII RS and ctrl-_ as ASCII US. If you want your terminal to accept an ASCII US literally — so that you can use it as the -t argument to sort, for example — you would use ctrl-v ctrl-_ to give it the literal character.

  $ hd
  ^^^_
  00000000  1e 1f 0a    
  00000003

  $ sort -t "^_" -k 2,2n
  a^_42
  z^_5
  n^_7
  z5
  n7
  a42


I guess it depends on the text editor. Mapping a key to insert that character is one possible solution.


The problem with CSV isn't that it's text based, the problem is that "CSV" isn't a file format with an authoritative description.


There is an RFC (4180) for CSV, but the truth of the matter is that there are thousands of parsers written to whatever spec the author thought up.

In the real world the entire spec is contained in its three word name.

In the end I think the simplicity was also a weakness. Because the spec is so minimal, a programmer goes "Oh, I'll just write my own, no problem", where a more complex protocol would have necessitated finding some existing library instead. Whatever the author of that library did would become the de facto standard and there would be less incompatibility between files.


That reminds me of the time I needed to parse some XML and ended up writing my own parser...


* Words muttered through the door of the insane asylum.


It was to talk to a computer named CU-CU, to control dragons in outer space. Also, it helped keep astronaut ice cream cold for folk in Alabama. I'm not crazy, just ask Elon Musk!


Plain text protocols are in serious danger as the (confused) desire for TLS-only everywhere spreads with the best intentions. The problem is that the security TLS-only brings to protocols like HTTP(s) also brings with it massive centralization in cert authorities which provide single points of technical, social, and political failure.

If the TLS-everywhere people succeed in their misguided cargo-cult efforts to kill off HTTP and other plain text connections everywhere, if browsers make HTTPS-only the default, then the web will lose even more of its distributed nature.

But it's not only the web (HTTP) that is under attack from the centralization of TLS; even SMTP with SMTPS might fall to it eventually. Right now you can self-sign on your mailserver and it works just fine. But I imagine "privacy" advocates will want to exchange distributed security for centralized privacy there soon too.

TLS is great. I love it and it has real, important uses. But TLS only is terrible for the internet. We need plain text protocols too. HTTP+HTTPS for life.


There’s a difference between the transport and the protocol. For instance I’ve used Redis fronted by TLS in the past. Initial connection did get more tricky for sure, needing to have the certs in place to first connect.

However, after the connection was established with OpenSSL I was able to run all the usual commands in plain text and read all the responses in plain text. Having transport layer encryption on the TCP connection didn't affect the protocol itself at all.


You can talk plain text protocol through a TLS or SSL-encrypted connection, even interactively.

Example:

        { echo GET / HTTP/1.0 ; echo ; sleep 1 ; } | openssl s_client -connect www.google.com:443
Or just :

        openssl s_client -connect www.google.com:443
then interactively type GET / HTTP/1.0 and press enter twice.


Using openssl s_client -ign_eof makes piping text a bit easier because the connection won't be closed prematurely (so you don't need to use sleep 1).


The mixing of trust and encryption that resulted in centralized TLS was probably a design flaw. Certificate pinning in DNS is an attractive "fix", but moves the problem up a layer. But DNS is already centralized, so there's that.

> Right now you can self sign on your mailserver and it works just fine

Well... sort of. Until you have to interact with Google or MS mail servers. After an hour of wondering why your mail is getting blackholed, one starts to reconsider one's life choices.


Yeah, but no. I can demonstrate the necessity for HTTPS right now. Because Comcast is my only realistic option for high-speed Internet service where I live, I'm stuck with them.

Because of their ridiculous data cap policy I've been getting warnings I'm approaching my cap for the month. I dared to watch Netflix and do work this month!

So if I were to go right now and access a web page via HTTP I'll get a pop up inserted into the page by Comcast telling me I'm approaching my data cap. There was a time when ISPs were inserting their own ads and tracking into pages.

Ergo, your ISP cannot be trusted to deliver data to you unmolested. Unless you've got pre-shared signing keys, like your Linux distro's package signing keys, you have little assurance that what you received is what you requested.

Arguing that TLS enforces centralization is laying it on really thick. Self-signed certs are a thing, and both they and expired certs must be accepted by browsers. They can throw up errors or warnings but they can't reasonably disable them.

Also because users have to be able to import corporate/government root certs it's entirely possible to add some grass roots root certs (bad cow pun) that aren't beholden to some "establishment" set of root certs. You can also have your own grass roots DNS root servers if you really want.


But you're really just trading one trusted party for another, right? Now you don't need to trust your ISP (as much), but you do need to trust the certificate authorities.


Many ISPs have repeatedly demonstrated they are untrustworthy. Unfortunately in the US they tend to have regional monopolies. I don't have a realistic option outside of Comcast. So I have no choice but to use a shitty ISP and all of my traffic has to pass through them.

I do have a choice in trusting certificates. I can revoke trust of certificates, chains, and even root certificates. I can also choose to trust self signed certificates. But I don't have to trust my shitty ISP not to meddle in my traffic. I also don't have to trust the networks between my shitty ISP and a server. I don't need to trust those networks because I can verify a server's traffic against a chain of signing keys.

With unencrypted traffic, every network between nodes is going to peek at the content and you have no way of knowing if they modified it in transit*. TLS provides encryption and verification. I don't trust my ISP at all. I don't have unlimited trust of CAs, but they have less ability to compromise all of my traffic than my ISP does.

* Without pre-shared signing keys.


Gemini uses TLS and it is common practice for Gemini clients to use self-signed certificates and TOFU. No dependency on centralized CAs.


TOFU seems to work pretty well for SSH. AFAIK not many people actively verify host fingerprints on first use. It doesn't protect against MITM attacks on the first connection, but I wonder if that's not a case of better being the enemy of good to some extent?


The high value targets are much more spread about with SSH than with HTTP. Finding a place where you could inject yourself between, for example, a user ssh'ing into a banking service and the banking service is going to be difficult. Just blindly MITM'ing a bunch of users at a coffee shop will probably get you little to nothing of any real value.

And because SSH is rarely used for the public to connect in to services it's a lot easier to add additional layers of security on top. Most valuable targets won't even be exposed in the first place or will have a VPN or some other barrier that would prevent the attack anyway.

From the HTTP end though, it's easy to narrow down "valuable" targets--there are like 5 main banks in my country. They're, by design, meant to be connected to by the public so there are no additional layers of security implemented. If you set up in a coffee shop for a day there's a pretty reasonable chance you'd find at least one or two people that had just bought a new device or were otherwise logging in for the first time that you could nab.

You'd also run into the issue of what to do when sites needed to update their certificates for various reasons. If the SSH host key changes it's pretty easy to communicate that out-of-band within a company to let people know to expect it. If a website's certificate changes what do we do? We end up training users to just blindly click through the "this website's certificate has changed!" warning and we're back to effectively zero protection.


> If you set up in a coffee shop for a day there's a pretty reasonable chance you'd find at least one or two people that had just bought a new device

Sure, but it's easy to protect against this - just connect to the same service via a different endpoint and check that both endpoints get the same certificate. AIUI this is how the EFF SSL observatory detects MITM attacks in the wild, and similar approaches could be used to make TOFU-to-a-popular-service a lot more resilient, at least wrt. most attacks.
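A hedged sketch of that cross-check in Python (hostname hypothetical; the comparison fingerprint would come from a different vantage point or an out-of-band channel):

  import hashlib, ssl

  def leaf_fingerprint(host, port=443):
      # the server certificate as seen from *this* network path
      pem = ssl.get_server_certificate((host, port))
      return hashlib.sha256(pem.encode()).hexdigest()

  here = leaf_fingerprint("example.com")
  # A mismatch against another vantage point's fingerprint suggests a MITM.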


I sort of thought TLS everywhere was more about encryption than authentication.


If it was only about encryption then they'd be perfectly fine with unsigned certs. But they're not.


Don't worry, the day HTTP is deprecated is the day civilization is over.


IRC is also a simple plaintext protocol. Its simplicity helped an entire generation of programmers to appear.


IRC was the first protocol I implemented back in the day. It's so simple you don't even need a client; telnet is enough to get you chatting. I miss this directness.
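The whole handshake fits in a few lines. A sketch over a bare socket in Python (server name hypothetical; a real client would also have to answer the server's PING):

  import socket

  s = socket.create_connection(("irc.example.net", 6667))
  s.sendall(b"NICK alice\r\n")
  s.sendall(b"USER alice 0 * :Alice\r\n")
  s.sendall(b"JOIN #test\r\n")
  s.sendall(b"PRIVMSG #test :typed over a bare socket\r\n")
  print(s.recv(4096).decode(errors="replace"))  # replies are plain text too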


SIP and its cousin SDP are wonderfully readable plaintext protocols used for VoIP. If you don’t think so, have a look at SIP’s predecessor H.323.


Please avoid plain text protocols!

> * Simple to implement.
> * Fast to parse.
> * Human readable.

I'll give him one out of three. They are human readable. They are clearly not fast to parse - binary protocols are obviously faster. The biggest problem is "simple to implement".

That's just nonsense. Using text introduces so many ambiguities and tricky edge cases around quoting, whitespace, case, line endings, character encodings, escaping, etc. etc. etc. Soooo much more complicated than a simple binary system (e.g. Protobuf).

There was a link here only recently about how Go might introduce security issues by lowercasing HTTP headers. Would a binary protocol have had that issue? I seriously doubt it.

Don't use plain text protocols!!


I've written hundreds of parsers at this point, for many kinds of protocols - http, mqtt, canbus, midi, various strange binary stuff, CLIs, just lots and lots of parsing. I've found that text protocols are usually easier to parse, depending on the situation. Some binary protocols are easy because you can just cast the read buffer to a struct, but even that can fail when doing a partial read (which is the norm more than the exception in embedded spaces). And even in the best of cases you usually need to do a lot of post-processing to make sure that the data you read is correct. In the worst of cases you're bitshifting and you have conditionals everywhere and you're just not going to have a fun time. Canbus and midi are a lot easier to write parsers for than e.g. protobuf and mqtt. For text, stuff like http, redis, and those types of protocols are easy, but the text parsing world has json, which is the absolute worst of pains.


Canbus is binary isn't it? And Protobuf is very simple.

In any case the problem is that it's easy to write a parser for text data that works for the data you have. It's extremely difficult to write a parser that exactly matches the spec and gets all the quoting/charset/etc. stuff right.


Late reply here, but yes, canbus is binary and so is midi. They are both pretty simple. When I say that Protobuf is harder than canbus, what I mean is that you have to deal with e.g. variable-length integers ("varints") and key offsets. And yes, that's very true, full-spec parsing is a pain, but that's also a very rare special case. "In the real world" you usually don't really care about full compatibility. If someone can find and send your printer data to get it to print securely, and your customers can browse and log in to your web page and your app, and you can send your Kafka messages to your order service, then everyone is happy, and none of those things require a full-spec parser for anything.


Tangentially, I'm a bit surprised that we've completely dropped the ASCII separator control characters: 28/0x1C FS (File Separator), 29/0x1D GS (Group Separator), 30/0x1E RS (Record Separator), 31/0x1F US (Unit Separator).

It's a pity, because I usually see Space (32/0x20) as the record separator, which I suppose is convenient because it works well in a standard text editor, but it does mean we've built up decades of habit/trauma-avoidance about avoiding spaces in names, replacing them with underscores (_) or dashes (-)...


BTW, at least in a Unix terminal you can input the separator characters using Ctl with another char (because terminals inherit from the time when modifier keys just set/unset bits in the input), so:

- 28/0x1C FS = Ctl-\

- 29/0x1D GS = Ctl-]

- 30/0x1E RS = Ctl-^

- 31/0x1F US = Ctl-_
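They're also trivial to emit programmatically. A toy Go sketch (the records are invented for illustration):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // US (0x1F) between fields, RS (0x1E) between records -
        // no quoting or escaping needed as long as the values
        // themselves are ordinary text.
        records := [][]string{
            {"Ada Lovelace", "1815"},
            {"Alan Turing", "1912"},
        }
        joined := make([]string, len(records))
        for i, rec := range records {
            joined[i] = strings.Join(rec, "\x1f") // US
        }
        fmt.Printf("%q\n", strings.Join(joined, "\x1e")) // RS
    }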


While HTTP is plaintext alright, it's neither simple nor easy to parse. Probably wouldn't put it in the same group as statsd/InfluxDB line protocols which can be parsed in a few lines of code.
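To illustrate the comparison, a minimal sketch of parsing the basic statsd form, "name:value|type" (sample rates and multi-metric packets deliberately ignored):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseStatsd handles lines like "gorets:1|c".
    func parseStatsd(line string) (name, value, typ string, err error) {
        name, rest, ok := strings.Cut(line, ":")
        if !ok {
            return "", "", "", fmt.Errorf("missing ':' in %q", line)
        }
        value, typ, ok = strings.Cut(rest, "|")
        if !ok {
            return "", "", "", fmt.Errorf("missing '|' in %q", line)
        }
        return name, value, typ, nil
    }

    func main() {
        fmt.Println(parseStatsd("gorets:1|c")) // gorets 1 c <nil>
    }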


I really don’t understand what you mean by this. I’ve never thought http was difficult. What is it that you find problematic?


Legacy HTTP/1.1 suffers from a few issues; see the current RFC errata:

https://www.rfc-editor.org/errata_search.php?rfc=7230&rec_st...

There are issues particularly around how whitespace and obsolete line folding should be handled.

Various whitespace issues in node.js: https://github.com/nodejs/http-parser/issues?q=is%3Aissue+wh...

Spec clarification: https://github.com/httpwg/http-core/issues/53

Node.js's parser was at one point replacing all white space in headers with a single space character, even though until recently this was non-conformant (you were only supposed to do so with obs-fold). It did this so it didn't have to buffer characters (since http-parser is a streaming parser).

It's not as trivial as a few string splits. Node.js's parser is ~2,500 lines of C code.
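For anyone who hasn't run into obs-fold: older HTTP/1.x allowed a header value to continue onto the next line if that line began with whitespace, so (illustrative header name)

    X-Example: first part
      second part

is a single header whose value becomes "first part second part". A naive line splitter treats the second line as garbage or as a new header, and handling cases like this without buffering is exactly the sort of thing that inflates a streaming parser to thousands of lines.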


It would be good to have a standard checklist of edge cases to handle with plaintext protocol design. Anyone know of one?

I'm thinking along the lines of:

1. Control characters

2. Whitespace normalization

3. Newline normalization

4. Options for compression

5. Escaping characters significant to the protocol (see the sketch after this list)

6. Encoding characters outside of the normal character range

7. Dealing with ambiguous characters (not really an issue with strict ASCII)

8. Internationalization (which is intertwined with the previous items)

9. Dealing with invalid characters

I'm not saying plain text doesn't have its advantages, I'm just saying there are issues you need to consider.
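As a taste of item 5, a toy Go sketch of escaping in a newline-delimited protocol (the framing and escape scheme are invented for illustration):

    package main

    import (
        "fmt"
        "strings"
    )

    // '\n' terminates a message, so literal newlines in the payload
    // - and the escape character itself - must be escaped. The
    // escape character is handled first in both directions.
    var (
        escape   = strings.NewReplacer(`\`, `\\`, "\n", `\n`)
        unescape = strings.NewReplacer(`\\`, `\`, `\n`, "\n")
    )

    func main() {
        msg := "line one\nline two"
        wire := escape.Replace(msg) + "\n"
        fmt.Printf("%q\n", wire) // "line one\\nline two\n"
        body := strings.TrimSuffix(wire, "\n")
        fmt.Println(unescape.Replace(body) == msg) // true
    }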



What's sort of interesting is that there's no overwhelming reason why someone couldn't come up with a piece of software that autodetects a binary format and translates it to something readable in a GUI.

I mean, we know what the binary layouts of things are, so I never understood (outside of the time it would take to build such a utility) why I've never been able to find something that says, "Oh yeah, that binary string contains three doubles, separated by a NULL character, with an int, followed by a UTF-8 compatible string."

Such a tool would be incredibly useful for reverse engineering proprietary formats, and yet I don't know of a good one, so if it exists it's at least obscure enough for it to have escaped my knowledge for well over a decade.


There is a command-line program called "file" that attempts to determine the file type (format). It uses a series of known formats and returns the first matching one. I have found it useful to reverse engineer proprietary formats.


Yeah, but that's for known formats.

If I had a buffer of 512 bytes and piped it through to some CLI, I'd want it to tell me how many ints, chars, floats, doubles, compressed bits of data, CRC32s, UTF-8 strings, etc. it contained, but there are few utilities out there that will do that.


I'm curious how you'd propose doing that.

If I give you a buffer of 5 bytes:

[0x68 0x65 0x6c 0x6c 0x6f]

there are a ton of ways to interpret that.

    - The ascii string "hello"
    - 5 single-byte integers
    - 2 two-byte integers and 0x6c as a delimiter
    - 1 four-byte integer and ending in the char "o"
    - 1 32-bit float, and one single-byte integer
etc. Or are you hoping for something that will provide you with all the possible combinations? That would produce pages of output for any decently-sized binary blob.
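And every one of those readings is a single stdlib call away, which is part of why no tool can choose for you. A quick Go sketch:

    package main

    import (
        "encoding/binary"
        "fmt"
        "math"
    )

    func main() {
        buf := []byte{0x68, 0x65, 0x6c, 0x6c, 0x6f}
        fmt.Printf("%q\n", string(buf)) // "hello"
        fmt.Println(buf)                // five single-byte integers
        u := binary.LittleEndian.Uint32(buf[:4])
        fmt.Println(u, string(buf[4:]))              // one 4-byte int + "o"
        fmt.Println(math.Float32frombits(u), buf[4]) // one float32 + one byte
    }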


I'm sort of looking for something that will attempt to narrow down possibilities. The way I'd do it is by providing some visualizations based on the user selecting what data types and lengths they're looking for.

So for instance, if I know I'm looking at triangle data, I can guess that it's probably compressed, ask the app to decompress the data based on some common compression types, look at that data and guess that I'm looking at some floats or doubles.

Maybe I'm wrong, so then I can ask the app to search for other data types at that point.

To me, that would be a tremendous help over my experience with existing hex editors.

Edit: It's not fair for me to say there aren't tools that do exactly this, but to be more precise, a decent user experience is lacking in most cases.


Your post reminded me of the presentation on cantor.dust:

  https://sites.google.com/site/xxcantorxdustxx/

  https://www.youtube.com/watch?v=4bM3Gut1hIk - Christopher Domas, "The Future of RE: Dynamic Binary Visualization"
    (very interesting presentation)
Looks like there's even been a recently open-sourced Ghidra plugin released by Battelle:

  https://github.com/Battelle/cantordust


POP, IMAP and NNTP are also plain text protocols. What's interesting about SMTP as well as NNTP is that the data phase in the former and the post phase in the latter allow all ASCII characters to be transmitted as-is without any issue, other than NUL. The period needs to be escaped in certain cases so that a CRLF.CRLF doesn't prematurely end the data phase or article post. Clients actually employ "dot-stuffing" to address that case, meaning that any line that starts with a period is modified so that it starts with two periods before being transmitted to the server.

When a client receives the email or article, it will remove the extra period so that the lines start with a single period.
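A minimal sketch of the sending side in Go (note that the stdlib's net/textproto provides DotWriter/DotReader, which implement this framing for real, including unstuffing on receipt):

    package main

    import (
        "fmt"
        "strings"
    )

    // dotStuff prepares a body for the SMTP DATA phase: any line
    // starting with "." gets a second "." prepended, so a lone
    // ".\r\n" inside the body can't terminate the transfer early.
    func dotStuff(body string) string {
        lines := strings.Split(body, "\r\n")
        for i, l := range lines {
            if strings.HasPrefix(l, ".") {
                lines[i] = "." + l
            }
        }
        return strings.Join(lines, "\r\n")
    }

    func main() {
        fmt.Printf("%q\n", dotStuff("hi\r\n.\r\nbye")) // "hi\r\n..\r\nbye"
    }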


Plaintext protocols are a textbook case of a negative externality. Programmers who work on them capture value, but impose higher costs (vs a well-suited binary protocol) for parsing, serialising, transmission, storage, computation etc. onto everyone else.


Binary protocols are their own negative externality as well. They can be much harder to debug, requiring specialized tooling that may also have its own bugs. They can also suffer from things like insufficiently specified byte ordering issues and differences in floating point behaviour between systems.

I know of at least one game that is very close to, but not quite, cross-compatible between Windows and Linux due to differences in the way floating point numbers are handled. People think they can just sweep all of that parsing complexity under the rug by reading the bytes straight into their memory structures, but it comes back to bite them in the end.


> I know of at least one game that is very close to, but not quite, cross-compatible between Windows and Linux due to differences in the way floating point numbers are handled. People think they can just sweep all of that parsing complexity under the rug by reading the bytes straight into their memory structures, but it comes back to bite them in the end.

There is no difference in the binary encoding of floating point numbers between Linux and Windows - any differences there are (default rounding mode, etc) affect calculations and would be the same with a text-based format.

In fact, text-based formats make things worse because people tend to forget that C parsing functions take the locale into account which affects the symbol used for the decimal point in floating point numbers.


> requiring specialized tooling that may also have its own bugs

The common ones all have Wireshark plugins. So I'm not sure what's special.

> They can also suffer from things like insufficiently specified byte ordering issues and differences in floating point behaviour between systems ... I know of at least one game that is very close to, but not quite, cross-compatible between Windows and Linux...

I think this shows I did a poor job of explaining myself. I don't mean that everyone should create their own binary encoding. I'm saying that you should pick a well-known, well-supported encoding like protobufs, Avro, CBOR, flatbuffers ...

There are about a dozen strong contenders, all of which have tooling, library support and the ability to ROFLstomp plaintext on any measure of burden other than yours or mine.


Interesting perspective, but considering how much everyone seems to want software, I think we have to say that the commons also captures a lot of the same value from plain text that the programmers do. That might make it less a negative externality and more a trade-off for everyone, especially when you consider the positive network effects on the commons that text makes easier.


Interesting counterpoint. It would come down to how things net out. And it won't be stable over time. I'm still of the view that programmers systematically overvalue their convenience, because it's a value/cost tradeoff that they directly experience.


Fair. I do think the long term goal should be a compact binary format with equal or better tooling as plain text. Goodness knows there are enough formats and structural editors out there, so in principle we only need to standardize on one, but it seems none of them are actually quite good enough yet.


I am generally of the view that Avro is Good Enough for most things that plaintext is used for and is pretty well-supported.

Arrow looks very promising for cases where fast raw data shipment is the goal.


The tooling, though, we need the ubiquitous tooling. But that's not really a technical problem. :P Maybe when I pitch my hat in the structural editor ring I'll try to do an Avro editor.


[Edit] Ignore me, I thought we were talking plain text vs encrypted, not binary. :-)


I'm not sure how the protocol being text rather than binary helps the MITM? Are you misunderstanding "plain text" to mean "unencrypted" rather than "not binary"?


Oh yes, I guess I missed the point they were making. In that case, I don't have an opinion. Binary in most protocols has been a solved problem for a very long time.


OK - but what is Plain Text? Is it ASCII, or should it also include UTF-8 or other Unicode encodings? What is the difference between the bits that form HTTP/1 and the bits that form a binary protocol like HTTP/2?


Plain text protocols, unless they can be expressed in, say, 3 grammar rules, are almost always more pain than they're worth.

These days just go and use a flexible binary option like protobufs, flatbuffers, avro, etc.


I also wonder if letsencrypt will be dropped at some point so the SSL mafia can start squeezing everyone who has been shamed into using SSL.


I think the author would like TreeNotation [1]

https://faq.treenotation.org/


This holds up until you need to escape something.


Thanks for sharing this post.


Honestly, I dislike plaintext formats a lot. They’re more accessible because they’re human readable, but that only extends to humans who happen to speak the language the protocol uses for keywords. While it’s not a huge ask, I still suggest this is mostly not that interesting of a benefit.

Parsing and emitting plaintext formats, meanwhile, is a rabbit hole. It’s human readable, which tempts you to make it human writable. Should you accept extraneous whitespace? Tabs vs spaces? Terminating newline? Unix or DOS line endings? Case sensitivity? Unicode normalization forms? Etc.

Binary data may seem less accessible, but I blame the libraries. There are tons of easy (if inefficient) ways to parse text: you can use string.split, atoi and scanf in your language of choice. What is there for binary?

In Go, the encoding/binary package actually implements something really cool: a simple reflection-based mechanism that can read and write binary data into a structure in a defined and simple way.
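A minimal sketch of that mechanism (the Header layout here is invented for illustration):

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    // encoding/binary walks the struct via reflection, reading
    // each fixed-size field in declaration order.
    type Header struct {
        Magic   uint32
        Version uint16
        Flags   uint16
        Length  uint32
    }

    func main() {
        raw := []byte{
            0xCA, 0xFE, 0xBA, 0xBE, // Magic
            0x00, 0x02, // Version
            0x00, 0x01, // Flags
            0x00, 0x00, 0x00, 0x2A, // Length
        }
        var h Header
        if err := binary.Read(bytes.NewReader(raw), binary.BigEndian, &h); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", h) // {Magic:3405691582 Version:2 Flags:1 Length:42}
    }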

lunixbochs extended this to struc[1], which adds additional tags for advanced reading and writing of binary structures, including variable length structures. I went further and maybe a bit off into the deep end with Restruct[2], a similar concept but with a lot more features, designed specifically so I could handle advanced structures quickly.

The end result is that I can define some Go structs with integers, strings, byte arrays and corresponding tags, and be able to serialize and deserialize from those structures to their corresponding binary representation. For an overdone demo of what you could do with Restruct for example, see this (incomplete) PNG demo: https://github.com/go-restruct/restruct/blob/master/formats/... (It is mainly incomplete because I had moved focus to develop a codegen for restruct, to improve runtime performance, although such work has since stalled.)

Most of my inspiration for newer advanced features like expressions comes from Kaitai Struct[3], an excellent project I have also contributed to a bit. I learned about Kaitai from a friend after having written Restruct, though its lack of write capabilities has led me to continue using Restruct for many hacks, such as some tools I wrote that modified FL Studio Project files.

[1]: https://pkg.go.dev/github.com/lunixbochs/struc

[2]: https://pkg.go.dev/github.com/go-restruct/restruct

[3]: https://kaitai.io


STOMP

POP3

SMTP

NNTP


No mention of JSON? Alright, then I'll do it.



