> It references "Bug 1923344" but when I click the link I get "You are not authorized to access bug 1923344."

They usually make the bug reports public eventually.


“Do not let salesweasels anywhere near the bright shiny things” hopefully.


“Commoditise your complement” in action!


Use Tailscale instead? It’s user friendly enough that you’re unlikely to mess it up.


OpenBSD refactored their system to use async-signal-safe, re-entrant syslog functions, so it’s possible that the author of this code simply assumed it was safe to make this change, forgetting (or being completely unaware) that other platforms (which the OpenBSD ssh devs don’t actually claim to support) were still using async-signal-unsafe functions.
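
For context, the hazard looks roughly like this (a minimal C sketch, not the actual sshd code; handler names are invented):

    #include <signal.h>
    #include <syslog.h>
    #include <unistd.h>

    /* UNSAFE: syslog() may take locks or call malloc(). If the signal
       lands while the main program is inside malloc(), the handler
       re-enters it and corrupts heap state. */
    static void handler_unsafe(int sig)
    {
        (void)sig;
        syslog(LOG_INFO, "connection timed out");
    }

    /* SAFE: write() is on POSIX's async-signal-safe list. OpenBSD's
       syslog_r() provides a safe syslog-style call; most other libcs
       have no equivalent. */
    static void handler_safe(int sig)
    {
        (void)sig;
        static const char msg[] = "connection timed out\n";
        write(STDERR_FILENO, msg, sizeof(msg) - 1);
    }

    int main(void)
    {
        signal(SIGALRM, handler_safe);  /* swap in handler_unsafe to see the hazard */
        alarm(1);
        pause();
        return 0;
    }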


> Is this still possible? Are your emails getting delivered?

Mine are. Although it probably helps to have a static IP with a 25-year-long clean history.

Are there very occasional glitches? Sure. But I've seen ISPs drop everything from GMail on the floor for no obvious reason. I've seen GMail drop GMail email before. Same for every other large email provider.

To date I haven't seen any reason strong enough to push me to switch to a centralised email host. That day may yet come of course.


Google Private Cloud, I think.


All protocols include in-band signalling at some level: you have to put packets on the line, and those packets are ultimately just a stream of anonymous bytes.

If it wasn’t a period, it would be something else & you’d have to handle that instead.


> all protocols include in-band signalling somewhere

That's an incredibly reductionistic view of the world that's utterly useless for anything (including actually engineering systems) except pedantry. It's obvious that the level at which you include control information is meaningful and significantly affects the design of the protocol, as we see in the submission. Directly embedding the control information into the message body does not lead to a design that is easy to implement.

> If it wasn’t a period, it would be something else & you’d have to handle that instead.

Yes, and there are many other design choices that'd be significantly easier to handle.


No, it's not reductionist and pedantic. It's a reminder that there is no magic. Building an abstraction layer that separates control and data doesn't win you anything if, like the people in the article, you then forget it's a thing and write directly to the level below it.


> No, it's not reductionist and pedantic.

It's very reductionistic, because it intentionally ignores meaningful detail, and it's pedantic because it's making a meaningless distinction.

> It's a reminder that there is no magic.

This is irrelevant. Nobody is claiming that there's any magic. I'm pointing out the true fact that details about the abstraction layers matter.

In this case, the abstraction layer was poorly-designed.

Good abstraction layer: length prefix, or JSON encoding.

Bad abstraction layer: "the body of the email is mostly plain text, except when there's a line that only contains a single period".

There are very, very few problems to which the latter is a good solution. It is a bad engineering decision, and it also obfuscates the fact that there even is an abstraction layer unless you carefully read the spec.
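
To make the footgun concrete: a toy sketch (hypothetical receiver logic, not real MTA code) of a line-based reader that treats a lone "." as end-of-message, silently truncating an innocent body:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* an innocent message body that happens to contain a lone "." line */
        const char *body[] = { "Subject: prices", "", "Items:", ".", "5.99 each", NULL };

        for (int i = 0; body[i] != NULL; i++) {
            if (strcmp(body[i], ".") == 0) {   /* in-band end-of-DATA marker */
                puts("<receiver: end of message; the rest is lost>");
                return 0;
            }
            printf("stored: %s\n", body[i]);
        }
        return 0;
    }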

-------------

In fact, the underlying problem goes deeper than that - the design of SMTP is intrinsically flawed because it's a text-based ad-hoc protocol that has in-band signaling.

There are very few good reasons to use a text-based data interchange format. One of them is to make the format self-documenting, such that people can easily read and write it without consulting the spec.

If the spec is complex enough that you get these ridiculous footguns, then it shouldn't be text-based in the first place. Instead, it should be binary - then you have to either read the spec or use someone else's implementation.

Failing that, use a standardized structured format like XML or JSON.

But there's no excuse for the brain-dead approach that SMTP took. They didn't even use length prefixing.


I don't disagree with your criticisms of SMTP, but reading those early RFCs (e.g. RFC 772) is a reminder of what a wildly different place the Internet was back then, and in that light, I feel it only fair to grant some grace.

MTP had one concern: getting mail over to a host that stood a better chance of delivering it, at a time when the total host pool was maybe a hundred nodes.

I speculate that Postel and Sluizer were aware of alternatives and rejected them in favor of things that were easily implemented on highly diverse, low-powered hardware. Not everyone had IBM-grade budgets, after all.

Alternative implementations of mail that did follow the kinds of precepts that you suggest existed at one time. X.400 is the obvious example. If I recall correctly, it did have rigorous protocol spec definitions, message length tags for every entity sent on the wire, bounds and limits on each PDU, the whole hog. It was also crushed by SMTP, and this was in the era when you needed to understand sendmail and its notoriously arcane config to do anything. So sometimes the technically worse solution just wins, and we are stuck with it.


> or JSON encoding

JSON needs to escape backslashes, SMTP needs to escape a newline followed by a period. If you've already accepted doing escaping, what's the issue?
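
The escaping in question is tiny: "dot-stuffing" per RFC 5321 section 4.5.2 is one extra character on lines that start with a period. A toy C sketch (helper name is made up):

    #include <stdio.h>

    /* Sender side: prepend an extra '.' to any line beginning with one
       ("." -> "..", ".hidden" -> "..hidden"); the receiver strips it off.
       A lone "." on the wire then unambiguously means end-of-message. */
    static void send_stuffed_line(FILE *out, const char *line)
    {
        if (line[0] == '.')
            fputc('.', out);
        fprintf(out, "%s\r\n", line);
    }

    int main(void)
    {
        const char *body[] = { "Items:", ".", ".hidden", "5.99 each", NULL };
        for (int i = 0; body[i] != NULL; i++)
            send_stuffed_line(stdout, body[i]);
        fputs(".\r\n", stdout);   /* the real terminator */
        return 0;
    }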


Why not protobufs inside protobufs then?


> Good abstraction layer: length prefix, or JSON encoding.

> Bad abstraction layer: (...)

In this context, it shouldn't matter. Sure, "mostly plaintext except some characters in some special positions..." is considered bad in modern engineering practice; however, it's not fundamentally different from, or more difficult than, printf and family. You wouldn't start calling printf without at least skimming the docs for the format string language, would you?
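
The printf comparison is quite literal: the format string slot is in-band control, so data must never flow into it. A toy sketch of the same class of bug:

    #include <stdio.h>

    int main(void)
    {
        /* hostile, or merely unlucky, input */
        const char *user_input = "transfer 100%s complete";

        /* printf(user_input);          UNSAFE: the data is interpreted as
           control; %s reads a garbage pointer off the stack (undefined
           behaviour) -- structurally the same mistake as the SMTP bug. */

        printf("%s\n", user_input);  /* safe: data stays data */
        return 0;
    }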

> It is a bad engineering decision, and it also obfuscates the fact that there even is an abstraction layer unless you carefully read the spec.

There's the rub: you should have read the spec. You should always read the spec, at least if you're doing something serious like production-grade software. With a binary or JSON-based protocol, you wouldn't look at a few messages and assume you understand the encoding. I suppose we can blame SMTP for design that didn't account for human nature: it looks simple enough to fool people into thinking they don't need to read the manual.

> There are very few good reasons to use a text-based data interchange format.

If you mean text without obvious and well-defined structure, then I completely agree.

> One of them is to make the format self-documenting, such that people can easily read and write it without consulting the spec.

"Self-documenting" is IMHO a fundamentally flawed idea, and expecting people to read and write code/markup without consulting the spec is a fool's errand.

> it should be binary - then you have to either read the spec or use someone else's implementation.

That's mitigating (and promoting) bad engineering practice with protocol design; see above. I'm not a fan of this, nor the more general attitude of making tools "intuitive". I'd rather promote the practice of reading the goddamn manual.

> But there's no excuse for the brain-dead approach that SMTP took. They didn't even use length prefixing.

The protocol predates both JSON and XML by nearly two decades. It was created in times when C was roaming the world; length prefixing fell out of favour then, and only recently seems to be back in vogue.


> No, it's not reductionist and pedantic. It's a reminder that there is no magic.

Exactly! This is an even better phrasing of my point.


That's not a very useful definition of "in-band signaling". For me, the main difference is between an out-of-band protocol that says:

"The first two bytes represent the string length, in big-endian, followed by that many bytes representing the string text."

and an in-band signalling protocol that says:

"The string is ended by a period and a newline."

In the second one, you're indicating the end of the string from within the string. It looks simpler, but that's where accidents happen. Now you have to guarantee that the text never contains that control sequence, and you need an escaping method to represent the control sequence as part of the text.
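
Sketched out in toy C (the two-byte big-endian scheme described above; names are made up):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Length-prefixed framing: two bytes of big-endian length, then exactly
       that many payload bytes. The payload may contain anything -- periods,
       newlines, NULs -- and no escaping is ever needed. */
    static size_t encode(uint8_t *out, const char *s, uint16_t len)
    {
        out[0] = (uint8_t)(len >> 8);   /* high byte first */
        out[1] = (uint8_t)(len & 0xff);
        memcpy(out + 2, s, len);
        return 2 + (size_t)len;
    }

    int main(void)
    {
        uint8_t buf[64];
        const char *msg = "line one\r\n.\r\nline two";  /* would confuse SMTP framing */
        size_t n = encode(buf, msg, (uint16_t)strlen(msg));

        /* receiver side: read the length, then exactly that many bytes */
        uint16_t len = (uint16_t)((buf[0] << 8) | buf[1]);
        printf("frame: %zu bytes, payload %u bytes: %.*s\n",
               n, (unsigned)len, (int)len, (const char *)(buf + 2));
        return 0;
    }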


That isn't true at all. In most binary protocols, you put the size of the message in the header. Then any sequence of bytes is allowed to follow. There are no escape sequences.


That’s still in-band signalling: The metadata is in the same channel as the data.


In my experience, almost no protocols have in-band signaling. No protocol I’ve ever built has in-band signaling, because it’s nuts.

You always know what the next byte means because either you did a prefixed length, your protocol has stringent escaping rules, or you chose an obvious and consistent terminator like null.


SMTP has stringent escaping rules. The authors of the code in the article were incompetent.


If your metadata about the data is in the same channel as the data then you’re doing in-band signalling.


Not in modern times.

The terms hark back to the days of circuit-switched networks, but now that we have largely transitioned to packets, bands are an artificial construct on top of them and applying the term isn’t very clear cut.

The main property of in-band data in the circuit-switched days was that you could inject commands into your data stream. If we apply that criterion to a modern protocol, then even if you mix metadata and data in the same “band”, as long as your data can never be interpreted as commands, “out of band” is an apt description.

See https://en.m.wikipedia.org/wiki/Out-of-band_data


That's only true if you're not breaking the protocol abstraction layer. There is no "out-of-band" once you serialize your messages. If you start injecting random bytes into the data stream on the wire, you can absolutely start introducing commands, or confuse the receiver where the next piece of metadata/control is.

In this case, somewhere along the way the protocol abstraction layer got broken, and the message text ended up being treated as already serialized. It's not a problem with the protocol per se, but with a bad implementation of its API (or no implementation at all, just printf-ing into the wire format).


Injecting random data into any protocol will break it.

When we’re talking about whether someone can inject data into the link, we’re talking about the end user and not the software. If we’re talking protocol design, then you wouldn’t want regular data to be able to inject commands by simply existing.


> Injecting random data into any protocol will break it.

It shouldn't, unless you're bypassing the actual protocol serialization layer (or hitting a bug in the implementation), which is the case here. Protocol design can't address the case of users just writing out some bytes and declaring them a valid protocol message.


Sure, but I’m not replying to the thread.

I’m replying to a post where someone said most protocols have in-band signaling and therefore this problem is unavoidable.


Tony Finch is a real person: https://dotat.at/social.html

He used to be hostmaster@cam.ac.uk, amongst other things. Why he’s farming internet points on HN I’ve no idea!


A small world. I checked out his site and discovered I worked with his wife. He’s the Finch in Coleman Finch. Rachel, if you are reading this: Hi! Hope you are well.


Because it destroys the tools of art by crushing them into a featureless grey rectangle.

Which is a little on the nose for the way artists are feeling right now...


I’m an artist and I feel great. As a singer-songwriter I’ve already come to terms with Swedish mega-producers, drum machines, Live Nation, and whatever drives people to consume corporate music.

What exactly makes things any harder for artists than it has ever been? Was there some glorious moment in the past when people didn’t look down on the average poet for being lazy and useless?

Sure, laud the best of the best, but you know for a fact that you’ve thought it a bad decision for someone you know who isn’t gifted with genius-level talent to pursue a career in the arts.

It has never been easy.

Frankly, if AI makes a pop song or if Lana Del Rey’s producers make a pop song, it really is no different to me. No one is going to replace the folk singer because the audience is already selecting for the poet, not the product. Who cares what frat bros are chugging beer to?

Is part of the response to this ad the subconscious realization that one doesn’t make or actively appreciate organic art to begin with?

When was the last time most of us went to an open mic? Or bought a painting from a local artist?


Many tools can be used for art, even the featureless grey rectangle. Your attitude feels a lot like gatekeeping to me, similar to when cameras replaced paintings, then digital replaced film, then phones replaced big bodies, etc…


For whatever reason I feel compelled to share my initial reaction to this comment:

Just because you managed to use "tool of art" as a literal phrase doesn't make your point any clearer. Why should I care if a couple of these pieces are destroyed? Presumably they didn't destroy anything of historical, cultural, personal, or scarce significance. Are you sure you're not making an argument based only on emotion?

