This patch is setting OPENSSL_NO_HEARTBEAT, but openssl actually only disables heartbeats when you set OPENSSL_NO_HEARTBEATS (note trailing S).
Someone should make a thing which does sentiment analysis on commit messages, and flags vitriolic or angry commits as possibly containing more typos than usual.
Edit: Does that last "ok" line mean that two other committers reviewed and OK'ed this patch as well? Three openbsd committers approved a one-line change that didn't fix a single thing? (And people wonder how the OpenSSL bug could live on for two years =) )
It is, but we should have a dose of humility in mocking them, since many, many people [including everyone on this site who is a dev] make stupid mistakes from time to time. Such is the nature of the profession. None of us commit perfect code the first time, every time.
Wha-what? That's pretty crazy. At the very least, if OpenSSL goes to this trouble, why doesn't it just implement its own malloc and friends that are both fast and secure? In fact, shouldn't it include something like scrub_and_free() so that sensitive bits would actually be erased when not needed? That seems like a much better solution than the approach described in this link.
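Something in that spirit is easy to sketch. Below is a minimal, hypothetical scrub_and_free() in C; the name and the volatile-function-pointer trick are my own illustration, not OpenSSL's API (OpenSSL does ship OPENSSL_cleanse() for the zeroing half):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: zero a buffer before freeing it so key material
     * does not linger in freed heap memory. Routing memset through a
     * volatile function pointer is one common way to keep the compiler from
     * optimizing the zeroing away; C11 memset_s or explicit_bzero are
     * alternatives where available. */
    static void *(*const volatile secure_memset)(void *, int, size_t) = memset;

    void scrub_and_free(void *ptr, size_t len)
    {
        if (ptr == NULL)
            return;
        secure_memset(ptr, 0, len);
        free(ptr);
    }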
That's a fairly rude thing to say. Would you say that to their faces? This is a pretty large site, and a lot of people stop by here, possibly including some of the "monkeys" .
By all means, call the code what you want, but separate that from the people.
Well Ted Nugent (not to mention Ann Coulter) got away with calling Obama a "monkey," and he's the highly respected darling of the GOP, whose endorsement is sought after and appreciated by not only Republican senators and congressmen, but Mitt Romney himself.
(I'm not endorsing anyone's behavior here, just pointing out how low the right hand side of modern American politics has degenerated, to provide some perspective.)
You say that, and yet I have had to teach people coming from several fairly top institutions that "no, your lecturer was wrong, it is inappropriate to memcpy() directly from the network into a struct".
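For anyone wondering what the alternative habit looks like, here is a hedged sketch in C that parses a hypothetical length-prefixed message field by field, rather than memcpy()ing the wire bytes straight into a struct (which trusts struct layout and padding, ignores byte order, and skips exactly the bounds check heartbleed was missing):

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* Hypothetical wire format: 2-byte big-endian length, then payload. */
    struct msg {
        uint16_t len;
        uint8_t  payload[64];
    };

    /* Field-by-field parse with validation, instead of
     * memcpy(&m, buf, sizeof m). */
    int parse_msg(const uint8_t *buf, size_t buflen, struct msg *m)
    {
        uint16_t len;

        if (buflen < 2)
            return -1;
        memcpy(&len, buf, 2);
        m->len = ntohs(len);                 /* honor network byte order */
        if (m->len > sizeof m->payload || (size_t)m->len > buflen - 2)
            return -1;                       /* reject claimed lengths that
                                                exceed what actually arrived */
        memcpy(m->payload, buf + 2, m->len);
        return 0;
    }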
Yes and no. The major mistake is not a missing "S". The major mistake is not performing a test which would show that the change was made (or in this case that it wasn't made).
Well, he has good reason to be enraged. The bug is a horrendous one, and on his operating system it was due to the developers deliberately bypassing the security protections the operating system provided.
Meanwhile his slightly buggy one-liner failed in such a way that it did nothing, before being fixed 3 hours later.
"What matters is not to always be right, but how you correct errors."
In my opinion, heartbleed is a counterexample to your point. It's a case where making the mistake at all caused a lot of damage, no matter how quickly they patched it.
Well, it can be argued that this mistake happened BECAUSE they didn't fix their review process the last time a silly bug managed to get in.
> This patch is setting OPENSSL_NO_HEARTBEAT, but openssl actually only disables heartbeats when you set OPENSSL_NO_HEARTBEATS (note trailing S).
Having negative ifdefs in the first place is a pretty bad practice. This clearly shows why.
If all your IFDEFs are "positive", they have to be declared somewhere, and then it's easy to "deactivate" the things you no longer want by simply commenting a line out.
It reduces the possibility for error, and gives you a better picture of how many IFDEF conditions you are dealing with around your code.
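To make the suggestion concrete, here is a small hypothetical example of the positive-switch pattern (file and macro names invented for illustration):

    /* features.h -- hypothetical central list of positive feature switches.
     * Everything that can be compiled in is declared here, so the whole
     * surface of conditional compilation is visible in one place, and
     * turning a feature off is one commented-out line rather than passing
     * the right -DOPENSSL_NO_xxx spelling on every build. */
    #define FEATURE_HEARTBEAT
    #define FEATURE_SESSION_TICKETS
    /* #define FEATURE_SSLV2 */        /* deactivated */

    /* elsewhere in the code */
    #include "features.h"

    #ifdef FEATURE_HEARTBEAT
    void process_heartbeat(void);      /* compiled only when the switch
                                          above is turned on */
    #endif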
A security bug may well require enabling a feature, rather than disabling one, and then you end up with exactly the same problem.
I personally detest #ifdef's, and would rather have multiple .c files that I can choose from to include in my build. Code infected with #ifdef'itis is unreadable, difficult to test, difficult to maintain, ...
It just goes to show that paying attention has a cost: energy. When energy is expended on a fiery commit message, the message can become a distraction. There is an energetic cost to all actions and strategies.
That being said, I fully endorse this commit. There seem to be good intentions behind it, and it was fixed soon after.
There are smart people working in the TLS WG, but there are also people there that shouldn't be governing the development of the Internet's most important encrypted transport.
More importantly, the working group appears to be geared towards saying "yes" to new features, when exactly the opposite posture is needed. The Heartbeat feature is just the most obvious example, because its reference implementation was so slapdash.
The weirder thing here is that it would even be controversial or newsworthy to disable TCP TLS heartbeats, a feature nobody needs or really ever asked for. Why was it enabled by default in the first place? What sense did that decision ever make?
I have read all the TLS WG discussions I can find on this RFC. The picture that emerges for me is that of an inherent, institutional failure in the nature of working groups.
As far as I can tell, there was only one person that raised the "we don't need this" objection (Nikos Mavrogiannopoulos, GnuTLS principal author). He asked for a defense of the feature on three occasions [0] [1], and received a response twice, both responses from the same person. There was no further discussion, so either his objection was addressed or he felt further discussion would be unproductive. I would be interested to know which. GnuTLS does ship with heartbeat disabled, so it's possible Nikos continued to express skepticism about the feature.
However, this document was eyeballed by many security experts, whom I will not name and shame here, who proofread or made comments or edits to the document without objecting to the general premise.
It seems to me that, inherent in the nature of a working group, the specific parties who are in favor of the feature will always be better represented than the interests of a wider audience that faces the nonspecific harm that the feature might be implemented poorly and introduce a security vulnerability. Sort of a reverse-NIMBY situation if you like. The software community cannot mobilize to combat every new TLS proposal on the grounds of nonspecific harm, but everybody with a pet feature can mobilize to make sure it is standardized.
Bear in mind, the feature itself isn't insecure. The experts who OK'd the feature failed to police an unstated and controversial norm for the group (that it should default to "no" answers). It's hard to fault people for that. I agree: the problem is the IETF process.
> The weirder thing here is that it would even be controversial or newsworthy to disable TCP TLS heartbeats, a feature nobody needs or really ever asked for. Why was it enabled by default in the first place? What sense did that decision ever make?
You're essentially asking the exact same question Theo de Raadt is asking, hence his extreme suspicion of the IETF.
Presumably people want TLS heartbeats for TCP because, by default on Linux, TCP keepalive doesn't kick in until the connection has been idle for 2 hours.
Why write a call to setsockopt() when we can just reinvent new features?
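For reference, the setsockopt() route looks roughly like this on Linux. A sketch, not production code; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT knobs are Linux-specific:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Shrink Linux's default 2-hour keepalive idle time on one socket.
     * fd is assumed to be a connected TCP socket. */
    int enable_fast_keepalive(int fd)
    {
        int on = 1, idle = 30, interval = 10, count = 3;

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
            return -1;
        /* Start probing after 30s idle, probe every 10s, declare the peer
         * dead after 3 unanswered probes. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0 ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval) < 0 ||
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count) < 0)
            return -1;
        return 0;
    }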
It's worse than that. Keepalives are only really useful for (a) long-lived connections that (b) are expensive (usually: requiring manual intervention or renumbering) to reestablish.
Those parameters describe no connection made by browsers.
In other words: application-layer keepalives are only valuable to applications that already have the freedom to define their own truly application-layer keepalive anyways.
Because it's the same heartbeat message used for DTLS, where the heartbeat and padding allows for variable-length probes with request and response having varying size.
Not that I'm disagreeing with you, but aren't SSL connections rather expensive to establish because of the public-key crypto in the key exchange? Of course, anno 2014 that doesn't matter, but the whole library seems a bit engineered for 1998, when establishing SSL was probably a pretty significant thing.
They should be regarded as expensive today, because key exchange is one of the distinct parts of the attack surface of an SSL implementation. The less often this exchange is visible to eavesdroppers, the better.
Would you please link to this supposedly slapdash reference implementation? Taking RFC6520 at face value, it seems fairly reasonable and not out of line with other existing heartbeat protocols.
I've not read as much as I would like on the IETF's involvement in this, but as I read the situation, Theo has just hurled vitriol at them for specifying a completely reasonable feature that the OpenSSL team implemented incredibly poorly.
It's a feature that had a sensible use case in DTLS (datagram TLS, TLS not run over TCP). It's unclear as to whether the use case was best addressed at the TLS layer, whether it could have been specified as something run alongside TLS, or whether it was something that applications that cared about path MTU could have handled for themselves.
The TLS WG actively discussed whether Heartbeat had reasonable use cases in TCP TLS. Some people agreed, some disagreed. Nobody threatened a temper tantrum if the feature was added to TCP TLS. Therefore: the extension was specified for both TCP TLS --- which does not need extra path MTU discovery, and which already has a (somewhat crappy) keepalive feature --- and DTLS.
The larger problem is in treating TLS like a grab-bag of features for corner-case applications.
Okay, I definitely agree regarding crufty specs. The decision to include heartbeats could easily have left it as a variant feature rather than part of the core spec.
I still don't agree with the OpenSSL implementation being the reference spec, if you were referring to that, as pygy_ pointed out. Unless the IETF released that code as an institution, I would consider it the same as any other 3rd-party implementation - i.e. not necessarily correct. How am I to know that the RFC's author wrote it unless I go digging? Why should I trust anything they wrote, which may or may not have gone through any rigorous checking?
This isn't so much directed at you, tptacek (since your pedigree is well known), as it is the others bashing the IETF for implementation flaws - bash them for what they actually did and perhaps instead of getting angsty at the powers that be, try getting involved in projects like this if you believe they are so important.
Last line, can we make that 72 pt font and top of the page?
(HN feature request: soundbits.)
This whole approach to diluting ostensibly one of the most important standards is beyond unprofessional, it's an example of failures on many layers of meta.
I would really like to see a compelling alternative to keep TLS honest, perhaps with Convergence-like ideas and bare-bones level of features.
The problem is always adoption - corporate fear of change ($upportabi£it¥) and endpoint uptake - but the threat of an alternative might be enough to scare the bejebus out of the WG and back on task.
Was that code ever actually sanctioned by the IETF and included in their resources though? SSL had SSLREF but I've never seen (and didn't find anything just now) a similar thing for TLS.
I won't judge whether the heartbeat feature itself is reasonable or not. But the inclusion of a payload was unnecessary, and specifying that it has to be copied to the reply message is so useless as to defy description. "Flexibility" my ass. It's downright suspicious.
The payload is meant to let you distinguish responses to different Heartbeats, right? I'm no expert, but that sounds reasonable enough. I could see a lower limit on the max payload size, but could you give a more detailed explanation of why it's unnecessary and useless?
Okay, that explains things a bit. Could be done with a fixed-size payload though. Also I understand that the flexible size is intended to help with MTU discovery, but I think that would be a separate thing from a heartbeat.
If you are using datagrams (i.e., UDP), you have no guarantees of ordering or delivery, so without a payload that is returned to you, you have no way to determine which transmission the echo reply you just received corresponds to.
Now, a 64kbyte payload, that's unnecessary for simply making each packet unique. That size was likely chosen to allow for the path MTU discovery aspects.
One could argue, however, that "keepalive" and "path MTU discovery" should not have been commingled, but they were.
I always assumed ping's payload was for detecting packet fragmentation/size limit issues and as a trivial mechanism for spotting things like packet corruption.
Is it that debate (even as devil's advocate) is structurally limited when participants self-select for willingness to act?
That dynamic reminds me of the bias labeled "sympathetic point of view" as described in the essay "Why Wikipedia is not so great": "Articles tend to be whatever-centric. People point out whatever is exceptional about their home province, tiny town or bizarre hobby, without noting frankly that their home province is completely unremarkable, their tiny town is not really all that special or that their bizarre hobby is, in fact, bizarre. In other words, articles tend to a sympathetic point of view on all obscure topics or places."
That is surely not how we want decisions made about TLS.
so -- would it be reasonable to create a minimal implementation of ssl focused on web browsers and web servers? How much work is it to create such a minimal implementation that is reasonably performant, written by competent devs, well tested, fuzzed, etc? Just because random shit gets thrown into the protocol that doesn't mean it needs to be implemented in my copy of chrome or nginx...
OpenSSL itself is highly configurable, so I think it would be better to pare it down to a minimal build that is both secure and compatible with all widely deployed TLS implementations.
Not really. There are only a handful of features that are necessary. Furthermore, JEOS-like minimal systems should only enable features that are absolutely necessary. Minimalism means a smaller attack surface.
It's certainly not without its problems (see, e.g. news in the last few weeks) but GnuTLS is (in my opinion, of course) technically superior to OpenSSL. Unfortunately, its license will prevent more widespread adoption.
When, after all the Snowden and RSA revelations, they're not willing to get rid of the putrid influence of the NSA inside the IETF, well then... not much trust left there.
I was confused what RFC 520 ("A Proposed File Access Protocol Specification") had to do with any of this. It looks like Theo meant to refer to RFC 6520: https://tools.ietf.org/html/rfc6520.
What does that have to do with this vulnerability? The problem was a missing bounds check in the implementation. It's not as though they suspect heartbeats are inherently insecure, right?
Why does a heartbeat acknowledgement need a variable-sized payload embedded in the reply? Can someone explain this in a way that is accessible to a programmer without much security experience?
1) You send a heartbeat at time t0;
2) You wait until time t1;
3) You send another heartbeat at time t2;
4) You receive a heartbeat reply at time t3;
Was your t3 receipt the result of your t0 heartbeat, or your t2 heartbeat?
That is the purpose of the payload, to distinguish which reply matches with which transmission.
Now, why variable-sized with a max of 64k rather than, say, an 8-byte integer? The variable size with a max of 64k was most likely intended to support the second purpose in the RFC, path MTU discovery. To discover the path MTU, you need to be able to send a "too big packet", as well as adjust the packet size until you find the proper MTU value.
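A sketch of the matching half, assuming the payload were just an 8-byte sequence number (which, as noted, would be enough for the keepalive purpose on its own); the struct and function names are illustrative, not from any real implementation:

    #include <stdint.h>
    #include <string.h>

    /* Over a datagram transport, replies can arrive out of order, so each
     * request carries an opaque payload (here just a counter) that the
     * peer echoes back. */
    struct pending {
        uint64_t seq;         /* what we sent */
        uint64_t sent_at_ms;  /* when we sent it */
    };

    /* On receipt of a heartbeat response, find which request it answers. */
    int match_response(const uint8_t *payload, size_t len,
                       const struct pending *table, size_t n)
    {
        uint64_t seq;

        if (len < sizeof seq)
            return -1;
        memcpy(&seq, payload, sizeof seq);
        for (size_t i = 0; i < n; i++)
            if (table[i].seq == seq)
                return (int)i;    /* this reply answers request i */
        return -1;                /* stale or unsolicited reply */
    }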
I understand needing a unique identifier to distinguish between heartbeats ... but why conflate the heartbeat with Path MTU, which is an orthogonal process.
Is it really that much less efficient to do Path MTU with a different message/system/module? Why absorb this function into the OpenSSL package?
I feel I am still missing something about the way this system works. Perhaps I just need to educate myself more on security and networking.
> I understand needing a unique identifier to distinguish between heartbeats ... but why conflate the heartbeat with Path MTU, which is an orthogonal process.
The only people who can accurately answer that are the author of the RFC/code, and the TLS committee members who discussed the changes.
From a security standpoint, it is more dangerous to commingle the two, because a bug in one side (path MTU) will also affect the other half (heartbeat). And that is exactly what happened.
> Why absorb this function into the OpenSSL package?
Unknown. Path MTU discovery is supposed to be handled at a low layer in the OSI network stack abstraction (closer to the physical hardware), such that higher-level layers/apps should not need to care. Putting it into the TLS protocol is a blatant layering violation.
I don't think he was impugning SSH's security record at all: just the perceived abuse of the protocol.
The entire quote, in context:
10) Biggest problem with Unix - by akaina
Recently on the Google Labs Aptitude Test there was a question: "What's broken with Unix? How would you fix it?"
What would you have put?
Pike:
Ken Thompson and I started Plan 9 as an answer to that question. The major things we saw wrong with Unix when we started talking about what would become Plan 9, back around 1985, all stemmed from the appearance of a network. As a stand-alone system, Unix was pretty good. But when you networked Unix machines together, you got a network of stand-alone systems instead of a seamless, integrated networked system. Instead of one big file system, one user community, one secure setup uniting your network of machines, you had a hodgepodge of workarounds to Unix's fundamental design decision that each machine is self-sufficient.
Nothing's really changed today. The workarounds have become smoother and some of the things we can do with networks of Unix machines are pretty impressive, but when ssh is the foundation of your security architecture, you know things aren't working as they should.
My disagreement with SSH is more specific. It is a securitymonger's
plaything, so has been stuffed with every authentication and encryption
technology known, yet those that are configured when it is installed is
a random variable. Therefore both sides must negotiate like crazy to figure
how to talk, and one often finds that there is no shared language. This is
idiocy. The complexity is silly, but much worse is that there isn't at least
one guaranteed protocol for authentication and encryption that both
ends always have and can use as a fallback. I would argue that that
would always be sufficient, but I know I'm in the minority there. I do
argue that it's demonstrably necessary.
Algorithms everywhere, and not a byte to send.
By making the thing too complicated, they defeat
the very purpose of security. Difficult administration results in
incorrect or inadequate installation. There are cases when I can't
use ssh, a direct consequence.
-rob
Russ Cox chimes in:
we're stuck with ssh, but let's not delude
ourselves into thinking it's a good protocol.
That's not a criticism of OpenSSH itself, nor of the protocol, but of the way Unix systems haven't progressed from being a set of standalone systems to one networked system. It's a criticism of the Unix world's lack of progress in moving towards what Pike sees as a better way of building large systems.
This strikes me as 100% backwards, isn't the entire point of the IETF RFC process to memorialize existing practice, not be a design studio for new protocols? Wasn't it followed here (RFC6520 existing to document the OpenSSL implementation of heartbeat)?
If the IETF is the only thing standing between us and any jackass introducing vulnerabilities into everyone's network stack we're in bigger trouble than I thought.
RFC stands for "Request For Comments", which I always thought meant new stuff up for peer review. Wikipedia also describes them as:
An RFC is authored by engineers and computer scientists in the form of a memorandum describing methods, behaviours, research, or innovations applicable to the working of the Internet and Internet-connected systems. It is submitted either for peer review or simply to convey new concepts, information, or (occasionally) engineering humour. The IETF adopts some of the proposals published as RFCs as Internet standards.
Are the IETF the implementors of OpenSSL? If not, this is pure crazy. The heartbeat feature seems useful to me. I mean, keepalive at the TCP level leaks information, even when that information is only that there is no transmission. Transmitting a heartbeat tells you the other side is alive, and it is not distinguishable from normal traffic to a passive attacker.
If it's configured to do NAT instead of routing, then it's not being a router. Routing is a job and "router" describes a role, not a static property of a box. Like firewall, bridge, etc.
Routing is defined as shuffling packets where they need to go unmolested - without messing with the address or port fields. It's specced in the IP RFCs. If you're mangling the traffic, you're violating router requirements.
In retrospect, I think using TCP_KEEPALIVE would be a better solution to this specific use case. Are TLS heartbeats truly indistinguishable from normal traffic?
My understanding, from the heartbleed page, is that the packets for the heartbeat have a specific packet type identifier and that you can use this to train an IDS to watch for the heartbleed attack.
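That identifier is the TLS record content type: RFC 6520 assigns heartbeat messages their own content type (24), so a rule only has to look at the first byte of the record header. A minimal sketch of that check (a real IDS rule would also compare the claimed heartbeat payload length against the record length):

    #include <stdint.h>
    #include <stddef.h>

    /* Heartbeat records carry TLS content type 24 (RFC 6520), visible in
     * the first byte of the 5-byte record header. */
    int is_tls_heartbeat_record(const uint8_t *rec, size_t len)
    {
        /* TLS record header: type (1), version (2), length (2). */
        if (len < 5)
            return 0;
        return rec[0] == 24;
    }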
Sure it does. SSH to a remote host, do nothing, then disconnect the cable connected to your router's uplink. SSH (for example) won't notice as long as you do nothing in your SSH session.
On some connections where I have SSH keepalives disabled, I can suspend my laptop, go for a drive around town with it, come home, resume, and still have my session connected.
TCP absolutely needs a heartbeat/keepalive to tell if the connection went down. There is no way to distinguish between no traffic being sent and session going down uncleanly. It's very common that some NAT drops the binding of an idle TCP connection and the peers don't discover that until their timeouts expire (could be anything from minutes to hours).
If you want to know whether the connection went down within a reasonable timeframe, e.g. tens of seconds, then you need to implement heartbeating over TCP. This is the first thing any new protocol on top of TCP usually does.
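What that usually amounts to is a small piece of timer logic in the application protocol. A hedged sketch; the constants and the send_ping callback are illustrative, and last_rx is assumed to be updated by the receive path:

    #include <time.h>

    #define PING_INTERVAL  10   /* send a ping if quiet this long (seconds) */
    #define DEAD_AFTER     30   /* give up if the peer stays silent this long */

    struct hb_state {
        time_t last_rx;    /* last time anything arrived from the peer */
        time_t last_ping;  /* last time we sent a ping */
    };

    /* Call periodically; returns -1 once the connection should be dropped. */
    int heartbeat_tick(struct hb_state *s, time_t now, int (*send_ping)(void))
    {
        if (now - s->last_rx >= DEAD_AFTER)
            return -1;                         /* peer silent too long */
        if (now - s->last_ping >= PING_INTERVAL) {
            if (send_ping() == 0)
                s->last_ping = now;
        }
        return 0;
    }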
The IETF stands for Internet Engineering Task Force. They are the people who design the protocols that run the internet. Note, however, that you don't necessarily have to listen to them; it's more like a de facto standard. Also, they don't necessarily have anything to do with implementing the protocols.
Is there any argument in defense of the TLS heartbeat extension? It seems like a completely inappropriate thing to stick into TLS (wrong level of abstraction), and the idea of sending any kind of payload as part of a heartbeat seems completely redundant (what does this achieve over just basic fields "this is heartbeat #17" -- "this is the response to heartbeat #17"?)
How can we get developers to keep it simple, and stop introducing new "features"? [This applies to OpenSSL as well as most software out there that has a new "release" all the time]
Certainly wasn't feigned, though I understand your reference. Absolutely no offense meant.
I'm genuinely surprised. Just as I would also be surprised if Windows 3.1 or magnetic tapes or punch cards were still in use. I thought these technologies were supplanted long ago.
CVS is in use in many places, especially in decades-old source trees where migration would result in the loss of too much metadata, or where too much build automation is dependent on CVS. I personally know of several instances of this.
Windows 3.1? I run into it occasionally in places like manufacturing, where upgrade cycles are decades long (I was recently at a client whose most critical machine was built in 1905). FWIW, DOS can still be found everywhere.
Magnetic tape? Really? LTO-6 is up to 2.5TB uncompressed and LTO-7 is imminent.
Punch cards? See: Scantron.
Just because there's a cool new replacement for something, doesn't mean you should jump on it, or that the old tech is now worthless.