OpenBSD disables Heartbeat in libssl, questions IETF (openbsd.org)
243 points by mschuster91 on April 10, 2014 | 136 comments



This patch is setting OPENSSL_NO_HEARTBEAT, but openssl actually only disables heartbeats when you set OPENSSL_NO_HEARTBEATS (note trailing S).
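
For reference, the heartbeat code is compiled out only under the plural macro, so the singular spelling is a no-op. A simplified sketch of the guard (not the exact source):

    /* simplified sketch of the guard around OpenSSL's heartbeat code */
    #ifndef OPENSSL_NO_HEARTBEATS      /* note the trailing S */
    int tls1_process_heartbeat(SSL *s);
    int tls1_heartbeat(SSL *s);
    #endif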

Someone should make a thing which does sentiment analysis on commit messages, and flags vitriolic or angry commits as possibly containing more typos than usual.


That's almost too funny.

Edit: Does that last "ok" line mean that two other committers reviewed and OK'ed this patch as well? Three openbsd committers approved a one-line change that didn't fix a single thing? (And people wonder how the OpenSSL bug could live on for two years =) )



It is, but we should have a dose of humility in mocking them, since many, many people [including everyone on this site that is a dev] make stupid mistakes from time to time. Such is the nature of the profession. None of us commit perfect code the first time, every time.



Wha-what? That's pretty crazy. At the very least, if OpenSSL goes to this trouble, why doesn't it just implement its own malloc and friends that are both fast and secure? In fact, shouldn't it include something like scrub_and_free(), so that sensitive bits would actually be erased when no longer needed? That seems like a much better solution than the approach described in this link.
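
A minimal sketch of what such a helper might look like (scrub_and_free is a hypothetical name; OpenSSL does ship OPENSSL_cleanse for the zeroing part):

    #include <stdlib.h>
    #include <stddef.h>

    /* Hypothetical scrub_and_free(): zero sensitive memory before releasing it.
     * A plain memset followed by free() can be optimized away as a dead store,
     * hence the volatile pointer. */
    static void scrub_and_free(void *p, size_t len)
    {
        volatile unsigned char *vp = p;

        if (p == NULL)
            return;
        while (len--)
            *vp++ = 0;
        free(p);
    }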


It gets worse, see http://www.tedunangst.com/flak/post/analysis-of-openssl-free...

Essentially the code mistakenly relies on the freelist being LIFO and not being scrubbed.


Whoever said that OpenSSL was written by monkeys continues to be proven right.


That's a fairly rude thing to say. Would you say that to their faces? This is a pretty large site, and a lot of people stop by here, possibly including some of the "monkeys".

By all means, call the code what you want, but separate that from the people.


Well Ted Nugent (not to mention Ann Coulter) got away with calling Obama a "monkey," and he's the highly respected darling of the GOP, whose endorsement is sought after and appreciated by not only Republican senators and congressmen, but Mitt Romney himself.

(I'm not endorsing anyone's behavior here, just pointing out how low the right hand side of modern American politics has degenerated, to provide some perspective.)


something tells me a similar snafu made by a CS student would get "fairly rude" comments in reply by any competent instructor or advisor.


you say that, and yet I have had to teach people coming from several fairly top institutions that "no, your lecturer was wrong, it is inappropriate to directly memcpy() from the network into a struct".
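
A small sketch of the point, with a made-up two-field message: the direct memcpy() depends on struct padding and host byte order and skips length validation; parsing field by field does not.

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    struct msg { uint16_t type; uint16_t len; };   /* invented wire format: two big-endian shorts */

    /* risky: layout- and endian-dependent, and no length check */
    void parse_bad(struct msg *m, const unsigned char *buf)
    {
        memcpy(m, buf, sizeof *m);
    }

    /* better: explicit bounds check, field extraction and byte-order conversion */
    int parse_ok(struct msg *m, const unsigned char *buf, size_t buflen)
    {
        uint16_t t, l;

        if (buflen < 4)
            return -1;
        memcpy(&t, buf, 2);
        memcpy(&l, buf + 2, 2);
        m->type = ntohs(t);
        m->len  = ntohs(l);
        return 0;
    }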


The really odd bit is that all the buffer management operations in OpenSSL memset to 0, even when it appears not entirely necessary.

However, the freelist code somehow fails to follow this convention.


Many projects that care about performance and memory fragmentation use per-purpose free lists, and not malloc/free.

The performance is significantly better, and there is a much smaller danger of running out of space due to fragmentation issues.
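
A minimal sketch of the technique (names invented): fixed-size buffers for one purpose live on their own list and get reused instead of going back through malloc/free.

    #include <stdlib.h>

    #define BUF_SIZE 4096

    struct buf { struct buf *next; unsigned char data[BUF_SIZE]; };
    static struct buf *freelist;

    struct buf *buf_get(void)
    {
        if (freelist != NULL) {            /* LIFO reuse: pop the most recently freed buffer */
            struct buf *b = freelist;
            freelist = b->next;
            return b;
        }
        return malloc(sizeof(struct buf)); /* fall back to the allocator */
    }

    void buf_put(struct buf *b)
    {
        b->next = freelist;                /* note: nothing here scrubs b->data, */
        freelist = b;                      /* which is exactly the hazard discussed above */
    }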


Yes and no. The major mistake is not a missing "S". The major mistake is not performing a test which would show that the change was made (or in this case that it wasn't made).


Test our changes before we commit? What blasphemy is this?


Good advice for the author of that rage-filled patch as well.


Well, he has good reason to be enraged. The bug is a horrendous one, and on his operating system it was due to the developers deliberately bypassing the security protections the operating system provides. Meanwhile his slightly buggy one-liner failed in such a way that it did nothing, before being fixed 3 hours later.


Absolutely.

What matters is not to always be right, but how you correct errors.


"What matters is not to always be right, but how you correct errors."

In my opinion, heartbleed is a counterexample to your point. It's a case where making the mistake at all caused a lot of damage, no matter how quickly they patched it.


Well, it can be argued that this mistake happened BECAUSE they didn't fix their review process the last time a silly bug managed to get in.


So there's enough blame and rage to go around. Any leader with a spine has to first contemplate "how did I fuck up?"


What if your error was running someone over with your car?


At the very least, the organization should look at itself very carefully and find out how this happened.


Hindsight is 20/20.


So, basically, the patch wasn't tested. This whole thing is getting more and more embarrassing.


> This patch is setting OPENSSL_NO_HEARTBEAT, but openssl actually only disables heartbeats when you set OPENSSL_NO_HEARTBEATS (note trailing S).

Having negative ifdefs in the first place is a pretty bad practice. This clearly shows why.

If all your IFDEFs are "positive", the macros have to be defined somewhere, and then it's easy to "deactivate" the things you no longer want by simply commenting a line out.

It reduces the possibility for error, and gives you a better picture of how many IFDEF conditions you are dealing with around your code.
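
A sketch of what that convention looks like (macro names invented for illustration):

    /* features.h: every optional feature is an affirmative macro, all in one place */
    #define FEATURE_HEARTBEAT 1
    /* #define FEATURE_SRP 1 */            /* disabled by commenting out one line */

    /* elsewhere in the code */
    #ifdef FEATURE_HEARTBEAT
        /* heartbeat support compiled in */
    #endif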


A security bug may well require enabling a feature, rather than disabling one, and then you end up with exactly the same problem.

I personally detest #ifdef's, and would rather have multiple .c files that I can choose from to include in my build. Code infected with #ifdef'itis is unreadable, difficult to test, difficult to maintain, ...


They caught it. It is in a newer commit now. Still kind of "funny".


Congratulations, you have hereby been granted commit privileges to libssl. Have fun!


It just goes to show that paying attention has a cost: energy. When energy is expended on a fiery commit message, the message can become a distraction. There is an energetic cost to all actions and strategies.

That being said, I fully endorse this commit. There seem to be good intentions behind it, and it was fixed soon after.


I bet big dev shops would actually pay for this, to know who needs a vacation, etc.


That will just drive people to make sarcastic commit messages.


There are smart people working in the TLS WG, but there are also people there that shouldn't be governing the development of the Internet's most important encrypted transport.

More importantly, the working group appears to be geared towards saying "yes" to new features, when exactly the opposite posture is needed. The Heartbeat feature is just the most obvious example, because its reference implementation was so slapdash.

The weirder thing here is that it would even be controversial or newsworthy to disable TCP TLS heartbeats, a feature nobody needs or really ever asked for. Why was it enabled by default in the first place? What sense did that decision ever make?


I have read all the TLS WG discussions I can find on this RFC. The picture that emerges for me is that of an inherent, institutional failure in the nature of working groups.

As far as I can tell, there was only one person that raised the "we don't need this" objection (Nikos Mavrogiannopoulos, GnuTLS principal author). He asked for a defense of the feature on three occasions [0] [1], and received a response twice, both responses from the same person. There was no further discussion, so either his objection was addressed or he felt further discussion would be unproductive. I would be interested to know which. GnuTLS does ship with heartbeat disabled, so it's possible Nikos continued to express skepticism about the feature.

However, this document was eyeballed by many security experts (whom I will not name and shame here) who proofread it or made comments or edits to the document, without objecting to the general premise.

It seems to me that, inherent in the nature of a working group, the specific parties who are in favor of the feature will always be better represented than the interests of a wider audience that faces the nonspecific harm that the feature might be implemented poorly and introduce a security vulnerability. Sort of a reverse-NIMBY situation if you like. The software community cannot mobilize to combat every new TLS proposal on the grounds of nonspecific harm, but everybody with a pet feature can mobilize to make sure it is standardized.

[0] http://ietf.10.n7.nabble.com/tsv-dir-review-of-draft-ietf-tl...

[1] http://ietf.10.n7.nabble.com/Working-group-last-call-for-dra...


Bear in mind, the feature itself isn't insecure. The experts who OK'd the feature failed to police an unstated and controversial norm for the group (that it should default to "no" answers). It's hard to fault people for that. I agree: the problem is the IETF process.


> The weirder thing here is that it would even be controversial or newsworthy to disable TCP TLS heartbeats, a feature nobody needs or really ever asked for. Why was it enabled by default in the first place? What sense did that decision ever make?

You're essentially asking the same exact question Theo de Raadt is asking, hence his extreme suspicion of IETF.


I know, I'm agreeing with him.


Presumably people want TLS heartbeats for TCP because, by default, on Linux, TCP keep alive doesn't kick in until the connection has been idle for 2 hours.

Why write a call to setsockopt() when we can just reinvent new features?
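
For what it's worth, the per-socket version is only a few lines (the TCP_KEEP* options are Linux-specific, and the values here are just illustrative):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int enable_keepalive(int fd)
    {
        int on = 1, idle = 60, interval = 10, count = 5;

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
            return -1;
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof idle);     /* seconds idle before probing */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval); /* seconds between probes */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof count);    /* probes before giving up */
        return 0;
    }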


It's worse than that. Keepalives are only really useful for (a) long-lived connections that (b) are expensive (usually: requiring manual intervention or renumbering) to reestablish.

Those parameters describe no connection made by browsers.

In other words: application-layer keepalives are only valuable to applications that already have the freedom to define their own truly application-layer keepalive anyways.


And that still leaves the question of why spec a payload (much less 64k "for flexibility") in a TCP heartbeat exchange.


Because it's the same heartbeat message used for DTLS, where the heartbeat and padding allow for variable-length probes, with request and response having varying sizes.


That is understood, Tom. The question remains why spec the same for two distinct transport layer protocols.

[edit: actually I was under the impression that the payload addressed response order concerns in UDP.]


(It's Thomas). Because it would have made even less sense to define a TLS-specific heartbeat and a DTLS-specific heartbeat.

In the hierarchy of sensible TLS decisions, you have, from most to least reasonable:

1. Not adding new heartbeat extensions to DTLS or TLS.

2. Adding new heartbeat extensions to DTLS only.

3. Adding the same new heartbeat extensions to DTLS and TLS.

4. Adding two different new heartbeat extensions, one for DTLS and the other TLS.


Not that I'm disagreeing with you, but aren't SSL connections rather expensive to establish because of the public-key encrypted key exchange? Of course, anno 2014 that doesn't matter, but the whole library seems a bit engineered for 1998, when establishing SSL was probably a pretty significant thing.


They should be regarded as expensive today, because key exchange is one of the distinct parts of the attack surface of an SSL implementation. The less often this exchange is visible to eavesdroppers, the better.


Repeated TLS reconnections do not necessarily invoke the entire key exchange.


.. I don't think that's a very realistic concern, is it?


Aren't websockets ping/pong messages similar?


Websockets messages are happening two layers up from TLS.


IOW: totally worthless.

The penalty for security fails of useless features should be a slow, painful, humiliating death.


You still need them for UDP.


Would you please link to this supposedly slapdash reference implementation? Taking RFC6520 at face value, it seems fairly reasonable and not out of line with other existing heartbeat protocols.

I've not read as much as I would like on the IETF's involvement in this, but as I read the situation, Theo has just hurled vitriol at them for specifying a completely reasonable feature that the OpenSSL team implemented incredibly poorly.


It's not a completely reasonable feature.

It's a feature that had a sensible use case in DTLS (datagram TLS, TLS not run over TCP). It's unclear as to whether the use case was best addressed at the TLS layer, whether it could have been specified as something run alongside TLS, or whether it was something that applications that cared about path MTU could have handled for themselves.

The TLS WG actively discussed whether Heartbeat had reasonable use cases in TCP TLS. Some people agreed, some disagreed. Nobody threatened a temper tantrum if the feature was added to TCP TLS. Therefore: the extension was specified for both TCP TLS --- which does not need extra path MTU discovery, and which already has a (somewhat crappy) keepalive feature --- and DTLS.

The larger problem is in treating TLS like a grab-bag of features for corner-case applications.


Okay, I definitely agree regarding crufty specs. The decision to include heartbeats could easily have left it as a variant feature rather than part of the core spec.

I still don't agree with the OpenSSL implementation being the reference spec, if you were referring to that, as pygy_ pointed out. Unless the IETF released that code as an institution, I would consider it the same as any other 3rd party implementation - i.e. not necessarily correct. How am I to know that the RFC's author wrote it unless I go digging? Why should I trust anything they wrote, which may or may not have gone through any rigorous checking?

This isn't so much directed at you, tptacek (since your pedigree is well known), as it is the others bashing the IETF for implementation flaws - bash them for what they actually did and perhaps instead of getting angsty at the powers that be, try getting involved in projects like this if you believe they are so important.


Last line, can we make that 72 pt font and top of the page?

(HN feature request: soundbits.)

This whole approach to diluting ostensibly one of the most important standards is beyond unprofessional, it's an example of failures on many layers of meta.

I would really like to see a compelling alternative to keep TLS honest, perhaps with Convergence-like ideas and bare-bones level of features.

The problem is always adoption, corporate fear of change ($upportabi£it¥) and endpoint adoption, but the threat of an alternative might be enough to scare the bejebus out of the WG and get it back on task.


The reference implementation is the code that ended up in OpenSSL.

It was actually included in OpenSSL before the RFC was published.

Both the RFC and the code were written by the same person (who denies planting the bug intentionally).


Was that code ever actually sanctioned by the IETF and included in their resources though? SSL had SSLREF but I've never seen (and didn't find anything just now) a similar thing for TLS.


I won't judge whether the heartbeat feature itself is reasonable or not. But the inclusion of a payload was unnecessary, and specifying that it has to be copied to the reply message is so useless as to defy description. "Flexibility" my ass. It's downright suspicious.


The payload is meant to let you distinguish responses to different Heartbeats, right? I'm no expert, but that sounds reasonable enough. I could see a lower limit on max payload size, but could you please give a more detailed explanation of why it's unnecessary and useless?


Okay, that explains things a bit. Could be done with a fixed-size payload though. Also I understand that the flexible size is intended to help with MTU discovery, but I think that would be a separate thing from a heartbeat.


Why would you need to send multiple different heartbeats over one connection? It's for keeping the connection alive.


If you are using datagrams (i.e., UDP) you have no guarantees of ordering or delivery, so you have no way to determine which transmission the echo reply you just received corresponds to without a payload that is returned to you.

Now, a 64kbyte payload, that's unnecessary for simply making each packet unique. That size was likely chosen to allow for the path MTU discovery aspects.

One could argue, however, that "keepalive" and "path MTU discovery" should not have been commingled, but they were.


Ping manages the same thing by incrementing an integer, for example.


No, ICMP echo request packets have variable-length payloads, which the receiver copies into the echo response.


Not much more suspicious than ping(8)


I always assumed ping's payload was for detecting packet fragmentation/size limit issues and as a trivial mechanism for spotting things like packet corruption.


Is it that debate (even as devil's advocate) is structurally limited when participants self-select for willingness to act?

That dynamic reminds me of the bias labeled "sympathetic point of view" as described in the essay "Why Wikipedia is not so great": "Articles tend to be whatever-centric. People point out whatever is exceptional about their home province, tiny town or bizarre hobby, without noting frankly that their home province is completely unremarkable, their tiny town is not really all that special or that their bizarre hobby is, in fact, bizarre. In other words, articles tend to a sympathetic point of view on all obscure topics or places."

That is surely not how we want decisions made about TLS.


TLS is the zombie emperor of design-by-committee feature creep.


so -- would it be reasonable to create a minimal implementation of ssl focused on web browsers and web servers? How much work is it to create such a minimal implementation that is reasonably performant, written by competent devs, well tested, fuzzed, etc? Just because random shit gets thrown into the protocol that doesn't mean it needs to be implemented in my copy of chrome or nginx...


OpenSSL itself is highly configurable, so I think it would be better to pare it down to a minimal build that is both secure and compatible with all widely deployed TLS implementations.


I do this already on OSX, and even submitted a patch to allow Ruby to run without OpenSSL engine support. [0]

How I compile OpenSSL. [1]

[0] https://bugs.ruby-lang.org/issues/9714

[1]

    ( export CONFIGURE_OPTS='no-hw no-rdrand no-sctp no-md4 no-mdc2 no-rc4 no-fips no-engine'; \
      brew install https://gist.github.com/steakknife/8228264/raw/openssl.rb )


That looks like a good start, but can we drop even more? DES? Camellia? IDEA? MD5?


Good start my ass, you haven't actually tried any of your "suggestions," have you?

no-md5 doesn't currently work:

    "_EVP_md5", referenced from ...

no-des doesn't currently work either, the following test fails the build:

    enveloped content test streaming S/MIME format, 3 recipients: generation error


ime, it's much harder to write, understand, test, and debug #include soup


Not really. There are only a handful of features that are necessary. Furthermore, JEOS-like minimal systems should only enable features that are absolutely necessary. Minimalism means a smaller attack surface.


It's certainly not without its problems (see, e.g. news in the last few weeks) but GnuTLS is (in my opinion, of course) technically superior to OpenSSL. Unfortunately, its license will prevent more widespread adoption.


Just go read polarssl docs and fuzz that.

It's small, modular, documented, and tested.

We would all reach further by improving polarssl than starting another library.


unfortunately it's licensed gpl2, so no love. which is too bad because I agree.

https://github.com/polarssl/polarssl/blob/master/library/ssl...


It's actually GPL-2+, but yes, this unfortunately limits the scope of outside contributors.


Hmm. Why does it say Dual-Licensed in the description?


Because you can also pay them for a commercial license.


What is wrong with gpl v2?


I stopped trusting IETF since this:

http://arstechnica.com/security/2014/01/nsa-employee-will-co...

When after all the Snowden and RSA revelations they're not willing to get rid of the putrid influence of NSA inside of IETF, well then...not much trust left there.


I was confused what RFC 520 ("A Proposed File Access Protocol Specification") had to do with any of this. It looks like Theo meant to refer to RFC 6520: https://tools.ietf.org/html/rfc6520.


Thanks, I was going crazy trying to figure out the connection!


What does that have to do with this vulnerability? The problem was a missing bounds check in the implementation. It's not as though they suspect heartbeats are inherently insecure, right?
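
(For the curious, the hole was roughly this shape: the handler trusted the sender's claimed payload length and echoed that many bytes back without checking it against the size of the record actually received. A paraphrased sketch, not the real OpenSSL source:)

    #include <stdint.h>
    #include <string.h>
    #include <sys/types.h>

    /* record body: [1-byte type][2-byte claimed payload length][payload][>=16 bytes padding] */
    ssize_t build_heartbeat_response(unsigned char *out, size_t out_len,
                                     const unsigned char *rec, size_t rec_len)
    {
        uint16_t claimed;

        if (rec_len < 1 + 2 + 16)
            return -1;                                /* too short to be a valid heartbeat */
        claimed = (uint16_t)((rec[1] << 8) | rec[2]); /* sender-controlled length */

        /* the check that was missing: the claimed payload must fit in what arrived */
        if ((size_t)1 + 2 + claimed + 16 > rec_len)
            return -1;                                /* silently discard, per RFC 6520 */

        if (out_len < (size_t)3 + claimed + 16)
            return -1;
        out[0] = 2;                                   /* heartbeat_response */
        out[1] = rec[1];
        out[2] = rec[2];
        memcpy(out + 3, rec + 3, claimed);            /* echo only validated bytes */
        memset(out + 3 + claimed, 0, 16);             /* padding */
        return (ssize_t)(3 + claimed + 16);
    }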


Why does a heartbeat acknowledgement need a variable sized payload embedded in the reply? Can someone explain this in a way that is accessible to a programmer without much security experience.

Thanks!


  1) You send a heartbeat at time t0;
  2) You wait until time t1;
  3) You send another heartbeat at time t2;
  4) You receive a heartbeat reply at time t3;
Was your t3 receipt the result of your t0 heartbeat, or your t2 heartbeat?

That is the purpose of the payload, to distinguish which reply matches with which transmission.

Now, why variable sized with a max of 64k vs. say an 8-byte integer? The variable sized with max of 64k was most likely intended to support the second purpose in the RFC, path MTU discovery. To discover the path MTU, you need to be able to send a "too big packet", as well as adjust the packet size until you find the proper MTU value.
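
To make the "8-byte integer" alternative concrete (illustrative only, not the RFC's actual wire format):

    #include <stdint.h>

    /* A fixed-size sequence number the peer echoes back unchanged is enough to
     * match replies to requests over an unordered transport; no variable-length
     * payload is needed for that purpose alone. */
    struct hb_request  { uint64_t seq; };
    struct hb_response { uint64_t seq; };

    static int hb_matches(const struct hb_request *req, const struct hb_response *resp)
    {
        return req->seq == resp->seq;
    }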

[edit, formatting]


I understand needing a unique identifier to distinguish between heartbeats ... but why conflate the heartbeat with Path MTU, which is an orthogonal process.

Is it really that much less efficient to do Path MTU with a different message/system/module? Why absorb this function into the OpenSSL package?

I feel I am still missing something about the way this system works. Perhaps I just need to educate myself more on security and networking.


> I understand needing a unique identifier to distinguish between heartbeats ... but why conflate the heartbeat with Path MTU, which is an orthogonal process.

The only people who can accurately answer that are the author of the RFC/code, and the TLS committee members who discussed the changes.

From a security standpoint, it is more dangerous to commingle the two, because a bug in one half (path MTU) will also affect the other half (heartbeat). And that is exactly what happened.

> Why absorb this function into the OpenSSL package?

Unknown. Path MTU discovery is supposed to be handled at a low layer in the OSI network stack abstraction (closer to the physical hardware) such that higher level layers/apps should not need to care. Putting it into TLS the protocol is a blatant layering violation.


You're correct, it is a case of Not Invented Here syndrome.

To discover MTU you could ping, or send packets on port 80, or any of a myriad of other ways, but no, it's not exactly within ssl, so...


Maybe OT, but are the OpenSSL project and OpenBSD project related in any way? For some reason I was thinking it was like the OpenSSH project.


No, they're not. OpenSSL just shares the same naming convention with OpenBSD and OpenSSH (which IS developed by the same people that do OpenBSD).


No, they aren't related at all. OpenSSH is everyone's poster child for a good, open implementation of a security protocol.


For some values of "everyone"

Rob Pike Responds - Slashdot

18 Oct 2004 ... ... but when ssh is the foundation of your security architecture, you know things aren't working as they should

http://slashdot.org/story/50858


I don't think he was impugning SSH's security record at all: just the perceived abuse of the protocol.

The entire quote, in context:

10) Biggest problem with Unix - by akaina

Recently on the Google Labs Aptitude Test there was a question: "What's broken with Unix? How would you fix it?"

What would you have put?

Pike:

Ken Thompson and I started Plan 9 as an answer to that question. The major things we saw wrong with Unix when we started talking about what would become Plan 9, back around 1985, all stemmed from the appearance of a network. As a stand-alone system, Unix was pretty good. But when you networked Unix machines together, you got a network of stand-alone systems instead of a seamless, integrated networked system. Instead of one big file system, one user community, one secure setup uniting your network of machines, you had a hodgepodge of workarounds to Unix's fundamental design decision that each machine is self-sufficient.

Nothing's really changed today. The workarounds have become smoother and some of the things we can do with networks of Unix machines are pretty impressive, but when ssh is the foundation of your security architecture, you know things aren't working as they should.


He just didn't say so in that interview, however:

From: 9fans@cse.psu.edu (rob pike)

Date: Mon, 1 Jan 2001 09:37:12 -0500

Subject: [9fans] Re: The problem with SSH2

Message-ID: <20010101143731.DB61F199E7@mail.cse.psu.edu>

My disagreement with SSH is more specific. It is a securitymonger's plaything, so has been stuffed with every authentication and encryption technology known, yet those that are configured when it is installed is a random variable. Therefore both sides must negotiate like crazy to figure how to talk, and one often finds that there is no shared language. This is idiocy. The complexity is silly, but much worse is that there isn't at least one guaranteed protocol for authentication and encryption that both ends always have and can use as a fallback. I would argue that that would always be sufficient, but I know I'm in the minority there. I do argue that it's demonstrably necessary.

Algorithms everywhere, and not a byte to send.

By making the thing too complicated, they defeat the very purpose of security. Difficult administration results in incorrect or inadequate installation. There are cases when I can't use ssh, a direct consequence.

-rob

Russ Cox chimes in

we're stuck with ssh, but let's not delude ourselves into thinking it's a good protocol.

(i'm talking about ssh1; ssh2 looks worse.)

russ


Thanks, that makes more sense.


How does Plan 9 address this, exactly? It doesn't seem much different than Unix's networking.



That's not a criticism of OpenSSH itself, but of the way Unix systems hadn't progressed from being a set of standalone systems to one networked system. The mention of SSH is not as a criticism of the protocol or OpenSSH. It's a criticism of the Unix world's lack of progress in moving towards what Pike sees as a better way of building large systems.


Thanks for posting that, it's a damn good read.


That's not related to the previous post. He's talking about code, Rob Pike is talking about protocol.


They are not related except in the fact that OpenSSH uses OpenSSL code.


No they are not.


This strikes me as 100% backwards, isn't the entire point of the IETF RFC process to memorialize existing practice, not be a design studio for new protocols? Wasn't it followed here (RFC6520 existing to document the OpenSSL implementation of heartbeat)?

If the IETF is the only thing standing between us and any jackass introducing vulnerabilities into everyone's network stack we're in bigger trouble than I thought.


RFC stands for "Request For Comments", which I always thought meant new stuff up for peer review. Wikipedia also describes them as:

An RFC is authored by engineers and computer scientists in the form of a memorandum describing methods, behaviours, research, or innovations applicable to the working of the Internet and Internet-connected systems. It is submitted either for peer review or simply to convey new concepts, information, or (occasionally) engineering humour. The IETF adopts some of the proposals published as RFCs as Internet standards.


Is the IETF the implementor of OpenSSL? If not, this is pure crazy. The heartbeat feature seems useful to me. I mean, keepalive at the TCP level leaks information, even when that information is just that there is no transmission. Transmitting a heartbeat tells you the other side is alive and is not distinguishable from normal traffic to a passive attacker.


The RFC and the implementation are from the same guy.


Heartbeats are used for UDP and other connectionless protocols. TCP doesn't need heartbeats to tell if the connection went down.


I think it can be useful when the TCP connection is going through NAT which may drop the rule and stop passing packets after idling, no?


Yep, crappy routers tend to do this :/


It's not a real router if it does NAT. NAT is just a half assed proxying hack for hosts not on the internet.


A Juniper MX960 can do NAT. Is that not a router?


If it's configured to do NAT instead of routing, then it's not being a router. Routing is a job and "router" describes a role, not a static property of a box. Like firewall, bridge, etc.

Routing is defined as shuffling packets where they need to go unmolested - without messing with the address or port fields. It's specced in the IP RFC's. If you're mangling the traffic, you're violating router requirements.


You've never seen a router have to do NAT for two subnets that it's routing between?


While I certainly share your dislike of NAT, this is just absurd.


In retrospect, I think using TCP_KEEPALIVE would be a better solution to this specific use case. Are TLS heartbeats truly indistinguishable from normal traffic?


TCP Keepalive is a much better solution for this problem.

Nearly all bitcoin pools utilize TCP keepalive to maintain connections with miners.

These are systems in which performance and DoS resistance are paramount.


My understanding, from the heartbleed page, is that the packets for the heartbeat have a specific packet type identifier and that you can use this to train an IDS to watch for the heartbleed attack.

I would guess that makes them identifiable?


Sure it does. SSH to a remote host, do nothing, then disconnect the cable connected to your router's uplink. SSH (for example) won't notice as long as you do nothing in your SSH session.

On some connections where I have SSH keepalives disabled, I can suspend my laptop, go for a drive around town with it, come home, resume, and still have my session connected.


Yup. But probably wise to run screen or tmux to avoid processes on the other end getting HUPed in case of romantic interludes, etc.


TCP absolutely needs a heartbeat/keepalive to tell if the connection went down. There is no way to distinguish between no traffic being sent and session going down uncleanly. It's very common that some NAT drops the binding of an idle TCP connection and the peers don't discover that until their timeouts expire (could be anything from minutes to hours).


If you want to know if the connection went down in a reasonable timeframe e.g tens of seconds, then you need to implement heartbeating over TCP. This is the first thing any new protocol on TCP usually does.
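
A minimal sketch of that first thing, assuming a protocol where an unsolicited "PING\n" is a legal no-op message (both the message and the timings are made up for illustration):

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <sys/types.h>

    #define HB_INTERVAL_SEC 10   /* probe after this much silence */
    #define HB_MISSED_MAX   3    /* declare the peer dead after this many silent intervals */

    int heartbeat_loop(int fd)
    {
        int missed = 0;
        char buf[512];

        while (missed < HB_MISSED_MAX) {
            fd_set rfds;
            struct timeval tv = { HB_INTERVAL_SEC, 0 };
            int r;

            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            r = select(fd + 1, &rfds, NULL, NULL, &tv);
            if (r < 0)
                return -1;                        /* select() error */
            if (r == 0) {                         /* nothing heard this interval */
                if (send(fd, "PING\n", 5, 0) < 0)
                    return -1;
                missed++;
                continue;
            }
            if (recv(fd, buf, sizeof buf, 0) <= 0)
                return -1;                        /* peer closed or error */
            missed = 0;                           /* any traffic counts as liveness */
        }
        return -1;                                /* peer declared dead */
    }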


That is the theory, but it is not correct in practice. http://0xdata.com/blog/2013/08/tcp-is-not-reliable/

What is not necessary for a TCP heartbeat is a payload that would address out of order request/reply sequence (as would be necessary in UDP).


The IETF stands for the Internet Engineering Task Force. They are the people who design the protocols that run the internet. Note, however, that you don't necessarily have to listen to them; it's more like a de facto standard. Also, they don't necessarily have anything to do with implementing them.


Is there any argument in defense of the TLS heartbeat extension? It seems like a completely inappropriate thing to stick into TLS (wrong level of abstraction), and the idea of sending any kind of payload as part of a heartbeat seems completely redundant (what does this achieve over just basic fields "this is heartbeat #17" -- "this is the response to heartbeat #17"?)


The commit message contains a rather succinct, albeit drastic description-cum-definition of Heartbleed:

> [A] 64K Covert Channel in a critical protocol.


How can we get developers to keep it simple, and stop introducing new "features"? [This applies to OPENSSL as well as most software out there that has a new "release" all the time]


Sharp guy.


Oh, and they're using CVS because it's so secure!


CVS is still in use by anyone?


Feigned surprise! I read about this today!


Certainly wasn't feigned, though I understand your reference. Absolutely no offense meant.

I'm genuinely surprised. Just as I would also be surprised if Windows 3.1 or magnetic tapes or punch cards were still in use. I thought these technologies were supplanted long ago.


CVS is in use in many places, especially in decades-old source trees where migration would result in the loss of too much metadata, or where too much build automation is dependent on CVS. I personally know of several instances of this.

Windows 3.1? I run into it occasionally in places like manufacturing, where upgrade cycles are decades long (I was recently at a client whose most critical machine was built in 1905). FWIW, DOS can still be found everywhere.

Magnetic tape? Really? LTO-6 is up to 2.5TB uncompressed and LTO-7 is imminent.

Punch cards? See: Scantron.

Just because there's a cool new replacement for something, doesn't mean you should jump on it, or that the old tech is now worthless.



