Heartbleed in Rust (tedunangst.com)
245 points by glass- on Feb 2, 2015 | 134 comments



As the offending commenter, I apologize. Particularly to the Rust team for generating this negative publicity, and to the person I replied to, for asserting a lie.

I misunderstood Heartbleed, exactly as Ted summarizes. I've no excuse other than commenting when I shouldn't. I am happy, though, to have my idiocy corrected, as I'll comment better in the future.

The rest of the original thread does point out that I did examine every security advisory published by Microsoft over a span of a year or two, and that, from the descriptions, Rust would have prevented basically every serious (code exec) one. (Notable exceptions being failures in the sandboxed code loading, similar to the various Java-in-browser bugs.)


Others at Mozilla and I have done similar studies with our Gecko critical security bugs. The number of them that Rust would have prevented is staggering.

I've always been careful not to claim that Servo will have no security flaws. It will, and some of them will be critical. But nobody denies that sandboxing is a powerful defense just because Pinkie Pie found some sandbox escapes in the Pwnium contest. Likewise, let's not understate the benefits of memory safety: our studies have shown that the vast majority of critical security bugs in Gecko layout and graphics code are simple memory safety vulnerabilities like buffer overruns or use-after-free.


Of course. But subtextually: had you replaced C++ with Python, you'd have had the same outcome. Obviously, a significant advancement in Rust is that it's feasible to replace performant C++ programs with Rust, and not feasible to do that with Python. But that's perhaps a performance and logistical advance, not a security advance.

I get that the combination of [memory-safe, non-garbage-collected, performant, natively-compiled] has security implications for systems programming. But developers shouldn't be swapping their Python code for Rust if security is their primary objective.


I think Rust is a security advance because it makes memory-safe programming practical in more domains than it used to be. The fact that Python has memory safety doesn't help (for example) browser engines, because it's too slow and requires too many dependencies. Rust didn't invent memory safety, of course, but it's bringing the security benefits of other languages to places they couldn't reach before, and I would argue that that itself advances security. To make an analogy, Rust is like the effort to add stronger ciphers to TLS as opposed to the effort to invent new ciphers; both are important, the latter because it creates stronger systems and the former because it makes those stronger systems more widely deployed.


I hope that Rust will be more memory safe than Python in practice. Python just doesn't have that "culture", and third party modules often have memory safety bugs.


Isn't that setting up a straw-man?

Nobody sane would use C/C++ when they could get away with Python and nobody sane would use Python if they need to control the memory layout and access patterns like in C/C++.

Obvious candidates for such use-cases, besides kernel space, are games, video/audio systems, and other stuff in which dropping frames is unacceptable; real-time systems, which include some web services; or simply applications that have to use a lot of RAM, like databases, since even the most advanced mainstream garbage collectors are still awful at managing huge heap sizes.

And it seems to me like the target for Rust is pretty clear - that people are even considering it as a replacement for GC-ed languages is a pretty good achievement, but I don't think anybody could miss the stark contrast between a systems language and a GCed one right from the 10-minute tutorial.

And btw, there are some instances in which Rust is safer than a language like Java in a multi-threading scenario. In Java or other JVM languages, you can happily capture a mutable object inside a closure and then mutate it asynchronously, possibly in another thread, while not synchronizing or releasing control in the thread in which the capturing happens. Even if you're using high-level abstractions, like actors, futures, streams and what not, this possibility of capturing mutable objects from the context by mistake in closures that execute asynchronously is always there, so to write safe code that protects against accidents you have to get religious with immutable data-structures and whatnot. This isn't to say that Rust is safer than Java - I doubt that. But it has some interesting ideas in it nonetheless.


> But that's perhaps a performance and logistical advance, not a security advance.

Isn't it both, depending where you come from? And isn't decreasing the performance/security tradeoff a security advance when developers in the domain do not want to compromise on performance?

> But developers shouldn't be swapping their Python code for Rust if security is their primary objective.

OTOH developers building up security primitives/building blocks aren't doing so in Python in the first place, and doing so in Python would mean it's not easily accessible for {anyone not using Python}.


Yes! I'm struggling to articulate the sentiment that boils down to "if you're already deploying Python or Scala or Ruby, Rust is unlikely to drastically improve your security, and rewrites are sure to harm security at least somewhat".

If you're currently shipping C/C++, Rust seems like a great bet. Don't write C/C++ in 2015.


Yeah, Rust isn't safer than an already memory-safe language (though it might be faster; then again it's also quite a bit more work compared to e.g. Go). But:

> Don't write C/C++ in 2015.

That probably isn't going to stop. Even ignoring performance ricers, the bottom of the stack does need control, does need good performance, and more importantly does need to be usable from just about everywhere.

You can't really/easily use a scala library from MRI, or a CPython one from Go (it's already hard enough to use a CPython library from PyPy). If there's a C interface and little to no runtime assumptions, however, you're good to go. Currently that means C or C++ (with a nice extern C ABI/API). If it could be something safer in the future, that would probably be a good idea.


> You can't really/easily use a scala library from MRI

You can easily link Scala with Ruby by running both on the JVM. Assuming you really want MRI, on the client side that implies running at least 2 processes with their own runtimes, instead of just one process. There is a reason why I've heard of Ruby developers choosing JRuby to deploy Ruby apps: having even one GC managing a long-running process is expensive enough; having more than one gets awful fast.

On the server-side on the other hand, you have the freedom of deploying your stuff on multiple servers, which is often the case so you can use a micro-services approach - in our current project the front-end is developed in Ruby / Javascript and the backend in Scala, linked through a web service API. The great thing about this, besides using the best tool for the job and so on, is that people can work on them in parallel.

IMHO, the reusability of C is overstated. I haven't used C libraries in any of the JVM apps I've ever developed, which spared me the headaches I had back in the days when I did stuff with CPython or Ruby MRI. And given a potent runtime, you can make both Python and Ruby run on top of it with reasonable performance (compared to their reference implementations).


I strongly disagree.

I think that switching from Python or Ruby to Rust will generally give a drastic security improvement.

The difference between having a complete type system and not is night and day. If you look at bugs in Ruby code, they're frequently due to dynamic typing.

Having strong and strict typing is a wonderful way to catch logic errors at compile-time, and every logic error is a potential security flaw.


Wow, strong statement! I am not going to weigh in on this other than to ask for clarification - do you mean not use C/C++ at all, or only in the context of security applications?


Almost all applications are "security applications", in the sense they are security sensitive, these days. If I pull data over the internet, my first person shooter now can't allow buffer overruns. Bad Things may happen in almost any environment.


Don't use C/C++ at all.


"Don't write C/C++ in 2015"

That's quite bold.


Some people have to write C/C++. Kernel developers, for instance. But most people who write C/C++ in 2015 don't really have to. For every low-level game programmer or RTOS embedded systems use case you'll cite, I'll cite an unforced error that occurs 5x as often, such as "we wrote our custom database engine that only ever runs serverside in C".


"But most people who write C/C++ in 2015 don't really have to."

While greenfield projects might be able to start with a new language, established products have quite a lot of inertia. Any production environment needs to factor in personnel training (or, hey, fire the tens of C++ developers and replace them with equally skilled Rust developers who are as familiar with the domain requirements as the last lot..) and the costs and risks in architecture changes. Even then, switching languages might not be feasible, given a common big-ball-of-mud structure with computational modules, UI and domain logic cooked together into a delicious mess.

Even if the long-term costs of a codebase written in Rust were, say, a tenth of the costs of a similar codebase written in C++, often product development costs are not that large a percentage for established ISVs.

When the next greatest browser gets written in Rust, then industrial users will start taking notice.

Why would I care how organizations develop software? Because I need to get paid and I work inside an organization... I write C++ to earn my living; I don't want to write C++ at home for fun.

So at home I wouldn't write C++ anyway, and at work I really don't have a choice :) - Of course, people's situations vary.


On the other hand, Rust is not ready -- either from a stability or library perspective -- in 2015, most likely. What are you recommending the C/C++ programmers go to?


Java. Python. Golang. Lua.


If you are using C or C++, it probably means you can't use the ones you have pointed out.

People can port Python and Ruby stuff to Go for instance and have advantages doing it, because the logic was already suited for managed languages.

But for C and C++ stuff that's unlikely.. and as for Rust, the other possible option.. it's kind of a productivity killer compared to C (even with the header nonsense).

It's always a matter of tradeoffs, and it's no accident that C is alive and kicking to this day.. the language really has its strengths.. and while every new lang in town claims to be the next C killer, well, in real life it's much harder to back up such a claim, and it's not just because of the "old code that has to be maintained".

I know you are a security-focused person, but security and safety are not the only reasons we choose a language to start a new project.

So until a really strong contender shows up (I personally don't think Rust is it, but I'm sure there is a golden niche for it), C and C++ will still be with us, until we break the current computation paradigm and ALGOL- and Lisp-style languages no longer make sense.


(Just trying for clarity, not arguing against or opposing your point)

In an earlier comment you used as an example of an unforced error "we wrote our custom database engine that only ever runs serverside in C".

Are you saying (with this comment) that for such use cases, developers should use Java/Python/Go/Lua (vs C/C++)?


Yes, because you remove a whole class of errors that might be exploited. Might be hard, might even be impossible, but they are there waiting to be taken advantage of.

Interestingly enough, very high performance Java is a bit like Rust. Almost all memory safe, except a few (10-100) lines of crazy unsafe stuff.

Of course it is possible that your C code has no memory issues and is verified to be safe, but I would not bet any money on it.

Also, at the moment custom code is the last entry point for a hacker, but once all other things are hardened it's the last guaranteed leak in the ship.

I also understand how C semantics can lead to faster code than e.g. Java today. But I suspect that edge will disappear within the next 3 years. Just like Java is going to make heterogeneous compute easy, it's also going to improve streaming over memory for speed where needed.


You know, I don't know if even kernel developers need to write in C/C++. Sure, those are the languages in which most kernels seem to be written, but couldn't kernels be written in, say, Lisp or Forth and have almost as much performance and at least slightly more security?


Sure, you could write a kernel in Lisp or Forth. But if you are a developer for a kernel that is written in C/C++, you don't really have a choice.


Security is never a primary objective. If it were, blah, blah, powered off, locked room, blah.

Security might be the most important objective secondary to getting things done, but because it can't be primary, there will always be security considerations for the engineer. Nothing is perfectly safe.


> But subtextually: had you replaced C++ with Python, you'd have had the same outcome

Would you? http://pastebin.com/T5S0w708


> As the offending commenter, I apologize. Particularly to the Rust team for generating this negative publicity, and to the person I replied to, for asserting a lie.

Realistically you're on the right side here. I just like to argue because it makes people justify their positions, which helps me (and hopefully other people) understand them better.

Rust is one of the most promising languages I've seen in a long time. People have been trying to make a language better than C++ for the things that C++ is good at for decades and this is the first time anything has the potential to succeed.

That's why I argue for the other side. Because when something has obvious potential, people have the inclination to deify it. The statement "X will solve all security problems" is false for all X. You can write secure code in C (see also DJB). You can write insecure code in Rust. It's easier to write secure code in Rust, and that's very important, but it's just as important for everybody to realize that no compiler is made out of magic pixie dust that will make all my code perfect even if I'm an idiot.


But Rust does solve all memory safety issues, doesn't it? In the same way, say, F# does. Except I can use it without worrying about deployment or runtime costs.

One scenario I am using Rust for is a packet capture parsing and forwarding daemon. With C, my biggest failure mode is "people can run code on your server by sending a packet across your network". With Rust, my biggest failure mode is "my code might be buggy so results could be useless". In fact, even if I try, I'm having a hard time coming up with code I could write that'd expose a security issue.

That's pretty close to magic pixie dust. Yes, it's a restricted scenario, but I doubt it's an uncommon one. A bunch of vulnerable utilities are basically just reading and writing data from/to files. If they weren't using a memory-unsafe language, they simply wouldn't be in a position to open security holes.

Am I overlooking something here?


Rust can't solve all memory safety issues. Rust tries very hard to guarantee that safe code (i.e. not unsafe{}) will be memory safe and free of some race conditions. The hard part is making this possible - turns out you need unsafe in the core to be able to write safe implementations of those features in the general case. Attack surface is greatly lessened, but it's still there.


Other GC'd languages come with runtimes written in C/C++, and often these libraries are the source of vulnerabilities. Rust is no different: if you grep your source code and find no unsafe blocks, then you are back to where you were in GC land.


The difference is that in Rust not the whole source code but rather only the `unsafe` blocks and what they touch need to be verified for memory-safety.


And often that involves verifying code outside of the blocks that are literally marked unsafe. It's all of the code sitting behind some safe abstraction boundary that needs to be verified.


Aren't we just talking about Heartbleed again? If you're forwarding packets the attacker could send you one with a forged length field.

Even if all you're doing is receiving packets and writing them to a file, what happens when you use the attacker-controlled reverse DNS of the packet's IP address as the file name?


Heartbleed isn't a memory safety issue though, which is what kicked this off. You're right in your original reply to me that if you want to go reuse buffers with leftover data, you can do that in any language. If you want to create a byte array in Java, fill it with private key material, then reuse the same byte array for an output buffer... what can stop that?

But more to my point: in C, I could end up executing arbitrary code by misparsing a network packet. In Rust, the worst I'll do is parse it wrong and send invalid data onwards. That's just a massive reduction in scope.

I suppose if you have a tool that writes to arbitrary, attacker-supplied file locations, that could have a severe impact. Or it could pass an attacker-controlled value to the shell without escaping.

But things like the cpio[1] bug mentioned in the lessopen issue. Or the numerous compression libraries that require trusted inputs. Image manipulation code. And on and on. How many of them become simple crashes with a memory safe language? Because, from my unscientific (and flawed, as this entire article points out) review, it seems like the great majority are these memory safety issues. Perhaps 90%? For widespread code, am I that far off the mark? (I understand that most intranet or custom software might just eval() every querystring given to it.)

1: http://seclists.org/fulldisclosure/2014/Nov/74


> Heartbleed isn't a memory safety issue though, which is what kicked this off.

It kind of is. It's just not of the type that "memory safe" languages fix for you, which is basically the point. If you define the scope of the problem in terms of what the proposed solution fixes then it tautologically fixes the entire problem, but it's still very important for people to understand that that doesn't mean it fixes every problem in that class.

> in C, I could end up executing arbitrary code by misparsing a network packet. In Rust, the worst I'll do is parse it wrong and send invalid data onwards.

This is essentially what I'm talking about. It's still possible to execute arbitrary code in Rust, it's just not as easy. An obvious example is if your parsing bug is in the code that validates attacker-provided input before doing something sensitive with it. Or allows the attacker to flip a bit which is equivalent to remote code execution, like giving the attacker's account admin rights.

And even if "the worst" you do is parse it wrong and send invalid data, that's Heartbleed. The "invalid" data could be secrets.

The question you have to ask is, for all those memory corruption bugs in C, what does "memory safe" turn them into? They're still bugs, they're just not the same bugs. For example, a common way you get RCE in C is an integer overflow that leads to a heap overflow when the overflowed integer is used to allocate a buffer. But take away the heap overflow and the integer overflow is still there. Exploiting an integer overflow is highly context-dependent, but it's commonly possible regardless of whether it leads to a heap overflow. Being able to truncate the amount of Bitcoin being debited from the attacker's account or convert "username=rootxxxx" into "username=root" is arguably better than straight RCE, but not by very much.
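
To make that concrete, here's a minimal Rust sketch (hypothetical names, not from any real codebase) of the kind of integer truncation bug that memory safety does nothing about:

    use std::collections::HashMap;

    // A truncating cast turns an attacker-supplied 65_536 into 0, so the debit
    // silently becomes a no-op even though no memory is ever touched unsafely.
    fn debit(balances: &mut HashMap<String, u64>, account: &str, requested: u64) {
        let amount = requested as u16; // bug: silently truncates values >= 65_536
        if let Some(balance) = balances.get_mut(account) {
            *balance = balance.saturating_sub(amount as u64);
        }
    }

    fn main() {
        let mut balances = HashMap::from([("attacker".to_string(), 100_000u64)]);
        debit(&mut balances, "attacker", 65_536); // the attacker "pays" nothing
        println!("{:?}", balances); // still {"attacker": 100000}
    }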


OT: How many times have you read a responsible admission like the parent in an Internet forum (or anywhere)? I wish more people had the guts to respond this way to mistakes instead of resorting to humanity's natural self-preservation (i.e., defensiveness). We all screw up; very few have the courage to admit it.

It also is important because it eliminates FUD around the issue: Now it won't be one of those false rumors that follows something around forever, hopefully.

Thank you!

EDIT: Slight edit.


Edit: What I said earlier is actually wrong. The problem is that a too-big uninitialized buffer could be allocated, and thus memory from previous allocations could be read. This isn't possible in Rust because you can't read uninitialized data.

Of course reusing buffers can be dangerous and lead to information leakage, but it's not what happened with heartbleed, and the possibilities to exploit are smaller.

Old text: Actually heartbleed is a buffer over-read vulnerability that would have been prevented by rust's out of bounds checking. Of course you could allocate one huge buffer that contains sensitive data and is also used as an output buffer but this seems terribly unlikely to me.


Heartbleed occurred because the size of the buffer was based on the size provided by the malicious packet, the buffer was not zeroed, and then the user-provided data was written to the buffer. If the user-provided data was smaller than the size you said it was, the rest of the buffer contained whatever it had previously contained.


And since people were able to recover SSL keys, does this not mean that this buffer was used for... everything? Having a non-zeroing allocator for an entire library seems rather ambitious. It's significantly worse than just having a buffer pool for, say, incoming packets or something.


Oh yes, using buffers without zeroing them is a terrible idea, and sharing those buffers among different types of things is a terrible idea.

I was specifically commenting on the fact that what the parent comment described as "terribly unlikely" is in fact what happened.


There was no buffer reuse like in the linked rust demonstration, but it was data from previous allocations.


The article claims this is not true and the author has a good reputation. I've not looked at the OpenSSL code, but if the author has written an isomorphic example, then there's no over-read. Is the article incorrect?


> I've no excuse other than commenting when I shouldn't

You totally should've made that comment, look what came from it. You learned something new, and so did a bunch of other people.


I don't know that anyone claimed that a bug similar or analogous to heartbleed couldn't be reproduced in Rust. If they did, that was certainly an overstatement. I think more concretely people claimed that unreachable code yields a warning in Rust, which is absolutely true, but certainly not equivalent to saying something like a heartbleed bug would not happen.

In general, Rust is fairly aggressive about linting for "small" details like unused variables, unreachable code, names that don't conform to expected conventions, unnecessary `mut` annotations, and so forth. I've found that these lints are surprisingly effective at catching bugs.

In particular, the lints about unused variables and unreachable code regularly catch bugs for me. These are invariably simple oversights ("just plain forgot to write the code I meant to write which would have used that variable"), but they would have caused devious problems that would have been quite a pain to track down.

I've also found that detailed use of types is similarly a great way to ensure that bugs like heartbleed are less common. Basically making sure that your types match as precisely as possible the shape of your data -- with no extra cases or weird hacks -- will help steer your code in the right direction. This is a technique you can apply in any language, but good, lightweight support for algebraic data types really makes it easier to do.
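
As a sketch of what I mean (hypothetical types, nothing like OpenSSL's actual structures): if the payload length is always derived from the parsed bytes rather than stored alongside them, a forged length field has nowhere to live:

    #[allow(dead_code)]
    enum Heartbeat {
        Request { payload: Vec<u8> },
        Response { payload: Vec<u8> },
    }

    // Parsing rejects records whose declared length exceeds the bytes actually
    // present, instead of trusting the declared length somewhere downstream.
    fn parse(record: &[u8]) -> Option<Heartbeat> {
        let (&kind, rest) = record.split_first()?;
        let declared = *rest.first()? as usize;
        let payload = rest.get(1..1 + declared)?.to_vec(); // None if record is too short
        match kind {
            1 => Some(Heartbeat::Request { payload }),
            2 => Some(Heartbeat::Response { payload }),
            _ => None,
        }
    }

    fn main() {
        // A record claiming 9 payload bytes but carrying only 2 is rejected outright.
        assert!(parse(&[1, 9, 0xde, 0xad]).is_none());
    }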


I hadn't actually followed the link in the original post. I see that the claims there were slightly different than what I was thinking of. Nonetheless, I stand by what I wrote above.

In particular, while I of course agree with the author that one can write buggy code in any language, I also have found that following Rust's idioms leads to code that is less buggy. This is not unique to Rust: I've also had similar experiences in Scala and OCaml. What Rust brings to the table is that it supports zero-cost abstractions, doesn't require a virtual machine, and guarantees data-race freedom (a rather useful property).


I mostly agree with the premise: logic errors are always going to be there, at least until the compiler is an AI strong enough to catch them for us (and by then we probably won't need coders anyway...). There's no silver bullet; bad coders are always going to produce them. And I also don't like it when people claim that bug X or vulnerability Y wouldn't have happened if they had been using technology Z; they're just begging for that type of post.

That being said I'm a bit more skeptical of this part: "code no true C programmer would write : heartbleed :: code no true rust programmer would write : (exercise for the reader)"

If I look at the examples in the article, the C version doesn't look that terrible and contrived to me. I wonder what the author means by "Survey says no true C programmer would ever write a program like that, either." That looks like a lot of C code I've read; there's nothing particularly weird about it.

On the other hand the rust version looks very foreign to me (and I've been writing quite a lot of rust lately). You basically have to go out of your way to create the same issue.

I guess my point is that while it's true that as long as there'll be coders there'll be bugs and security vulnerabilities it doesn't mean we shouldn't try to make things better. And in my opinion Rust makes it much more difficult to shoot yourself in the foot than plain C.


>I wonder what the author means by "Survey says no true C programmer

I think he is being sarcastic. I.e. the idea that "no true C programmer" would write code like that is nonsense, since we have all seen C code like that. Therefore the idea that "no true Rust programmer" would write the Rust snippet is not a valid defence, because bad programmers gonna program.


Yes. It's an instance of the No True Scotsman Fallacy.

https://en.wikipedia.org/wiki/No_true_Scotsman


Having not done much Rust due to the volatility, I still have to give it credit here for a few more reasons:

* I assume old_io is going away. That is why it is old, after all. Is this still possible in the new_io? If they made this not doable anymore, that means they fixed the bug.
* The compiler spat out a warning. Maybe it should have been "you dumb fuck why are you doing raw buffer reads and writes" but at least it said "old_io is bad".
* I don't see much in the C version that would generate any warnings or errors under any compiler flags. That should be the take-away, in my book.

When I do my own projects I almost always go for maximum warnings and errors, and don't call it done until the compiler stops generating them.


The fundamental underlying problem is that a buffer containing sensitive data is being reused for a separate purpose without being scrubbed of the sensitive data. In the particular instance of heartbleed, this faulty behavior was due to the underlying memory allocator rather than any property of the programming language.

It is true that idiomatic Rust tends to avoid working with raw buffers, but for low-level tasks this is sometimes unavoidable. Rust also especially doesn't encourage reusing buffers, but if you've already taken the step of specifying an unsafe memory allocator then Rust can't help you. So yes, Rust gives you a lot of tools to avoid this situation but it can't outright prevent it, as a few people on the internet appeared to be suggesting last year.


logic errors are always going to be there, at least until the compiler is an AI strong enough to catch them for us (and by then we probably won't need coders anyway...)

This raises some interesting philosophical questions: will the ultimate judge of correctness be a human or machine? If it's a machine, what is to say that its definition of "correct" is what humans want?

For some reason, this quote comes to mind: "Freedom is not worth having if it does not include the freedom to make mistakes."


Is this a logic error or just misuse of memory? (The buffer array)


The latter. It's trivial to reuse buffers in Rust while avoiding this issue, for example `Vec` has a `clear` method that sets the length to 0 while keeping the allocation.

But AFAICT, in C it didn't even have the same buffer by design, it was reading uninitialized memory from whatever `malloc` gave it back - which is equivalent to allocating a new buffer in Rust.
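
A tiny sketch of that safe-reuse pattern (nothing here is OpenSSL-specific):

    fn main() {
        let mut buffer: Vec<u8> = Vec::with_capacity(4096);

        // First "request": fill the buffer with sensitive data.
        buffer.extend_from_slice(b"secret key material");

        // Reuse it for the next request: clear() drops the length to 0 but keeps
        // the allocation, so there's no per-request allocation cost.
        buffer.clear();
        buffer.extend_from_slice(b"hi");

        // Only bytes written after clear() are reachable; indexing past len()
        // panics instead of leaking the old contents.
        assert_eq!(buffer, b"hi");
    }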


It was actually using a custom allocator, not the system malloc, which exacerbated the problem. System malloc could still have this problem, but for example OpenBSD has mitigations for this sort of data leakage in their malloc implementation, which OpenSSL then bypassed by using their own allocator.

This is a problem for any "this would never happen in Better Language X" claim regarding Heartbleed. If you decide you want to write your own buffer reuse system for whatever reason, you can pretty easily write this sort of bug in any language.


I would argue that the whole Rust ecosystem pretty strongly discourages you from writing broken low-level infrastructure code. The idea behind unsafe code blocks, the generics system, and Cargo is to encourage you to use off-the-shelf tools instead of rolling your own, often broken, solutions. The C ecosystem, by contrast, tends to encourage rolling your own solutions because managing dependencies in a cross-platform manner is such a pain (and the language doesn't help due to the lack of generics and so forth). I suspect that any free-list library in Cargo that didn't zero out buffers would be fixed pretty quickly (and, as eddyb rightly points out downthread, it would be pretty hard to write a free list system that works with multiple types that doesn't require initialization before use--Rust in general abhors uninitialized memory).

There are parallels between the philosophy of crypto libraries like NaCl and Keyczar and the Rust philosophy. Don't write your own low-level infrastructure code. Use somebody else's.


I mostly agree, but "use somebody else's" doesn't really help if you're writing a crypto library in the first place.


They're referring to writing your own allocator.


And in fact, I've written that exact thing in a C# network stack, about 7 years ago. No bug, as I did properly check the size of the data, but totally possible to have messed up. It's not even hard code to write, just a simple object pool to return byte arrays. Which is natural, as in .NET, high performance often ends up as an exercise in removing every heap allocation possible.


That said, this is a fair point. I have shot myself in the foot in a fairly equivalent manner in another memory-safe language (doesn't really matter which) because I was trying to reuse buffers as an optimization. Oops. I did it to myself, and at least I understood enough about what was going on that I didn't spend days wondering at the heisenbugs, but still, oops. It can happen.

But at least you have to work at it a bit.


Here is what I noticed about this, sorry if it is considered too off-topic:

There was an argument, about something specific and technical; It was refuted without singling out a specific person by name; without using humiliation or insults; using code to do so ("show me the code!"); and there was a polite acknowledgement and resolution.

This is an example of an interaction in a community that I think anyone would want to be a part of. Thank you.


I'm glad you appreciate it. :) We're far from perfect, but we're trying to build a great community, not just great software.


I'm incredibly slow to check this thread, but I wanted to make sure I mentioned that this made me feel better about the rust community also. Keep doing great work :)


Awesome :)


Did Torgo mean the Rust community or the HN community?


I was thinking HN-affiliated, but it applies to the Rust community as well I think. I have seen very positive things on their IRC. Also would like to say that I am loving everything Ted Unangst-related, the OpenBSD community has an (often undeserved imo) bad rap and he is a great ambassador for that community as well, in addition to his amazing technical prowess. With all the talk about toxic communities lately I just want people to recognize positive examples that exist that could be used as guides without being preachy about it. Again sorry for OT, disengaging.


Seems like he was replying to the post by Ted, which was in response to a comment on HN. So I don't think this is about the Rust community.


I'm very confused at the argument here. The C code looks remarkably close to idiomatic. Not "good," mind you, but "idiomatic". The Rust code looks significantly more contrived to my eyes. I'm reading the blog post as arguing that they're equally contrived.

It's true that you can do terrible things in any language, but the test of a language is how easy it makes it to do the right thing in the common case (plus how possible it makes it to do the thing you want in the uncommon case, without these goals compromising the other).

Is there a reason that reusing the buffer makes sense in Rust? (Zero allocation?)

Also, is it not true that Rust lends itself well, probably better than C, to abstractions like bounds-checked substrings within a single buffer? BoringSSL has been doing this in C, and this definitely would have stopped Heartbleed:

https://boringssl.googlesource.com/boringssl/+/master/includ...


This is why I get a little uncomfortable when people suggest Rust fixes tons of security issues. Yes, it will fix some of them. No, just because a Rust program compiles doesn't mean that it won't have problems.

Rust is _memory safe_. Nothing more, nothing less.


I feel like you're undervaluing memory safety. Memory safety prevents most (all?) exploits that lead to remote code execution. There can still be high level vulnerabilities, but guaranteed memory safety is a huge improvement.

Rust's type system can be used to prevent high-level attacks too. For instance, if an SQL library is set up properly, it can prevent SQL injection by requiring that inputs be properly sanitized.


I value it very highly, otherwise, I wouldn't work on Rust. :)

I'm just very careful to not suggest that memory safety is the end-all, be-all of errors. The Rust compiler will help you out, but it's certainly not perfect.


Even if it was perfect, nothing can help you if you specifically choose to share data. It was my fault for having only skimmed the original Heartbleed explanation and just assuming it was a memory safety issue. Sorry for making Rust look bad, especially right when y'all are working so hard on 1.0.


It's all good. :) We all make mistakes.

Further, I would frankly prefer people understand that you _can_ have security bugs in Rust code, rather than think that if it compiles, there's no need to do security auditing. This post is almost a PSA of sorts. :)


This is especially incredible because the Heartbleed bug was a violation of memory safety. The buffer being read from was of size N, but you could read M bytes from it, where M > N (and in fact, MUCH greater).


Memory safety prevents 3 vulnerabilities that lead to remote code executions, not "most" or "all" of them. They're 3 very common and important vulnerabilities, though.


Based on incidence in Gecko, it is indeed most of them. It depends on your project, of course.


It's a bit tautological to suggest that fixing the most common RCE flaws in C/C++ programs by replacing the language is the same as fixing all of the most common RCE flaws. The clear point here is that memory corruption is an affliction of C/C++ programs, but that other languages have other RCE-breeding flaws.


What are the other, common, RCEs? Command and SQL injection, upload and execute, etc. -- all those would apply to any language, right?

Eval()/dynamic loading and little custom languages (like perhaps some "business rules" type systems) probably aren't as common in C/C++ eh?

Same for overzealous serialization systems (like Ruby's YAML issues, and I think .NET's binary serialization)?

What other kinds of things lead to RCE that don't or rarely occur in C/C++?


You just hit a bunch of them.

The C/C++ RCE bugs are buffer overflow (heap, stack, heap/stack via integers, &c), UAF (and double free), and uninitialized variables. It looks like there's a whole menagerie of different C/C++ RCE flaws, but they really just boil down to bounds checking, memory lifecycle, and initialization.

Metacharacter bugs apply to all languages, but since Rust doesn't eliminate them --- virtually nothing does, with the possible exception of very rigorous type system programming in languages like Haskell --- the metacharacter bugs rebut the parent commenter's point.

Eval() is an RCE unique to high-level dynamic languages. Taxonomically, you'd put serialization bugs here too (even the trickiest, like the Ruby Yaml thing, boil down to exposing an eval-like feature), along with the class of bugs best illustrated by PHP's RFI ("inject a reference to and sometimes upload a malicious library, then have it evaluated").

Those are just two bug metaclasses, but they describe a zillion different RCE bugs, and most of them are bugs that are not routinely discovered in C/C++ code.


If you remove custom software like intranet apps and focus more on products that have near-ubiquitous deployment (like common desktop programs, OSes, basic server-level code), how do you think they come out? What about by number of people impacted?


Right. There's an infinite number of vulnerabilities; fixing any finite number of them still leaves you with almost all vulnerabilities outstanding. But still, fixing them is good.


Some people wrote a completely new TLS stack in OCaml to combat this problem:

http://openmirage.org/blog/introducing-ocaml-tls

Here's a video about Mirage OS and this TLS stack from the 31C3:

Trustworthy secure modular operating system engineering - http://media.ccc.de/browse/congress/2014/31c3_-_6443_-_en_-_...

Their goal is to reduce the trusted computing base to a minimum.

Rust could deliver some of the same benefits to writing high-performance low-level code.


That page makes the same mistake I did, which caused Ted to write the article in the first place. There's no memory safety issue at play, at least not in the way memory safety is usually referred to. As the TFA shows, the problem is explicitly reusing the same buffer. I don't think there's a general way to prevent this kind of code.

I guess more people than just me assumed Heartbleed was a typical blindly-allocate-and-read, going past the buffer bound. But that's not what happened. Writing the same thing is totally possible in OCaml. And in a safe language with GC, it's not unheard of to reuse objects for performance. So in fact it's perhaps even somewhat probable to end up with a Heartbleed-like bug.


True, I still wanted to get the information out there.

Also I think, if you watch the Q&A at the end of the talk, they claim that the way you write and abstract code is different and leads to safer code as well.

I don't want to claim that it is true, just pointing it out.


If I'm reading the blog code correctly, the error is trusting user input:

    // Rust
    let len = buffer[0] as usize;
    // C
    size_t len = buffer[0];
I'm no Rust hacker, but can I expect the Rust type system to be able to encode some form of tainting? Making the leaky sequence illegal:

    let len = buffer[0] as usize;
    // ERROR ERROR ERROR using unscrubbed user input ERROR ERROR ERROR
    buffer[0 .. len]
How exactly to encode tainting is left as an exercise to the reader :) But ideally it should be able to identify that the buffer is reused between 2 different requests, and that data tainted by the second request is used to index an array tainted with data from the first request. This seems eerily up Rust's alley, given the concurrency / allocation disambiguation support I've read about (alas, superficially) elsewhere.


You can absolutely express tainting via the type system. I have seen this done before in Rust code in order to express functions that can only accept strings that have been properly sanitized, along with a function that takes an unsanitized string and returns a sanitized one. This particular example was using phantom types, though you could obviously also define wholly separate types for this sort of thing.
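
Something along these lines, roughly (a sketch of the pattern with hypothetical names, not the actual code I saw):

    use std::marker::PhantomData;

    // Taint markers that exist only at the type level.
    struct Tainted;
    struct Sanitized;

    struct Input<State> {
        value: String,
        _state: PhantomData<State>,
    }

    fn from_network(raw: &str) -> Input<Tainted> {
        Input { value: raw.to_string(), _state: PhantomData }
    }

    // The only way to obtain an Input<Sanitized> is to go through this function.
    fn sanitize(input: Input<Tainted>) -> Input<Sanitized> {
        Input { value: input.value.replace('\'', "''"), _state: PhantomData }
    }

    // APIs that must never see raw user data only accept the Sanitized state.
    fn run_query(query: Input<Sanitized>) {
        println!("executing: {}", query.value);
    }

    fn main() {
        let raw = from_network("O'Brien");
        // run_query(raw);          // does not compile: expected Sanitized, found Tainted
        run_query(sanitize(raw));
    }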


Wasn't the heartbleed issue that you could trick it into reading past the memory it had allocated? That's different to explicitly reusing memory you've allocated without clearing it in between.

The original claim was that rust would prevent the class of errors that caused Heartbleed. No one claimed rust would prevent you from writing a program with a different bug that just happens to exhibit similar behavior.

Buffer overruns are trickier to spot than explicitly reusing a buffer.

[Edit] An example of an actual buffer overrun, with no changes to pingback.

C:

    $:/tmp # cat bleed.c
    #include <fcntl.h>
    #include <unistd.h>
    #include <assert.h>

    void
    pingback(char *path, char *outpath, unsigned char *buffer)
    {
            int fd;
            if ((fd = open(path, O_RDONLY)) == -1)
                    assert(!"open");
            if (read(fd, buffer, 256) < 1)
                    assert(!"read");
            close(fd);
            size_t len = buffer[0];
            if ((fd = creat(outpath, 0644)) == -1)
                    assert(!"creat");
            if (write(fd, buffer, len) != len)
                    assert(!"write");
            close(fd);
    }

    int
    main(int argc, char **argv)
    {
            unsigned char buffer2[10];
            unsigned char buffer1[10];
            pingback("yourping", "yourecho", buffer1);
            pingback("myping", "myecho", buffer2);
    }
    $:/tmp # gcc bleed.c  && ./a.out && cat yourecho myecho
    #i have many secrets. this is one.
    #i know your
     one.
    Æ+x-core:/tmp #
Rust:

    C:\Users\ajanuary\Desktop>cat hearbleed.rs
    use std::old_io::File;

    fn pingback(path : Path, outpath : Path, buffer : &mut[u8]) {
            let mut fd = File::open(&path);
            match fd.read(buffer) {
                    Err(what) => panic!("say {}", what),
                    Ok(x) => if x < 1 { return; }
            }
            let len = buffer[0] as usize;
            let mut outfd = File::create(&outpath);
            match outfd.write_all(&buffer[0 .. len]) {
                    Err(what) => panic!("say {}", what),
                    Ok(_) => ()
            }
    }
    
    fn main() {
            let buffer2 = &mut[0u8; 10];
            let buffer1 = &mut[0u8; 10];
            pingback(Path::new("yourping"), Path::new("yourecho"), buffer1);
            pingback(Path::new("myping"), Path::new("myecho"), buffer2);
    }
    
    C:\Users\ajanuary\Desktop>hearbleed.exe
    thread '<main>' panicked at 'assertion failed: index.end <= self.len()', C:\bot\slave\nightly-dist-rustc-win-64\build\src\libcore\slice.rs:524


> Wasn't the heartbleed issue that you could trick it into reading past the memory it had allocated?

No.

Heartbleed is this:

1. malloc an input and an output buffer of the size specified by the caller (16 bits, so up to 65535 bytes)

2. copy input data into input buffer (as little as 1 byte)

3. copy input buffer to output buffer

4. send output buffer to caller

Neither malloc nor free zero-out their stuff, so when you malloc chances are the garbage you get before filling your buffer is the result of previous memory writes. In heartbleed, each call would return up to 65535 bytes of these likely-previous-memory-writes to the caller.

This was made even more likely because OpenSSL includes an anti-mitigation framework in the form of freelists: while malloc doesn't zero memory by default, various OSes have added mitigation techniques, e.g. BSD's malloc.conf can wipe allocated buffers. However, OpenSSL does its own memory management using freelists, so it reuses previously-allocated buffers for new allocations, making it even more likely Heartbleed would leak interesting data.

Anyway one of the things which don't occur at any point is reading or writing past a buffer.


This isn't how I remember it nor what the code looks like [1]. Briefly:

1. tls1_process_heartbeat() retrieves type and 16-bit length from the SSL record.

2. It stores a pointer to the payload (which is after the two length bytes).

3. For the response, it allocates an output buffer and memcpy()s from the payload pointer to the output buffer.

4. If the specified payload length (the two bytes preceding the payload) is less than the actual length of the payload, memcpy() will read past the actual payload in the SSL record and will copy arbitrary memory contents to the response.

This looks like a pretty traditional buffer overrun to me. As far as I can tell, nowhere in the code was the input ever extended to match the specified rather than the actual payload length and the fix was to insert a sanity check to abort if there's a mismatch.

[1] http://git.openssl.org/gitweb/?p=openssl.git;a=blob;f=ssl/t1...


The equivalent would be `Vec::with_capacity` which allocates the right size but does not provide safe access to uninitialized memory - in other words, you can only read what you wrote.
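
For example (a sketch, not the actual OpenSSL flow): you can reserve the caller-requested capacity up front, but the length only grows as real bytes are written, so the reserved-but-unwritten region is simply unreachable:

    fn main() {
        let requested = 65_535; // attacker-controlled "payload length"
        let mut response: Vec<u8> = Vec::with_capacity(requested);

        // Only the byte actually received gets written.
        response.extend_from_slice(b"\x01");

        assert!(response.capacity() >= requested);
        assert_eq!(response.len(), 1);

        // &response[..requested] would panic here rather than hand back
        // whatever the allocator left in the reserved space.
    }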


Thank you for clarifying.


Openssl uses their own memory allocator (since malloc is slow on big-endian x86 xenix or something) so they _do_ reuse memory without clearing it in between. Had they used the system malloc, it wouldn't have been vulnerable (on OpenBSD and probably elsewhere). Can rust prevent you from implementing your own (buggy) memory allocator?


No (well, there is still work to be done on custom allocators, but it's planned). But given that Rust uses jemalloc by default, not the system allocator, it is substantially less likely that you will want to replace it with your own. You certainly can't do it without copious amounts of unsafe code (which is really rather unfortunate).


Standard `malloc` doesn't zero memory either. The problem was not caused by their custom allocator. It was exacerbated by it because it allocated everything really close together.


Well it's not really that it allocated everything really close together in memory-space, but since it would try to reuse existing memory from the freelist before asking the OS for more memory, you were more or less certain to get memory openssl had previously used as scratch space to, say, store a private key for temporary operations.


Yes, because we don't support custom allocators yet ;)


You would have to use unsafe code and isolate that as much as you can, behind a safe interface. Memory reuse without clearing would be considered safe, though, as none of the other abstractions actually let you read that memory (doing so would easily be UB anyways in LLVM).


I fail to see the point of this whole discussion.

The code reflects exactly what the program is doing, and there's no undefined behaviour anywhere. There's no way to access anything outside the very delimited scope of "buffer" memory area, like stack variables or any other part of the program.

What's the point of using a high-level language for re-defining basic low-level operations on buffers and recreating everything using those low-level constructs without the proper boundary checks?

Of course, you can simply define a huge "unsafe" block and program everything inside it, but what's the point? That you have a language powerful enough to shoot yourself in the foot?

Compare that to C or C++: the unsafe block is always on. Any code block can have unsafe properties anywhere. Not only that, but you have ZERO guarantees on memory safety and other general operations. Summarizing, high-level and low-level totally mixed and no way to isolate them.

Sorry, but if you can't see how Rust avoids a "Heartbleed" or any other kind of similar issue, you have no understanding of programming or no experience debugging anything.

And yes: security != safety, but please note you are the one mixing both concepts.


Slightly OT: while trying to understand the vulnerability I came across a Rust question.

Why can you do this?

    let mut outfd = File::create(&outpath);
    match outfd.write_all(&buffer[0 .. len]) { ... }
According to `old_io::File`'s doc[0] it returns an `IoResult<File>` which is an alias `type IoResult<T> = Result<T, IoError>` i.e. `Result<File, IoError>`. How come you can do `write_all` directly on a `Result<File, IoError>` without unwrapping the `File` first?

The example in the docs does something similar:

    let mut f = File::create(&Path::new("foo.txt"));
    f.write(b"This is a sample file");
So I guess I'm missing something here.

[0] http://doc.rust-lang.org/std/old_io/fs/struct.File.html#meth...


The explanation is at http://doc.rust-lang.org/std/old_io/#error-handling: IoResult implements a bunch of IO traits so you don't need to unwrap it before using it:

> Common traits are implemented for IoResult, e.g. impl<R: Reader> Reader for IoResult<R>, so that error values do not have to be 'unwrapped' before use.


It's because Writer (the trait write_all() is from) is implemented on IoResult. http://doc.rust-lang.org/std/old_io/trait.Writer.html#tymeth...

    impl<W: Writer> Writer for IoResult<W>
From the `std::old_io` docs: http://doc.rust-lang.org/std/old_io/index.html

    > Common traits are implemented for IoResult, e.g. impl<R: Reader> Reader
    > for IoResult<R>, so that error values do not have to be 'unwrapped' before use.


Ah, remote implementation of traits bites me once again.

Is there any reason behind not listing the implemented traits in IoResult's docs? Listing the implementors in the trait is not very useful since in the first place you have to know which traits are implemented to consult them. It's backwards and counterintuitive as I see it.


In general, we're still working on a number of usability issues with Rustdoc's output. The one that bites the most people is methods through Deref. Luckily, this is just a tooling issue, a small bug to be fixed, rather than some sort of fundamental problem. The beta cycle is going to be all about polish, and issues like this are a good candidate for that kind of work.


Line 21 here implements Writer for IOResult<T> where T also implements Writer: https://github.com/rust-lang/rust/blob/master/src/libstd/old...


My take-away: Low-level code will burn you eventually, and unnecessarily low-level code will burn you unnecessarily


It's not low level, though, that was my original misunderstanding. Heartbleed was not a memory safety issue like I incorrectly assumed. It could happen in, say, C#, or Java. In fact, there's probably existing code with the same bug. It's not uncommon to reuse objects in managed code as a performance hack.


I'd argue that that's a bit low-level, actually. If you're starting to manage your own memory like that then you're still at a higher level than C, but not as high-level as the functional languages, for example.

Honestly, if the end of your collection is beyond the addresses of that collection's valid data then you've just malloc'ed. Is it worth the bugs?

How to malloc in a high-level language (assuming a typecast always succeeds):

1. Make a reference in main (this way its object will never be garbage collected).
2. Make this reference point to an array of objects (hereafter referred to as the "block"), where each object holds n integers.
3. Whenever you wish to save an object to the block, cast it to the class which the block contains, and cast from that class when you want to retrieve one.

Is the above idea good for performance? Possibly. Does it belong in a cryptography library/program? Nope.


And I would add: A programming language that only allows low-level code only allows buggy code.


No true blogger would wilfully misunderstand a buffer overrun vulnerability in order to score some cheap pageviews.

To put it simply, his examples are the equivalent of doing this:

    unsigned char data[4096];
    #define X (*(int *)(&data[0]))
    #define Y (*(int *)(&data[4]))
    ...
Basically, he's explicitly re-using a buffer, no buffer was overrun. In Rust you will not read something out of a buffer you didn't put there first; in C you can, and you might even read several GB out of a 256-byte buffer.


> No true blogger would wilfully misunderstand a buffer overrun vulnerability in order to score some cheap pageviews.

You may want to read up on Ted, and realise that when he writes

> if we don’t actually understand what vulnerabilities like Heartbleed are

he's probably talking about you.

> Basically, he's explicitly re-using a buffer, no buffer was overrun.

Which is essentially what happened in heartbleed. Heartbleed was not a buffer overrun at any point.

Here's the tl;dr: during heartbeat, OpenSSL would malloc both input and output buffers at the caller-specified size (up to 65535 bytes), copy a caller-provided input (1 byte) to the input buffer, then copy the whole input buffer to the output buffer.

Anything besides the overwritten byte would likely be previously written data, since neither malloc nor free zero out their stuff by default[0], essentially leaking 65k of random data every time.

This was compounded by OpenSSL doing its own memory management via freelists, making it even more likely interesting data would be present in the input "garbage" and precluding OS mitigations (such as BSD's malloc.conf framework[1]), not to mention the unmitigated (no freelist) codepath had bitrotted and didn't actually work even if you knew how to enable it[2]. Note that [1] and [2] are by TFAA, and that he's an OpenBSD and LibreSSL core contributor.

[0] http://www.seancassidy.me/diagnosis-of-the-openssl-heartblee...

[1] http://www.tedunangst.com/flak/post/heartbleed-vs-mallocconf

[2] http://www.tedunangst.com/flak/post/analysis-of-openssl-free...


What he's done is actually very different to heartbleed. The heartbleed flaw was made much worse by the custom allocator used, but that wasn't the source of the flaw. The source was the fact that dynamically allocated memory in C is not bounds checked.

That isn't true in Rust, and he had to basically implement a deliberately unsafe memory allocator to show this flaw. His argument that you can't say "no rust programmer would write this code" is flawed. Of course any programmer can write insecure code in any language. The point is that Rust makes it far less likely.

If he had ignored the custom allocator and used the defaults in both languages (e.g. malloc in C, whatever it is in Rust), then you would have seen the difference.


> The source was the fact that dynamically allocated memory in C is not bounds checked.

When you say "bounds-checked", what are you talking about? To me, it means that "x[somenumber]" makes the program abort if somenumber is out of range. However, as you can see, the reads and writes were never out of range. As I see it, the issue is that uninitialized memory is being read. This is not "unsafe", because it will never crash your program. I don't know the exact definition of "undefined behaviour", but since we are using a custom allocator, even if reading from freshly malloc'ed memory is undefined, it may not be in this case.

Rust doesn't let you get uninitialized memory without using "unsafe", so to construct a program with the issue, he had to reuse the buffer. I think it's a lot less likely to happen with Rust, since it is visible to anyone that it is the same buffer.

> If he had ignored the custom allocator and used the defaults in both languages (e.g. malloc in C, whatever it is in Rust)

How do you know the default allocator isn't using memory that was previously used for the private key? And why do you think the default allocator wasn't used in the Rust code?

PS: Taint analysis also has a much better chance of working in Rust, since we are not working with libraries, but standard language constructs. In C, your taint analysis would have to taint every byte straight from malloc.


He addresses this though, with the "no true C programmer would have written this code" argument. Was it a bad idea for OpenSSL to use a custom allocator? Yeah. Did they do it anyway? Yeah. Would it be a bad idea to write a custom allocator in Rust? Yeah. Could you do it anyway? Yup.


From the link that you supply in [0]:

> What if the requester didn't actually supply payload bytes, like she said she did? What if pl really is only one byte? Then the read from memcpy is going to read whatever memory was near the SSLv3 record and within the same process.


In Rust, you cannot ever read uninitialized memory (including freshly allocated memory) without using unsafe code (as can be seen in the original code sample, the Rust buffer, unlike the C buffer, is initially zeroed out). So in safe Rust, what you are describing indeed could not happen. The unsafety would have to be explicit at the caller end: the unsafe within the allocator implementation isn't enough.


> So in safe Rust, what you are describing indeed could not happen.

Read the last paragraph. OpenSSL has a buffer reuse system via a freelist (and the non-freelist code had bitrotted): it didn't release buffers to the system's allocator after use, so buffers kept their old contents across calls instead of being reinitialised.

Otherwise, while Heartbleed would still have existed to a large extent, it would also have been mitigable by e.g. malloc.conf or by shimming in a zeroing malloc.


An allocator like that would require unsafe code to write and could not expose a safe interface unless it couldn't be used to read uninitialized memory. `Vec::with_capacity` would still not let you read the data even if you had a custom allocator.
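A quick illustration (just a sketch): `with_capacity` reserves space but leaves the length at zero, so safe code can't reach the reserved bytes until something has been written into them.

  fn main() {
      let buf: Vec<u8> = Vec::with_capacity(65536); // space reserved, but len() == 0
      assert_eq!(buf.len(), 0);
      // `buf[0]` would panic here: the Vec has capacity but no elements,
      // so safe indexing can't reach the uninitialized bytes behind it.

      // To read 65536 bytes you first have to write 65536 bytes, e.g.:
      let zeroed = vec![0u8; 65536];
      assert_eq!(zeroed.len(), 65536);
  }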


You could write a custom "allocator" that created a pool of zeroed buffers initially, gave out buffers from that pool and accepted buffers back into that pool, and just didn't zero them in between. That would be perfectly "safe" as far as the language was concerned, and would allow you to reproduce the problem.
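Something like this hypothetical pool, say, which is entirely "safe" as far as the language is concerned but hands old contents straight back to the next caller:

  // Hypothetical buffer pool: no `unsafe` anywhere, yet it recycles
  // whatever the previous user left in the buffers.
  struct BufferPool {
      free: Vec<Vec<u8>>,
      buf_size: usize,
  }

  impl BufferPool {
      fn new(buf_size: usize) -> BufferPool {
          BufferPool { free: Vec::new(), buf_size: buf_size }
      }

      // Zeroed only when the pool has to allocate a fresh buffer;
      // recycled buffers come back with their old contents intact.
      fn get(&mut self) -> Vec<u8> {
          self.free.pop().unwrap_or_else(|| vec![0u8; self.buf_size])
      }

      // Takes a buffer back without clearing it.
      fn put(&mut self, buf: Vec<u8>) {
          self.free.push(buf);
      }
  }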


You would have to go out of your way to do so, and handing out buffers that allow reading them without initialization would be a huge warning sign IMO.

If you want performance, you don't have to worry about zeroing anything: just call `.clear()` on a `Vec`.
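A small sketch of what that buys you:

  fn main() {
      let mut buf = vec![0u8; 65536];
      buf[0] = 0x42;   // pretend the buffer held sensitive data
      buf.clear();     // len() drops to 0; the capacity (and old bytes) stay
      assert_eq!(buf.len(), 0);
      assert!(buf.capacity() >= 65536);
      // The stale bytes still sit in the allocation, but safe code can't index
      // them any more: every safe way of growing the Vec again (push, resize,
      // extend_from_slice, ...) writes the new elements before they're readable.
  }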

Nobody is saying you can't reproduce Heartbleed's effect in Rust, you just have to actually design for it, be it maliciously or out of misunderstanding of the language or a library construct.

The real question is how much harder Rust makes that which is often frustratingly trivial to sneak into C code.


> You would have to go out of your way to do so, and handing out buffers that allow reading them without initialization would be a huge warning sign IMO.

So was doing that in the original C code, but no-one noticed.


In C, there's no way to hand out buffers that are guaranteed to be initialized before they're read; you have to go out of your way to initialize them yourself.

In Rust, it's the other way around. You cannot hand out arbitrarily-sized buffers that allow for reading uninitialized memory without going out of your way to do so.

Both languages can allow for writing bad code, but in C it's trivially easy to get the bad code by accident, and in Rust, you pretty much have to do it by design.


There are mallocs that do initialize memory before it's read. jemalloc, used in FreeBSD, doesn't do it by default, but it's easy to set an option in /etc/malloc.conf so it does, and ottomalloc in OpenBSD zeroes malloced memory because it uses mmap much more heavily. So yes, it is more than possible to have pre-initialized buffers in C; it's just that certain OSes use terrible memory allocation algorithms, with no way of even tuning them to be safe by default.


You would have to go out of your way to write your own custom allocator instead of just calling malloc and free, but that's what the OpenSSL folks did.


I'd be interested to see an actual implementation of such an allocator in Rust that exposed a safe interface. You could do it in the specific case of chunks of predefined sizes, and maybe even for all byte arrays, but to allow arbitrary types in the allocator I do not think you could expose a safe interface without requiring initialization.

Again: I'm quite confident you could reproduce this specific vulnerability. You would just have to go out of your way to do it and the benefits of managing a free list yourself aren't really there (jemalloc is quite good at large allocations).
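For the byte-buffer case, a sketch of a pool that keeps the free-list convenience without exposing stale contents might look like this (hypothetical code, not a real crate); recycled buffers come back empty, so callers can only read bytes they wrote themselves:

  struct SafePool {
      free: Vec<Vec<u8>>,
  }

  impl SafePool {
      fn new() -> SafePool {
          SafePool { free: Vec::new() }
      }

      // Reuses the allocation but not the contents: the returned buffer
      // always has len() == 0.
      fn get(&mut self) -> Vec<u8> {
          let mut buf = self.free.pop().unwrap_or_else(Vec::new);
          buf.clear();
          buf
      }

      fn put(&mut self, buf: Vec<u8>) {
          self.free.push(buf);
      }
  }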


You could reproduce this, but it wouldn't be Heartbleed if it weren't pouring out key info.


I believe that if it went through Rust's planned custom allocator system (i.e. worked through `box`), it would still not work, though.

You could certainly port exactly the same system from C, of course (though, as I noted elsewhere, there would be less reason to replace jemalloc than there was to replace the often-slow system allocator), but I think it would be quite difficult to write a safe implementation that worked with Rust's borrow checker. The closest thing to a custom allocator that works in Rust right now is an arena with a free list, which requires you to explicitly initialize any newly allocated elements. As another commenter noted, you would really have to go quite far out of your way to reproduce the issue.


Well, of course this is possible. You can port a bug-compatible version of a program to any other language. That is called Turing completeness (and may involve writing an x64 emulator in VBScript). /snark

A bit more seriously, I wonder which security problems Rust would turn out to have if it were as well studied as C.


So, let's say I'm on drugs and writing a TLS implementation without being a "real Rust programmer". What are the "rules of thumb" I should follow (let's assume I have that much self-control) to not end up with something like this?


The biggest thing is to let the memory allocator do its job. Don't cache buffers, etc., between uses to speed things up; once a buffer is used, throw it in the dumpster and get a new chunk of memory. Your nifty performance hack will succeed in leaking vital information much faster than the stock memory allocator. Beyond that: if your allocator doesn't do it for you, zero out your memory before you use it, and if you really want to get fancy, zero it out when you're done with it.

Also, test on more than one OS/architecture. Your code may work beautifully on your Linux x86 box, but does it still work under OpenBSD? How about running on an ARM board? Good, portable code that doesn't rely on trickery is one of the best ways to ensure that your assumptions won't cause the next security disaster.
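As a rough sketch of that first rule (hypothetical code, nothing from the article): take a fresh, zeroed buffer per request and let the allocator reclaim it afterwards, rather than recycling buffers by hand.

  fn handle_request(payload: &[u8], claimed_len: usize) -> Vec<u8> {
      let mut out = vec![0u8; claimed_len]; // freshly zeroed on every request
      let n = payload.len().min(claimed_len);
      out[..n].copy_from_slice(&payload[..n]);
      // Even if claimed_len is larger than the real payload (the Heartbleed
      // mistake), the extra bytes are zeros rather than someone else's secrets.
      out
  }

Zeroing memory when you're done with it is harder than it looks in any language: a plain loop of stores to memory that's about to be freed can be optimized away, so it usually takes a volatile write or a helper built for the purpose.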


"Code no true C programmer would write", eh? And yet one did, in a high-profile, security-critical library. When you find Rust code like this in the wild, I'll start to believe in some kind of equivalence.


I guess we're going to get into a "No True Scotsman" situation, but given the OpenSSL codebase, I don't think the OpenBSD folks regard them as "true C programmers". Rust is only being used by enthusiasts currently, so I'm sure we will see code like that once it gets into the general population.


Shouldn't that analogy read:

code no true C programmer would write : heartbleed :: code no true rust programmer would write : (exercise for the reader)


Type / memory safety != security. The Rust people also mistake "no segmentation faults" for "no crashes".


  > The Rust people also mistake "no segmentation faults"
  > for "no crashes".

I was a witness to one of the first public demonstrations of Servo to the Mozilla community at large (Mozilla Summit 2013). At one point during the demonstration Servo suffered a runtime panic, and the presenter (a Servo dev) self-deprecatingly apologized for the crash. A Gecko engineer in the audience raised his hand and asked if it was a segfault. The answer was that it was not, to which the Gecko engineer replied, "well then it's not actually a crash". So yes, now we're arguing semantics, but in a systems programming context a segfault is most usually what one means by "crash".


> Type / memory safety != security

Type and memory safety certainly does enhance security, by eliminating classes of vulnerabilities. Security isn't a binary thing.

> The Rust people also mistake "no segmentation faults" for "no crashes".

To a systems programmer the meaning of "no crashes" is pretty clear. A Web page (or your browser) doesn't crash because the JavaScript on the page threw an unhandled exception. Rust panics work like exceptions.



