
> safety benefit

Has "safety" become a buzzword? What we really strive for is correctness. Hypothetically, Rust could still be detrimental to software correctness compared to a more usual language, if some of its traits as a language (e.g., complexity) encourage bugs. Sometimes it seems like Rust aficionados think that buffer overflows etc. are the main kind of bugs.

And if you need absolute correctness (when human life depends on the software) you would probably use Ada and Spark or something else that enables you to actually prove correctness.




> Has "safety" become a buzzword? What we really strive for is correctness.

Correctness is about the program doing what I meant as a programmer. Safety is a different thing. I see it as the insulation on wires: it lets me work with electricity without getting shocked. Rust's safety guarantees are like that: I have a pointer, and I can use that pointer without fear of a segmentation fault or a data race. It feels like safety, and it's named "safety", so I believe it is a good, coherent name.

> if you need absolute correctness (when human life depends on the software) you would probably use Ada and Spark or something else that enables you to actually prove correctness.

Maybe. But between absolutely buggy code and absolutely correct code there is a spectrum of not-so-buggy and not-so-correct code. We need a compromise that lets us create reasonably reliable software with reasonable effort.


> Sometimes it seems like Rust aficionados think that buffer overflows etc. are the main kind of bugs.

Have you looked at the CVE stats of the last decade recently? Memory errors make up around 3/4 of them. Even if Rust made the remaining quarter harder (which in my experience it doesn't), it could still be worth it for many applications where you cannot afford full verification but want to avoid your users being pwned.


They were talking about bugs and correctness generally. CVEs are an extremely biased population of such things, and most types of bugs and incorrectness will never show up in a CVE.

Keeping planes from crashing and bank accounts correct matters too. Rust solves a subset of memory safety problems, but it is not a programming language for high-assurance applications, and in that context it enables many other types of unsafe behavior.


How many programs have you worked on that were high assurance programs?


Core infrastructure software like database kernels and protocol stacks should be implemented at least in part to high assurance standards. I've verified parts of database engines and other critical code many times with good effect, finding subtle bugs we never would have discovered in the wild and with (as expected) no defects discovered later.

Most systems programming languages don't make it simple and many people don't do it but it is definitely valuable and worth doing when the economics make sense.


So you end up stuck with languages like Ada, where the language can prove the correctness of your code (or rather, that your code follows the specification)?


Currently modern C++ plus a ton of specialized tooling that covers much of the ground of Ada, just not built into the language. It is a balancing act to keep development from becoming unwieldy and the 80/20 rule applies. Code that is easy to verify also tends to be slow, and that is not a tradeoff you can make for some applications. No one is running something as complex as a database kernel through an end-to-end theorem prover. Design verification scales well (model checkers and similar), implementation not so much due to practical limits on what you can prove and accumulated complexity/bugs in the specification, and verification of code generation is very limited (I use the LLVM stack). Nonetheless, this gets you to a very low defect rate and it isn't like this code is being written from scratch every time.

Even with a fully verified toolchain there will still be bugs. I once had a customer find a rare bug in a database engine that was ultimately caused by slight differences in behavior between microarchitectures running the same binary.


Seems like I recently read that the Ada folks might want to borrow some concepts from Rust. To me that says both languages are working toward similar goals.


it is definitely valuable and worth doing when the economics make sense.

Tautologically so.


I would bet 99% of that 3/4 (or at least the part written in C) could have been caught with testing, fuzzing, and sanitizers we have today.


The last JetBrains survey shows, once again, how little most devs care about them.

"Which of the following tools do you or your team use for guideline enforcement or other code quality/analysis?"

https://www.jetbrains.com/lp/devecosystem-2019/cpp/


Too bad neither Microsoft, Google nor Apple ever heard of those tools…


Or even Linux, with its strict patch process, which still had 68% of its 2018 kernel CVEs related to memory corruption issues due to its use of C.


Is there a stat on what % of CVEs directly pertain to Microsoft, Google or Apple products?


You could easily calculate such a stat from here. Microsoft, Apple and Google are all in the top 5, with IBM and Oracle being the other two. Adobe used to be on top, but with the death of Flash they have been slipping. I checked the breakdown of memory corruption/overflow bugs, and it's well over 50% of CVEs for MS and Apple. Google is much better, with less than a quarter of their CVEs being memory-related.

https://www.cvedetails.com/top-50-vendors.php

Microsoft 6508/116769 = 5.5%

Apple 4502/116769 = 3.9%

Google 4110/116769 = 3.5%

total (6508 + 4502 + 4110)/116769 = 13%


Microsoft's own research on this area claims it's closer to 70%: https://www.zdnet.com/article/microsoft-70-percent-of-all-se...


You're not talking about the same thing: the GGP asked how many CVEs pertained to Apple, Microsoft or Google products (a question that doesn't make much sense here, but the GP went through the calculation anyway). You are talking about what proportion of those big corps' CVEs are memory-related (which is the right question to ask in this context, but not the one asked…).


There's a lot of value in those tools, but it's better to entirely eliminate the family of bugs 'by construction'.


The claim that Rust’s complexity is a detriment to correctness is an awkward argument.

The complexity you’re probably referring to is the type system and/or borrow checker. The type system is strong enough that, if you take the time, correctness can be encoded into the types. Meaning, you can build types that make it a compilation error if the software is not “correct”. At that point even refactors can be more assured to meet those encoded correctness guarantees. So I’d say that the type system helps develop software that is provably correct.

Then there’s the borrow checker, and that’s about runtime memory access correctness, again increasing correctness.

Point being, Rust programs because of the “complexity” can get much closer to correct than other languages in its space.


Would the full phrase "memory safety/thread safety" have been something more gentle to your ears? Because memory safety has been a metric used to gauge software for as long as I've been alive and then some. If a program I write is correct on its face, but subject to issues caused by asynchronous reads/writes (perhaps completely unexpectedly), then it is not memory safe and will still cause problems. Whether you consider this a separate quality than correctness, or as a subset of correctness, it's still a useful metric to employ.

In my experience, lack of memory safety is one of the largest sources of non-trivial bugs I experience. Sure, if you are writing truly mission critical software for planes or pacemakers, you are going to need stronger (and different) guarantees. But Rust's goal is not provable correctness in the Coq sense. It's to provide a productive interface for designing efficient and memory safe/thread safe programs.


(Memory) safety and correctness are orthogonal issues. If I'm building a missile's aiming system, I absolutely want correctness. But if I run a web service in the cloud, I care more about an attacker taking control of my machines than about other kinds of bugs. Same if I deploy a fleet of connected devices to thousands of consumers' houses.

Computers are all around us, and they are a security nightmare, less because of their lack of correctness than because of their lack of safety. That's why Rust is important.

Formal proofs and tools like Ada are important too in their own domain, and Rust isn't competing with them (yet, at least; I know some people would like to develop something akin to SPARK for Rust, adding best-in-class correctness tools to Rust's toolkit).


No, they're different things. Correctness is nice, but it's much more important to me that my software doesn't let some bad person take over my computer than it is that my software works perfectly all the time. If I can get both at a reasonable cost, I'll take both, of course. But I'll settle for safety.


In something like a TCP/IP stack correctness is strongly related to safety. For example, it could be perfectly memory safe, but deliver packets to the wrong address, allow other programs to read all traffic, or allow easy denial-of-service attacks.


I realise we didn't define "safe" before we started, but I didn't just mean memory safety. Agreed that all of your examples would be incorrect; I'd just term all of them as also unsafe.

An example of something that's incorrect but not unsafe would be, say, an error which would occasionally corrupt random TCP packets causing checksums to fail and the packets to be retransmitted. It's not working how it should, but it's not compromising your system's security or your data's safety (at least I don't think it is).


Just use C or C++ with testing, fuzzing, sanitizers. It will probably be done before an equivalent Rust project compiles.


I think the reason people are excited about improved safety is that in practice, it's really hard to be 100% sure that programs are correct in most circumstances, but safety at least gives you some insurance that if the program isn't correct, the types of failures you can get are more limited.


Modern Rust aficionados repeat the same fallacy spread by Java programmers in the 90s. Just because the language has a type system that is safer than C's doesn't mean they can automatically write "safe" code. Whether a piece of code is safe depends more on its design and on the ability of the implementation team than on the language in which it is implemented. Nowadays we have a big pile of Java code that has given us all the vulnerabilities found on the web in the last 20 years. It probably wouldn't be different if it were written in Rust.


How are web vulnerabilities related to Java?

There used to be a lot of vulnerabilities in the Java plugin of browsers, but that has nothing to do with Java as a language, and it's a fraction of the security bugs that have affected the web over the last decades (and it's also pretty dead).

The majority (by far) of those bugs were in fact memory issues[1], which would have been avoided from the beginning if the browsers (or the plugins, like Flash) had been written in Java (or Rust).

[1]: https://hacks.mozilla.org/2019/02/rewriting-a-browser-compon...


> And if you need absolute correctness (when human life depends on the software) you would probably use Ada and Spark or something else that enables you to actually prove correctness.

That's not even absolutely correct, because provability depends on correct preconditions, and there is a universe of incorrect preconditions that can throw a wrench into your provable system.

Your completely 'proven' system fails, when there's a short in jump resistor J35 on the motherboard, which invalidates the entire underpinning of the software system you assume to have.

For critical systems, you should strive for resiliency and failovers, not provability.


In Spark2014, you can at least prove the absence of runtime errors, and in the near future, memory safety, without having to specify them, since they're part of the semantics of the language. That's not bad for a start.


What exactly are your suggested alternatives to Rust in this sort of niche? C/C++? Both of which have worse type systems, worse tooling, more convoluted edge cases, more hidden complexity, etc.?


For the curious, there is an example Spark2014 TCP/IP stack at https://github.com/AdaCore/spark2014/blob/master/testsuite/g... (see README). Not clear how much of it they proved and at what level... And the README says it's not thread-safe (but then Spark2014 should get Rust-like protections soon).


I probably shouldn’t let it deter me from learning the language but what you’ve said exactly describes the feeling I got when a coworker was telling me about his experience with it.

Apparently there's a way to have code execute at compile time (as an alternative to preprocessor macros, I guess), and there's some API in the compiler that either lets you change the grammar or interact with the AST in some way. This guy was using it to add something to the syntax (I think) to make something easier. It sounded awesome (this guy was always doing some of the neatest stuff for fun), but my immediate thought was "now you can't really be sure other parts of the code aren't doing this, and that severely limits the assumptions you can make about code you're interacting with."

I probably should play with it anyway, since they certainly have some neat ideas, but some of the complexity the language allows sounds really bad for larger projects.



