I know "rewrite it in Rust" has essentially become a meme at this point, but really there aren't a lot of great reasons to write new software in languages that aren't memory safe. The only compelling reasons I can think of off the top of my head are interop with existing software and that C and C++ developers would require a bit of training to move to safer languages. There may also be a few problem domains that demand the maximum performance and lowest resource utilization possible, but your typical software would have more than acceptable performance in a memory safe language.
> The only compelling reasons I can think of off the top of my head are interop with existing software and that C and C++ developers would require a bit of training to move to safer languages.
Decent C and C++ developers shouldn't take that long to grasp the borrow checker. Those that can't shouldn't be trusted to write C or C++ in the first place, because they don't have a good enough mental model of memory management to deal with it manually.
Interop with C is fairly straightforward. Python too, via PyO3. C++ has some support.
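For instance, calling an existing C function from Rust is just a declaration plus a small unsafe block (a minimal sketch using libc's strlen; in real code you'd usually generate bindings with a tool like bindgen):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Declare the existing C function we want to call (strlen from libc,
// which Rust links against by default on most platforms).
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    let s = CString::new("hello from Rust").unwrap();
    // SAFETY: s is a valid, NUL-terminated C string for the duration of the call.
    let len = unsafe { strlen(s.as_ptr()) };
    println!("C strlen says: {len}");
}
```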
> There may also be a few problem domains that demand the maximum performance and lowest resource utilization possible
You can approach that with Rust as well. Often you can even implement performance optimizations better and more safely than in C++, because the borrow checker helps you stay safe. CoW in particular is a godsend when processing data.
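To make the CoW point concrete, here's a minimal sketch using std::borrow::Cow (the function and names are illustrative): it only allocates when it actually has to modify its input.

```rust
use std::borrow::Cow;

// Replace tabs with spaces: borrow when the input is already clean,
// allocate a new String only when a change is actually needed.
fn normalize(input: &str) -> Cow<'_, str> {
    if input.contains('\t') {
        Cow::Owned(input.replace('\t', " ")) // allocates only on this path
    } else {
        Cow::Borrowed(input) // zero-copy fast path
    }
}

fn main() {
    println!("{}", normalize("already clean")); // no allocation
    println!("{}", normalize("has\ttabs"));     // allocates once
}
```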
> Decent C and C++ developers shouldn't take that long to grasp the borrow checker. Those that can't shouldn't be trusted to write C or C++ in the first place, because they don't have a good enough mental model of memory management to deal with it manually.
In fairness to C and C++ developers: while often the problem is not understanding memory management, sometimes the problem is understanding it and having a model that doesn't mesh with borrow checking, or having a problem that needs some additional work to map into something easy to write in safe code. For instance, it takes some additional knowledge in Rust to know that if you want to build a graph, you 1) should almost always use an established graph library and not write your own, and 2) if you do need to write your own graph, you either need to use Rc, have an array of nodes and use indices, or use unsafe and raw pointers and provide a nice safe well-tested wrapper.
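The index-based approach, as a minimal sketch (types and names here are illustrative):

```rust
// "Array of nodes + indices": edges are plain usize indices into the arena,
// so cyclic graphs don't fight the borrow checker at all.
struct Graph {
    nodes: Vec<Node>,
}

struct Node {
    value: &'static str,
    edges: Vec<usize>, // indices into Graph::nodes; cycles are fine
}

impl Graph {
    fn add_node(&mut self, value: &'static str) -> usize {
        self.nodes.push(Node { value, edges: Vec::new() });
        self.nodes.len() - 1
    }

    fn add_edge(&mut self, from: usize, to: usize) {
        self.nodes[from].edges.push(to);
    }
}

fn main() {
    let mut g = Graph { nodes: Vec::new() };
    let a = g.add_node("a");
    let b = g.add_node("b");
    g.add_edge(a, b);
    g.add_edge(b, a); // a cycle, with no Rc and no unsafe
    println!("{} -> {}", g.nodes[a].value, g.nodes[g.nodes[a].edges[0]].value);
}
```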
(I'm one of the developers of Rust, and I think it's important to characterize languages fairly.)
I don't disagree with either of your points, but when I learned Rust, those points about the borrow checker vs. cycles in a graph were entirely clear after looking at the first few tutorial lessons on the borrow checker. Maybe day 2 of learning Rust for me.
And coming from C++, Rc is not such a foreign concept compared to smart pointers.
Coming from C it might be a bit more of a foreign concept I'll admit.
> Productivity
> Readability
> Performance
> Static binaries
> Not needing a PhD in Rust, especially async Rust
Rust works fine as a high-level language, but it's absolutely awful for anything else and suffers from C++-style feature creep. It would be a much better language if Rust weren't afraid of making `unsafe` a useful tool rather than a thing that "everyone except the core devs should absolutely avoid". Ergonomics would improve tenfold if you could use it to control the borrow checker better, instead of having to build mega-abstractions (which often aren't even zero-cost) or jump through hoops for every single thing.
I feel like people on an esteemed forum like Hacker News should be able to move past this kind of beginner-level argument.
The point of memory safe languages is not to not have unsafe code. It is to centralize unsafe code in a few places that can be thoroughly audited, tested, and verified, and then provide safe abstractions on top of that.
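Concretely, the classic illustration (from the Rust book; std already ships this as slice::split_at_mut) is a few audited unsafe lines behind a safe signature:

```rust
// Safe abstraction over an unsafe core: callers get a sound API, and the
// entire safety argument lives in this one small, auditable function.
fn split_at_mut<T>(slice: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
    let len = slice.len();
    assert!(mid <= len); // the safety invariant is checked right here
    let ptr = slice.as_mut_ptr();
    // SAFETY: the two halves are disjoint and both lie within the original
    // allocation, because mid <= len was asserted above.
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut xs = [1, 2, 3, 4];
    let (a, b) = split_at_mut(&mut xs, 2);
    a[0] = 10;
    b[0] = 30;
    println!("{xs:?}"); // callers never write unsafe themselves
}
```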
The point is not about Hacker News. The point is: should we choose to regulate this, what level of sophistication do you think regulators will operate at?
Inb4 "this is only a recommendation": regulation often starts as recommendation trial balloons.
No one is saying that you can't use unsafe languages or features when you need to. This press release is announcing the release of a report from the White House Office of the National Cyber Director (ONCD) and it is simply a recommendation, not law or regulatory guidance.
Other fields are restricted from using unsafe materials and practices, and are required to demonstrate safety. I think IT still has its head in the space of an immature industry of hackers.
IMHO, at least for many things, such as IT systems controlling fundamental public functions (e.g., DNS), services (e.g., the electrical grid), dangerous machines, etc., our engineering should be held to the same standards as bridges, airplanes, etc.
Look at the endless problems and costs caused by our crappy quality: National security harms and risk, privacy violations, endless fraud, systems that are almost impossible to adequately secure.
I am reminded of physicians disliking the idea of checklists[1] and revolting against hand washing[2].
[1]: https://www.flightsafetyaustralia.com/2018/11/one-thing-at-a... (interestingly, I would argue that the Boeing Model 299 being able to accelerate enough to lift off while the gust locks were engaged was a design failure, and points towards the need for safe-by-design mechanisms, even for experienced operators.)
I often see people likening software development to woodworking, which is funny because I think our industry is still at the "you're not a real carpenter unless you're missing three fingers" stage.
This is Hacker News, after all. Hackers built this stuff, and they laugh at the bureaucrats who try to make rules to control it.
> our engineering should be held the same standards as bridges, airplanes, etc
In principle, I'd agree. BUT that's only going to happen when the practitioners have the ability to say "no" to bad designs and short deadlines. Which requires some kind of professional engineering certification, and requirements that only certified software engineers be allowed to do certain things, and certified software engineers that actually have the guts to say no.
Lacking that, we'd just have to regulate the designs themselves - and, as soon as government regulators get their hands on any software design, I guarantee you innovation of those designs will freeze. (Same reason innovation in private aircraft has frozen - the Cessna, the most popular private aircraft, is an eighty-year-old design - because innovation is virtually impossible, because regulations, because safety.)
> Look at the endless problems and costs caused by our crappy quality: National security harms and risk, privacy violations, endless fraud, systems that are almost impossible to adequately secure.
Yes, but look at the incredible pace of innovation which the lack of regulation has allowed: web browsers, operating systems, word processors, scripting languages, world-wide communication and freedom from oppression. You can't stop the signal.
> Hackers built this stuff, and they laugh at the bureaucrats who try to make rules to control it.
Even the IT industry is much more mature than that. That and the rest of the comment are an old fantasy that reality left behind long ago. Look at the reality of these things:
> Yes, but look at the incredible pace of innovation which the lack of regulation has allowed: web browsers, operating systems, word processors, scripting languages, world-wide communication and freedom from oppression. You can't stop the signal.
An inverse relationship between pace of innovation and regulation is an unsupported claim by people who simply want to do whatever they like and/or want unrestricted personal
Of course some regulation could impede innovation, while some could greatly improve it. Imagine innovation in privacy, for example, because regulations require it, rather than ignoring privacy because it's almost entirely unregulated (in the US).
> innovation which the lack of regulation has allowed: web browsers, operating systems, word processors, scripting languages, world-wide communication
All those innovations go back to the 1990s.
> freedom from oppression. You can't stop the signal.
In fact, oppression has increased and democracy and freedom have retreated for the last 20 years. Many SV leaders are against or ambivalent about democracy, and push anti-democratic narratives (which favor giving more power to them, of course).
End-user control, including freedom-as-in-speech, in IT is almost a forgotten issue. And once the oppressors got the hang of it, they now very ably use the Internet and other technology to conduct effective information warfare directly against the public.
> Even the IT industry is much more mature than that
The IT industry is like the average Googler - competent enough to use the tools designed for them. They aren't who I'm talking about.
> an unsupported claim by people who simply want to do whatever they like
The claim that regulation doesn't hamper innovation is an unsupported assertion of control freaks. And regulation that directs innovation only does so because it makes alternative innovations harder - not because it makes certain innovations easier.
> All those innovations go back to the 1990s.
Don't take such things for granted. Their age makes them no less relevant; regulation would have killed them just as easily as it could kill ChatGPT. I picked old tech as examples because their benefit to the world - and the lax regulatory environment they were created in - are both incontrovertible. That makes them no less examples of innovation.
> oppression has increased and democracy and freedom have retreated
One of the best weapons for implementing these kinds of changes is an ever-increasing regulatory burden. Small wonder that's increased alongside the harms you describe.
> misinformation
Everyone does it, everyone gets fooled by it, everyone thinks they're special. More signal, more information - not less - is the only hope for a solution there.
But that's not what I meant. I meant that I have gcc on my laptop, and I can compile code there that runs on linux, and no one can stop me from writing code and sharing it. But they can add ever greater regulations to the infrastructure involved until the friction kills the community.
> I meant that I have gcc on my laptop, and I can compile code there that runs on linux, and no one can stop me from writing code and sharing it. But they can add ever greater regulations to the infrastructure involved until the friction kills the community.
Much of that gcc-on-Linux compiled code is absolute crap (generally, not yours), as we can see by the outcomes. We need much higher standards for anything engineered for the public.
>> oppression has increased and democracy and freedom have retreated
> One of the best weapons to implement these kinds of changes are an ever-increasing regulatory burden.
Could you give an example? China, Russia, Hungary, Turkey, India, etc. are not known for their regulatory burdens. The EU has plenty of regulation and is arguably the leading region for democracy.
> The claim that regulation doesn't hamper innovation is an unsupported assertion of control freaks. And regulation that directs innovation only does so because it makes alternative innovations harder - not because it makes certain innovations easier.
I didn't say regulation doesn't hamper innovation, I said it depends on the regulation. Some regulations make innovation easier; for example, standardized Internet protocols fostered incredible innovation, instead of everyone reinventing incompatible wheels.
> is absolute crap ... as we can see by the outcomes
Which outcomes? Safety or productivity? Because all kinds of useful stuff is built in an unsafe manner.
> We need much higher standards for anything engineered for the public.
I think you're imagining a world where the same software exists, but safer. What you would actually get is far less useful software.
> Could you give an example? China, Russia, Hungary, Turkey, India, etc. are not known for their regulatory burdens.
But the US and UK are, increasingly so. Voter registration makes it harder to vote. Backdoor requirements in messaging apps (like what the UK tried to pass recently) can be used to target journalists and limit our freedom of speech. Misinformation regulations can be misused for censorship (which facts are wrong/inconvenient? let the government decide). And recently on HN - regulations on AI safety are being pushed by the incumbent players (e.g. OpenAI) to keep out new ones.
> The EU has plenty of regulation and is arguably the leading region for democracy.
And is far, far behind on innovation. Mistral and DeepMind are rare counterexamples on the long tail of the bell curve.
> Some regulations make innovation easier; for example, standardized Internet protocols
Do you mean the IETF? Because that's not a government nor regulatory body. It's a great example of the actual engineers involved, not bureaucrats, working together outside of regulatory bodies to make stuff work.
> I didn't say regulation doesn't hamper innovation, I said it depends on the regulation.
>> An inverse relationship between pace of innovation and regulation is an unsupported claim by people who simply want to do whatever they like and/or want unrestricted personal
There are domains where things like Ada SPARK are (or were) the standard. I’m not sure that’s really necessary for everything, but for those other things it wouldn’t hurt to use a memory-managed language. But that’s already mostly the norm there, too.
I think the real subject being discussed here is the viability of Rust for applications which are traditionally a good fit for C and C++. I think it would be better to be specific about that.
Memory safety is not the only safety. Dangerous machines are more likely to be fucked by nondeterministic timing or by OOMing/overflowing than by memory unsafety. Notably, Rust protects you from neither, and in many ways obscures both and puts you at risk from them: the first via macros, the second via hidden allocations.
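To make the hidden-allocations point concrete, a sketch (illustrative, not exhaustive):

```rust
fn main() {
    let mut log = Vec::new();
    for i in 0..1000 {
        // Looks like a constant-time append, but push reallocates and copies
        // the whole buffer whenever capacity runs out: an unbounded-latency
        // heap allocation hiding behind an innocuous method call.
        log.push(i);
    }
    println!("{}", log.len());
}
```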
How about the sheer cost of the rewrites? Probably hundreds of billions of dollars worth of rewriting and retesting, if not more. Plus the costs of retraining.
The memory safety argument for Rust is very weak, IMO. Memory errors are only one kind of error, and users of other languages have invented many brilliant ways to avoid and fix issues. It would be better to focus on detecting and preventing errors in general than on rewriting, IMO.
These guidelines do not speak about re-writes, they talk about new code, techniques to mitigate memory unsafety in existing code, and preferring to use memory safe tools where possible, for example, when you are purchasing software.
It's also unrealistic to expect existing software to go away, so "memory unsafe" languages aren't going anywhere. People who don't want to learn how to manage memory correctly have had good alternatives to C and C++ for decades, each with their own drawbacks. And C and C++ remain viable, with active developer communities, despite decades of attacks crying that memory management is too difficult.
The crux of the problem is not that memory management is difficult, it’s that with unsafe languages it is dangerously error prone. In the 70s and 80s most machines were considerably more resource constrained and either on closed networks or not networked at all. In today’s world where everything is connected to 24/7 high speed internet these are risks that just aren’t worth taking for most new software projects.
You say it isn't difficult but it is "error-prone", which is a contradiction. Either it is hard to get right or it isn't. I think it might be but not to the extent you do. Whatever issues we have with memory management, solutions exist and have existed for decades. People who continue to use so-called "unsafe" languages tend to know what they're doing. There are lots of solutions out there besides rewriting in another language that isn't that much less error-prone, such as Rust.
I think what the parent means to say is that memory management is simple to understand, but it’s hard to get right.
Put differently, a developer can grok how to properly manage memory, but we’ve found that trying really hard to get it right does not scale. Even if you’re an expert. We need tools to check our work.
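For instance, here's the kind of mistake that's trivial to make by hand at scale and that a checker catches mechanically (a sketch; uncommenting the marked line makes the compiler reject the program):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];
    // v.push(4); // rejected at compile time: cannot borrow `v` as mutable
    //            // while `first` is still borrowed. The push could reallocate
    //            // and leave `first` dangling - a classic use-after-free in C/C++.
    println!("{first}");
}
```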
Within another couple years, automated systems that are descendants of today's rapidly-advancing coding-assistant LLMs will be able to do amazing things.
For example, it's plausible they'd be able to create proven-memory-safe rewrites of older software, with verified drop-in compatibility for all enumerable use cases, practically for free.
Well, unless it's a parasitic government contractor doing the work.
>Within another couple years, automated systems that are descendants of today's rapidly-advancing coding-assistant LLMs will be able to do amazing things.
Yeah I'll believe it when I see it. If humans struggle to figure out what these systems are supposed to do, no AI is going to figure it out.
Hundreds of billions of dollars is a lowball estimate on what it would take to rewrite it all in Rust.
Meanwhile, most people haven't even heard of Capability Based Security.[1] 8(
It's sad to see so much effort wasted, while ignoring the root cause, ambient authority.
Given the sad state of operating systems these days, you're forced to absolutely trust any code you run, which is nuts.
Anything you run can open a port to a server somewhere and exfiltrate all your data, encrypt it, or whatever evil that the programmer wants, and there's no practical way to stop it.
---
Imagine if the power grid were built without any fuses or circuit breakers... that's what our internet and computers are like at present, in terms of security.
In that world, it would be like the White House demanding purer copper in the wiring.
Capability systems are a great thing, and they solve another class of security problems. Use memory safe languages, and move towards capability systems.
Capability security doesn't prevent data leakage and lateral movement, especially in today's age of "identity based security". A compromised application would most likely still be running with a privileged identity, giving it access to databases etc.
There is no such thing as "privileged identity" in a capabilities based operating system. When you open a document, the application ONLY gets a handle to that document, and nothing else. It certainly can't just then open a pipe to a server somewhere and dump all your documents to it, unlike any random program these days.
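A rough analogy in ordinary code (just a sketch of the discipline; a real capability OS enforces this at the kernel level rather than by API convention): a function receives a handle to exactly the document it may touch, never the ambient authority to open whatever it likes.

```rust
use std::fs::File;
use std::io::{self, Read};

// Capability style: this function can only read the one handle it was given.
fn word_count(doc: &mut File) -> io::Result<usize> {
    let mut text = String::new();
    doc.read_to_string(&mut text)?;
    Ok(text.split_whitespace().count())
}

// Ambient-authority style: a path parameter means the function can open
// anything the whole process can reach. The opposite of least privilege.
#[allow(dead_code)]
fn word_count_ambient(path: &str) -> io::Result<usize> {
    word_count(&mut File::open(path)?)
}

fn main() -> io::Result<()> {
    // The user (or a power-box dialog, in a capability OS) picks the file;
    // the rest of the program only ever sees this one handle.
    let mut doc = File::open("Cargo.toml")?;
    println!("{} words", word_count(&mut doc)?);
    Ok(())
}
```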
From Wikipedia:
> In information security, a confused deputy is a computer program that is tricked by another program (with fewer privileges or less rights) into misusing its authority on the system. It is a specific type of privilege escalation. The confused deputy problem is often cited as an example of why capability-based security is important.
> Capability systems protect against the confused deputy problem, whereas access-control-list-based systems do not.
I guess you missed in my parent comment that this type of security is, in practice, extraordinarily difficult to implement correctly, and I've seen confused-deputy privilege escalation issues in pretty much every system I've ever come across. Do you have any examples of someone doing it correctly?
Otherwise you're just sort of no-true-scotsman'ing this and I have no interest in engaging that.
Every system you've worked on is likely an ACL based system, which is why the confused deputy problem was in play.
I don't have any experience with capability based OSs, because you can't get one as a daily driver... yet.
I do have plenty of real world experience with capabilities, though... cash in a wallet is an economic capability. Circuit breakers are an electrical version of capabilities.
Virtual Machines are a low resolution version of capabilities.
The whole point is to limit side effects before they happen.
---
Back in the 1980s, before we had hard drives, a two-floppy DOS machine was an excellent capability-based system... you could only corrupt the data on non-write-protected diskettes in the system. Thanks to this fact, and the ease of backing up boot disks, we routinely bought/shared/copied dozens, sometimes hundreds, of diskettes full of programs from everywhere, just to see if we liked them. In perfect safety.
People don't know what they're missing these days. Eventually, I'll have Genode's Sculpt OS[1] as my daily driver, and I'll be able to just run things, once again, without worry. No more virus scanners, worry about clicking on the wrong attachment, or a bad PDF, etc.
I don't have an opinion on this matter as I don't think I'm qualified (though I am interested in capability-based systems), but your comment is unclear to me: have you had significant experience in a capability-based system?
"Read not print" documents are an aberration before the eyes of The Lord. If you want to hand a process only the document, and not a handle to the printer, that's fine... but that's your choice, not that of the the operating system, and certainly NOT the creator of the document.
DRM is un-Holy, and definitely NOT part of Capability Based Security.
There are some really weird beliefs in this world thanks to ambient authority and DRM.
> When you open a document, the application ONLY gets a handle to that document, and nothing else. It certainly can't just then open a pipe to a server somewhere and dump all your documents to it, unlike any random program these days.
My comment was in the context of a capability-based secure OS. Once you have read access to a document, you can copy it and do almost anything else you want with it; it's your choice, and the OS will enforce that choice.
DRM and "View but don't allow print" are both completely incompatible with Capabilities based OSs (but you could run a Windows VM on it, and that could have "DRM" in its internal state)
It really is completely absurd that this hasn't been solved yet. Almost makes you think it is on purpose.
Why the hell doesn't Windows implement permissions like iOS already? They seemed to be moving in the right direction last year, with the announcement of Win32 App Isolation, and then nothing happened.
I mean, when it was introduced in macOS Catalina, literally everyone was griping about the popups and menus and how "locked down" everything is. Microsoft probably doesn't want another Vista situation, and who can blame them lol
What I'm most curious about is how this will affect new program language development, and programming languages currently in development. Like if Zig wants to see mainstream adoption, will it have to implement the Borrow Checker, or something similar, if people want to use Zig in government contexts? Trying to create a new programming language that people will invest in is hard enough, and if it isn't memory safe, it will be even less attractive to people.
On top of that, trying to start a software business is tough, and if you'll be shunned because your stack isn't memory safe, why put yourself at a disadvantage?
In 1975, the DoD commissioned Ada to be designed and built. Although the use of Ada wasn't mandated until 1991, between 1983 and 1996 the number of high level programming languages in use at the DoD fell from 450 to 37. For a few years in the mid-80's, Ada was the most popular programming language, even temporarily surpassing C and C++. The mandate was removed in 1997.
Ada was required for NATO systems and was mandated as the preferred language for defense-related software in Sweden, Germany, and Canada.
Today it is used in projects where a bug can have severe consequences such as avionics, air-traffic control, and commercial rockets such as the Ariane 4 and 5, satellites and other space systems, railway transport, and banking.
This doesn’t go into any detail, not even saying what counts as a “memory safe language.” Are there any practical implications? Will the government change anything it does?
This document also does not provide a concrete definition of "memory safe language," instead only providing examples of such, but there are two things that are interesting about it: first of all, it provides C and C++ as examples of "memory unsafe languages," explicitly.
The second is more towards your "practical implications" question. It says this:
> With this guidance, the authoring agencies urge senior executives at every software manufacturer to reduce customer risk by prioritizing design and development practices that implement MSLs. Additionally, the agencies urge software manufacturers to create and publish memory safe roadmaps that detail how they will eliminate memory safety vulnerabilities in their products. By publishing memory safe roadmaps, manufacturers will signal to customers that they are taking ownership of security outcomes, embracing radical transparency, and taking a top-down approach to developing secure products—key Secure by Design tenets.
The National Defense Authorization Act for Fiscal Year 2024 had language inside of it that said:
> SEC. 1713. POLICY AND GUIDANCE ON MEMORY-SAFE SOFTWARE PROGRAMMING.
>
> (a) POLICY AND GUIDANCE.—Not later than 270 days after the date of the enactment of this Act, the Secretary of Defense shall develop a Department of Defense wide policy and guidance in the form of a directive memorandum to implement the recommendations of the National Security Agency contained in the Software Memory Safety Cybersecurity Information Sheet published by the Agency in November, 2022, regarding memory-safe software programming languages and testing to identify memory-related vulnerabilities in software developed, acquired by, and used by the Department of Defense.
This is referring to the above. However, the final bill text seems to be missing this, and I haven't tracked down yet how that happened.
The sentiment seems to feel like this is something akin to a softer version of the Ada Mandate: that being implemented in a memory safe language is a competitive advantage if you want to sell to the DoD, because using memory unsafe languages will require documentation explaining how you're mitigating the issues they have. Time will tell if that actually comes to pass.
Yeah, I think they're trying to say "Stop writing in C and C++, blockheads!" but in a more diplomatic tone. Most of the common languages today are memory-safe. Really, which ones aren't? C and C++ are the big ones. I guess throw Pascal in there if you don't use pointers correctly. Assembly lets you make indirect accesses anywhere in your address space. Perl lets you leak memory if you create circular structures and then lose references to them. But Java, JavaScript, Go, Python, Ruby, etc. don't let you trample all over memory the way C and C++ do. You can corrupt memory if you really want to, e.g. in Python by using ctypes to cast integers to pointers, but it takes a lot of effort.
These documents tend to be vague because details could be wrong. They're used by the next layer of bureaucracy to justify programs, and by the next layer to justify plans, and by the next layer to justify designs, and the next layer to justify implementations.
I know you're joking, but the government had asked for comments on some of this work previously, and part of the C++ committee did in fact respond.
I read all 200+ responses and was planning on writing up a post about them, because it is interesting, but I've had other things going on this month, and they actually shipped this before I managed to do so. Oh well, maybe I'll still end up doing it.
Anyway, there's a lot of junk in there, but out of major organizations, the vast majority were pro, a few were ambivalent, and the committee's response stuck out as one of the only strongly anti responses. (It was also weird for other reasons that turned out to be a benign misunderstanding.)
> was planning on writing up a post about them, because it is interesting
It is very interesting, and I hope that you still do a write up.
> It was also weird for other reasons that turned out to be a benign misunderstanding.
I would love to know more about this part, maybe just to understand a bit more about how these bureaucratic things work, which while less technical are also interesting.
The short of it was that it was weirdly evasive around who the authors were, and that ultimately stemmed from a misunderstanding of the form, not from some attempt to hide who they were.