Maybe, maybe not. What bar do you set for "this product is worth attacking"? Because the product owners and the product users definitely thought that their product was valuable[1].
[1]The products in question were 1) Payment acquirer and processor, and 2) Munitions control software.
a product can be valuable without being worth attacking, like something that only runs on trusted inputs. the bar for software "worth attacking", for me, is that there are people who are paid mainly to find exploits on it, not just incidentally as a byproduct of development.
> a product can be valuable without being worth attacking, like something that only runs on trusted inputs. the bar for software "worth attacking" for me is that there are people who are paid to find exploits on it.
I'm in agreement, but then we get back to the fact that memory safety may not be a compelling argument to use a new language.
After all, most products aren't "worth attacking" until they are large and successful enough, thus reinforcing the decision not to rewrite a product in a new language. This means that memory safety alone is not a compelling argument for switching languages.
why is the "payment processing" software written in c/c++ and not something like java in the first place? i would imagine that not needing to care too much about memory would speed up development. was it before java became popular? i can understand the munitions control software possibly running on a constrained device and not needing very sophisticated memory management (cue the old joke about the "ultimate in garbage collection").
> why is the "payment processing" software written in c/c++ and not something like java in the first place?
Until recently[1], payment terminals were memory-constrained devices. Even right now, a significant portion of payment terminals are constrained (128MB of RAM, slow 32-bit processors).
Even though Java was around in 2006, the payment terminals I worked on then ran on 16-bit NEC processors (8088, basically).
Even if we aren't looking at payment terminals (or any of the intermediaries), there's still a large class of devices for which nothing but C or a reduced subset of C++ is feasible. Having common libraries (zip, TLS, etc.) written in C means that those libraries are usable on all devices from all manufacturers.
[1] The industry-wide trend right now is a move towards Android-based terminals. While this does mean that you can write applications in Java and Kotlin for these terminals, it's still cheaper to port the existing C or C++ codebase to Android and interface with it via JNI, as that reduces costs (of which certification is a significant minority).
Even right now, there is more portability in writing the EMV transaction logic in plain C, because then it can be used from almost anywhere. A team that went ahead and wrote the core payment acquisition logic in Java would find themselves offering a smaller variety of products, and would soon be beaten in the market by the manufacturers who packaged the logic up into a C library (a rough sketch of such a library boundary follows below).
Don't underestimate how price-sensitive embedded consumers are. A savings of a few dollars per product can absolutely lead to a really large leg up over the competition.
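To make the "package it up as a C library" point concrete, here is a minimal sketch of the kind of plain-C boundary being described. All of the names (emv_session, emv_session_process, the arena argument) are hypothetical and not from any real product; the point is only that an opaque handle plus a handful of exported functions links directly on a bare-metal terminal, on a Linux host, or behind a thin JNI shim on an Android terminal.

    /* Hypothetical interface sketch for core transaction logic packaged as a
     * plain-C library; implementations are elided. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct emv_session emv_session;   /* opaque: callers never see the fields */

    /* The caller supplies the storage, so the library never calls malloc,
     * which matters on memory-constrained terminals. */
    emv_session *emv_session_init(void *arena, size_t arena_len);
    void         emv_session_deinit(emv_session *s);

    /* Feed the card's raw response bytes in, get the next command to send out.
     * Returns 0 on success, a negative error code otherwise. */
    int emv_session_process(emv_session *s,
                            const uint8_t *card_response, size_t response_len,
                            uint8_t *next_command, size_t *command_len);

A Java or Kotlin app on an Android terminal would call these through a small JNI wrapper, while older 16-bit or 32-bit terminals link the same object files directly.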
idk, many large companies report that about 70% of their security issues are due to memory unsafety. reducing your security bugs by a factor of 3 sounds pretty compelling to me...
Yeah, it's weird. I really don't know why Rust isn't more popular.
Edit: I just read another comment you made about somebody's program not being worth attacking. I think you are on to something with that argument. Most software isn't worth attacking.
Seriously. You get these tests for free with zig, no extra tooling, if you use the testing allocator in your tests. This is also a carrot to get you to write tests. Hell, I even mocked libc's malloc/calloc/free to make sure that my code doesn't leak memory:
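(The snippet that originally followed this comment isn't reproduced here.) As a rough sketch of the same idea, written in C rather than Zig: route allocations through counting wrappers during tests so that a forgotten free fails an assertion. The names below (test_malloc, live_allocations, duplicate) are made up for illustration.

    /* Sketch: mock malloc/calloc/free with counting wrappers so a test can
     * assert that every allocation was freed. */
    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    static int live_allocations = 0;

    static void *test_malloc(size_t n)            { live_allocations++; return malloc(n); }
    static void *test_calloc(size_t n, size_t sz) { live_allocations++; return calloc(n, sz); }
    static void  test_free(void *p)               { if (p) live_allocations--; free(p); }

    /* Code under test picks up the wrappers instead of libc. */
    #define malloc test_malloc
    #define calloc test_calloc
    #define free   test_free

    static char *duplicate(const char *s) {
        char *copy = malloc(strlen(s) + 1);   /* expands to test_malloc(...) */
        if (copy) strcpy(copy, s);
        return copy;
    }

    #undef malloc
    #undef calloc
    #undef free

    int main(void) {
        char *s = duplicate("hello");
        test_free(s);                       /* forget this and the assert fires */
        assert(live_allocations == 0);      /* the leak check */
        return 0;
    }

Zig's std.testing.allocator does the equivalent bookkeeping automatically: a leaked allocation fails the test and reports where it was made.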
Rust does not free you from having to track memory for large classes of important data structures. And if you want to implement smart pointers for your app in zig, it is easy.
Yes, your examples are two proprietary, nonstandard techniques. Or, alternatively, using a third-class toolset (e.g. the jemalloc toolkit).
Sanity is having a single, anointed way to do it in the stdlib.
There's an effect similar to Amdahl's law with these. Only after programmers come to their senses and take care of that 70% do we get to the remaining part of the pie and its subdivisions, and can start thinking about the actually interesting parts of engineering secure systems.
The overwhelming prevalence and impact of memory safety bugs have retarded the development of solutions for the remaining, nontrivial problems for many decades by consuming so much of the attention.
these companies apply things like sandboxing and fuzzing to reduce the incidence of memory unsafety bugs, and yet they're finding that a majority of their security bugs are still memory unsafety. if you can't find memory unsafety in your c++ code, it's because your code isn't worth attacking.
It doesn't do enough. It's so low level that you have to run another OS on top of it. So all it does is provide a virtual machine. Typically people load Linux on top, which means you have all the security holes of Linux. You just get to run a few copies of Linux, possibly at different security levels.
I would have liked to see a secure QNX as a mainstream OS. The microkernel is about 60 KB, and it offers a POSIX API. All drivers, file systems, networking, etc. are in user space. You pay about 10%-20% overhead for message passing. You get some of that back because you have good message passing available, instead of using HTTP for interprocess communication.
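For a sense of how lightweight that message passing is, here is a minimal sketch using QNX Neutrino's ChannelCreate/ConnectAttach/MsgSend/MsgReceive/MsgReply calls. For brevity both ends are threads in one process; in a real QNX system the server would be a separate user-space process (a driver, filesystem, or network stack), and error handling and name-based service discovery are omitted.

    /* Sketch of QNX-style synchronous message passing (the client stays
     * send-blocked until the server replies). */
    #include <sys/neutrino.h>
    #include <pthread.h>
    #include <stdio.h>

    static int chid;

    static void *server(void *arg) {
        (void)arg;
        char msg[64];
        int rcvid = MsgReceive(chid, msg, sizeof msg, NULL);  /* block for a request */
        printf("server received: %s\n", msg);
        MsgReply(rcvid, 0, "ack", 4);                         /* unblocks the client */
        return NULL;
    }

    int main(void) {
        chid = ChannelCreate(0);                 /* server-side endpoint */

        pthread_t tid;
        pthread_create(&tid, NULL, server, NULL);

        /* nd 0 = local node, pid 0 = this process; normally this names another process */
        int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);
        char reply[16];
        MsgSend(coid, "hello", 6, reply, sizeof reply);       /* blocks until MsgReply */
        printf("client got: %s\n", reply);

        pthread_join(tid, NULL);
        return 0;
    }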
i was responding to the claim "It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++". of course, the feasibility part is questionable.
It was written by top experts in the field over multiple years and is formally verified. It could have been written in Brainfuck as well, since at that point the language is not important.