Somehow I'm doubtful that, using '60s-era technology, many non-trivial data structures and algorithms could be proven safe in a mechanized way. As for trusting a programmer's word, it's probably better than trusting a non-programmer's, but enabling unsafe code is still fundamentally a social hack. While you can work around technical problems by social means, this will never constitute an ultimately satisfactory solution.
I am not saying they were foolproof; I am saying that systems programming languages from the '60s and '70s were much safer than C.
Many of the memory corruption issues seen with C aren't a problem under the stronger type systems of the Algol and PL/* families of languages.
However, OSes written in those languages were commercial products, whereas C came bundled with UNIX, which was distributed essentially for free because AT&T was forbidden from charging for it.
If it weren't for that, most likely Rust wouldn't be needed, at least not in its current form.
> Many of the memory corruption issues seen with C aren't a problem under the stronger type systems of the Algol and PL/* families of languages.
This is probably just my lack of imagination, but how would you get the same degree of control C offers over the memory layout of data structures (among other things), without giving up on safety, using only '60s and '70s technology? Even (safe) Rust doesn't help in some cases, say, if you want a single dynamically allocated memory block containing an array of “m” integers followed by an array of “n” floating-point numbers, where “m” and “n” are determined at runtime. Or are you just going to say this is never desirable?
> If it weren't for that, most likely Rust wouldn't be needed, at least not in its current form.
Because we would still be using tagged architectures, and every machine instruction would incur the cost of checking these tags? Or because non-determinism is never too big a price to pay for safe memory management? (Never mind the fact that programs also need to manipulate other resources that aren't as plentiful as memory, and thus require eager reclamation.) No, thanks.
Some of the languages without GC that come to mind: Algol and its variants, NEWP, Mesa, Modula-2, Ada.
With GC: Lisp, Smalltalk, Modula-2+, Modula-3, Eiffel, Oberon, Oberon-2, Active Oberon, Component Pascal, ...
Keeping the focus on the ones without GC: all of them were used to build full-stack mainframes, so of course they supported dynamic structures. It's just that memory was measured in KB, not GB.
And, yes, they did not prevent the programmer from doing a double free or calling free on memory that wasn't yet allocated.
But they prevented:
- implicit conversions between pointers and arrays
- using bad pointers for output parameters in function calls
- bad implicit conversions between numeric types
- implicit conversions between enumerated values and integers
- using non-existent enumerated values
- strings with a missing terminator
- using the assignment operator when a comparison was intended
- doing pointer manipulation all over the place instead of only where it is really needed
For your m * n example, that is something you can easily do in Ada, even if a bit verbose.
Hence Rust would be less needed: we would be in a computing world where the majority of the exploits C has brought into mainstream computing would be reduced to a very tiny subset, one marked with unsafe red signs that one has to explicitly opt into.
And, just as with Assembly, we could probably manage it if it were served in very tiny doses as infrastructure code for other languages.
As for tagged architectures, why not?
Especially since that is what Intel and other manufacturers are now trying to push as a workaround to tame C's memory corruption errors.
But all of that is history now, and we really do need languages like Rust to make the lower layers of our OSes and language runtimes safe.
> And, yes, they did not prevent the programmer from doing a double free or calling free on memory that wasn't yet allocated.
Well, in 2016, or even in 1995, dynamic resource (not just memory!) allocation is the norm, not the exception, so, effectively, “unsafe dynamic resource management” means “unsafe” without further qualification.
> (long list)
These are indeed problems with C. Let's not beat a dead horse.
> For your m * n example, that is something you can easily do in Ada, even if a bit verbose.
Rather than “m * n”, it's more like “m + n”, or even “mx + ny”, or even “mx + padding + ny”, but okay.
> Hence Rust would be less needed: we would be in a computing world where the majority of the exploits C has brought into mainstream computing would be reduced to a very tiny subset, one marked with unsafe red signs that one has to explicitly opt into.
I would accept this claim if unsafe code were only used to implement the occasional non-critical system component. But can you implement a safe language runtime without “opt-in” unsafe language features? Clearly, static checks at some level are the only way if you really want safety. If you don't, that's okay - just be honest about it.
> As for tagged architectures, why not?
Because it's expensive! The user shouldn't have to use his machine to compute something that any responsible programmer would've known in advance. And runtime is too late to fix any mistakes anyway.
> Especially since that is what Intel and other manufacturers are now trying to push as a workaround to tame C's memory corruption errors.
> But can you implement a safe language runtime without “opt-in” unsafe language features? Clearly, static checks at some level are the only way if you really want safety. If you don't, that's okay - just be honest about it.
No, because at the systems programming level, especially when interfacing with hardware, it is not possible to have 100% safety.
Not even SPARK or formal methods allow for that; there is always a very tiny portion of unsafety at some level.
But the point is not to remove it completely, rather to contain it to the point that it is a tractable problem and easy to review, even manually.
Which is easier to achieve when languages are strongly typed and require explicit escape hatches for doing unsafe things.
> Because it's expensive! The user shouldn't have to use his machine to compute something that any responsible programmer would've known in advance. And runtime is too late to fix any mistakes anyway.
Expensive is relative. I'd rather have the computer do stuff for me and focus on being productive.
Also, just like everything else in computing, if there were more research in that area, most likely the performance would improve.
Given that manufacturers are turning to tagged-architecture ideas to try to sort out the C mess, apparently they see a path forward there.
> > Especially since that is what Intel and other manufacturers are now trying to push as a workaround to tame C's memory corruption errors.
> Well, that's sad.
Intel has added the MPX extensions to their processors, already supported in gcc, clang, and MSVC++.
> Which is easier to achieve when languages are strongly typed and require explicit escape hatches for doing unsafe things.
No disagreement here. What I'm saying is that, in today's environment, requiring escape hatches for dynamic resource management is effectively requiring escape hatches for everything. The worst part is that, in most languages, you don't even need to enable escape hatches to perform nonsensical operations like mutating the same object from two threads without explicit concurrency control. Literally everything is unsafe!
To the best of my knowledge, the only practical language today that attempts to mitigate these issues is Rust. Even if we somehow revived those old "safer than C even if this doesn't mean much" languages, they would probably not help much.
> Expensive is relative. I'd rather have the computer do stuff for me and focus on being productive.
With runtime checks, your computer isn't really doing anything for you. Your user's computer is alleviating a problem created by you.
> Also, as much as I like Rust, they still need to sort out some usability issues like non-lexical ownership.
I'd rather take the inconvenience of declaring the occasional use-once temporary variable, which takes at most 10 seconds of my time, than the inconvenience of tracking object lifetime bugs, which can take days or weeks to fix. Correctness is non-negotiable.
It might be desirable now, but those '60s/'70s OSes probably weren't dynamically allocating memory ever. They'd make a buffer in [their equivalent to] the .BSS section that was M×sizeof(int), and another that was N×sizeof(float), and then they'd process records through those buffers blockwise.
Exactly. In the '60s and '70s malloc was a luxury that most programs didn't have, and so Rust's facilities weren't needed to ensure safety. That doesn't describe the state of play in 2016. Those safer languages from the '60s and '70s would also be inadequate to ensure safety today.
> It might be desirable now, but those '60s/'70s OSes probably weren't dynamically allocating memory ever.
Those systems suddenly seem a lot less impressive now. While the lack of dynamic resource management poses other problems (e.g. determining up-front if the system has enough resources to complete the given task), I can't imagine this being more challenging than implementing a modern runtime system (JIT compiler, garbage collector, green thread scheduler, etc.).