But in addition to the mismatch between the abstractions provided and the real hardware, C qua C is missing a huge number of abstractions that are how real hardware works, especially if we pick C99 as Steve did, which doesn't have threading since that came later. I don't think it has any support for any sort of vector instructions (MMX and its follow-ons), it doesn't know anything about your graphics card, which by raw FLOPS may well be the majority of your machine's actual power, I don't think C99 has a memory model, and the list of things it doesn't know about goes on for a long way.
You can get to them from C, but via extensions. They're often very thin and will let you learn a lot about how the computer works, but it's reasonable to say that it's not really "C" at that point.
C also has more runtime than most people realize, since it often functions as a de facto runtime for the whole system. It has particular concepts about how the stack works, how the heap works, how function calls work, and so on. It's thicker than you realize because you swim through its abstractions like a fish through water, but, technically, they are not fundamental to how a computer works. A computer does not need to implement a stack and a heap and have copy-based function calls and so on. There's a lot more accidental history in C than a casual programmer may realize. If nothing else, compare CPU programming to GPU programming. Even when the latter uses "something rather C-ish", the resemblance to C only goes so deep.
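To make the "C has a runtime" point concrete, here's a small demonstration that code runs both before main() is entered and after it returns. The constructor attribute is a GCC/Clang extension rather than ISO C, so treat this as a sketch for those compilers:

```c
/* A hosted C program already has a runtime wrapped around it: code can
   run before main() is entered and after it returns.  The constructor
   attribute is a GCC/Clang extension, not ISO C -- this is a sketch for
   those compilers. */
#include <stdio.h>
#include <stdlib.h>

__attribute__((constructor))
static void before_main(void) {
    /* Invoked by the runtime's startup code before main() is called. */
    puts("runtime: before main");
}

static void after_main(void) {
    /* Registered below; the runtime runs it after main() returns. */
    puts("runtime: after main");
}

int main(void) {
    atexit(after_main);
    puts("main: hello");
    return 0;   /* the runtime turns this return value into an exit() */
}
```

The startup code that eventually calls main() in the first place, crt0 and friends, is exactly the runtime being described.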
There's also some self-fulfilling prophecy in the "C is how the computer works", too. Why do none of our languages have first-class understanding of the cache hierarchy? Well, we have a flat memory model in C, and that's how we all think about it. We spend a lot of silicon forcing our computers to "work like C does" (the specification for how registers work may as well read "we need to run C code very quickly no matter how much register renaming costs us in silicon"), and every year, that is becoming a slightly worse idea than last year as the divergence continues.
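One quick way to see the "no first-class cache hierarchy" point: the sketch below walks the exact same flat array twice, once in row order and once in column order. Nothing in the language distinguishes the two, but on most machines the timings differ by a large factor purely because of cache line reuse. The array size is arbitrary and the gap depends on your caches and compiler flags:

```c
/* Same flat array, same number of additions, two traversal orders.
   The C abstract machine sees no difference; the cache hierarchy does.
   N is arbitrary; the timing gap depends on your caches and flags. */
#include <stdio.h>
#include <time.h>

#define N 4096

static int a[N][N];

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = i + j;

    long long sum = 0;
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)          /* row order: consecutive addresses */
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    clock_t t1 = clock();
    for (int j = 0; j < N; j++)          /* column order: stride of N ints */
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    clock_t t2 = clock();

    printf("row order:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("column order: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    printf("(sum = %lld, printed so the loops aren't optimized away)\n", sum);
    return 0;
}
```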
But that's fighting against a straw man. When people advise others to "learn C," they don't mean the C99 specification; they mean C as it is used in the real world. That includes pthreads, SIMD intrinsics, POSIX, and a whole host of other features that aren't really C but are what every decent C programmer uses daily.
As you point out, modern architectures are actually designed to run C efficiently, so I'd say that's a good argument in favor of learning C to learn how (modern) computers work. Pascal is at the same level as C, but no one says "learn Pascal to learn how computers work" because architectures weren't adapted according to that language.
> Pascal is at the same level as C, but no one says "learn Pascal to learn how computers work" because architectures weren't adapted according to that language.
They used to say it, though, before UNIX-derived OSes took over the computing world.
Hardware stacks are a relatively recent feature (in historical terms) even though subroutine calls go way back. Many systems did subroutine calls by saving the return address in a register. On the IBM 1401, when you called a subroutine, the subroutine would actually modify the jump instruction at the end of the subroutine to jump back to the caller. Needless to say, this wouldn't work with recursion. On the PDP-8, a jump to subroutine would automatically store the return address in the first word of the subroutine.
On many systems, if you wanted a stack, you had to implement it in software. For instance, on the Xerox Alto (1973), when you called a subroutine, the subroutine would then call a library routine that would save the return address and set up a stack frame. You'd call another library routine to return from the subroutine and it would pop stuff from the stack.
The 8008, Intel's first 8-bit microprocessor, had an 8-level subroutine stack inside the chip. There were no push or pop instructions.
I don't have direct experience, but I believe that System/360 programs have historically not had stacks, opting instead for a statically allocated 'program save area'.
RISC-type architectures with a branch-and-link instruction (as opposed to a jsr- or call-type instruction) generally have a stack by convention only, because the CPU doesn't need one to operate. (For handling interrupts and exceptions there is usually some other mechanism for storing the old program counter.)
From the perspective of "stack" as an underlying computation structure: Have you ever played a Nintendo game? You've used a program that did not involve a stack. The matter of "stack" gets really fuzzy in Haskell, too; it certainly has something like one because there are functions, but the way the laziness gets resolved makes it quite substantially different from the way a C program's stack works.
If a time traveler from a century in the future came back and told me that their programming languages aren't anywhere as fundamentally based on stacks as ours are, I wouldn't be that surprised. I'm not sure any current AI technology (deep learning, etc.) is stack-based.
From the perspective of "stack" as in "stack vs. heap", there's a lot of existing languages where at the language level, there is no such distinction. The dynamic scripting language interpreters put everything in the heap. The underlying C-based VM may be stack based, but the language itself is not. The specification of Go actually never mentions stack vs. heap; all the values just exist. There is a stack and a heap like C, but what goes where is an implementation detail handled by the compiler, not like in C where it is explicit.
To some extent, I think the replies to my post actually demonstrate my point. People's conceptions of "how a computer works" are really bent around the C model... but that is not "how a computer works". Computers do not care if you allocate all variables globally and use "goto" to jump between various bits of code. Thousands if not millions of programs have been written that way, and continue to be written in the embedded arena. In fact, at the very bottom-most level (machines with single-digit kilobytes), this is not just an option, but best practice. Computers do not care if you hop around like crazy as you unwind a lazy evaluation. Computers do not care if all the CPU does is ship a program to the GPU and its radically different paradigm. Computers do not care if they are evaluating a neural net or some other AI paradigm that has no recognizable equivalent to any human programming language. If you put an opcode or specialized hardware into something that really does directly evaluate a neural net without any C code in sight, the electronics do not explode. FPGAs do not explode if you do not implement a stack.
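For what it's worth, here's a cartoon of that all-globals-and-goto style, written in C itself; a computer runs it just as happily as anything with a deep call stack. The blinking-LED scenario and the names are made up, and a real bare-metal program would write to a hardware register rather than call printf:

```c
/* A cartoon of the tiny-embedded style described above: every variable
   statically allocated, control flow done with goto, no call stack and
   no heap in sight.  The blinking-LED scenario is made up; a real
   bare-metal program would write to a hardware register, not printf. */
#include <stdio.h>

static unsigned counter;
static unsigned led_on;

int main(void) {
top:
    counter++;
    if (counter & 1)
        goto turn_on;
    goto turn_off;

turn_on:
    led_on = 1;
    goto report;

turn_off:
    led_on = 0;
    goto report;

report:
    printf("tick %u, led %s\n", counter, led_on ? "on" : "off");
    if (counter < 4)
        goto top;
    return 0;
}
```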
C is not how computers work.
It is a particularly useful local optimum, but nowadays a lot of that use isn't because "it's how everything works" but because it's a highly supported and polished paradigm, used for decades, with a lot of tooling and utility behind it. But understanding C won't give you much insight into a modern processor, a GPU, modern AI, FPGAs, Haskell, Rust, and numerous other things. Because learning languages is generally good regardless, yes, learning C and then Rust will mean you learn Rust faster than just starting from Rust. Learn lots of languages. But there are a lot of ways in which you could equally start from Javascript and learn Rust; many of the concepts may shock the JS developer, but many of the concepts in Rust that the JS programmer just goes "Oh, good, that feature is different but still there" about will shock the C developer.
C doesn't "have", or require, a stack, either. It has automatic variables, and I think I looked once and it doesn't even explicitly require support for recursive functions.
You're right that it's barely spelled out: I searched the C99 standard for "recurs" and found the one relevant section, which briefly says that recursive function calls shall be permitted.
That means static allocation alone is insufficient for implementing automatic variables in a conforming C compiler. Nevertheless, I still like to think of it as a valid implementation sometimes. In contemporary practice many stack variables are basically global variables, in the sense that they are live during most of the program's run, and they are degraded to stack variables only as a by-product of a (technically unnecessary) splitting of the program into very fine-grained function calls.
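A minimal illustration of why recursion rules out a purely static implementation of automatic variables: each live activation below needs its own copy of `local`.

```c
/* Each live activation of depth() needs its own copy of `local`, and
   recursion means several activations are live at once -- so automatic
   variables can't all be statically allocated in a conforming compiler. */
#include <stdio.h>

static int depth(int n) {
    int local = n;              /* one instance per active call */
    if (n == 0)
        return 0;
    int below = depth(n - 1);   /* more activations come alive here */
    return local + below;       /* `local` must still hold *this* call's n */
}

int main(void) {
    printf("%d\n", depth(4));   /* 4 + 3 + 2 + 1 + 0 = 10 */
    return 0;
}
```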
> We spend a lot of silicon forcing our computers to "work like C does" (the specification for how registers work may as well read "we need to run C code very quickly no matter how much register renaming costs us in silicon")
Can you elaborate? I thought I knew what register renaming was supposed to do, but I don't see the tight connection between register renaming and C.
Register renaming was invented to give assembler code generated from C enough variables in the form of processor registers, so that calling functions wouldn't incur as much of a performance penalty.
The UltraSPARC processor is a stereotypical example, a RISC processor designed to cater to a C compiler as much as possible: with register renaming, it has 256 virtual registers!
Register renaming is older than C (the first computer with full modern OoOE was the 360/91 from 1964). It has more to do with scheduling dynamically based on the runtime data flow graph than anything about C (or any other high level language).
Agreed with your historical information, but the comment about function calls and calling conventions is not without merit. If you have 256 architectural registers you still can't have more than a couple callee-save registers (otherwise non-leaf functions need to unconditionally save/restore too many registers), and so despite the large number of registers you can't really afford many live values across function calls, since callers have to save/restore them. Register renaming solves this problem by managing register dependencies and lifetimes for the programmer across function calls. With a conventional architecture with a lot of architectural registers, the only way you can make use of a sizable fraction of them is with massive software pipelining and strip mining in fully inlined loops, or maybe with calls only to leaf functions with custom calling conventions, or other forms of aggressive interprocedural optimization to deal with the calling convention issue. It's not a good fit for general purpose code.
Another related issue is dealing with user/kernel mode transitions and context switches. User/kernel transitions can be made cheaper by compiling the kernel to target a small subset of the architectural registers, but a user mode to user mode context switch would generally require a full save and restore of the massive register file.
And there's also an issue with instruction encoding efficiency. For example, in a typical three-operand RISC instruction format with 32-bit instructions, with 8-bit register operand fields you only have 8 bits remaining for the opcode in an RRR instruction (versus 17 bits for 5-bit operand fields), and 16 bits remaining for both the immediate and opcode in an RRI instruction (versus 22 bits for 5-bit operand fields). You can reduce the average-case instruction size with various encoding tricks, cf. RVC and Thumb, but you cannot make full use of the architectural registers without incurring significant instruction size bloat.
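For anyone who wants the arithmetic spelled out, here's the bit budget for a fixed 32-bit instruction word; it just subtracts the operand fields, as in the comment above:

```c
/* The bit budget for a fixed 32-bit instruction word, as in the comment
   above: RRR has three register fields, RRI has two, and whatever is
   left over must hold the opcode (plus the immediate, for RRI). */
#include <stdio.h>

static void budget(int reg_bits) {
    int word = 32;
    int rrr_opcode = word - 3 * reg_bits;           /* opcode bits in RRR */
    int rri_opcode_and_imm = word - 2 * reg_bits;   /* opcode + immediate in RRI */
    printf("%d-bit register fields: RRR opcode = %2d bits, "
           "RRI opcode+immediate = %2d bits\n",
           reg_bits, rrr_opcode, rri_opcode_and_imm);
}

int main(void) {
    budget(5);   /* 32 architectural registers: 17 and 22 bits */
    budget(8);   /* 256 architectural registers:  8 and 16 bits */
    return 0;
}
```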
To make the comparison fair, it should be noted that register renaming cannot resolve all dependency hazards that would be possible to resolve with explicit registers. You can still only have as many live values as there are architectural registers. (That's a partial truth because of memory.)
There are of course alternatives to register renaming that can address some of these issues (but as you say register renaming isn't just about this). Register windows (which come in many flavors), valid/dirty bit tracking per register, segregated register types (like data vs address registers in MC68000), swappable register banks, etc.
I think a major reason register renaming is usually married to superscalar and/or deeply pipelined execution is that when you have a lot of physical registers you run into a von Neumann bottleneck unless you have a commensurate amount of execution parallelism. As an extreme case, imagine a 3-stage in-order RISC pipeline with 256 registers. All of the data except for at most 2 registers is sitting at rest at any given time. You'd be better served with a smaller register file and a fast local memory (1-cycle latency, pipelined loads/stores) that can exploit more powerful addressing modes.
Because I'm an assembler coder, and when one codes assembler by hand, one almost never uses the stack: it's easy for a human to write subroutines in such a way that only the processor registers are used, especially on elegant processor designs which have 16 or 32 registers.
You almost had to, because both Z80 and x86 processors have a laughably small number of general-purpose registers. Writing code that works within that limitation would have required lots of care and cleverness, far more than on UltraSPARC or MC68000.
On MOS 6502 we used self-modifying code rather than the stack because it was more efficient and that CPU has no instruction and data caches, so they couldn't be corrupted or invalidated.