This whole article seems to be picking a lot of nits. I didn't read it too deeply, so feel free to correct me if I'm wrong, but the biggest complaints highlighted in the article are:
1. Modern CPUs use instruction level parallelism (ILP)
2. Memory isn't linear (you have separate caches, L1, L2, L3 and main memory)
If you've ever debugged a release build of a C or C++ program, you've quickly found out about ILP. The code still maps relatively closely to the hardware; it just won't run in the sequential order you wrote it in. Many C and C++ programmers know this and try to make the implicit assumptions in their code explicit, so the compiler can reorder instructions more easily and shorten memory dependency chains[0].
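The dependency-chain point can be sketched in plain C (this is a generic hand-unrolling trick, not something from the linked article): a summation loop with one accumulator serializes every add behind the previous one, while splitting the sum across several independent accumulators gives the CPU independent work it can overlap.

```c
#include <stddef.h>

/* Single accumulator: each add depends on the previous result,
   so throughput is bounded by the latency of one long chain. */
double sum_serial(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent accumulators: the four adds per iteration have
   no dependency on each other, so the CPU can issue them in parallel. */
double sum_ilp(const double *a, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)      /* handle the leftover tail */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```

(Note the caveat: since floating-point addition isn't associative, compilers won't do this reordering for you unless you opt in with something like -ffast-math, which is exactly why people write it by hand.)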
And I'm sure you've heard several proponents over the past few years (Mike Acton comes to mind) espousing data-oriented design. That's an entire methodology built around helping the CPU caches, and it reflects an implicit understanding that memory is not linear. Heck, most C/C++ programmers realize that they're using virtual memory all the time, and that the addresses they get aren't necessarily backed by physical memory until the OS sorts it out. This is especially obvious if you've ever memory-mapped a file or otherwise played with virtual memory.
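As a rough sketch of the data-oriented idea (the particle structs here are made up for illustration, not taken from Acton's talks): an array-of-structs layout drags every field through the cache even when a loop only touches some of them, while a struct-of-arrays layout keeps each hot loop reading contiguous data.

```c
#include <stddef.h>

/* Array-of-structs: updating only positions still pulls the
   velocity fields through the cache alongside them. */
struct ParticleAoS { float px, py, vx, vy; };

void update_aos(struct ParticleAoS *p, size_t n, float dt) {
    for (size_t i = 0; i < n; i++) {
        p[i].px += p[i].vx * dt;
        p[i].py += p[i].vy * dt;
    }
}

/* Struct-of-arrays: each field lives in its own contiguous array,
   so every cache line fetched is fully used by the loop. */
struct ParticlesSoA { float *px, *py, *vx, *vy; };

void update_soa(struct ParticlesSoA *p, size_t n, float dt) {
    for (size_t i = 0; i < n; i++) p->px[i] += p->vx[i] * dt;
    for (size_t i = 0; i < n; i++) p->py[i] += p->vy[i] * dt;
}
```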
Anyway, C doesn't necessarily map directly to the hardware, but it's a heck of a lot easier to intuit what a C program will end up doing on the CPU than what a Python program will. Most C/C++ programmers realize this, and actively write code that exploits the very ways in which C does not map directly to hardware.
Yeah. Also, learning C is still probably the best way to learn how raw pointers work. And pointers underpin everything, even if you spend your life in Python or Java.
When I was teaching programming, it was always a rite of passage for my students to implement a linked list in C. Once it clicked for them, the world of programming opened up.
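For anyone curious what that rite of passage looks like, a minimal singly linked list is only a handful of lines (a generic sketch, not any particular course's version):

```c
#include <stdlib.h>

/* Minimal singly linked list: push to the head, walk the chain. */
struct Node {
    int value;
    struct Node *next;
};

/* Prepend a value; returns the new head. */
struct Node *push(struct Node *head, int value) {
    struct Node *n = malloc(sizeof *n);
    if (!n) return head;          /* allocation failed; keep old list */
    n->value = value;
    n->next = head;
    return n;
}

int length(const struct Node *head) {
    int count = 0;
    for (; head; head = head->next)
        count++;
    return count;
}

void free_list(struct Node *head) {
    while (head) {
        struct Node *next = head->next;
        free(head);
        head = next;
    }
}
```

The "click" usually happens when students see that `next` is just an address, and that the whole structure is nothing but values pointing at other values.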
C is also still the universal glue language for FFI. Wanna call Swift from Rust? You can always reach for C.
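A toy illustration of why C works as the glue layer: both sides only have to agree on a plain C signature. The name `add_i32` is invented for the example; in practice Rust would export the symbol with `extern "C"` and Swift would import the header through a bridging/module map.

```c
#include <stdint.h>

/* The shared C interface: a declaration like this would live in a
   header both languages can consume. */
int32_t add_i32(int32_t a, int32_t b);

/* The implementation could just as well be Rust behind extern "C";
   all the caller sees is the C ABI. */
int32_t add_i32(int32_t a, int32_t b) {
    return a + b;
}
```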
[0]: https://johnnysswlab.com/instruction-level-parallelism-in-pr...