Based on these graphs and the differences in outcomes they show, you are not talking about "alert vs less alert" nurses but about "nurses doing their job vs nurses basically slowly killing dozens of patients".
... Yes? Do you really, seriously think that eight billion people can so easily disappear within a couple of generations and there won't be anyone left for the population to start growing again?
Rust 0.1, the first public release, came out in January 2012. CFront 1.0, the first commercial release, came out in 1985.
Rust has existed publicly for 13 years, during which, to be honest, computing has not changed that much. Now compare that to the prehistory that is 1985, when CFront came out, already designed for backwards compatibility with C.
I grew up with all the classic 8 bit micros, and to be honest, it doesn't feel like computing has changed at all since 1985. My workstation, while a billion times faster, is still code compatible with a Datapoint 2200 from 1970.
The memory model, interrupt model, packetized networking, digital storage, all function more or less identically.
In embedded, I still see Z80s and M68ks like nothing's changed.
I'd love to see more concrete implementations of adiabatic circuits, weird architectures like the Mill, integrated FPGAs, etc. HP's The Machine effort was a rare exciting new thing until they walked back all the exciting parts. CXL seems like about the most interesting new thing in a bit.
Yeah, I almost called that out. Probably should have. GPU/NPU feels new (at least for us folks who could never afford a Cray). Probably the biggest change in the last 20 years, especially if you classify it with other multi-core development.
Meaning that all the machines I've ever cared about have had 8 bit bytes. The TI-99/4A, TRS-80, Commodore 64 and 128, Tandy 1000 8088, Apple ][, Macintosh Classic, etc.
Many were launched in the late 70s. By 1985 we were well into the era of PC compatibles.
In 1985 PC compatibles were talked about, but systems like the VAX and mainframes were still very common and were considered the real computers, while PCs were toys for executives. PCs had already shown enough value (via word processors and spreadsheets) that everyone knew they were not going away, but they still lacked things like multitasking, which even then "real" computers had had for decades.
My https://en.wikipedia.org/wiki/Tandy_1000 came out in 1984. And it was a relatively late entry to the market; it was near-peak 8088, with what was considered high-end graphics and sound for the day, far better than the IBM PC, which debuted in 1981 and only lasted until 1987.
C++ is C compatible, so that's more than 50 years of backward compatibility. Even today the vast majority of C programs can be compiled as C++ and they just work. Often such programs run faster, because C++ adds a few rules the compiler can use to optimize better; in practice C programs generally follow those stronger rules anyway (and of course when they don't, the program is wrong).
<pedantry corner>CFront was never compatible with K&R C to the best of my knowledge, so the actual start date would be whenever C89-style code came into widespread use; I'm not sure how long before 1989 that was.
I can tell you that during 1999-2003, the aC compiler we had installed on our HP-UX 11 development servers still had issues with C89; we had #defines to fall back to K&R C function declarations when coding on that system.
Kind of: compatible with C89 as a language, and with C23 only to the extent of library functions that can be written in that subset or implemented with C++ features.
And yes, being a "TypeScript for C" born at the same place as UNIX and C is both what fostered its adoption among compiler and OS vendors, and what brings some of the pain of trying to herd cats into writing safe code.
People don't use Office frequently, and when they do it's slow, which is a bad look. So they cheat in a way that prioritizes their own software, and then everyone else does the same, and the feature loses all value, as every program launches at startup so as not to be "slow".
A major part of it for me has been taking a moment to relate to the artist, looking at various details and imagining the thought and care that might have gone into them.
I mean, what happened to "use the right tool for the job"? There is Rust, C++, Julia, D, and certainly many more. Are they hard or what? Are they harder than mastering the math and algorithms that are relevant to an LLM? The code is actually pretty simple, certainly simpler than many "boring" apps.
Your comment shows such a fundamental misunderstanding of how modern AI/LLMs work that it is hard to be kind and thoughtful....
Python is simply the orchestration layer. The heavy lifting is done by low-level libraries used in the repo, written in C++, CUDA, and Rust (e.g., PyTorch’s C++ backend, Flash Attention’s CUDA kernels, FAISS’s SIMD optimizations, or Hugging Face’s Rust-powered tokenizers).
Python’s role is to glue these high-performance components together with minimal overhead, while providing accessibility. Claiming it’s "unsuitable" ignores the entire stack beneath the syntax.
That critique is like blaming the steering wheel for a car's engine performance.
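To make the "glue" point concrete, here is a minimal sketch (assuming PyTorch is installed; the attention helper and tensor shapes are made up for illustration). The Python below only decides which ops to call and in what order; the matmuls and softmax dispatch to PyTorch's compiled C++/CUDA kernels, not the interpreter.

    import torch

    def attention(q, k, v):
        # Heavy lifting (matmul, softmax) runs in PyTorch's C++/CUDA backend;
        # the interpreter just wires the calls together.
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        return torch.softmax(scores, dim=-1) @ v

    q = k = v = torch.randn(1, 8, 128, 64)  # (batch, heads, seq_len, head_dim)
    out = attention(q, k, v)
    print(out.shape)  # torch.Size([1, 8, 128, 64])

The per-call overhead of the orchestration layer is negligible next to the kernel time, which is the whole argument for why the glue language matters less than the stack underneath.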
Arguing that Rust, C++, Julia or D are a better "right tool for the job" than Python for a book that teaches people about LLMs is a bit of an eyebrow-raiser.
How so? Since when is Python a good language for numerical computation? What if the reader wants to try something that cannot be achieved by plumbing canned C++? They are out of luck I guess.
Good job teaching the sloppy semantics of a scripting language for numerics I guess.
It is possible that the vast majority of AI researchers are flat-out incorrect and need to be shown a better direction by you.
It is also possible that your own fitness-for-purpose coefficients are tuned differently than the majority of the field and they've made a sensible choice for their situation.
You can all raise your eyebrows and appeal to authority all you want. Doesn't change the fact that as soon as I want to write a tight (or not so tight) loop to do something that is not covered by all these (undoubtedly great) libraries running underneath, I am very probably out of luck and have to use another language.
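For what it's worth, here is a rough illustration of that tight-loop problem, assuming NumPy and a made-up element-wise computation: the interpreted loop pays Python overhead on every iteration, while the same math expressed as one array expression runs inside NumPy's compiled kernels, and anything that doesn't fit the canned kernels falls back to the slow path (or to another language).

    import time
    import numpy as np

    x = np.random.rand(1_000_000)

    # Tight loop in pure Python: every iteration goes through the interpreter.
    t0 = time.perf_counter()
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[i] * x[i] + 1.0
    t1 = time.perf_counter()

    # Same computation pushed down into NumPy's C code.
    out_vec = x * x + 1.0
    t2 = time.perf_counter()

    print(f"loop: {t1 - t0:.3f}s  vectorized: {t2 - t1:.3f}s")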
Doesn't sound like a very creative teaching environment to me.
Not to mention that, depending on what you count as "AI research", the vast majority of it is pure garbage (come on, we all know this).
Also, I have no clue how anyone could casually assert that the universe being "continuous" (whatever that means) has any link to our universe being "a simulation" (whatever that means) or not.