d_tr's comments

Based on these graphs and the differences in outcomes they show, you are not talking about "alert vs less alert" nurses but about "nurses doing their job vs nurses basically slowly killing dozens of patients".

> lol, you think we’ll live a hundred years...

... Yes? Do you really, seriously think that eight billion people can so easily disappear within a couple of generations and there won't be anyone left for the population to start growing again?


Nobody can claim to have a clue about such a timeframe.

Rust 0.1, the first public release, came out in January 2012. CFront 1.0, the first commercial release, came out in 1985.

Rust has existed publicly for 13 years, during which computing has not changed that much, to be honest. Now compare this to the prehistory that is 1985, when CFront came out, already designed for backwards compatibility with C.


I grew up with all the classic 8 bit micros, and to be honest, it doesn't feel like computing has changed at all since 1985. My workstation, while a billion times faster, is still code compatible with a Datapoint 2200 from 1970.

The memory model, interrupt model, packetized networking, digital storage, all function more or less identically.

In embedded, I still see Z80s and M68ks like nothing's changed.

I'd love to see more concrete implementations of adiabatic circuits, weird architectures like the Mill, integrated FPGAs, etc. HP's The Machine effort was a rare exciting new thing until they walked back all the exciting parts. CXL seems like about the most interesting new thing in a bit.


Does the GPU thingy count as something that has changed in computing?


It may go on to be as important as the FPU [0]. Amazingly enough you can still get one for a Classic II [1].

0. https://en.wikipedia.org/wiki/Floating-point_unit

1. https://www.tindie.com/products/jurassicomp/68882-fpu-card-f...


Yeah, I almost called that out. Probably should have. GPU/NPU feels new (at least for us folks who could never afford a Cray). Probably the biggest change in the last 20 years, especially if you classify it with other multi-core development.


Today a byte is 8 bits. That was not always the case back then, for example.


> I grew up with all the classic 8 bit micros

Meaning that all the machines I've ever cared about have had 8 bit bytes. The TI-99/4A, TRS-80, Commodore 64 and 128, Tandy 1000 8088, Apple ][, Macintosh Classic, etc.

Many were launched in the late 70s. By 1985 we were well into the era of PC compatibles.


In 1985 PC compatibles were talked about, but systems like the VAX and mainframes were still very common and considered the real computers, while PCs were toys for executives. PCs had already shown enough value (via word processors and spreadsheets) that everyone knew they were not going away, but they still lacked things like multi-tasking, which even then "real" computers had had for decades.


> In 1985 PC compatibles were talked about

My https://en.wikipedia.org/wiki/Tandy_1000 came out in 1984, and it was a relatively late entry to the market: it was near-peak 8088, with what was considered high-end graphics and sound for the day, far better than the IBM PC, which debuted in 1981 and only lasted until 1987.


C++ carries so much on its back and this makes its evolution over the past decade even more impressive.


Yes, people keep forgetting C++ was made public with CFront 2.0 back in 1989: 36 years of backwards compatibility, to a certain extent.


C++ is C compatible, so that's more than 50 years of backward compatibility. Even today the vast majority of C programs can be compiled as C++ and they just work. Often such programs run faster, because C++ has a few additional, stricter rules that the compiler can use to optimize better; in practice C programs generally conform to those stricter rules anyway (but of course when they don't, the program is wrong).
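
A trivial illustration of what I mean (a hypothetical snippet, not from any real project): plain C89-style code that builds unchanged with either a C or a C++ compiler:

    #include <stdio.h>

    /* Plain C89: no implicit void* conversions, no C++ keywords used as
       identifiers, so a C++ compiler accepts it as-is. */
    static double average(const double *values, int count)
    {
        double sum = 0.0;
        int i;
        for (i = 0; i < count; i++)
            sum += values[i];
        return count > 0 ? sum / count : 0.0;
    }

    int main(void)
    {
        double samples[] = { 1.0, 2.0, 3.0, 4.0 };
        printf("average = %f\n", average(samples, 4));
        return 0;
    }

The classic places where it breaks are things like assigning the result of malloc() to a typed pointer without a cast, or using identifiers such as "class" or "new", which C accepts and C++ rejects.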


<pedantry corner>CFront was never compatible with K&R C to the best of my knowledge, so the actual start date would be whenever C89-style code came into widespread use; I'm not sure how long before 1989 that was.


I can tell you that during 1999-2003, the aC compiler we had installed on our HP-UX 11 development servers still had issues with C89; we had #defines to fall back to K&R C function declarations when coding on that system.
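
The trick looked roughly like this (sketched from memory; the macro and function names here are made up, it's just the general shape of the workaround):

    /* Use prototypes where the compiler handles them properly,
       fall back to K&R-style declarations where it does not.
       (Hypothetical names; not the actual code we had.) */
    #ifdef USE_ANSI_DECLS
    #define PROTO(args) args
    #else
    #define PROTO(args) ()
    #endif

    int parse_record PROTO((char *buf, int len));

    #ifdef USE_ANSI_DECLS
    int parse_record(char *buf, int len)
    #else
    int parse_record(buf, len)
    char *buf;
    int len;
    #endif
    {
        return buf != 0 && len > 0;
    }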


Kind of: compatible with C89 as a language, and with C23 only to the extent of library functions that can be written in that subset or with C++ features.

And yes, being a "TypeScript for C" born at the same place as UNIX and C is both what fostered its adoption among compiler and OS vendors and what brings some pain when trying to herd cats into writing safe code.


I thought Windows had a generic subsystem for "warming up" frequently used apps for faster launches.


People don't use Office frequently, and then when they do it's slow, which is a bad look. So they will cheat in a way that prioritizes their own software, and then everyone else will too, and then that feature loses all value, as every program launches on startup so as not to be "slow".


Only for OS components, I think?


https://en.wikipedia.org/wiki/Windows_Vista_I/O_technologies...

SuperFetch was supposed to work with any app. It never seemed to have much effect IMO.


A major part of it for me has been taking a moment to relate to the artist, looking at various details and imagining the thought and care that might have gone into them.


I guess it wouldn't sell shit if it used a language suitable for this type of work.


I mean, what happened to "use the right tool for the job"? There are Rust, C++, Julia, D, and certainly many more. Are they hard or what? Are they harder than mastering the math and algorithms that are relevant to an LLM? The code is actually pretty simple, certainly simpler than many "boring" apps.


Your comment shows such a fundamental misunderstanding of how modern AI/LLMs work that it is hard to be kind and thoughtful...

Python is simply the orchestration layer. The heavy lifting is done by low-level libraries used in the repo, written in C++, CUDA, and Rust (e.g., PyTorch’s C++ backend, Flash Attention’s CUDA kernels, FAISS’s SIMD optimizations, or Hugging Face’s Rust-powered tokenizers).

Python’s role is to glue these high-performance components together with minimal overhead, while providing accessibility. Claiming it’s "unsuitable" ignores the entire stack beneath the syntax.

That critique is like blaming the steering wheel for a car's engine performance.


Again, I am extremely well aware of all of this. You assumed too much.


Arguing that Rust, C++, Julia or D are a better "right tool for the job" than Python for a book that teaches people about LLMs is a bit of an eyebrow-raiser.


How so? Since when is Python a good language for numerical computation? What if the reader wants to try something that cannot be achieved by plumbing canned C++? They are out of luck, I guess.

Good job teaching the sloppy semantics of a scripting language for numerics I guess.


"Since when is Python a good language for numerical computation?"

30 years. Numeric came out in 1995, then evolved into NumPy in 2005. https://en.m.wikipedia.org/wiki/NumPy

Almost every AI researcher and AI lab does most of their research work in Python.


I know all of these facts. Doesn't mean it is how it is for the right reasons, and even if it is, it does not imply that it is a good way to teach.


Taking constant side-quests into Rust memory management during a class on LLMs doesn't sound like a productive way to teach to me.


It is possible that the vast majority of AI researchers are flat-out incorrect and need to be shown a better direction by you.

It is also possible that your own fitness-for-purpose coefficients are tuned differently than the majority of the field and they've made a sensible choice for their situation.

I'd wager on the latter.


You can all raise your eyebrows and appeal to authority all you want. Doesn't change the fact that as soon as I want to write a tight (or not so tight) loop to do something that is not covered by all these (undoubtedly great) libraries running underneath, I am very probably out of luck and have to use another language.

Doesn't sound like a very creative teaching environment to me.

Not to mention that, depending on what you count as "AI research", the vast majority of it is pure garbage (come on, we all know this).


I assume you mean the book's code shown in the Jupyter notebooks in the repository. (Which I think is both simple and messy.)


The latest Myst and Riven remakes are fantastic. The puzzles are the same, the environments are wonderfully made and the atmosphere is there 100%.


> Has this been actually proven?

Neither proven nor disproven in any capacity.

Also, I have no clue how anyone could casually claim that the universe being "continuous" (whatever that means) has any link to our universe being "a simulation" (whatever that means) or not.


> Simulations are usually vastly simpler than the substrate they run on.

So to simulate a continuum you need a transfinite VM. It would be more efficient to just run the universe on bare metal.

