> You opted to make some outlandish and very broad sweeping statements, and when asked to provide any degree of substance, you resorted to talk about "chill pills"?
You are not replying to the OP here. Maybe it's time for a little reflection?
Computing: Performing the instructions they are given.
Thinking: Can be introspective, self-correcting. May include novel ideas.
> Our whole modern world is built on outsourcing thinking to machines at every level.
I don't think they can think. You can't get a picture of a left hand writing, or of a clock showing anything other than 10:10, from AI. They regurgitate what they are fed and hallucinate instead of admitting a lack of ability. This applies to LLMs too, as we all know.
> You can't get a picture of a left hand writing, or of a clock showing anything other than 10:10, from AI.
You as a human have a list of cognitive biases so long you'd get bored reading it.
I'd call current ML "stupid" for different reasons*, but not for this kind of thing: we spot AI's failures easily enough, but only because their failures are different from our own.
Well, sometimes different. Loooooots of humans parrot lines from whatever culture surrounds them and don't seem to notice they're doing it.
And even then, you're limiting yourself to one subset of what it means to think. AIs demonstrably do produce novel results outside their training set. And while I'm aware it may be a superficial similarity, what so-called "reasoning models" produce in their "chain-of-thought" transcripts looks a lot like my own introspection, so you aren't going to convince anyone just by listing "introspection" as if that were an actual answer.
> Computing: Performing the instructions they are given.
> Thinking: Can be introspective, self-correcting. May include novel ideas.
LLMs can perform arbitrary instructions given in natural language, which includes instructions to be introspective, to self-correct, and to generate novel ideas. Is that computing or is it thinking? We can judge the degree to which they can do these things, but it's unclear that there's a fundamental difference in kind.
(Also obviously thinking is computation - the only alternative would be believing thinking is divine magic that science can't even talk about.)
I'm less interested in the topic of whether LLMs are thinking or parroting, and more in the observation that offloading cognition onto external systems, be they digital, analog, or social, is just something humans naturally do all the time.
Ultrasonic humidifiers require ridiculously clean water. Even regular distilled water is scarcely clean enough. You need to clean the water tank ridiculously well, ridiculously often. No one actually uses an ultrasonic humidifier like it's "supposed" to be used. When the water isn't industrially clean, everything dissolved in it is turned very efficiently into tiny particles. You'll get a fine white mineral dust everywhere.
Ionizing air purifiers make ozone, and the makers claim either that it's a good thing or that it's too little to worry about. The first is wrong; the second is a bad sell, because the thing it's supposed to be removing is also (by itself) not a big health risk on any single day. The health benefit from removing particles is likely eaten up by the negative health effects of the ozone. You can clean air without ionization, so why? As far as I can tell it's a pure marketing gimmick: the need to seem high-tech outweighs the actual utility of the thing. (That's probably also why ultrasonic humidifiers are popular: you can see the fog.)
Only found a short but good article about such a case [0]; I'm sure someone has bookmarked the original. There are support groups for people like this now!
> The breakdown came when another chatbot — Google Gemini — told him: “The scenario you describe is an example of the ability of language models to lead convincing but completely false narratives.”
Presumably, humans had already told him the same thing, but he only believed it when an AI said it. I wonder if Gemini has any kind of special training to detect these situations.
I guess this depends on what you consider good tooling. I am relatively happy with C tooling. If you want to quickly assemble something from existing libraries, then language-level package managers like npm, cargo, and pip are certainly super convenient. But I think that convenience comes at a high cost: we now have worms again, and I thought those times were long over... IMHO package management belongs in a distribution with quality control, and dependencies should be minimized and carefully selected.
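To put the supply-chain concern above in concrete terms, you can count how many third-party packages a single top-level install pulls in. A sketch using npm in a throwaway directory (the exact count varies over time and by registry state):

```shell
# Set up a scratch project and install one popular package.
mkdir -p /tmp/dep-demo && cd /tmp/dep-demo
npm init -y > /dev/null
npm install express > /dev/null 2>&1

# Count everything that came along for the ride.
# Each line is a package, i.e. third-party code you now run.
npm ls --all --parseable | wc -l
```

Every one of those transitive packages is a potential worm vector maintained by someone you likely never vetted, which is the cost the convenience hides.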
There are tons of package managers. They just aren't limited to C and C++; they handle other languages as well. There are also build systems en masse. Some were even included in a cross-OS standard.
Currently inside is an i7-9600, which I limit to 3.6 GHz, and a cheap 1050 Ti.
The CPU is technically over the TDP limit of the case, but with the frequency limit in place I never exceed about 70 °C, and given my workloads I'm rarely maxing the CPU anyway.
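For anyone wanting to try the same frequency cap: on Linux one way to do it is with cpupower (from the linux-tools package); the 3600 MHz value here just mirrors the limit mentioned above.

```shell
# Cap the maximum CPU frequency at 3.6 GHz on all cores (needs root).
sudo cpupower frequency-set --max 3600MHz

# Verify the new policy limits.
cpupower frequency-info
```

On Windows a similar effect can be had by lowering "Maximum processor state" in the power plan settings.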
There is zero noise under any load. There are no moving parts inside the case at all: no spinning HDD, no PSU fan, no CPU fan, no GPU fan.
It's an HP OEM (because I moved countries during the pandemic and getting parts where I settled was ridiculously more expensive).
The CPU has an AIO cooler (and the radiator fans are loud). The GPU has very loud fans too, but is not water-cooled.
It's four years old at this point and I might just build something else rather than try to retrofit this one to sanity (which I doubt is possible without dumping the GPU anyway).
I bought my current gaming desktop off a friend who didn't need it anymore when I was looking for an upgrade. It had an AIO cooler. The pump made so much noise, and it seemed like I had to fiddle with fan profiles forever to get sane cooling. I swapped it for a $30 CoolerMaster Hyper 212 and a Noctua case fan. It cools well enough for the CPU to stay above stock speeds pretty much all the time, and it's much quieter than the AIO cooler was. I'm not suggesting this CPU cooler is the best one out there, just pointing out that it's not like one needs to spend $100+ on a cooler to get pretty good performance.
The GPU still gets kind of loud during intense graphics gaming sessions but when I'm not gaming the GPU fans often aren't even spinning.
Honestly at this point it's not so much about money as it is about whether or not this particular case/setup/components combo is salvageable with minimal effort.
The CPU fan is rarely an issue (it mostly just goes bananas when IntelliJ gets its business on with Gradle on a new project XD).
The GPU is the main culprit and I'm not sure there's any solution there that doesn't involve just replacing it.
Just last week I moved from a Noctua NH-U12S cooling my 5950X to an ARCTIC Liquid Freezer III Pro 360 AIO liquid cooler (my first time using liquid cooling), and while I expected the difference to be big, I didn't realize how big.
Now my CPU usually idles at ~35 °C, just 5 degrees above the ambient temperature (because of summer...), hardly ever goes above 70 °C even under load, and is still super quiet. I realize now I should have done the upgrade years ago.
Now if I could only get water cooling for the GPU I'm using. Unfortunately there are no water blocks available for it (yet), but I can't wait to change that too; it should have a huge impact as well.
> The Framework Desktop from the modular, user-upgradeable laptop company, has soldered-on memory because they had to "choose two" between memory bus performance, system stability, and user-replaceable memory modules.
No need to pick on Framework here; AMD could not make the chip work with replaceable memory. How many GPUs with user-replaceable (slotted) memory are there? Zero snark intended.