Hacker News | Tempest1981's comments

> requirements that you sacrifice your retirement and financial well being

Holding index funds seems reasonable (Total Stock Market, S&P 500)... not a huge sacrifice imo


Definitely prefer something like Total Market; even 500 companies is too much of a bias towards one group. Especially with the S&P 500 currently looking like an S&P 10...

This feels like a no-brainer solution, but it also seems like a wonderful wishlist item that won't happen; I've heard it a lot.

I wonder why I never hear anything like: if a church, or a group representing a church, gets involved with politics through lobbying or any other means, its tax-exempt status is revoked and it is treated like a normal entity.

It seems like a similar kind of thing that, in my eyes at least, is a no-brainer, but is just as likely to be shot down because of how our current system functions.


Original, for the curious: https://web.archive.org/web/20251203002619/https://lenowo.or...

[id Software was Lazy - DOOM could have had PC Speaker Music!]


I think we have our hacker/power-user blinders on. It was cool to hate Clippy.

But many non-tech-savvy users felt differently, and were accepting of the attempt to provide help.


What if Clippy's true purpose was sales/marketing and not productivity?

Back in the day, people needed to be convinced that they needed a computer and that they'd be able to figure it out.

If you saw Clippy on a showroom floor or on your friend's PC, you might think "oh yeah, I suppose I could use a computer to do that!"


Is it really a contradiction?

Convincing people they maybe should get a computer worked much better if they saw that it might increase their productivity.

Techy folks all already had one back then, and Clippy was aimed at those still writing their correspondence on a typewriter. I have no doubt that switching to a computer was productive for them, even if it increased Microsoft sales in the process.


Is the same true for USB flash drives? Do they rely on the OS to scrub/refresh them?

Why do we all need 128GB now? I was happy with 32.

Close a few Chrome tabs, and save some DDR5 for the rest of us. :-)


Last night, while writing a LaTeX article, with Ollama running for other purposes, Firefox with its hundreds of tabs, and multiple PDF files open, my laptop's memory usage spiked to 80GB... And I was happy to have 128GB. The spike was probably due to some process stuck in an effing loop, but the process consuming more and more RAM didn't have any impact on the system's responsiveness, and I could calmly quit VSCode and restart it with all the serenity I could have in the middle of the night. Is there even a case where more RAM is not really better, except for its cost?

> Is there even a case where more RAM is not really better, except for its cost?

It depends. It takes more energy, which can be undesirable in battery powered devices like laptops and phones. Higher end memory can also generate more heat, which can be an issue.

But otherwise more RAM is usually better. Many OSes will dynamically use otherwise unused RAM to cache filesystem reads, making subsequent reads faster, and many databases will prefetch into memory if it is available, too.
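A quick way to see that filesystem cache at work (a rough sketch; the file path and size are arbitrary): read the same file twice, and the second read is typically served from RAM rather than disk.

```shell
# Create a 256 MB test file, then read it twice. The first read may
# touch disk; the second is usually served from the OS page cache.
dd if=/dev/zero of=/tmp/cachetest.bin bs=1M count=256 2>/dev/null
time cat /tmp/cachetest.bin > /dev/null   # cold read
time cat /tmp/cachetest.bin > /dev/null   # warm read: page cache, much faster
rm -f /tmp/cachetest.bin
```

(On a freshly booted machine the difference is dramatic; on a busy one the file may already be cached.)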


Firefox is particularly good at having lots of tabs open and not using tons of memory.

    $ ~/dev/mozlz4-tool/target/release/mozlz4-tool \
        "$(find ~/Library/Application\ Support/Firefox/Profiles/ -name recovery.jsonlz4 | head -1)" | \
        jq -r '[.windows[].tabs | length] | add'
    5524
Activity Monitor claims Firefox is using 3.1GB of RAM.

    Real memory size:      2.43 GB
    Virtual memory size: 408.30 GB
    Shared memory size:  746.5  MB
    Private memory size: 377.3  MB
That said, I wholeheartedly agree that "more RAM, less problems". The only case I can think of where it's not strictly better to have more is during hibernation (as opposed to sleep), when the system has to write 128GB of RAM to disk.
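To put a rough number on that hibernation point (back-of-envelope only; the 2 GB/s sustained write speed is an assumption, and many drives do worse):

```shell
# Time to write a full 128 GB hibernation image to disk,
# assuming a sustained write speed of 2 GB/s.
ram_gb=128
write_gb_per_s=2
echo "$((ram_gb / write_gb_per_s)) seconds to hibernate"   # prints "64 seconds to hibernate"
```

So even under optimistic assumptions, a full 128GB image adds on the order of a minute to every hibernate/resume cycle.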

In my experience Firefox is "pretty good" about having lots of tabs and windows open, if you don't mind it crashing every week or two.

I've not had a crash in Firefox in like a decade, basically since the Quantum update in 2017.

Try living like I do. I currently have 1,838 tabs open across 9 different windows. On second thought, maybe don't live like I do...

I've got ~5k tabs, and I've also seen basically zero crashes in the last decade. I'm on macOS, without many extensions, though one of them is Sidebery (and before that Tree Style Tab), which seems to slow things down quite a lot.

Why do you need all of these tabs open? How do you find what you need?

I likely don't need all the tabs. Some were opened only because they might be useful or interesting. Others get opened because they cover something I want to dig into further later on, but in this case it's the buildup of multiple crash>restore cycles. Eventually I'll get to each tab and close it or save the URL separately until it's back to 0, but even in that process new tabs/windows get opened so it can take time.

On consumer chips, the more memory modules you have, the slower they all run. For example, a single module of DDR5 might run at 5600 MT/s, but with four modules they all get throttled to around 3800 MT/s.

Mainboards have two memory channels, so you should be able to reach 5600 MT/s with one module in each, and dual-slot mainboards have better routing than quad-slot mainboards. This means the practical limit for consumer RAM is 2x48GB modules.

Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around the year 2020 this was suddenly limited to two channels since the 12th generation (AMD's consumer processors had always two channels, with exception of Threadriper?).

However, this does not make sense: for more than a decade, processors have grown mainly by increasing the number of threads, so two channels looks like a negligent, deliberately imposed bottleneck on memory access if one actually uses all those threads (say for 3D rendering, video post-production, games, and so on).

And if one wants four channels to get past that imposed bottleneck, the mainboards that have four channels nowadays are not aimed at consumer use, so they come with one or two USB connectors and three or four LAN connectors, at prohibitive prices.

We are talking about consumer quad-channel DDR4 machines that are ten years old and widely available, and that remain competitive with current consumer machines, if not better. It is as if everything had been frozen all these years (and it remains to be seen whether the pattern continues).

Now it is rumoured that AMD may opt for four channels for its consumer lines, thanks to the socket's increased pin count (good news if true).

What the industry is doing to customers is a bad joke.


> Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around the year 2020 this was suddenly limited to two channels since the 12th generation (AMD's consumer processors had always two channels, with exception of Threadriper?).

You need to re-check your sources. When AMD started doing integrated memory controllers in 2003, they had Socket 754 (single channel / 64-bit wide) for low-end consumer CPUs and Socket 940 (dual channel / 128-bit wide) for server and enthusiast desktop CPUs, but less than a year later they introduced Socket 939 (128-bit), and since then their mainstream desktop CPU sockets have all had a 128-bit wide memory interface. When Intel later also moved their memory controller from the motherboard to the CPU, they too used a 128-bit wide memory bus (starting with LGA 1156 in 2009).

There's never been a desktop CPU socket with a memory bus wider than 128 bits that wasn't a high-end/workstation/server counterpart to a mainstream consumer platform that used only a 128-bit wide memory bus. As far as I can tell, the CPU sockets supporting integrated graphics have all used a 128-bit wide memory bus. Pretty much all of the growth of desktop CPU core counts from dual core up to today's 16+ core parts has been working with the same bus width, and increased DRAM bandwidth to feed those extra cores has been entirely from running at higher speeds over the same number of wires.
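As a rough sketch of that bandwidth math (theoretical peak only; DDR5-5600 is an assumed example, and real-world throughput is lower):

```shell
# Peak bandwidth of a 128-bit (dual-channel) bus: each 64-bit channel
# moves 8 bytes per transfer, at 5600 MT/s for DDR5-5600.
channels=2
mts=5600            # mega-transfers per second per channel
bytes_per_xfer=8    # 64-bit channel width
echo "$((channels * mts * bytes_per_xfer)) MB/s peak"   # prints "89600 MB/s peak"
```

Swap in 1600 MT/s DDR3 and the same formula gives 25600 MB/s, which is where nearly all of the bandwidth growth over the same 128 wires has come from.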

What has regressed is that the enthusiast-oriented high-end desktop CPUs derived from server/workstation parts are much more expensive and less frequently updated than they used to be. Intel hasn't done a consumer-branded variant of their workstation CPUs in several generations; they've only been selling those parts under the Xeon branding. AMD's Threadripper line got split into Threadripper and Threadripper PRO, but the non-PRO parts have a higher starting price than early Threadripper generations, and the Zen 3 generation didn't get non-PRO Threadrippers.


At some point the best "enthusiast-oriented HEDT" CPUs will be older-gen Xeon and EPYC parts, competing fairly in price, performance, and overall feature set with top-of-the-line consumer setups.

Based on historical trends, that's never going to happen for any workloads where single-thread performance or power efficiency matter. If you're doing something where latency doesn't matter but throughput does, then old server processors with high core counts are often a reasonable option, if you can tolerate them being hot and loud. But once we reached the point where HEDT processors could no longer offer any benefits for gaming, the HEDT market shrank drastically and there isn't much left to distinguish the HEDT customer base from the traditional workstation customers.

I'm not going to disagree outright, but you're going to pay quite a bit for such a combination of single-thread peak performance and high power efficiency. It's not clear why we should be regarding that as our "default" of sorts, given that practical workloads increasingly benefit from good multicore performance. Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts) than CPU.

I said "single-thread performance or power efficiency", not "single-thread performance and power efficiency". Though at the moment, the best single-thread performance does happen to go along with the best power efficiency. Old server CPUs offer neither.

> Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts)

A gaming GPU doesn't need all of the bandwidth available from a single PCIe x16 slot. Mid-range GPUs and lower don't even have x16 connectivity, because it's not worth the die space to put down more than 8 lanes of PHYs for that level of performance. The extra PCIe connectivity on server platforms could only matter for workloads that can effectively use several GPUs. Gaming isn't that kind of workload; attempts to use two GPUs for gaming proved futile and unsustainable.


You have a processor with more than eight threads; at the same per-channel bandwidth, which do you choose, a dual-channel or a quad-channel processor?

That number of threads will hit a bottleneck accessing memory through only two channels.

I don't understand why you brought up single-threading in your response, given that processors reached a frequency limit of 4 GHz (5 GHz with overclocking) a decade ago. This is why they increased the number of threads, but if they reduce the number of memory channels for consumer/desktop...


What is the best single-thread performance possible right now? With overclocked fast RAM.

But you can easily have 128GB while still on 2 modules.

Larger capacity is usually slower though. The fastest RAM typically comes in 16 or 32 GB modules.

The OP is talking about a specific niche of boosting single-thread performance. It's common with gaming PCs, since most games are single-thread bottlenecked. A 5% difference may seem small, but people are spending hundreds or thousands for smaller gains… so buying the fastest RAM can make sense there.


> Is there even a case where more RAM is not really better, except for its cost?

RAM uses power.


It also consumes more physical space. /s

Not really /s, since it is a limited resource in e.g. Laptops.

It depends on what you are doing.

If you are working on an application that has several services (database, local stack, etc.) running as Docker containers, those can take up more memory, especially if you have large databases or many JVM services and are also running an IDE with debugging, profiling, and so on.

Likewise, if you are using many local AI models at the same time, or some larger models, then that can eat into the memory.

I've not done any 3D work or video editing, but those are likely to use a lot of memory.


640K ought to be enough for anyone.

Why did you waste all your money on 32GB when 4GB is enough? Why did we all need 32GB?

Probably a bloated OS loaded with things the buyer does not need, plus the bloated JS ecosystem.

Get this. Pen and paper. No need for silicon at all.

You're welcome.


Having recently upgraded to 192GB from 96GB, I'm pretty happy. I run many containers, have 20 windows of VS Code, and so on. Plus AI inference on CPU when 48GB of VRAM is not enough.

Exactly. I recently doubled my RAM and have now 4GB.

I like to tell people I have 128GB. It's pretty rare to meet someone like me who isn't swapping all the time.

I also tell people that. It’s not true, but it’s free.

The headline is about preloading into memory, not onto the hard disk.

Yes, I'm talking about the "OfficeClickToRun.exe" that would run in the background, keeping big parts of the software preloaded. It would do things like update automatically in the background as well.

Cables do stretch and rust


And motors/actuators wear out and can rust too. A cable is still much cheaper.


I do enjoy seeing what themes others are using.


Summarizing:

- Get a regular physical, or at least a blood test. (Don't wait 5 or 10 years)

- If it shows cholesterol issues, get an advanced lipids blood test, which can indicate whether it's caused by genetics (Lp(a)/ApoB?)

- If eating and exercise alone aren't helping, consider taking statins for cardiovascular health

- Consider a CT scan to check for calcium build-up, which is not reversible (afaik)

fwiw, I think the advice is much more than just "eat well and exercise".


You really should push for an ApoB test in general - most people are hit by LDL-C rather than by other atherogenic particles like Lp(a), but high Lp(a) is still common enough to be worth finding out about. The good news is Lp(a) is largely genetic, so if you know you have low levels you likely don't need to test again anytime soon.

A CAC will show calcified build-up, which is not reversible (or at least not in any appreciable way).

A CTA will show soft plaque buildup, which IS reversible with a low enough atherogenic particle load. This generally means keeping your LDL-C below the 50-70 range, though if Lp(a) is the cause you'll likely need a PCSK9 inhibitor or an upcoming CETP inhibitor to drive it down.


A regular lipid panel won't test for Lp(a) which is genetic. So you need to test specifically for Lp(a) once in your lifetime because you need to know your risk factors. The test was $35.20 when I had it done by LabCorp last year. 20-30% of the population (including me) have high Lp(a). Statins don't reduce Lp(a).


Is there any way to get rid of the calcification? Experimental techniques?


Sure, surgery and several other risky and/or invasive treatments.

Or you could take statins and prevent it from becoming an issue in the first place.


Amlodipine

