
Are those GPUs really stupid? They seem like great price-to-performance devices when ultra-high-end gaming is not the priority.

EDIT: I personally always liked Intel iGPUs because they were zero-bullshit on Linux, minus some screen-tearing issues and the mumbo-jumbo fixes required in X11 (see the snippet below).
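For reference, the classic piece of X11 mumbo-jumbo for Intel iGPUs was the TearFree option. A minimal sketch, assuming you're on the older xf86-video-intel DDX driver rather than the modern modesetting default:

    Section "Device"
        Identifier "Intel Graphics"
        Driver     "intel"
        # Lets the driver do its own page flipping to avoid tearing
        Option     "TearFree" "true"
    EndSection

Dropped into a file under /etc/X11/xorg.conf.d/, this fixed tearing for most people at the cost of a little overhead.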



They can even encode AV1. Really amazing chips for the price.
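For the curious, hardware AV1 encoding on Arc is reachable through ffmpeg's VA-API path. A minimal sketch in Python, assuming an ffmpeg build with the av1_vaapi encoder and the Arc card exposed at /dev/dri/renderD128 (both are assumptions about your setup; check with vainfo):

    # Hardware AV1 encode on an Intel Arc GPU via ffmpeg + VA-API.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-vaapi_device", "/dev/dri/renderD128",  # render node of the Arc card
        "-i", "input.mp4",
        "-vf", "format=nv12,hwupload",           # move frames into GPU memory
        "-c:v", "av1_vaapi",                     # Arc's hardware AV1 encoder
        "output.mkv",
    ], check=True)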

The "stupid" thing with them (maybe) is that they don't do anything exceptionally well, while still having compatibility problems. They are cheap, yes, but there are many other chips at the same price point that, while less capable, are more compatible.

Make the A380 cost 30% less, or invest far more in the drivers, and IMHO it would have been a completely different story.


Another nice thing: it looked like Intel was lagging AMD in PCIe lane counts until fairly recently. I suspect selling GPUs has put them in the headspace of treating PCIe lanes as a real figure of merit.


AMD's AM5 platform is also not great on PCIe lanes: there are only enough for a single GPU at PCIe 5.0 x16; with two GPUs each drops to x8, and so on, and that's before connecting fast M.2 storage, USB4 ports, etc. If you need more lanes you have to buy a Threadripper or Epyc, and that's easily 10 times the price for the whole system.


PCIe lanes and DDR channels take up the most pins on a CPU socket (ignoring power). The common desktop solution is to run the newest-generation protocol (PCIe 5.0) directly off the CPU, then use the chipset to fan out more lanes at a lower generation (4.0).
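You can see that fan-out on a running Linux box: the kernel exposes the negotiated link speed and width per device in sysfs. A minimal sketch; current_link_speed/current_link_width are standard attributes, though not every device exposes them:

    # Print the negotiated PCIe link speed/width for each PCI device.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        speed = dev / "current_link_speed"
        width = dev / "current_link_width"
        if speed.exists() and width.exists():
            print(f"{dev.name}: {speed.read_text().strip()} x{width.read_text().strip()}")

CPU-attached slots typically report 32.0 GT/s (gen 5), while chipset-attached devices negotiate 16.0 GT/s (gen 4) or lower.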


I understand the tradeoff, but it left the segment of the market between pure consumer solutions and pure productivity/server solutions in no man's land.


Yeah, Threadripper/Epyc is what I'm thinking of. It isn't obvious (to me at least) whether it was just a coincidence of the chiplet strategy, but if so it's an odd coincidence: the company that makes both CPUs and GPUs has ended up with data center CPUs that are a great match for the era where we really want data center CPUs that can host a ton of GPUs, haha.


I am basically biased towards "discrete GPU = asking for trouble on Linux".

Driver stability, less heat and fan noise, and good battery life are almost assured with an Intel iGPU.


Nah. AMD discrete GPUs are fantastic on Linux these days. You don't need to install a proprietary driver; they just work. It's really nice not having to think about the GPU's drivers or configuration at all.
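If you want to confirm the in-kernel driver picked up the card, the bound driver is visible in sysfs. A minimal sketch (card numbering varies per machine):

    # Print the kernel driver bound to each DRM card (e.g. amdgpu, i915, nouveau).
    from pathlib import Path

    for card in sorted(Path("/sys/class/drm").glob("card[0-9]")):
        for line in (card / "device" / "uevent").read_text().splitlines():
            if line.startswith("DRIVER="):
                print(f"{card.name}: {line.removeprefix('DRIVER=')}")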

The only area where AMD's discrete GPUs lag behind is AI, where you get a lot more performance for the same price with Nvidia. For gaming, though, AMD is the clear winner on price/performance.

Of course, Nvidia is still a bit of a pain, but it's of their own making. You still need to install their proprietary driver (which IMHO isn't that big a deal), but the real issue is that if you upgrade the driver from, say, 550 to 555, you have to rebuild all your CUDA stuff. In theory you shouldn't have to, but in reality I had to blow away my venv and reinstall everything to get torch working again.
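Before blowing away a venv, a quick check like this tells you whether the installed torch wheel still talks to the new driver. A minimal sketch using torch's public API:

    # Post-driver-upgrade sanity check for a PyTorch + CUDA setup.
    import torch

    print("torch:", torch.__version__)
    print("built against CUDA:", torch.version.cuda)     # toolkit the wheel targets
    print("CUDA available:", torch.cuda.is_available())  # False often means driver mismatch
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
        # Tiny op on the GPU; raises if the runtime is actually broken.
        x = torch.ones(2, 2, device="cuda")
        print((x @ x).sum().item())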


Nvidia's GPUs work well on Linux. A friend and I use them, and they are fairly problem-free. In the past, when I did have some issues (mainly involving FreeSync), I contacted Nvidia and they fixed them. Specifically, I found that they needed to add sddm to their exclusion list; I told them, and they added it after a few driver releases. They have also fixed documentation on request.


On the question of integrated versus discrete GPUs, what are the practical differences?

I am trying to learn about this but am having difficulty finding good explanations. I know the Wikipedia-level overview, but I need more detail.



