I’ve been thinking about this a lot recently: PCIe slots need to be deprecated. We should be using mini-SAS-style breakout cables and shrinking motherboards.

It’s particularly a problem with huge top-of-the-line GPUs like the 7900 XTX and 4090. They are so long and so heavy that they sag. To work around it, we add kickstands and brackets at the far end (opposite the external slot) to prop them up. Vertical mounts exist, but they rely on a ribbon riser as wide as the slot itself, which gets in the way of lots of other things.

Why aren’t we innovating here? Big GPUs are so big they often block all the other slots on the board anyway. Manufacturers are already shifting PCIe lanes to M.2 on platforms without many lanes to spare. The slots need to go, or remain only for legacy use.

It’ll help with things like this too. The 4060 uses a full x16 slot it doesn’t need, so those wasted lanes could instead feed an M.2 card. IMHO all of this should be modular, like a less polished USB-C/Thunderbolt interface. Mini-SAS comes to mind, but I know the server market is already pushing PCIe over breakout cables.
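To put rough numbers on that (the x8 electrical width is the 4060’s published spec; the bifurcation caveat is mine):

    # The RTX 4060 is electrically PCIe 4.0 x8 but sits in a physical
    # x16 slot. Quick tally of the stranded lanes:
    slot_lanes = 16
    card_lanes = 8                       # 4060's electrical width
    unused = slot_lanes - card_lanes     # 8 lanes doing nothing
    nvme_width = 4                       # a typical M.2 NVMe drive is x4
    # Reclaiming them assumes the slot can be bifurcated, which most
    # consumer boards don't allow freely.
    print(f"{unused} stranded lanes = {unused // nvme_width} extra x4 M.2 drives")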

ATX feels so antiquated as a form factor right now.




Not my field, but high-speed (high-frequency) data transmission over cables has to overcome several factors: interference from the current and voltage of surrounding wires, coupled in through inductance, and signal degradation from the capacitance, resistance and other properties of the wire itself. With today's PCIe generations we are talking about signaling rates of 8 and 16 GT/s, and 32 GT/s for the next generation.
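For a rough sense of scale, a quick Python sketch using the published per-lane rates (the Nyquist figure assumes NRZ signaling, one bit per half-cycle):

    # Per-lane throughput and fundamental (Nyquist) frequency for recent
    # PCIe generations. Transfer rates and the 128b/130b encoding are
    # spec values; the rest is arithmetic.
    gens = {"3.0": 8, "4.0": 16, "5.0": 32}   # GT/s per lane
    payload, total = 128, 130                 # 128b/130b line encoding
    for gen, gt in gens.items():
        goodput = gt * payload / total        # usable Gbit/s per lane
        nyquist = gt / 2                      # GHz fundamental under NRZ
        print(f"PCIe {gen}: {goodput:.1f} Gbit/s per lane, ~{nyquist:.0f} GHz fundamental")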

When motherboards are designed, the interference and signal integrity between traces/lanes needs to be calculated and balanced to avoid those issues, but I'm not sure that could be done effectively with flexible electrical cables at PCIe frequencies.

But let's say interference and signal integrity were under control. To cut down the number of wires the PCIe lanes would need (to avoid something as wide as a classic IDE hard drive ribbon, or equally rigid), I suspect you would have to serialize the parallel PCIe lanes onto fewer wires and raise the transmission rate accordingly, at wildly higher frequencies.
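The scaling there is easy to sketch (pair counts are hypothetical, just to show how the per-wire rate climbs):

    # Collapse a PCIe 4.0 x16 link (16 GT/s per lane) onto fewer
    # differential pairs and see what each remaining pair must carry.
    lanes, per_lane_gt = 16, 16               # PCIe 4.0 x16
    aggregate = lanes * per_lane_gt           # 256 GT/s total
    for pairs in (16, 8, 4, 2):
        print(f"{pairs:2d} pairs -> {aggregate / pairs:5.0f} GT/s per pair")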

So as an alternative we would be talking about a hybrid of optical data cables plus electrical wires for clocking? But with the latency/synchronization introduced by the optical transceivers (which would need to run at higher frequencies than the PCIe bus itself), and each manufacturer sourcing better or worse transceivers from different brands, etc., I suspect such cables would introduce a new kind of issue invading the user forums (or maybe not, but I can smell it).

The thing is, the above might shrink the motherboard a bit, but not the volume occupied by the devices themselves (say, the GPU); it would only relocate them somewhere else. So although I do believe technology in general has been stagnant for a few years, in this case, IMHO, the GPUs are the problem: GPU manufacturers are not innovating at all, and that is why we have these bricks.


Back in the day, "InfiniBand" was going to be the successor to things like PCIe, such that you could network discrete components together and allow them to communicate and share their resources among multiple hosts.

InfiniBand never took off in that role, though it's still around as a high-speed networking interconnect (and sees active development in industries like HPC).


Close, things like PCI and AGP (i.e. fast/wide parallel busses), not PCIe.

Everyone in the late '90s recognized that parallel was maxed out and that GHz SerDes with embedded clock recovery, adaptive equalization, lane skew compensation, error detection, etc. was the future. Future I/O (IBM, HP, Compaq, 3Com, Cisco, ...) and NGIO (Sun, Dell, Intel, ...) were competing efforts that eventually merged and were then rebranded as InfiniBand. But IB had the usual design-by-committee disease as it tried to shoehorn in networking, I/O, and system-interconnect roles. Intel then bailed from the effort and serialized PCI instead. Intel tried to get back into that game in the 2010s with OmniPath, without success.


> Close, things like PCI and AGP (i.e. fast/wide parallel busses), not PCIe.

Thanks. Had a feeling I might have been wrong with the "PCIe" term specifically, but couldn't be bothered looking up the exact details. :)


Cases in the traditional flat "desktop" style get vertical GPU mounting for free. It's interesting that there's very little activity in that sector now, at least for full-size motherboard support, especially since we've decided we no longer need to design cases around drive bays.

I do think the PCIe-to-M.2 migration is getting a bit silly. My current mainboard has six M.2 slots with four different levels of feature support (one intended for the preinstalled Wi-Fi, one that can run as SATA or PCIe 3.0 x2, three PCIe 4.0 x4, and one 5.0 x4). Yet if I said "I want a SCSI or SAS card", no dice, because there are only two x1 slots, and most such cards seem to be x4.
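A quick tally of that board (electrical widths as described; the x1 width for the Wi-Fi slot is my assumption):

    # Lanes routed to M.2 sockets vs. open expansion slots on the board
    # described above. Counts come from that description, not from any
    # specific chipset datasheet.
    m2_lanes = [1, 2, 4, 4, 4, 4]   # Wi-Fi (assumed x1), SATA/3.0 x2, three 4.0 x4, one 5.0 x4
    open_slot_lanes = [1, 1]        # the two x1 slots free for add-in cards
    print("lanes on M.2 sockets:", sum(m2_lanes))         # 19
    print("lanes on open slots :", sum(open_slot_lanes))  # 2
    # A SAS HBA typically wants x4 or wider, so it physically fits
    # nowhere, even though the board routes plenty of lanes.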

The four-slot GPU problem might be more manageable if GPUs with AIO liquid coolers included became more of a norm. That would at least move the majority of the cooler's bulk away from the slot area. But that's usually reserved for ultra-expensive enthusiast cards.


I would like to see mini-SAS as well, but simply for a bit more flexibility in case design, and for the option of separating heat-generating components instead of putting everything in the same box.

But the issue with GPU sag is, AFAIK, a different one. JayzTwoCents had a pretty good video a while ago demonstrating how to fix sag at the case itself, without one of the little stands. From personal experience I can say the Gainward Phantom 4090 doesn’t sag in a Fractal Torrent, for example.


It doesn’t work on the huge cards. Your PCB will bend. Ask me how I know.

Maybe if you’ve got a really great/thick backplate, but not all cards do. It’s too much stress regardless.



