#1 expected ASM4242 mux with 2x DP for APUs -- It'd be nice if these pins could export additional PCIe lanes for non-APU systems.
That PCIe 4.0 x4 uplink is expected to service up to...
* 4.0 x4 (bridged daisy chain)
* 4.0 x4 NVMe
* 4x SATA 6 Gbit/s
* USB 3.2 20 Gbit/s
* 4x USB 3.2 10 Gbit/s
* 4x USB 2.0

plus, on the daisy-chained second chip:

* 4.0 x4 (daisy to cards)
* 4.0 x4 NVMe
* 3.0 x1 2.5 Gbit/s Ethernet
* 3.0 x1 + USB 2.0 WiFi
* 2x SATA 6 Gbit/s
* USB 3.2 20 Gbit/s
* 4x USB 3.2 10 Gbit/s
* 4x USB 2.0
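To put rough numbers on how oversubscribed that uplink is, here is a back-of-envelope sketch. All device rates are approximate usable figures I'm assuming for illustration, not vendor specs:

```python
# Back-of-envelope oversubscription check for the chipset's PCIe 4.0 x4
# uplink vs the downstream devices listed above. All rates are rough
# usable figures (GB/s) assumed for illustration.
lane_gbps = 16e9 * (128 / 130) / 8 / 1e9   # PCIe 4.0: ~1.97 GB/s/lane after 128b/130b encoding

uplink = 4 * lane_gbps                     # the x4 link back to the CPU

downstream = {
    "4.0 x4 daisy link":  4 * lane_gbps,
    "4.0 x4 NVMe":        4 * lane_gbps,
    "4x SATA 6Gb/s":      4 * 0.6,         # ~600 MB/s each
    "USB 3.2 20Gb/s":     2.4,
    "4x USB 3.2 10Gb/s":  4 * 1.2,
    "4x USB 2.0":         4 * 0.06,
}
total = sum(downstream.values())
print(f"uplink  ~ {uplink:.1f} GB/s")
print(f"devices ~ {total:.1f} GB/s ({total / uplink:.1f}x oversubscribed)")
```

Roughly a 3x oversubscription on paper, which is of course fine as long as the devices don't all burst at once.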
__Ideally__ they'd have done something more like a PCIe 5.0 x4 link for the secondary NVMe drives and downstream system ports to share, along with a PCIe 5.0 x2 link for the other existing ports on the two chips.
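For a sense of what that suggestion buys: per-lane rates below are derived from the 16 GT/s and 32 GT/s signaling rates with 128b/130b encoding, purely for illustration:

```python
# Quick comparison of the shipped Gen4 x4 uplink against the suggested
# Gen5 x4 + x2 arrangement. Figures are illustrative approximations.
lane = {4: 16e9 * 128 / 130 / 8 / 1e9,    # ~1.97 GB/s per PCIe 4.0 lane
        5: 32e9 * 128 / 130 / 8 / 1e9}    # ~3.94 GB/s per PCIe 5.0 lane

print(f"PCIe 4.0 x4 uplink:     {4 * lane[4]:.1f} GB/s")
print(f"PCIe 5.0 x4 + x2 links: {(4 + 2) * lane[5]:.1f} GB/s aggregate")
```

Three times the aggregate uplink bandwidth, which would let the two chips' downstream ports burst much harder before hitting the ceiling.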
The sort of person that buys the _high end_ prosumer boards DOES use things in bursts often enough for it to be a consideration.
In the end, the chipset devices AMD has sourced from third parties are good as a general tool for manufacturers... My issue is that I'd _really_ like their high-end chips to support more PCIe 5.0 x4 lanes, possibly used in aggregation.
Imagine if they instead supported 5.0 x16 (or x8+x8) AND a second 5.0 x16 (2x8 || x8+x4+x4 || 4x4). That'd allow for either a second full x16 slot for future mass-IO devices (be it a GPU or an NVMe riser card) or full-sized ATX boards with a good number of x4 slots.
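The bifurcation modes above can be sketched as a quick sanity check; the mode names here are my own labels, and each mode is just a tuple of link widths that must add up to the 16 physical lanes:

```python
# Hypothetical sketch of the x16 bifurcation modes suggested above: each
# mode is a tuple of link widths that must sum to the 16 physical lanes.
modes = {
    "single x16":  (16,),
    "dual x8":     (8, 8),
    "x8 + 2x x4":  (8, 4, 4),
    "quad x4":     (4, 4, 4, 4),
}
for name, widths in modes.items():
    assert sum(widths) == 16, f"{name} doesn't fill the slot"
    print(f"{name:11s} -> {' + '.join(f'x{w}' for w in widths)}")
```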
Maybe that is what a lower core count, higher MHz, Threadripper socket was really for.
> Maybe that is what a lower core count, higher MHz, Threadripper socket was really for.
It is. Historically the HEDT sockets have often overlapped with the consumer socket in terms of core count - this was true of X58, X79, and sTR4, and X99 was so cheap that de facto it did overlap anyway (the 5820K basically cost the same as a 4790K, and motherboard costs were in line with what we saw from X570 boards until the B550 line settled prices down a bit).
That’s fine because HEDT is not about core count, it’s about memory and PCIe lanes. The current offerings leave a void for "I want a big platform but I don't need >24 cores and I'm still somewhat price-sensitive", the classic "workstation/prosumer" tier that used to be serviced by things like the 5820K/3930K/1900X/1920X.
Potentially you could get to a similar place with a bunch of PCIe 5.0 slots attached to PCIe switch chips - this style of board used to be called a “supercarrier” by one brand. Unfortunately it pretty much died out in the wake of SLI and CrossFire becoming niche and then extinct. And the current crop of Intel and AMD boards only offers PCIe 5 on the first slot anyway, so that isn’t quite as possible as you’d think at first glance.
It’s really a shame the way AMD hollowed out the HEDT segment and cranked prices. A 3960X is four 3600s on a HEDT package with a single bigger IO die instead of four little ones; it’s a very cheap chip to produce, and it really belongs in the $700-800 price range rather than at $1600+.
(And the HEDT boards are also quite expensive for what they are - the ROMED8-2T gets you seven full-bandwidth PCIe 4.0 x16 slots, power delivery for 280W TDP CPUs, dual 10GbE, and a BMC, for $600. Look at what a $1000 sTRX40 board buys you and just laugh; "gamer" boards are ripping you off.)
Again, the precedent is the 5820K and the TR1900 series where these savings were passed on to the consumer - it is historically abnormal for HEDT to be such a huge reach compared to desktop chips, but AMD isn’t interested in pursuing low-end (actually they aren’t even interested in releasing Zen3 HEDT chips at all outside WRX80) and Intel has abandoned the segment entirely for now. Maybe Alder Lake-X will change the situation and force AMD to pay a little more attention, just as it has forced some of the ridiculous 5000 series price increases to be backed down.
Right now it is actually worth a strong look at Epyc server boards like ROMED8-2T and chips like the 7402P because if you don’t need the absolute clock rate of Threadripper the Epyc chips are often cheaper per-core while offering a better PCIe and memory capability. That’s completely opposite from how HEDT has always worked but AMD is pushing hard in the server segment and sandbagging in the HEDT segment and that flips the math in a lot of homelab or workstation situations.
(note: the "PCIe 5 only on the first slot for X670E, no PCIe 5 for X670" rumor appears to have been false; per the Computex presentation it's "X670E is PCIe 5 on everything, X670 is PCIe 5 on the first slot, B650 is PCIe 5 only on storage".)
To justify worrying about available bandwidth, you shouldn't be listing available ports; instead, list specific devices along with a use case that would actually have them transferring data simultaneously, in the same direction, at speeds that would make the x4 uplink problematic.
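In that spirit, here is one hypothetical workload of the kind being asked for, with several devices pushing data through the uplink in the same direction at once. The rates are rough sustained figures I'm assuming for the sake of the example:

```python
# A hypothetical concrete workload: several devices pushing data through
# the chipset's x4 uplink in the same direction at once. Rates are rough
# sustained figures, assumed here for illustration.
uplink = 7.9  # approximate usable GB/s of a PCIe 4.0 x4 link

workload = {
    "Gen4 NVMe sequential read": 7.0,
    "USB 20Gb/s capture device": 2.0,
    "2.5GbE network ingest":     0.3,
}
demand = sum(workload.values())
verdict = "bottlenecked" if demand > uplink else "fine"
print(f"demand {demand:.1f} GB/s vs uplink {uplink:.1f} GB/s -> {verdict}")
```

Even this needs a fast Gen4 SSD running flat out before the uplink becomes the limit, which is the point: ports alone don't demonstrate a bottleneck, workloads do.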
> #1 expected ASM4242 mux with 2x DP for APUs -- It'd be nice if these pins could export additional PCIe lanes for non-APU systems.
Keynote slides that just came out show RDNA2 in the Zen4 I/O die, so it looks like there won't be non-APU systems. I also think pins that carry PCIe sometimes and DisplayPort other times, depending on the CPU you install, would make things more confusing, IMHO.
Yes, having seen those this morning: if even the higher-end CPUs all come with at least an anemic framebuffer GPU, a server could use the x8+x8 links intended for a desktop's GPU as IO expansion slots. That makes the segment more palatable as 'could be pressed into service as a server', for both new and hand-me-down builds.