MSI reveals first USB4 expansion card, delivering 100W through USB-C (techspot.com)
139 points by CharlesW on June 4, 2023 | 135 comments



Why the DisplayPort outputs? I could understand if the card took an existing video signal and spliced it into the USB-C ports via alt-mode, but just outputting it in a separate socket doesn't seem very useful.

Also, where is the video signal coming from? I assume the card doesn't have a GPU, but I don't see any internal ports either.

Or did the article get it wrong and the DP sockets are really inputs that are then spliced into the USB-C outputs?


I'm amazed the report could get it so utterly wrong, considering it is literally spelled out in the only photo of the device they have. The previous MSI TB3 add-in card had exactly the same port layout, too.

Ah, and the source article https://www.techpowerup.com/309532/msi-first-motherboard-mak... got it right.


Thunderbolt add-in cards have generally had DisplayPort inputs on the back to feed into the Thunderbolt ports. It's a hack, but a side effect of the adapter not having a GPU; this becomes the most convenient solution (especially as GPUs put their outputs in a similar location).


I always loved having a cable coming out of the GPU and connecting to another card. There have been many versions of this throughout my use of various expansion cards. This is excluding the SLI type of internal connections.


This reminded me of my first 3D accelerator card, the Diamond Monster 3D. It was exactly like that: a 3D-only card that took over the 2D signal of your default video card when you activated it, by daisy-chaining the 2D video card to the 3D one via a short VGA cable between both :-) Good times, when 640x480x16bpp was enough to make me wow at something like GLQuake.


I also remember my AVerMedia TV card had no way to output audio through the computer's speakers, so you'd have to take its audio out, plug it into your microphone input, and enable playback of that input.


To this day I'm still not sure how the Voodoo 2 actually worked with that analog link.


They used chroma key, aka green screen. The 2D card would be told to paint an area a solid color, and the Voodoo would overwrite that section.


Pass the system’s usual video signal through the voodoo unmodified when not generating 3D graphics.

Replace the signal entirely with the voodoo’s own signal when generating 3D graphics.


Wait, what's the use case of "connect monitor to some usb-c/thunderbolt thingy instead of just directly to the GPU next to it"?


You only need one (possibly long) cable to connect your workspace to your PC. This might be useful in cases where the PC is in a different room/far away (when using an optical USB C cable this allows distances of at least 50m), when your monitor is your USB hub or if you're simply a clean desk fetishist.


I have a similar setup, and the use case is that it’s a single USB C swap for my entire workstation setup to go from my work laptop to my gaming PC. Just a convenience thing.


Kind of a shame, as DisplayPort optionally (in theory; pretty much never in practice) supports USB over the DP cable.

Could just be one DP cable to the GPU without extra fuss, like a decade ago...


The photograph of the card in the article shows the DisplayPort sockets labeled as _in_, fwiw.


Ah, I see it now. That makes more sense.


USB4 mandates that every port have DisplayPort capability. I think it's quite likely that what we see here is indeed DisplayPort input.


ASUS has the same thing (since last year at least), so dunno why the propaganda https://www.asus.com/motherboards-components/motherboards/ac...


Just FYI for anyone tempted to buy this card: it does not work. There are multiple revisions and none of them work right. It stands a 50/50 chance of becoming terminally confused after a peer disconnect, after which it will not work and will hang the BIOS on reboot, only fixable by unplugging the machine. Absolute garbage. If you want TB4 I strongly recommend a machine with TB4 ports integrated, preferably from a major brand like Intel or Apple.


That's literally the same experience I have with the MSI TB3 card. You were talking about the ASUS one, I guess?

The level of BIOS bugginess on recent motherboards is reaching astronomical levels. The MSI motherboard I was trying that on will literally _duplicate_ the entire ACPI DSDT whenever you enable TBT (Linux shows a hundred warnings when booting up, which MSI claims is fine)...


Yes. I never tried the MSI one. The add-in cards are an opportunity for some chronic underperformer from a 3rd-tier BIOS developer team to put bad code between you and an Intel thunderbolt chip. Just say no.


The fractal brokenness of the Thunderbolt ecosystem in Windows is... something. I was hoping that USB4 would at least make it a little nicer by confining failure modes to PCIe not working while the USB tree worked as normal since the USB tree was native, but my Asus motherboard with built-in USB4 not infrequently manages to get itself into a state where DisplayPort and downstream PCIe devices work while USB devices plugged into a USB4 hub don't.

(also an older Asus with its TB3 expansion card only worked at all in Windows if you disabled the "enable Windows support" option in the BIOS, which is an interesting choice)


It’s spectacular and super weirdly bad. So many bugs and issues.

Oddly my OWC thunderbolt dock/switch somehow manages to provide its gigabit NIC to my Surface Book 3 over an _active_ thunderbolt cable. But the Surface Book doesn't have thunderbolt support at all! So somehow the USB fallback and the display attached via a USB-C to DisplayPort cable work. If I plug a real thunderbolt device in, it can see them and tries to enumerate them, which of course fails and throws up a malfunction message, stating that the thunderbolt device does not work correctly. Then it bluescreens…


it's too bad AMD doesn't have first class TB4 support yet, at least not on the consumer hardware i've been looking at


> Just FYI for anyone tempted to buy this card: it does not work. There are multiple revisions and none of them work right.

Ah, the ASMedia experience! Glad AMD picked such an experienced and well-regarded company to do their chipsets /s

edit: Okay, the linked part actually uses an Intel chip, though the point still stands. ASMedia has so many other chips and bridges (including the mentioned AMD chipsets) and they're all pretty notorious for weird "huh that was supposed to work" kind of problems. Problems that generally don't happen on Intel platforms.


>USB4 uses USB-C

That's a relief I guess, I was afraid we were getting a new connector again. How do normal people keep all this USB crap straight? I consider myself something of a nerd but I find all of this cable/port diversity utterly bewildering.


USB is 27 years old and 99% of USB devices I've seen used one of just three different ports, two of which are still in wide use. I'd say the fact that I can plug a 15-year-old hard drive into a brand new PC and it'll just work without any adapter is amazing. Bewildering diversity? Where?


Within about 5 meters of me I have devices with sockets or cables for USB 2 Type A and Type B, USB 2 Mini (not sure if Type A, B, or AB...), USB 2 Micro (Type AB?), USB 3 Type A, USB 3 Micro (Type B? not sure...) and finally USB 3 Type C.

That's seven at least by my count. I probably have some USB 1.1 Type A peripherals in my closet but those might be USB 2, they look the same so I'm not sure. The only devices I counted were: phone, kindle, laptop, PC, mouse/keyboard, printer, portable HDD, and a GPS nav unit. Maybe I'm weird for still using the last one, but otherwise I don't think any of this is particularly unusual.


All the Type A ones are compatible though? I think most people consider that just one.

There have been several common B (B, miniB, microB, appleB) but it was mostly all microB for quite a few years before C happened.


USB 3 type A has extra pins. It is backwards compatible with pre-3, but you don't get the USB 3 features.


I think you've confused "first USB4 expansion card" with "first device that has a USB4 socket." Laptops have had USB4 for a couple of years now.


USB4's been out for quite some time; USB3 is the USB you're used to / that we cut our teeth on, if you're over 25.

There's a vocal subset that argues *USB4 itself is confusing*, but 95% of the time they're referring to either:

A) power (the crappy $5 Amazon cable from 3 years ago will not carry 100W for your M2 Pro Apple Silicon MacBook Pro)

B) crappy 3rd party cables you can't trust because you can't trust crappy 3rd party cables, ever


For what it's worth, I've tried to find a Thunderbolt/USB4 C-to-C cable that can also carry 100W. The only cables I've found (that weren't obviously lying in the ad) were Apple's cables.

Even other laptop manufacturers have cables for charging and cables for high speed data transfer. Samsung's official cables seem to do either USB3 or high-speed charging, but not both at the same time. Lenovo has a Thunderbolt 4 cable (with no details on the power capabilities) or a USB 3.2 2x2 cable that can carry up to 100W. Dell Europe doesn't seem to be selling C-to-C cables. HP seems to sell a competent cable for one of their docks, but that's out of stock. Asus' cable is USB2 only. Acer doesn't seem to sell any cables as far as I can tell, except for a USB 3.1 cable for Chromebooks.

If you want a fully-featured USB4 cable, your only options are buying a cable from Apple or going with third parties, and with the prevalent scams on online stores and the risk of burning your house down if the cable turns out not to be up to spec, getting good cables can be a real challenge.

Apple sells their cables for a ridiculous price (€150) but I wouldn't expect the average consumer to know where to reliably spend their €30-€50 for a charge cable. Device manufacturers really need to step up and sell some kind of certified cable that doesn't cut any corners.


> [...] either USB3 or high-speed charging, but not both at the same time [...]

That seems implausible/unlikely to be the cable's fault: Since different wire pairs are used for power supply and data, why would the two be mutually exclusive?

> I wouldn't expect the average consumer to know where to reliably spend their €30-€50 for a charge cable

The situation was tricky for a while, but these days, even on Amazon.com's search results (which are frequently swamped with spam/uncertified products), searching for "Thunderbolt 4 cable" yields products by trustworthy-looking vendors supporting 100W charging in the first 5 results.


Caldigit sells them https://www.caldigit.com/thunderbolt-4-usb-4-cable/ including 2m active ones.


I don’t think I’ve ever seen a 40gbps capable cable that didn’t support 100W, and Apple isn’t the only one selling such cables with certification. Where are you finding these odd cables?

5/10/20gbps cables, those I’ve seen limited to 60W.


I'll believe it as soon as the manufacturer advertises it. If they have a cable that says "100W" and a different cable that says "high data transfer speed" then I can only assume the high speed one wasn't made to carry that much power.

I know it's only a few amps but there's still a real risk of melting plastic and starting fires (and this has happened in the past).


I won't say that USB4 is confusing, because I admit I don't understand it; I haven't really bothered to try, aside from noticing some motherboards advertising "support for special USB4 module socket, coming soon."

The USB forum has completely lost any sort of messaging muscle it once had.

USB3 was somewhat exciting in its original 5Gbps form-- it made it feasible to put things like spinning rust external HDDs on there with no compromises, and in effect strangled eSATA and the last places you'd see external SCSI ports.

Since then, all they've been able to deliver is an alphabet soup. First, nothing was special because everything keeps getting upgraded-- that "3.0" device I was going to buy became "3.2" without having any actual changes made. There's an ever growing set of laundry-label style icons for power supply and DisplayPort compatibility, precision-engineered to be either ignored or misused by lowest-bid vendors.

USB 4.0 could really have benefitted from a strong message-- something like "Here's ONE big and obvious 4.0 badge and it means any cable with it will support ALL the features promised in this slide deck."


Neither will a good cable. You need to explicitly go to 3rd party test sites to buy a fucking cable that does what the protocol says it should. It's a fucking nightmare


> How do normal people keep all this USB crap straight?

We just have a drawer full of various USB things. If something doesn't fit/work, just try another one. If none work, or the drawer is running low, buy some more. You might have to buy multiple times if the new ones don't work.

At least, this is what works for me.


When you plug a monitor into this, where does the framebuffer live?

Is it in GPU RAM of another GPU in your system, with pixel data being DMA'ed over the PCIE bus? Does that mean my screen goes blank if I ever get contention on the bus?

Could I put loads of these cards into a machine and run 20 independent displays, all with one GPU?


The two DisplayPorts on the card are inputs, you run a loopback cable (or two) from the GPU to the USB4 card and it muxes the video into one (or both) of the USB4 outputs.

Loopback cables are a bit clunky, but it does mean there's no pressure on the PCI-E bus from moving the pixels from the GPU to the USB4 card.


I bet if they put the displayport inputs on the inside with an optional passthru bracket in the box to get them out of the case as needed, we'd start to see GPUs offered with internal displayport outputs too.

Basically the CD-ROM analog audio cable all over again.


That's kinda lame... I was hoping there was software support for the GPU to share the framebuffer RAM with the USB4 controller, and there being some way for the OS to prioritize some fixed bandwidth to make sure it doesn't get starved.

Apple kinda has this in their M1/M2 SoC's where the GPU is entirely separate from the stream-data-from-ram-to-the-display logic.


Sounds like something that would be a pain to get broad OS/driver support for – and all just for the sake of avoiding a small external loopback cable?


Well it also means you can have as many displays as you like...

And put displays onto hubs and cable extenders.


What's the use case for this kind of setup?


Connecting a USB-C-only external monitor / VR goggles, so you need to have video signal and power on one USB-C output.


> Does that mean my screen goes blank if I ever get contention on the bus?

The monitor has a framebuffer of its own.

Otherwise how could it display the OSD (On Screen Display)?


It's not really a framebuffer, but a tiny bit of RAM that just gets scanned out on the applicable parts of lines when the OSD is active --- certainly not the whole screen.


I just want a graphics card with USB4 on it which hopefully(!) would make it more useful and reliable than these addin cards.


GPUs are already filled with ports and there is no demand for any manufacturer to put a USB4 port on them. They also saturate their PCI-E slots, so trying to shove more data through them isn't tempting. I doubt you will see such a thing unless it is made specifically for that purpose.


They’re not hugely commonplace yet, but video cards with USB-C ports are available without being super specialized equipment.

Nvidia had some on the 2000 series GPUs but dropped them with the 3000 series. This was part of the VirtualLink system which failed, but they’re otherwise fully functional.

Some of AMD’s newest cards have USB-C ports on them now: https://www.amd.com/en/products/graphics/amd-radeon-rx-7900x...


Cool. I thought they were dropped completely after the Turing series; I didn't know AMD picked them up.


> GPUs are already filled with ports and there is no demand for any manufacturer to put a USB 4 port on it.

USB-C ports on graphics cards are already quite readily available.

> They also saturate their PCI-E slots so trying to shove more data through it isn't tempting.

No they don’t. Most cards and use scenarios don’t even saturate x8 PCIE4 let alone a full x16 PCIE5 slot.


I'll say this once and I'll say it again: mixing data + high power is never a good idea.


It's not really that high of a power rating though

48V is ubiquitous thanks to PoE et al

And with 5A the gauge of the wire is important but it's not going to be running hot to the touch

I doubt you'll see it go much higher though, especially the current


Why not? It doesn't seem to be starting fires (at least not enough to make the news) and it's digital data, with IIRC pretty good forward error correction. Interference doesn't seem to be an issue; it all seems to work fine.

The only issue I see would be counterfeit cables, but that's the same issue you get with any low-voltage system that allows separate cables. Even 120VAC has the same problem: people burn down houses with space heaters and extension cords incessantly, because we are apparently idiots who think it's fine to not have fuses in the cords.


Not an EE but 100w, that gives me The Fear. It reeks of trouble. Someone please set my mind to rest.


Many Watts means some combination of Amps and Volts - power is Amps times Volts.

The main risks from each are different though. Simplified explanation for the non-physicist:

Lots of Amps make wires get hot, make connections melt, etc. In general, thin wires (eg headphone wires) are good for 1 or 2 amps safely, and chunky wires (eg. AC cords) are good for 10 or 20 amps safely. The exact amount depends on how much cooling they get - which is why you should never put blankets over power cords. If you screw this up, the outcome is probably a fire, which maybe kills you.

Lots of volts causes electrocution. That's because your skin is pretty decent at blocking electricity, but when you have enough volts, some can sneak through your skin, freezing up your muscles and stopping your heart, which probably kills you.

So - amps and volts have totally different mechanisms of death. USB-C @ 100W is 20 volts at 4.5 amps. The voltage is well below dangerous levels, and the current is pretty safe in the cables specced, but I can totally imagine a few fires.

It's worth noting that about 10x as many people die from amps (electrical fires) as from volts (direct electrocution).


By the way... The USB-C standard goes up to 240 Watts.

That's 48 volts (fairly safe IMO) at 5 amps (only marginally safe IMO, on that connector design).

There are laws in many countries capping voltages, but not currents - which is why the standard is pushing the safety envelope there!

The standard has no way to detect damaged cables (ie. a baby chewing through the cable), nor connections getting hot (ie. an old dirty connection heating up and catching fire). But I suspect a future revision will allow both more volts and more amps, and will add those safety features - I'm hoping for the next version to be 500 volts and 8 amps, allowing it to replace all household outlets.


> The standard has no way to detect damaged cables

Of course it does. All it needs is to ask the other device what voltage it sees on its side, calculate the resistance off that, and trip or drop to a lower power mode if it deems that there is too much power being lost in the cable. Still not perfectly safe, as, say, 3W lost over the whole cable is far less dangerous than that 3W concentrated on one dodgy connection, but still.
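A minimal sketch of that check, assuming the sink can report its measured VBUS back to the source (the function name and the 2 W threshold are made up for illustration; this is not something USB-PD actually specifies):

    def check_cable_loss(v_source, v_sink_reported, current_a, max_loss_w=2.0):
        # Estimate power lost in the cable from the voltage drop between what
        # the source drives and what the sink says it measures.
        v_drop = v_source - v_sink_reported      # volts lost across the cable
        cable_ohms = v_drop / current_a          # from V = I * R
        loss_w = v_drop * current_a              # watts dissipated in the cable
        if loss_w > max_loss_w:
            return "trip (or renegotiate a lower-power contract)"
        return "ok: %.2f W lost, ~%.0f milliohms" % (loss_w, cable_ohms * 1000)

    # A supposedly 5 A cable dropping 0.8 V at 5 A is burning 4 W somewhere:
    print(check_cable_loss(20.0, 19.2, 5.0))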


What part of the standard mandates, or even optionally supports that?

Standard USB-PD only requires that you negotiate a voltage and current suitable for both sides (among a preset list). It essentially goes open loop after that; you can't exceed the current limit set, but beyond potentially tripping the PD controller (on the sink side) if the voltage drops too low, there's not really any protection here. It's fully on the PD sink to implement (or, not implement) any sort of protections like that.
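For context, the negotiation being described boils down to roughly this toy model, with tuples standing in for fixed PDOs (this is not the real PD message format, just the shape of it):

    # Source advertises fixed (volts, max_amps) options; the sink requests one,
    # and after that the source is essentially open loop up to that current.
    source_pdos = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 5.0)]

    def pick_contract(pdos, needed_watts):
        for volts, amps in pdos:
            if volts * amps >= needed_watts:
                return volts, min(amps, needed_watts / volts)
        return 5.0, 0.9   # nothing fits: fall back to default USB power

    print(pick_contract(source_pdos, needed_watts=100))   # -> (20.0, 5.0)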


> What part of the standard mandates, or even optionally supports that?

None, which is the fucking problem. A cheap cable can just lie about its current capacity.


I think the parent was saying that it could be done, not that it is being done.


Hopefully they come with auto-update firmware where if I plug a gen 15 cable into a gen 14 device and a gen 12 local power delivery unit the cpu in the cable will update the key in the PDU and get key material from the device to ensure that everyone's up-to-date on their licenses.


That would enable a new generation of USB worms.


"It would unleash an entire new generation of recurring revenue generation opportunities as innovators could develop exciting new technologies and industry leadership while protecting intellectual property rights holders"

"blah blah blah worm security blah blah"

-- same sentence different language.

(and -- I'm entirely trying to be tongue-in-cheek -- the notion of little dynamically updated microcontrollers everywhere in my house, getting bricked or asking me for a micropayment to turn a light on is a pkd story come to life)


Eloquent that, thanks.


> USB-C @ 100W is 20 volts at 4.5 amps

Clearly one of those numbers is wrong



I got a pinprick sized 3rd degree burn from the end of a micro usb charging cable. It was more startling than dangerous, but I now have a little more respect for low powered devices, especially when plugged into mains.


In what sense? Putting that much power down a USB cable? USB-PD keeps the potentially cable-melting amperage capped at 3A or 5A (the latter only if the cable is explicitly marked as 5A capable) and moves more power by increasing the not-cable-melting voltage instead, so 100W is 5A at 20V. There's already laptops which charge at 140W over USB-C, it's fine.


It's a lot of energy. And close to signalling wires. Controlled by computers (What you describe is done by a CPU, not some simple foolproof system, I guess?) which can go wrong. In a small plug that can get yanked out easily and damaged.

That kind of thing. Am I being paranoid? It just smells like a house fire in the offing.


There's a lot of moving parts yes, but in practice it's proven to be pretty safe. Issues usually manifest as charging being too slow, rather than going too fast and causing a fire.

Whenever you charge a device you're already trusting a complex computer-controlled system not to turn the lithium ion pack into an incendiary device, trusting a computer not to overload a copper cable is small stakes relatively speaking.


In practice we don't have all that many devices pushing it that hard that are not "a vendor device connected to a vendor power supply".

I expect there will be some surprises with cheap cables pretending they can handle full power.


Never mind “pretending they can handle full power”. Remember this story?

https://arstechnica.com/gadgets/2016/02/google-engineer-find...


Unless the other end lies about what voltage they're getting, a cable can't lie about its ability to carry current.


Of course it can. It is not tested by any of the devices; it is reported by the cable itself. A cheap cable maker can just program the cable to pretend to be a more powerful one while not having wiring up to spec:

> Sources Shall detect the type of Attached cable and limit the Capabilities they offer based on the current carrying capability of the cable determined by the Cable capabilities determined using the Discover Identity Command (see Section 6.4.4.2) sent using SOP’ Communication (see Section 2.5) to the Cable Plug. The Cable VDO returned as part of the Discover Identity Command details the maximum current and voltage values that Shall be negotiated for a given cable as part of an Explicit Contract.

Please don't talk about stuff you have no idea about.
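In code terms, the source only clamps its offer to whatever number the cable's e-marker hands back; a sketch (the 3 A fallback for unmarked cables is the usual assumption, the rest is illustrative):

    def max_offerable_amps(source_limit_a, cable_reported_a=3.0):
        # The source trusts the Cable VDO. If the e-marker lies (claims 5 A
        # over wiring only good for 3 A), nothing here can detect it.
        return min(source_limit_a, cable_reported_a)

    print(max_offerable_amps(5.0, 5.0))   # honest 5 A e-marked cable -> 5.0
    print(max_offerable_amps(5.0))        # unmarked cable -> clamped to 3.0
    print(max_offerable_amps(5.0, 5.0))   # a 3 A cable lying about 5 A looks identical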


higher voltage makes it a lot more sane, imo. if anything increasing the voltage a long time ago would've probably helped us avoid the plague of all these shitty under spec USB cables. the cynic in me says they would've just shifted to even thinner gauge wires, but hey, a guy can dream, right?


Given that Apple users have been charging their laptops this way [1] for more than five years, maybe you are :)

[1] To be fair, it's more like 60-80 Watt for most models, but these days it can be up to 140 Watt, as far as I know.


No worse than an Apple power cable, and have those ever gone up in flames?


yep, exactly. there's voltage negotiation up to 48v going on. though in practice probably not quite that high yet


USB power delivery works very reliably. Much better than those 10W USB-A chargers/devices operating outside the USB spec and sometimes shutting down your old laptop when you connect them, because of too-high power consumption.


The size of the pins they're running this over is what makes it scary.

I've had high-end devices (Macbook Air I'm typing this on) give me unreliable connections for basic USB 1.1, and now they're planning to run amps over similarly-sized pins?

My experience with USB-C on laptops has been nothing but terrible.


There's plenty of smaller pins that run MUCH higher amperage through them without an issue (M.2 sockets for NVMe drives come to mind). Granted, those are secured a bit better... but you have to trust the mechanical designers of this interface have actually thought things through here. It's robust.


M.2 is limited to 14W.


According to Wikipedia, each pin is only rated to 500mA. If you look at a pinout (depending on key), there's roughly 9 3.3V pins. 14W at 3.3V over 9 pins would be 470mA per pin.

It seems like comparing the amperage of M.2 to USB-C is indeed a little silly.


> now they're planning to run amps over similarly-sized pins?

Now? Between 2016-ish and the re-introduction of MagSafe, MacBooks have been using USB-C charging exclusively!


It's already ubiquitous; most new laptops have this as their primary power plug


Do laptop power supplies scare you?


I guess they don't, but here we're mixing a power supply with other things and I don't like the complexity. See my other answer above if that helps explain my concern.


GPUs are arguably much more complex and have been using much more than 100 W for a while now.


But they're not exactly cheap commodity items like USB cables.


Laptop plugs and wires are significantly beefier than average cable


Average USB cable? Not in my experience. And between my laptop power and USB cables, it's a laptop power cable that's the thinnest.


How can 100W be delivered through those tiny USB-C connector landings, especially since only a few of them are used for power?


It negotiates and then switches to 20V to deliver 100W, so it only needs to run 5A through the cables.

Wire gauge requirements are related to current, not power; that's why high-speed trains usually operate at 25000V or 50000V to allow their megawatts of power to be delivered through a ~1cm diameter overhead line.
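Back-of-the-envelope, with the round-trip cable resistance being an assumed figure purely for illustration:

    power_w, volts = 100, 20
    amps = power_w / volts                # 5 A through the cable
    cable_ohms = 0.1                      # assumed round-trip resistance
    loss_w = amps ** 2 * cable_ohms       # I^2 * R = 2.5 W lost in the cable
    # The same 100 W at 5 V would need 20 A and waste 20^2 * 0.1 = 40 W,
    # which is why PD steps the voltage up rather than the current.
    print(amps, loss_w)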

That said, I still hate USB-C as a power standard. I would have much preferred 2 connectors just supplying 20VDC, no questions asked, with a standard connector, just like 120VAC power works. Pick another connector for the USA low-voltage standard, none of this negotiation fuss, none of this complicated power circuitry, and much harder to get cables wrong. With the complexity of USB-C there are just way too many substandard cables flooding the market, and average Joes don't understand the difference between a 100W-capable cable and a 60W-capable cable and one that is charging-only, one that is USB 3.0, one that is USB 3.2, one that is USB 3.2 + 100W charging and all of that. People just buy "5-star" rated cables online that often don't meet the standard and it's a shitshow.


That would force all power adapters to support a fixed maximum current, which in many cases would mean either wasted materials (consider e.g. shipping a 100W adapter with a smartphone) or unsafe conditions (with a device attempting to pull 5A from an adapter supporting only much less than that).

I'm really happy with being able to plug my laptop into my tiny 5V, 2A power adapter and slowly charging it, yet being able to fast-charge it at home at 100W using the same cable on a larger power adapter.


> That would force all power adapters to support a fixed maximum current, which in many cases would mean either wasted materials (consider e.g. shipping a 100W adapter with a smartphone) or unsafe conditions (with a device attempting to pull 5A from an adapter supporting only much less than that).

No, it would not require any of that. It would only require the cable to signal its max current capacity.

So a cheap cable would allow for, say, 20V/0.5A (10W), while a decent one could signal 2A (40W) or the max.

The simplest way to signal it would be what previous USB buses used: a specified-value resistor on one of the pins.

More complex would be signalling the power supply's capability to the device on the other side, especially if you want dynamic power or a reversible charging direction. But USB-PD is generally horrifically complicated for what it does
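A sketch of what that resistor coding could look like on the device side; the resistor values and current tiers here are hypothetical, just to illustrate "one resistor value per current ceiling":

    # Hypothetical tiers: the cable presents one fixed resistor on a marker
    # pin; the device measures it and maps it to a current ceiling.
    TIERS = [(56, 0.5), (22, 1.5), (10, 3.0), (5.1, 5.0)]   # (kohm, max_amps)

    def advertised_amps(measured_kohm, tolerance=0.2):
        for nominal, amps in TIERS:
            if abs(measured_kohm - nominal) <= nominal * tolerance:
                return amps
        return 0.5   # unrecognized resistor: assume the weakest cable

    print(advertised_amps(22.3))   # -> 1.5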


> It would require only for cable to signal it's max current capacity.

As you say yourself a bit later, you'd need at least two "simple" resistors (one for the cable itself, one for the adapter) – otherwise it's only fixed-cable power supplies (and I really, really hate having to throw away an entire high-quality power supply just because the cable frayed at the laptop end).

> But USB-PD is generally horrifically complicated for what it does

Which parts would you suggest are safe to omit, in a world where you want to be able to charge both small accessories (Bluetooth headphones etc.), smartphones, and large laptops using a single connector?


> As you say yourself a bit later, you'd need at least two "simple" resistors (one for the cable itself, one for the adapter) – otherwise it's only fixed-cable power supplies (and I really, really hate having to throw away an entire high-quality power supply just because the cable frayed at the laptop end).

You could have the end device regulate itself by looking at the voltage drop, akin to MPPT. And note that simple (sub-10W) devices could opt to implement nothing whatsoever and just get the minimum 20V/0.5A, so small accessories would need no extra chips to handle it.

You really need communication only when you start needing features like switchable sink/source function or voluntarily lowering power usage.

>>But USB-PD is generally horrifically complicated for what it does

> Which parts would you suggest are safe to omit, in a world where you want to be able to charge both small accessories (Bluetooth headphones etc.), smartphones, and large laptops using a single connector?

Not talking about the featureset but about how hideously complex the implementation is. I remember some early adopters tried to implement it using a microcontroller; it ate like 30+kB of code, which was more than half of the total product's code.

The spec (3.0, rev 1.1) is 574 pages. No fucking wonder we have so many devices implementing it subtly wrong. Hell, they even decided "no, none of the existing serial protocols fits our super special use case", so they needed to invent that too. At least they didn't reinvent CRC32...


I don't understand the implementation, it's pretty nuts. And the multiple layers of optional features and fragmentation are pretty awful. I kinda wish they hadn't specified any power levels other than 5v and full programmable PPS....

I also wish they had implemented a dedicated solar profile to simplify solar generators, so a panel could say "I will act like a current source and do MPPT".

They probably could have done Dallas One Wire for the signalling and been just fine. It could also be how the cable chip communicates, and very small simple devices could use it without the full USB stack.

But, I can accept it just because of how well it works in practice as is.


> That would force all power adapters to support a fixed maximum current

No, we would just start to have 20VDC sockets on all the walls in the world in addition to 120VAC/240VAC, and people would stop needing to lug around power adapters.

Yes there would be a maximum current per socket, maybe 5A or 10A, that's not much different from the 15A limit on most AC sockets.

The conversion to 20VDC could be wired into buildings as standard.

Before then people can lug around a 20VDC adapter that supplies enough current for their own devices. If they have a laptop they might need a 10A adapter, if it's just a phone they could get by with 1A.


So you're suggesting we re-wire every single building with these "simple" 20V DC plugs? We'd better get both the voltage and maximum current exactly right, because the last time we picked standards for this, they lasted for several centuries!

And if we do get it wrong, do we change the mechanical plug shape if we decide that no, actually 40V (or 15V, or 32.945V) would have been better so that no unsafe connections become possible?


AC outlets have been the same on a per-country basis for decades and will be for several more decades.

USB, on the other hand, is "hey I have a new plug" every 5 years. We'll have USB-D in 2030 and everyone will have to buy new cables and adapters.

Anyone who needs 40V should just design it to use 120V/240V instead.


Sure, but what makes 20V the perfect voltage for almost everything, so important that we should go through the effort of rewiring entire buildings?

It's way too much to charge small accessories (without an internal step-down converter [1]), yet it's probably not enough to sustain (let alone charge) beefy laptops.

> "hey I have a new plug"

You do realize that this is exactly your pitch for these 20V "standardized" plugs/outlets, right? :)

[1] A big reason why USB's 5V is so popular is that it's just the right voltage to charge lithium ion batteries and run simple circuits without expensive (in terms of parts) active voltage converters.


> it's just the right voltage to charge lithium ion batteries

No, it's not. Lithium is nominally 3.7 volts per cell and charging voltage goes up to about 4.2 volts max, maybe 4.3. Definitely never 5V.

A 5V->4.2V buck converter isn't particularly different than a 20V->4.2V buck converter. In fact it's probably a bit easier to make a 20V->4.2V converter since you don't have to worry about parasitic losses as much.

> You do realize that this is exactly your pitch for these 20V "standardized" plugs/outlets, right? :)

Not exactly: (a) I'd use an existing connector, maybe a Lenovo 20V connector; (b) USB keeps having to switch things because it wasn't a charging standard to begin with; it was abused as a power plug and then had to keep evolving with that while maintaining backward compatibility and continually keeping up with Moore's law in bandwidth requirements.

A 20VDC plug would just be that -- a low-voltage socket for <100W appliances.

Anything higher, just use 120V/240V and get up to 1800W.


There's also the added failure mode of a PSU deciding it needs to supply 20V to a device that can only withstand 5V.


You need device-side protection for that anyway, because a broken PSU can happen in many ways you can't really predict, including "someone implemented that monster of a standard wrong".


Care to add any counterarguments?


Sure. Since you asked so nicely, here's a datasheet that illustrates a device which integrates overvoltage protection. Plug a malfunctioning 20V USB-C power supply into your phone, and the phone will internally disconnect the USB-C connection to protect itself.

https://www.ti.com/lit/ds/symlink/tps65987d.pdf

I incinerated 250 seconds of my time, or about a billionth of my life for this answer for you - for free! Hope the little dopamine hit was enough for you today. I won't be back.


A PSU "deciding" to do that is an ultra rare case, and probably dwarfed by PSUs that simply break internally and deliver the wrong voltage, which can happen to any PSU.

The closest thing I've heard of was a cable that pretended it was a device, making it so that unplug events didn't register properly.


The cable must report itself as being capable of 5A, which requires it to use 20-gauge wire for the VBUS.

ETA: I guess your question was specifically about the connector interface, which is less of a problem because there are 4 power and 4 ground pins. 1.25A per pin is no issue.


20 gauge wire is 0.9 mm diameter.

A USB-C contact is much smaller (seems to be around 0.5 mm).

This is the part I don't understand. If you require a 0.9 mm diameter wire, how can you have it connect to a 0.5 mm landing strip?


Resistance is a function of the conductivity of the material, the cross-sectional area of the conductor, and the length. In the case of wires the length is long, so they need more cross-sectional area to keep the resistance down; in the contacts the length is very short, so they need less cross-sectional area to achieve the same resistance. Often, they make the contacts from an even more conductive material than copper, like gold or silver, so that helps even more.
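Plugging in rough numbers (copper's resistivity is real; the geometries are approximate, and this ignores the contact interface itself, which the comments below get into):

    rho_cu = 1.7e-8                       # ohm*m, resistivity of copper
    pi = 3.141592653589793

    def bulk_resistance(length_m, diameter_m):
        area = pi * (diameter_m / 2) ** 2
        return rho_cu * length_m / area

    wire_mohm = bulk_resistance(1.0, 0.9e-3) * 1000       # 1 m of ~20 AWG: ~27 milliohms
    contact_mohm = bulk_resistance(0.005, 0.5e-3) * 1000  # 5 mm contact: ~0.4 milliohms
    print(wire_mohm, contact_mohm)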


Also there are 4 connectors for ground and 4 connectors for power in USB-C connector


The electricity is going through the plug for only a few millimeters, and is going through the rest of the cable for 500-2000 millimeters. It's okay for the plug to have more ohms per meter. It's going to have a very tiny voltage drop and emit a very tiny amount of heat.


there are 4 connectors for ground and 4 connectors for power in USB-C connector

hacks like that are not required.


Hacks like what?

If you're saying the plugs don't have more resistance than the equivalent length of cable, that's wrong. From what I can find it's very typical for a USB C plug to have 40 milliohms of contact resistance. Even if you divide that by 4, that's 10 milliohms in a single centimeter. 20 gauge wire is 0.3 milliohms per centimeter.


It's often called "necking down". The connector is higher resistance, and will indeed have more resistive loss than the wire. It's a very small length, though. The extra heat is quickly dissipated into the nearby wire through thermal conduction, so it will be only marginally warmer than the rest of the cable.

This is a very common practice on PCBs. You can use a very wide trace where there's room, and narrow it to get around tight spots like between pins on an IC.

When I built solar race cars in college, we optimized the powertrain for resistive losses over weight. We used 00 AWG wire from the battery to the motor, knowing it would max out at 40 amps. We would then "neck it down" to 8 AWG to go into 40A rated connectors. To a casual observer it definitely appeared ridiculous and wrong.
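A quick sanity check on the figures upthread (5 A, the 40 milliohm contact split across 4 VBUS pins, ~0.3 milliohms per centimeter of 20 gauge wire):

    amps = 5.0
    r_contact = 0.040 / 4                  # 40 mOhm / 4 parallel pins = 10 mOhm
    r_wire = 0.0003 * 200                  # ~0.3 mOhm/cm over a 2 m run = 60 mOhm
    heat_contact = amps ** 2 * r_contact   # 0.25 W, concentrated in the plug
    heat_wire = amps ** 2 * r_wire         # 1.5 W, spread along the whole cable
    print(heat_contact, heat_wire)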


>This is the part I don't understand. If you require a 0.9 mm diameter wire, how can you have it connect to a 0.5 mm landing strip?

there are 4 connectors for ground and 4 connectors for power in USB-C connector


The bigger issue with this is not the steady state but what happens when the cable is yanked out while carrying that current. Spark erosion of the contacts etc.; the spec goes into quite some detail there.


Or just makes a poor connection to begin with due to out-of-spec cables or connector wear/damage, which is likely considering the much tighter tolerance than old-school, reliable, battle-tested connectors such as barrel jacks.


(Relatively) high voltage, so lowish current. I think 20 volts at 5 amps or so.


While this is cool for the occasional use, 100W means a bit more of that must come from the wall, just for USB peripherals. I appreciate fast charging and all that but it would be good to somehow have efficient devices again.


We're expecting way more from our USB peripherals than we used to. It's either drawing power from USB or having an extra power adapter connected directly to the USB peripheral, and thankfully we're shifting towards the more convenient option. I'm not sure what this has to do with efficiency.


>I'm not sure what this has to do with efficiency.

Energy efficiency.


There's no theoretical reason I see why USB would be worse than an external adapter with the same wire gauges.

The only issue is that an expansion card powered by the PC supply would have an extra conversion step to get the voltage to USB-C levels.

That's solvable though, maybe they could route the USB through the power supply itself. Or a docking station can power both PC and monitor. Maybe the next generation of PC parts will run directly on 48v or 20v.


Wonder if this is locked to MSI boards or if it could work on any board?


Alpine Ridge didn't work without the proprietary header at all.

Titan Ridge worked to an extent -- but usually it couldn't add PCIe buses, so the number of PCIe devices connected to your desktop TB3 card at boot defined the limit until the next reboot. You could remove and add different devices as long as they didn't need extra PCIe buses compared to what got reserved at boot.

Can't see why Maple Ridge would work differently.


The photos of this card show a header JU4_1 which has a different pinout than all MSI motherboards' Thunderbolt headers I know so far, so I think it won't even work in current MSI motherboards, just future ones.


These kinds of boards always require proprietary connectors directly to a supporting motherboard, and the image shows some extra pins next to the power connector. So I suspect this card isn't any different


There was some hope that someone would make a USB4 expansion card that didn't support PCIe tunneling, which wouldn't need those extra pins. If this card isn't it, then literally the only interesting thing about it is that it doesn't use an Intel controller. Which really is a negative.


USB4 1.0, USB4 2.0

Yeah, what a reasonable naming convention.


The only thing I want from USB4 is ECC.



