USB9 V1.742.ac/3 ... it is important to be precise here, because otherwise the cable will be incompatible and the systems involved will fall back into a safe mode that's essentially, but not quite, the USB 1.0 standard. /s
So, why exactly wasn't USB4 2.0 called USB 4.2?
They had to call it USB4 2.0 because the previous generation was USB4. How could they change their naming scheme to something like 4.0, 4.1, 4.2? That would confuse consumers!
True, I somehow missed the 100km while reading their comment.
It's still unlikely to succeed, as the required precision for multiple simultaneous data streams is quite significant. Nonetheless, it's theoretically possible. Just not economically useful, as cables are inherently cheaper for the same amount of data transmitted.
I'm not saying that these won't exist, they already do to some degree with Starlink after all... I'm just saying that this tech is highly unlikely to replace the backbone infrastructure, which sea cables are.
We can't possibly imagine now what can succeed in 100 years because we can't even conceive what kind of tech or societal developments will happen over that period.
I was SURE they had learned their lesson after the clusterfuck that was 3.1/3.2 but no, right back at the same shit again.
It's hard to see this as anything but purpose-built to make it harder to tell what a cable is capable of (a huge boon to scammers, aka 90% of cable sellers). If the USB-IF had half a brain between them all (clearly they don't) they would require the data/charging/etc specs to be printed on the cable. Instead they spend their time coming up with confusing standards that are indecipherable to the average person, which leads to a shitty world where everyone needs a cable tester to know what a cable is capable of, or they have constant bad experiences with a cable that fits the port but underperforms, if it works at all.
I've heard some people say this is because it's all marketing-driven vs engineering-driven; if that's the case, then fire the whole department, because they suck at marketing. It's gotten to the point that the only cables I can trust are from Anker, and even those suffer from a lack of labeling on them.
They have successfully reverse engineered our computer architectures, found abundant documentation on TCP/IP et al, one team had a major breakthrough and international recognition when they recovered large parts of the network infrastructure of an old obscure company called Google, which uncovered a platform for hosting audio-visual content, giving researchers abundant material to reconstruct how past societies worked. But that one thing they just can't crack is those weird cables with different plugs and colors and names, misleading and contradicting instructions, documentation and specifications. They'll never find out what secrets these plastic-encased HDDs hold.
The most probable cause for the civilizational downfall, the archeologists argue, is that after a software update everyone's devices stopped being able to communicate with each other. They found references to a prophecy of this in old books, and refer to it as the "Babble" event.
The old marketing names assume a user can understand the difference between Full Speed, High Speed and Super Speed without prior knowledge, and that is somehow easier than... numbers.
I think standards in general are confusing to the average person. The average person doesn't care about standards, they care about completing a task: can I use this cable to connect my Mac to a 4k Monitor? Can it pass all the power that my device needs from my charger? Will this cable be a bottleneck if I connect it to my external SSD storage?
None of my computers support USB4 today, but I'm buying USB4 cables so I don't have to waste so much time looking for a cable that will do a particular job.
I want to say that I will only buy cables that are labeled clearly with capabilities and bandwidth but I'm not sure that is a promise I can keep.
Worth noting: the USB cables I have which reliably work at 20Gbps are currently considerably bulkier than other cables. Thicker wire gauge and more shielding. They're not always the most practical and they're considerably more costly.
Also, I had to test quite a few cables which were sold as supporting 20Gbps before finding ones that actually worked, so pre-buying ones which supposedly work at a certain rate when you cannot test that may not be the most prudent approach.
I don't understand why they can't use trademark law or something to keep companies from violating the spec all over the place. Even the Nintendo Switch's implementation has a list of violations a full page long.
I don't think the Switch is sold as USB-C in the US. (You can charge with one, but I think they tell you to use the Nintendo Power Adaptor and make no mention of USB-C.)
The target markets for Thunderbolt and USB cables are very different though. Most USB cables are bought by people who just want to charge their iPhone and don't want to spend a cent too much. Thunderbolt cables are bought by people who want to send data between very specific and expensive devices.
Real cable testers are specialized high speed scopes with all sorts of protocol support and cost thousands.
But what you need is a device that simply queries the cable and asks it what it thinks it can do. I haven’t seen one for sale and am too busy to make one, but it shouldn’t be that hard.
Apple’s system info ought to tell you this but doesn’t.
I wish; I've looked a little, but part of the problem is the new specs. I doubt many testers can be "forward compatible" enough to test new specs before they exist. I have an old USB-A tester I used, but it's obviously useless for USB-C and all its specs. Even then, the old tester I have just tells you the current speed/wattage, not the max; it's more for monitoring than testing.
At some point I'll just have to bite the bullet and get whatever is on the market and hope there aren't new specs right around the corner.
I believe Total Phase tests signal integrity rather than identifying specifically what kind of cable it is, which solves the forward compatibility problem, but their stuff is very expensive for the home user.
Why? Surely there could be some SBC or something for like $100 which you could connect to your computer. The testing software can measure how long it takes to transfer a file. Then it could try charging itself from the computer requesting a higher and higher wattage. I don't see how a device like that could be so expensive.
I mean sure, but you can't see signal quality at 1 Gbit/s without a scope capable of much higher rates (you need at least a few points in each period). Short of muxing together several FPGAs and hand-picking components, you straight up can't see the signals. There's a reason why you can buy a 4-digit multimeter for $30, a 5-digit one for maybe $500, and anything beyond 5 digits costs thousands.
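To put numbers on that, here's the sampling arithmetic (a rough sketch; the function is mine, and equivalent-time sampling can do better on repetitive signals):

```python
# Rough sketch: how many sample points a real-time scope gets per bit.
def samples_per_bit(sample_rate_gsps: float, bit_rate_gbps: float) -> float:
    unit_interval_ps = 1000.0 / bit_rate_gbps   # one bit period, in picoseconds
    samples_per_ps = sample_rate_gsps / 1000.0  # samples per picosecond
    return unit_interval_ps * samples_per_ps

print(samples_per_bit(50, 1))   # 50.0 -> plenty of points at 1 Gbit/s
print(samples_per_bit(50, 10))  # 5.0  -> even a 50 GS/s scope only gets
                                #         5 points per bit at 10 Gbit/s
```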
Yeah, I would like a test device that could let me know which transfer rates a cable supports. I've got a huge pile of USB-C cables (and adapters). A good chunk of them might even be USB 2.0!
It seems like a device capable of acting as a host & device could just try to shove data down a cable and see what the max is...
Take a USB3 device, plug it in with the cable, and then go to System Profiler (Apple Menu -> About This Mac -> System Profiler). Look for the device, and check the speed. If it says 480 Mb/s it's a USB 2 cable; if it says 5 Gb/s, 10 Gb/s, or 20 Gb/s then it's a USB3 cable.
I assume there's a way to do the same thing on Windows and Linux (on Linux, lsusb -t shows the negotiated speed per device, for instance).
coconutBattery shows how much power your battery is charging at. I've only used this to compare power supplies, but I don't see why it shouldn't work for comparing cables as well. Of course, that only works when the battery is empty enough to actually fast charge.
It would be helpful if the connectors for the different cables had different shapes, so it would be immediately obvious what kind of cable it is and mostly prevent plugging incompatible cables into the wrong places. Thank you for listening to my TED talk.
What do you want to test? Another reply has mentioned the 15K USD TotalPhase device, which I'd love to use one day.
For "home" testing, I've used a USB C -> NVMe enclosure (UGreen is my favourite) with a half decent NVMe drive in it. Does the drive reach 1 gigabyte / second sustained read/write? Congrats, the cable supports 10 gigabit.
An average person doesn't need more than USB 2.0 speeds. Anything past microSD speeds is fine. Even M.2 can only do 2GB/s or so, right?
120Gbps is way beyond anything an average person needs.
The only issue is that hopefully it will replace HDMI and some cables might not handle that. But even then, it's solved by buying a few Ankers and marking them with tape.
Yeah, it's all super confusing, and could be better, but in practice it works really well.
No, it really doesn't work well. Have you ever used a USB protocol analyzer or written a USB device driver?
Bad cables operating out of spec don't just make things a bit slower - they introduce noise. They corrupt data which best case is caught by a checksum and just creates latency but worst case triggers device driver bugs and makes devices behave oddly, hang, need to be plugged and unplugged, etc. This is enormously frustrating for users and it happens all the time. Users typically blame their devices, or their computers, rather than the bad cabling.
Furthermore, it's not even remotely correct that an average person doesn't need more than USB2.0 speeds. That's only 480Mbit/s, which isn't nearly enough to drive the $80 HD webcams that we're all using while WFH. Common modern webcams need like 1.5Gbit/s, and that's just the webcam. You can't run a typical WFH computer on USB 2.0 in the age of zoom meetings.
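The arithmetic behind that figure, assuming the common case of uncompressed YUY2 video over USB Video Class:

```python
# Back-of-the-envelope webcam bandwidth, assuming uncompressed YUY2
# (16 bits per pixel), a common USB Video Class format.
def stream_gbps(width, height, fps, bits_per_pixel=16):
    return width * height * fps * bits_per_pixel / 1e9

print(f"1080p30: {stream_gbps(1920, 1080, 30):.2f} Gbit/s")  # ~1.00
print(f"1080p60: {stream_gbps(1920, 1080, 60):.2f} Gbit/s")  # ~1.99
# USB 2.0 tops out at 0.48 Gbit/s before protocol overhead, which is why
# USB 2.0 webcams fall back to in-camera MJPEG compression or lower frame rates.
```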
If you're not a $45k+ a year type, you might very well be doing remote work on a phone, assuming you're WFH at all. Almost all laptops have built-in cameras and the convenience factor is pretty high.
Even if you are, webcams could very well run at 480Mbps if they had to, if in-camera encoding were still the norm, which it would be if high-speed cables weren't (theoretically) widespread.
I agree that the situation might not be ideal for people doing stuff beyond the average consumer level, but even that could be mostly worked around with a controller firmware update (count the bit error rate, inform the user if they are using a trash cable).
No amount of standardization and marking of the cables will eliminate scammers without killing cheap uncertified stuff totally, so I would think software and documentation fixes would be best.
Most stuff already comes with a suitable USB cable. Just pop up a reminder to fish out the good one if the one you're using is bad.
"Almost all laptops have built-in cameras and the convenience factor is pretty high."
How do you think the internal camera on most laptops is connected to the rest of the computer? (Hint: It's permanently wired to the USB bus. It's a non-removable USB webcam)
Given this fact, it seems you should actually agree with my original statement.
However, many newer laptops have begun to use cameras with a MIPI interface, like the smartphones. This has created problems with bad device driver support for non-Windows operating systems, as discussed in a recent HN thread.
2012 called and they want their PCIe 2.0 devices back.
My PCIe 4.0 x4 SSD does 7GB/s for bursts up to ~700GB at a time (SLC cache) which is very convenient when I'm moving video around. You can also get x8 and x16 data center SSDs.
PCIe 5.0 SSDs should come out next month and double that.
The USB versioning makes total sense. People just misunderstand versioning, in a similar way to how some people think version 1.2 is higher than version 1.11 because versioning sucks.
For the general consumer, there are three labels. SuperSpeed USB 5Gbit/s, SuperSpeed USB 10Gbit/s, and SuperSpeed USB 20Gbit/s. More Gbit/s is more better. Easy and understandable. SuperSpeed USB 10Gbit/s used to be called SuperSpeed+ but that was a mistake that was soon corrected. All the gen-X-Y-by-Z stuff is for people who read data sheets for fun. You don't need to care and you shouldn't.
The USB 3.2 Gen 2x2 label is just not for general consumers. It's a technical spec written in a similar way to how law works: the spec has revisions.
Take food for a comparison. There have been big lawsuits around whether something is a biscuit or a cake for weird tax reasons. For this analogy I'm going to assume cakes are just better biscuits, swap them around in your head if you disagree.
Now, if I sell a biscuit, I'll say "1 biscuit". This is similar to 1x1 in USB terms. Then I come up with my improved biscuit, or a cake, and I'll also sell cakes; I'll put "1 cake", or 2x1 in USB terms. The generation is just how good the food is or how fast the lane is.
Through the magic of engineering I also find a way to pack multiple foodstuffs into a single box! I now sell boxes of "2 biscuits" (1x2) and "2 cakes" (2x2). In practice, nobody needs boxes of 2 biscuits, but it's a technical possibility.
However, suppose the law changes this year. Some biscuits are now cakes, some cakes are now biscuits.
If my target audience is the general public, I'll put "2 cakes" and "2 biscuits" on the right boxes with updated packaging or ingredients. This is similar to the USB logo with a little 10 or "SuperSpeed USB 10Gbit/s". Because nobody buys my boxes of two biscuits, I don't even bother designing a logo for them.
If my target audience demand technical specs on my food, I'm in trouble now, though. How do they know if my definitions are based on the new law or the old law? Will their dietary compliance officer let them buy from me? The solution is simple: add the spec you're following to the ingredient list on the back!
Now I'm selling "two biscuits (food law of 2021)", or in USB terms "USB 3.2 Gen 1x2", and "two cakes (food law of 2022)", "USB 3.2 Gen 2x2" in USB terms.
Cakes can be cakes under both the 2021 and 2022 laws, but I don't think anyone will be confused if you write down under which law they're considered cakes. Similarly, people shouldn't get confused about which spec a USB description refers to. Look at the consumer name when comparing actual products, or look up the conversion table that came with your copy of the latest USB spec if you want the pedantic view.
This versioning problem is everywhere. Are two PCIe Gen 4 lanes faster or slower than four PCIe Gen 3 lanes? How does a PCIe Gen 5 lane compare? What about memory, is DDR5-3200 faster or slower than DDR4-3200? Is an Intel Core i7 12700 faster than an Intel Core i9 11900K? What about a Ryzen 7300 versus a Ryzen 3950X?
The label "USB4v2" should probably be changed to "USB4 Gen 2" for consistency, but this random Chinese hardware manufacturer might just have used the wrong name here. This isn't part of the official USB spec, it's just a name some person found on some machine.
Apple’s MacBook Air page lists “USB 3.1 Gen 2 (up to 10Gb/s)”.
Now, imagine someone buying a USB hub or dock. You might ignore the details from above and just search Best Buy or Amazon for any USB-C compatible device. You'll most likely mess up. Alternatively, you could search for a device specifically compatible with "USB-C 3.2 Gen 2 (full function, USB+DP+PD)" as mentioned above for your Lenovo and get immediately super overwhelmed and confused.
If we're going to be talking about general consumers, most can't (and rightly shouldn't) handle anything more than one whole incrementing number. WTF is a "Gbit", a layman would ask; is it Google Bits?
Car manufacturers have this nailed down: each new model car gets a new version number based on year of release, nothing more, nothing less. 2020 Honda Civic, 1995 Toyota Camry, 2022 Tesla Model S. Newer year? Newer model. Simple. Done.
Likewise, USB should be numbering each and every new standard revision with a whole number if the goal is to attain general consumer understanding. By my tally, we're up to USB12 with USB4 v2 120Gbps (USB11 for USB4 v2 80Gbps).
How fucking simple everything would be if, instead of asking whether a cable/device is compatible with USB3.2, which can be USB3 or USB3.1 and Gen1x1, Gen2x1, Gen1x2, or Gen2x2, because nobody knows WTF Gens are, we asked if it's compatible with USB6 (aka USB3.2 Gen2x2). It's simple and unambiguous.
The USB naming scheme as it stands is pig Latin to computer nerds, let alone the layman. If the goal is universal understanding by the public at large, aka the general consumer, it is failing hard.
> However, suppose the law changes this year. Some biscuits are now cakes, some cakes are now biscuits.
> Cakes can be cakes under both the 2021 and 2022 laws, but I don't think anyone will be confused if you write down under which law they're considered cakes.
But all cakes are biscuits and always have been, better in every way.
And many cake-biscuit packages just say the year the cake upgrade was invented.
So if you put the cake year on something, people expect those biscuits to also be cakes, and not just biscuits.
Even if the packages are supposed to say "cake", what actually happens in practice should be taken into account.
If you use an R1 cable on a 1.31 device, you may end up with a brick, or even worse a fire.
Before plugging them in, I always dissect the ends of my cables and examine the chips under a microscope just to be certain. Sometimes a delid is also needed; in such instances I head over to Ken Shirriff's house. :P
To make it less confusing they’ll be moving to the old WiFi style names. The new versions will be called, in order:
USB-A, USB-B, USB-R, USB-CZ, USB-L2, USB-S, then USB-D.
To avoid confusion they’ll use the same physical port, which is now Type-C, but renamed Type-1138.
USB-D is somewhat theoretical as we’re likely to go through more renaming by then. But I digress.
It’s important to note each version of the standard supports both active (“o”) and passive (“u”) variants depending on distance.
Power levels are a simple scale of 10^n milliwatts, expressed as a Roman numeral for readability. So I = 10mW, II = 100mW, etc.
Soon you can buy a highly advanced passive USB cable capable of delivering 10kW to your car for charging.
This kills me because USB naming is actually super straightforward.
USB3.0, USB3.1, and USB3.2 are spec revisions.
USB3.1 (and 3.2) support Gen 1 and Gen 2 physical layer/transmission modes (gen 1 is 8b/10b encoding and gen2 is 128b/132b encoding).
USB3.2 adds support for using the extra differential pairs in the cable to transmit data. So now you get x1 and x2, which correspond to the number of lanes available for data transmission (akin to PCIe x1, x4, x8, and x16).
So USB3.2 gen 2 x2 is just the USB3.2 standard using 128b/132b encoding over 2 lanes.
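If it helps, the arithmetic is nearly a one-liner (a quick sketch using the encoding figures above):

```python
# Effective payload rate after line coding, using the figures above.
def effective_gbps(line_rate_gbps, payload_bits, total_bits, lanes=1):
    return line_rate_gbps * payload_bits / total_bits * lanes

print(effective_gbps(5, 8, 10))               # Gen 1: 5 Gbit/s, 8b/10b -> 4.0
print(effective_gbps(10, 128, 132))           # Gen 2: 10 Gbit/s, 128b/132b -> ~9.7
print(effective_gbps(10, 128, 132, lanes=2))  # Gen 2x2: two lanes -> ~19.4
```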
The problem is that everyone decided to use the tech spec identifiers rather than the consumer identifiers for labeling products. If you are in engineering working on making the product which uses USB ports, then you would look for ICs that support USB3.2 gen2x2 operation but if you are selling a consumer product, you are supposed to label it as a USB3 SS+ 20Gbps port.
On consumer devices, this would be "USB4 120Gbps". Look for this when you're in the store.
If you're designing circuit boards or writing drivers, you'll need the full spec.
Assuming the new spec pushes 60Gbps over a single lane, that would be "USB 4 Gen 4x2" going by the USB spec. Two lanes (x2) at the new lane type (4), part of the new USB 4 spec. A device only capable of driving a single new fancy lane would be "USB 4 Gen 4x1".
I think it's only a matter of them renaming every model every time the standard is updated. So models come with $standard_version $standard_revision $main_model_version $minor_model_version, and there are different sets of all of those numbers that refer to the same thing.
Or maybe they have another naming schema orthogonal to that that I don't know about.
But in all seriousness, this is one of the worst things marketer types make us suffer through: breaking consistency of naming and numbering. EACH and EVERY model series of ANYTHING successful just has to break the pattern at some point.
Remember when Nokia phones had 4-digit model numbers, optionally with "i"? Then Nseries and Eseries arrived which all had Symbian, but confusingly enough they continued to make 4-digit models with Symbian in parallel. They also started "classic" and "slide" series which often reused other unrelated model numbers. Around 2010 they changed it all to a hot mess like X3-02, C3-01, then came 3-digit optionally with Asha in front. Thankfully they were not as creative with Lumia which had 3-digit or 4-digit models.
The USB3 names were also internal (the proper marketing terms were SuperSpeed 5/10/20 Gbps). However, motherboard manufacturers loved technobabble as marketing.
Even easier. Just have the EU mandate that everybody will be required this year to support USB4 Gen4x2x2(v1) on all devices. This will solve everything!
80gbps is a bit of a weird one. To work with 120gbps, that would imply that 80gbps is two lanes and 120gbps is three lanes in a single cable. I suppose this is possible with the type C connector as it has plenty of pairs.
This would make USB4 80gbps (the sticker you see on the box) USB4 Gen 4x2, as there are two lanes of the fourth lane type. The 120Gbps one contains three lanes of the fourth lane version, or USB4 Gen 4x3.
There's no need for USB11 and USB12. Look at the 10/20/40/80/120Gbps sticker on the box. That tells you all you need to know.
>80gbps is a bit of a weird one. To work with 120gbps, that would imply that 80gbps is two lanes and 120gbps is three lanes in a single cable. I suppose this is possible with the type C connector as it has plenty of pairs.
As I understood it from TFA, 80gbps is realized with 2 lanes down, 2 lanes up; while 120gbps is realized with 3 lanes down, 1 lane up (yes, the opposite direction is capped to 40gbps).
So that's USB4Gen4x2x2 and USB4Gen4x3x1 if we use the current genxlane nomenclature? Who knows.
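Whatever the name ends up being, the lane arithmetic from TFA's description is at least simple (a sketch assuming 40Gbps per Gen 4 lane, four lanes total):

```python
# Per-direction throughput of the two USB4 v2 configurations described in TFA,
# assuming four lanes total at 40 Gbit/s per Gen 4 lane.
LANE_GBPS = 40

def throughput(tx_lanes, rx_lanes):
    return tx_lanes * LANE_GBPS, rx_lanes * LANE_GBPS

print("symmetric 2+2: ", throughput(2, 2))  # (80, 80)  -> marketed as 80 Gbps
print("asymmetric 3+1:", throughput(3, 1))  # (120, 40) -> marketed as 120 Gbps
```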
>There's no need for USB11 and USB12. Look at the 10/20/40/80/120Gbps sticker on the box. That tells you all you need to know.
I can't recall the last time I actually saw a gbps marking on USB cable/device marketing, so no I can't "look at the sticker" because it doesn't exist.
Just give me an unambiguous, whole, unique USB standard revision number to compare with my other devices and cables. The current system of superseding and overlapping standard revisions and meaningless genxlane numbers is fucking worthless and a waste of my time and nerves.
Playing devil's advocate: a fiber+power cable makes some sense. The main thing which limits the cable length is the data wires, not the power wires (just notice how one of the main counter-arguments for "one USB cable to rule them all" is "the higher bandwidth passive USB cables have to be shorter and thicker").
Fiber is far too fragile to deal with the abuse even babied USB cables put up with, unless you mandate some kind of semi-articulating steel conduit anyway.
Considering recycling is stupidly uneconomic for all but a tiny selection of materials and just makes the whole carbon issue even worse, why would we do that?
Hell, for plastic throwing it in a landfill is only second best to never making it in the first place.
I'd probably guess it's too expensive? IIRC the original Thunderbolt spec was a fiber standard, and they moved to copper because people wanted to power stuff, and adding copper cabling for power plus power negotiation was infeasible cost-wise, so they ended up with a copper-only cabling solution instead.
So add power wires? You can run fiber right alongside 50-amp high-voltage cables if you want, without any signal corruption issues whatsoever.
For practical purposes they'd need just {fiber, 19VDC, GND}, and none of the shenanigans that USB-C has. No shielding issues, no signal corruption issues, nothing. And you could run it for kilometers.
Voltage converters are big and not that reliable at the consumer level. Putting the power supply in something replaceable is a good thing. Going from 19V to 5V would be an extra conversion step.
Plus, 19V is too low. USB does 48V for the 240W mode. You could use more current, but then you are wasting copper and making connectors bigger and heavier.
I used to not be a fan of negotiation but it seems that it's highly reliable these days, stuff isn't getting fried because of incorrect voltage selection.
You can; there is no reason the data transmission cannot be done with fibre. You just need a fibre-to-USB coupler at the end and a source of power. If you have a use case for it you could build it with two Raspberry Pis and a pair of couplers.
Is there a reason (other than design by committee) that USB is so complicated? You need to have the right kinds of connectors, cables, and device support. The speeds you get vary for any number of reasons. Why didn't someone just say "all the cables are fast and charge good and if you want slow or bad, use old USB"? It seems we've found the worst possible situation, where consumers don't understand what they need or can use or why, and instead of making it easier, we get the second version of the fourth version which has multiple versions that all do different things under different circumstances.
Unfortunately, USB has more features than USB manufacturers want to support. So, the end result is that a bunch of those features are optional. That complicates everything. Is this a USB-C cable that can support PD 3.1? Who knows! Because, not all USB-C cables NEED to support PD 3.1, some of them just need to transmit data at USB 2.0 speeds. Those can be made super cheap.
This is especially bad when talking about things like devices. Sure, every device nowadays has the USB-C connector, but does your phone really need to be capable of doing 80gbps on that port? Nope, so it's optional (and would be sold, rightly so, as a power-saving measure).
Every USB feature has a fast, cheap, low power tradeoff going on and the standard allows for you to mix and match features to your heart's content.
> That complicates everything. Is this a USB-C cable that can support PD 3.1? Who knows! Because, not all USB-C cables NEED to support PD 3.1, some of them just need to transmit data at USB 2.0 speeds. Those can be made super cheap.
Assuming you just mean the higher power levels, I'd say that's one of the most minimal issues. Every cable supports at least 60 watts, and you rarely need to consider it.
Certified Thunderbolt 4 cables are available and are the fastest and most compatible you can get.
Expensive but reliable.
I don't understand all this hate for USB. Sure, the labeling could be better, but having a standard that is basically backward compatible all the way to the nineties, and now a single multipurpose port for charging/storage/displays/devices, is a nice feat.
Thunderbolt adopting USB-C was definitely a clever move for both Thunderbolt and USB. Thunderbolt now effectively becomes high-end branding for USB while being fully compatible with lower-end devices. You're willing to pay more money for simplicity? Go for Thunderbolt. You wanna get an idea of what next-gen USB will look like? Go for Thunderbolt.
I'd also just say this is a low information environment. Traditionally, we'd rely on the media as a line of defense. Product reviews would tell us who to trust, what the good deals are. We trusted the media to review & find problems & surface issues in the past. But those institutions have undergone an enormous revenue crunch, and there are very few left who can really go in depth & do the deep, thorough, intelligent exploration we ought to trust them for. Online marketplaces in the world today are bedlam, and there are few trusted authorities. This is a general world condition, but with tech it's extra bad because when things don't work, we almost never know why or who to blame; it's all just frustrating wizard-shit that's broken & not going, & we are left bereft, utterly impotent in the face of these high tech systems' mysterious failures.
Honestly, I'd prefer not paying for USB-IF certification on my cables, and I don't care that none of the official logos designating certain compliances are met are embossed on my cables. But I also am a geek who will go spend 3 hours to find Benson (alas no longer active) or Linus's in-depth tech reviews[1], and who doesn't mind trying some cables (buy a couple different ones at a time), finding some not working, & returning those, to save a couple bucks at some hassle. For most people: just buy certified gear.
Ok but it is kind of ridiculous that we have to solve the whole "lack of trust in institutions and the media" and also "unreliable global supply chain" problems -- very real things we should be working on, sure -- in order to make sure our laptops charge at full speed reliably, right?
If you're trying to mandate the highest requirements for all manufacturers, then they're going to either not build or not conform, simply because it's too expensive. This is an unfortunate consequence of being the universal standard; it's even worse if manufacturers decide to ignore the standard, so you've got to try to embrace them.
Yes, I think the iPhone moving to USB-C will make a dent. Currently, nearly half of the mobile ecosystem doesn't work with USB very well, and a large portion of it represents the high-end segment that the recent USB standards target. Also, Apple will produce hundreds of millions of high-quality USB cables, which will help economies of scale.
There is no reason to make the transfer speeds as hard to read/decipher as possible. However, there is a reason that features are separated: they cost money to implement in the cable, the host, and the client.
But we make the up front cost trade-off at the expense of nothing working right. People buy cables or chargers that don't support the right level of charging. Cables work with some monitors but not others. Instead of paying extra for things that just work, we end up paying for impossible-to-distinguish hardware that may or may not work because of some flavor of the spec that doesn't work with the other piece of hardware you're trying to connect it to. A lot of that ends up being instant trash. Genuine, good hardware that is just incompatible is indistinguishable from badly made hardware: to the consumer they both don't work. And so we end up with waste and frustration in the name of saving a few bucks.
The charging situation isn't their fault. USB wasn't designed for it. Non-compliant devices started allowing charging before it was ever permitted. Then they added PD but nobody supported it and manufacturers cooked up their own ad hoc systems to detect a charger.
> The charging situation isn't their fault. USB wasn't designed for it. Non-compliant devices started allowing charging before it was ever permitted.
A standards body at least shares some fault when they move so slowly that real solutions have to work ahead of them to provide a demanded feature and things end up in a mess like this.
PD predates USB 3. Nobody supported it until it became integrated into modern USB controllers. Prior to that there were janky manufacturer specific resistor combinations used to signal the presence of a charger.
Because if you're selling a computer, and it has a USB 2 port, you don't want your spec sheet saying "USB 2" while others have "USB 3", but it's hard to upgrade your stuff mid-cycle, so you get "USB 3.1" to be defined as "USB 2" so you can look at least competitive.
Neither is true. Those are spec document generation numbers being used as marketing terms by the product manufacturers.
The USB-IF should refuse to certify products that don't use the marketing terms that include the actual speed. But it is funded by those manufacturers, so they do whatever they want to confuse and trap consumers.
USB wants to appeal to everyone, and like HDMI is ready to discount itself (by using 'optional' capabilities) as hard as it takes to appease its members. This is why you can buy a USB device or cable for $1 — and why those devices are a compatibility nightmare. (The dollar amounts are higher for HDMI, but the same problem applies.)
Thunderbolt wants to appeal to "quality at a premium" and does not offer optional capabilities. This is why you can buy any Thunderbolt cable and use any Thunderbolt device — and why cheap devices decline to certify for Thunderbolt.
Then you buy a thunderbolt dock, and your mouse and keyboard stutter because of mutually incompatible bugs between the bios, the thunderbolt chipset and the USB chipset.
Implementation defects exist in USB, HDMI, Thunderbolt, Ethernet, Wi-Fi; they exist in all protocols.
Thunderbolt considerably reduces the defect surface for certified implementations when compared to USB and HDMI, by reducing the number of optionals for cables and implementors to zero.
That does not imply zero defects in use. It implies fewer defects in use. And invisibly-damaged cables are still the bane of all of us.
Pretty much every combination of laptop and thunderbolt dock I've tried in the last 6 years has been a mediocre, buggy experience. I've personally tried 4 docks across 3 different laptops.
I got a 2nd work laptop just so that one can sit permanently docked, never unplugged or power cycled. Every once in a while I have to get docker running on it, which is also a miserable experience, but that's not related to thunderbolt!
It's not even just that it's expensive: if you look at a cable that's full-everything including top-rating for power delivery, it's a pretty thick inflexible thing and not at all what you want for e.g. a mouse or a keyboard or whatever.
This isn't even new. Even the USB 3.0 (first gen, later rebranded as USB 3.1 Gen 1) micro-B to A cable, which has been around for at least 10 years at this point, is already not good for a mouse. Nor is my USB-C 2.0 cable with 60W PD.
Which is fine. There's plenty of devices I own that just need a charging cable. But it's near impossible to know what cable does what. Even some of the expensive ones I've bought don't do what they say.
You would think with how much money is pumped into the governing body of the standard they could both force classifications of cables/devices and also verify they actually do what's advertised.
Every company using USB-C needs to pay $3500 USD to use the branding and a proper vendor ID. Every single company. And yet they still can't solve the very basics.
But then why make them USBv4? Just make v3 cables. Has it truly been politicized by the industry to add so much complexity so crappy manufacturers can sell bad cables/hardware with the latest version number instead of just making stuff that consumers can use?
Nobody would buy their old USB2 cables when they can get other faster cables, so the manufacturers of the old cables would actually rather pump money into politically pressuring the relevant standards committee into 'massaging' their latest technical definitions to include products which already exist.
I'm really starting to think this is absolutely on purpose. Clear versioning would make all cables/accessories instantly outdated/undesirable whenever the new generation starts to show up. Unclear, obfuscated versioning is used to dampen this effect - for most purposes the older cables are "good enough" anyway, and when not, the user will make another buy. Same thing with HDMI or WiFi.
Sure, then have a charging flavor and a data flavor. Put a special little logo on the connectors. Why do we need such a complicated spectrum of things that plug into each other but don't work well for data or power?
I've found that the good cables I've got all have a PD wattage label at least. Except for the Apple cables, but I can recognize those on sight, and I'm not buying USB-C cables I could confuse them with.
Does anything else use PAM3? PAM4 is all over the place (PCIe 6.0 from this winter, 25/100/200 GbE, GDDR6X, and more I'm forgetting), but PAM3... apparently it was once used for 100BASE-T4[1]. That's 100 Mbit, not Gbit, not 1000 Mbit: 100 Mbit.
Just so excellent that the USB4, née Thunderbolt, tunneled/packet-switched architecture means that if we do get a speed bump, that boost is a win for displays, storage, and eGPUs connected via DisplayPort, Thunderbolt, what-have-you. (Not necessarily for those devices themselves, but the % of throughput they consume diminishes accordingly.) The leap to USB4 is a huge shift, but tunneling/packet-switching has so many downstream benefits.
I'm sure to be downvoted, but- grasping the third rail here: does anyone else tire of every time USB being mentioned, the thread turning into a mudslinging festival? I find repetitive, almost never constructive, usually incredibly poorly defined (my dock didn't work-- ok, how? at all? doubtful) complaints repeated ad nauseam, and there's such limited energy to refute the waves of negativity from the sore & aggrieved. I want there to be room for these voices, but the mere mention of USB is a magnet for misery. I can & apparently should write some blog posts on why things are the way they are, how incremental evolution happens, and how progress isn't always smooth. But to the larger phenomenon of muckraking being the popular activity at every mention: it's definitely not just USB; in online communities this vocal persistent complaint about topics is a social condition that, so far, has no remedy, no redress. Systemd had to deal with this, Kubernetes, react, rails; so many have had their time on the planet of being targeted subjects. It just happens again and again and again, and the world doesn't deserve to have only the negative so amplified. It'd be so nice to redirect these energies to more focused gathering places, and to let the actual news & events be discussed in peace.
To add a USB naming joke, and to build constructively & consistently on the past, this would obviously be USB4 2x12: two channels of 60 Gbps (5Gbps * 12). The previous USB4 2.0 would be 2x8. There, I fixed it for you.
> I'm sure to be downvoted, but- grasping the third rail here: does anyone else tire of every time USB being mentioned, the thread turning into a mudslinging festival?
No. I was sold on USB being one cable to replace them all. Now we're back to making sure we carry all the right cables for our devices. The glorious 20 days of "all my accessories can be dealt with by one cable" was amazing and the USB folks decided to go ahead and screw that up.
to me it's exactly the opposite. but i've spent effort growing into this.
i carry a keyring of adapters, one good 1m $25 heavy-duty USB4 cable for connecting fancy stuff, & a regular short-ish light $8 usb3 100w cable for most daily usage. there are usb-c to anything pigtails/adapters up the wazoo: usb-a male/female, usb-mini-b, usb-micro-b, displayport male, displayport female, hdmi male, hdmi female, 20v dc barrel jack, 12v dc barrel jack. most are under $10, less than the specific cable. and a female/female c coupler for good luck/desperate times.
there's some subtle side-perks over dedicated cables too. the displayport & hdmi adapters often have power in, which means if the target is closer to power, i can run a ~2m usb-pd cable (if available) from power to target, & still get the same extra 1m of reach to the laptop.
that i can carry a 100w battery that works with nearly any device is just chef's-kiss icing-on-the-cake, intensely better than where we were. thanks usb-c / usb-pd.
it sounds stupid, having a keyring of adapters. but i used to carry cables for everything, way too often/just-in-case. now i just carry one good cable, and a variety of jacks i can put on the end of it. it's a world of improvement to me. i'm not sure what world you lived in where one cable did it all- i always had display, power, data cables, and i also totally get how most people haven't realized how to properly/easily do usb-c with pigtails/adapter jacks, but wow, it's just night & day better to me, & i think it should be for you too.
Your solution is useless for all but the 10% most tech literate users. And also involves carrying around a veritable toolkit of adapters and multiple cables.
But it's still far, far more accessible, with way less effort & burden than what came before. Functionality comes easier, cheaper, with less weight & more capability. It is a new domain of knowledge & understanding versus the old ways, but it's better, easier, lower cost & more accessible.
The obviousness speaks for itself & re-defines the antiquated literacy of before.
Eventually the adapters ought to go away. Because every peripheral ought, if they care at all, to have a usb-c port. You can do whatever else might help. But do the good thing: have a usb-c port that works on your device. You should. You should free us from legacy adapters.
Given the world that exists, I'm interested in this good cable and the adapters. Previous experience has shown adapters to be janky at best, but that could just be an area where tech has improved or I don't know how to find quality yet - what do you recommend?
(I reserve the right to still be salty over the wasted ideal of one cable to rule them all though)
I buy no name adapters & not a one has ever shown any issue for me. "ChenYang" USB-C female to displayport adapter? Runs my 1440p170 & 4k75 monitors fine. $15. Ditto for the literally no-name similar looking displayport adapter.
Most people don't do high speed anything. I have a camera I can download video from but only at MicroSD speeds.
90% of USB cables are for charging stuff, or low speed things like keyboards. For an average person it's already close to perfect. Even PD over 30W is somewhat rare.
USB monitors will be a little confusing but still better than bulky HDMI.
The Base-T Ethernet standards get very creative about their modulation levels. 1GBase-T (IIRC) uses PAM-5. Using non-standard PAM levels is a very weird thing to do, and it appears to only be useful for optimizing speeds on old (or cheap) cables.
However, PAM-4 did originally have problems with transitions between the extreme levels (00 to 11 and vice versa). There was academic discussion of line codes that could eliminate these, but it appears that this kind of technology has fallen by the wayside in favor of PAM-3 for USB.
Also an Ethernet fun fact: with 100BASE-T1 and 1000BASE-T1, you can now get Fast and Gigabit Ethernet over a single twisted pair. It's primarily used for embedded and especially automotive applications to simplify wiring. Both use PAM-3.
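For the curious, the bits-per-symbol math for all of these (a quick sketch; the 11-bits-in-7-symbols line is one known PAM-3 packing, not a claim about any specific standard):

```python
import math

# Theoretical bits per symbol for each PAM order mentioned above.
for levels in (2, 3, 4, 5):
    print(f"PAM-{levels}: {math.log2(levels):.2f} bits/symbol")
# PAM-2 (NRZ): 1.00, PAM-3: 1.58, PAM-4: 2.00, PAM-5: 2.32

# Practical PAM-3 line codes pack whole bits into groups of ternary symbols:
print(3 / 2)   # 1.50 bits/symbol: 3 bits in 2 symbols (3^2 = 9 >= 2^3 = 8)
print(11 / 7)  # ~1.57 bits/symbol: 11 bits in 7 symbols (3^7 = 2187 >= 2^11 = 2048)
```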
Are you perhaps mistaking 1Gbps for 1Mbps (1000K baud)? Automotive networks are extremely slow, and there's virtually no need for them to be faster.
The industry standard is to just add more pairs if you really need it, so you see cars now with 15-20 individual pairs each running at 500K or 1000K.
But these transports are extremely reliable, and still work in most failure cases — even if you unplug a terminator, splice some long random aftermarket branch, or ground or short one side. And they don’t use small cables either, usually just 20 AWG unshielded twisted pair (not even in a shared jacket). There is not much need for shielding or differential matching on PCBs, it’s not that sensitive.
Some newer forms of automotive networking have been used here and there for a time, but OEMs keep trying them, ditching them, and returning to slow networking.
> Are you perhaps mistaking 1Gbps for 1Mbps (1000K baud)? Automotive networks are extremely slow, and there's virtually no need for them to be faster.
No, it's not a mistake.
Electronics is a large field; quite a few concepts can be niche and unfamiliar even to its practitioners, so please don't assume something is incorrect just because you personally have never heard of it. I recommend doing a quick search before trying to refute anyone. Here are just two examples I've come across recently:
1. Someone asked a question about a "DC-link" capacitor. One experienced engineer replied and claimed it's nonsense, because DC cannot pass through a capacitor. But in fact, it's a standard term in the power inverter industry that refers to the capacitors between an AC/DC and DC/AC power stage.
2. An article about Ethernet "magnetics". A reader replied and claimed the term "magnetics" is nonsense because they've never heard of it, if it's a transformer, just call it a transformer! But in fact, it's a standard term in Ethernet hardware that refers to both the isolation transformer and the optional common-mode choke at an Ethernet port. To a lesser degree, it's also valid in the power supply industry as a collective term for the magnetic components like inductors, EMI chokes, and transformers.
> Automotive networks are extremely slow, and there’s virtually no need for them to be faster.
High-speed, Gbps-level digital link solutions for automotive applications are currently being heavily promoted by semiconductor companies; it started a few years ago. This is something you can immediately see just by casually browsing their websites and looking at their latest chips, from the likes of Analog Devices, Maxim, or TI. I assume this interest first came from the demand side, from the OEMs. You don't just make these chips for nothing.
One major application is the transmission of HD videos in a car entertainment system.
> Some newer forms of automotive networking have been used here and there for a time, but OEMs keep trying them, ditching them, and returning to slow networking.
You may be correct. This can be another attempt at it, and may or may not be ultimately successful.
> The 1000BASE-T1 MediaConverter establishes one direct point-to-point conversion between automotive ECUs using 1000BASE-T1
> DP83TG720S-Q1 1000BASE-T1 Automotive Ethernet PHY. The DP83TG720 is pin-to-pin compatible with TI's 100Base-T1 PHY, enabling design scalability with a single board for both speeds.
> 1000BASE-T1 from Standard to Series Production. Enabling Next Generation Scalable Architecture. Olaf Krieger (Volkswagen), Christopher Mash (Marvell).
I don't know what they want the bandwidth for, but it's a real thing.
-T1 matters. That's an uncommon variant. Most BASE-T, and what most people would assume GbE BASE-T means, is the four-twisted-pair kind, connected via an 8-pin RJ45 connector.
IEEE 802.3bp is, I think, when 1000BASE-T1 happened: 2016. Way, way after RJ45's four-pair 1000BASE-T.
All the things you listed are industry-wide standards or frameworks where people are essentially forced to use them. I mean sure, not at gunpoint, but they’re inescapable in practice, especially at typical workplaces.
All of those things are also fairly opinionated and all-encompassing, trying to be all things for all people… and failing to be a good fit for many of them.
Hence the grumbling.
You’ll see similar conversations about government too for the same reasons: forced on people, wide scope, poor fit for many.
Inevitably someone will say: “But you have a choice! You don’t have to use whatever the subject is!”
Sure… I can live on an island away from everything.
I’m a Windows systems integrator and I’m learning Kubernetes now even though it’s a poor fit for Windows.
Why? Because it’s the “new hotness” and several vendors support it and clients demand it.
So I’m forced to deal with something I don’t really want to. Not to say that k8s is necessarily bad, just that it’s not nice to deal with in my scenario…
USB deserves it. It's almost as bad as SCSI, but at least the 50 different connectors were differently shaped, and I could buy one that matched and know what it would do beforehand. I can't do that with USB.
This isn't the name of the USB spec, it's the name that someone noticed a random manufacturer used. The new name will follow the common pattern, or "USB4 120Gbps".
Why not? Regulations should help consumers make informed choices when purchasing things. Currently, the USB-IF and manufacturers are purposefully making it confusing to purchase cables because they want more cash, in exchange for making the experience worse for consumers.
Sounds like exactly the right place for some new regulation.
Using a valuable rate-limited post to say: you can. USB4 mandates direct host-to-host connectivity. If you bought an Intel laptop with 11th-gen/Rocket Lake (March 2021) or better, the CPU has built-in 40Gbps USB4/Thunderbolt that can do host-to-host, although not every laptop implements the actual PCIe side of that. This has been supported for Thunderbolt since Linux 4.15[1] in 2017.
Alas, more generally, things aren't super hot for USB4 devices & things like ethernet integration. USB4<->ethernet options are still very, very limited. Paying multiple hundreds of dollars for 10GbE is... unfortunate. We have a long way to go before USB4 IP starts coming to actual devices; the hubs shield us from the obviousness of the need by being, in effect, a far-away mini-southbridge with wild behavior-switching on each port. So it's easy to keep doing what we're doing. But eventually I hope we stop using bridge chips and have some reusable PCIe tunneling or other just-better actual USB4<->flash or USB4<->ethernet chips that really try to be modern. It'll be a while.
As another poster says, cable length is short. 2m with active cables. USB3 had a lot of great cables with active repeaters built in (for a while there my desktop was 10m away from my desk), and I admit, I'm surprised I haven't seen this happen yet for USB4, but it should. One of my desires from USB is to make a longer range USB spec. Please give me 20Gbps over 10m? But for people just wanting to plug their laptop or desktop into their NAS, or wanting to direct-attach a mini-cluster together... heck yeah, USB4 is there for you, today.
What's stopping us from adding a repeater every couple of feet? Transistors aren't expensive and integrating it into the cable shouldn't be too difficult.
It will be infinitely more expensive and harder than just using fiber. Two transceivers are 350 bucks total for 2km single-mode, or 200 for multi-mode if you want the cheapest possible.
A 4 port switch is about 700 bucks.
You could also do 25gb x 4 and aggregate at probably 1/3 price. You could upgrade later, without changing fiber. You could also go with the switch but use 25g transceivers, and upgrade the transceivers later. This is about 60% cost.
In a year this will all be half that price. USB4 plus a ton of repeaters plus … is going to dwarf the initial cost of 100g fiber quickly. In 5 years, you will probably be able to run 800g over the same fiber if you are bored, or aggregate 100g links. You would have to replace all the USB with new USB-9 2x2.4 SE stuff.
> Transistors that can pass 120gbps are probably expensive as hell, possibly into InP territory.
That's a strange argument, given that USB devices and your USB ports and CPU need a multiple of the amount of transistors in a repeater to process these signals.
Internal busses can operate in parallel, at some fraction of the serial bandwidth. You only need really fast stuff at the PHY. An interleaver/deinterleaver circuit is not much more complicated than the kind of signal shaping hardware you would need multiple copies of in the cable.
And we already have link aggregation, so you can have 120gbps "networking" over very short distances today, if you want it. All you need is four short, expensive cables. The problem is you will rapidly run into system-level limitations, given that 120gbps would consume more or less all of the I/O resources on many common desktop-class CPUs.
There are optical Thunderbolt 3 cables made by Corning/Wero up to 100m ($$$$). While not TB4, I've used regular TB3 cables with TB4 devices without issue. I assume it would be similar for those optical cables, given the same bandwidth requirements.
I've also seen TB4 5m cables being sold with the listing saying they are rated up to 30gbps instead of 40.
There was at least one model of Dell switch that used HDMI for 10G connectivity a few years back, so it seems doable. I think most people looking for cheap and fast home networking end up going TCP over infiniband, which is a project I've been meaning to look at sooner rather than later myself. You won't touch 120G though, but I don't think you'll find disks that fast anyway.
My latest desktop has Thunderbolt 4. While I'm glad I have it, as it's a mITX system, I really only use it to connect to a docking station. It's also great for monitors or 10Gbit ethernet if needed. The external SSD that I use is limited to 1GB/sec, and I rarely hit that with my use case, closer to 500MB/sec. To take advantage of 5GB/sec the external drive would have to be pretty well-built (expensive). My drive is the Samsung T7 Shield 2TB, which is the best price/performance for sustained performance that I could find on the market today.
I do think external IO should try to keep pace with internal IO speeds, but it comes at a significant cost on the peripheral side of things. You also run into cable length issues. There's a reason why internal IO gets to faster speeds so much sooner: signal traces don't have to deal with real-world external cable lengths. The link alludes to this, but I'd love to know how long of a cable you can use with 15GB/s external devices, considering the length of the passive cable on my Thunderbolt 3 dock is 2.3ft. USB4 "v2" at 120Gbps will support a 2-inch passive cable.
IIUC, USB 4 has two variables, power and bandwidth, plus a few feature flags like DisplayPort. Why isn't it a requirement to list the values of these explicitly on packaging/listings? That would solve most of the practical problems; yeah, we may have to stop buying cheap/unbranded cables, but branded ones would be easy to choose.
Thunderbolt provides easy but costly branding for high end.
Signal integrity first, then maybe cost or power usage. Commercial state of the art is around 100Gbit/s per pair using a gearbox chip, as seen in 100G single-lambda transceivers. For ASIC/FPGA, I think 58 GBd PAM-4 is available, giving 116Gbps per pair. The actual data rate will be a few percent lower due to line coding and forward error correction.
Normal circuit boards start interacting with the signal from about a few gigahertz. At really fast rates you need some black magic like Teflon or ceramic substrates for the signal to even get through.
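The arithmetic behind that per-pair figure, for the record (symbol rate times bits per symbol):

```python
# 58 GBd x 2 bits/symbol (PAM-4) = 116 Gbit/s per pair, before coding/FEC:
print(58e9 * 2 / 1e9)  # 116.0
```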
Well, the EM environment would be a CPU die on a motherboard full of gigahertz chips; does that mean this EM environment has been so much "improved" that they can double the speed? I am really curious how they chose to limit the speed of USB lanes. That said, I really don't like it, as this smells of "don't go too fast or we won't be able to sell USB controllers with really faster lanes". I would understand if one of the major limiting elements is the controller silicon process (2nm/5nm/etc).
Yep, finally my TB3 eGPU starts being at least in theory outdated. The fact that there was no improvement in TB speed for the last 5 years is ridiculous.
Edit: but probably PCIe lanes to the controller chip would be somehow a bottleneck as this is meant for display transport first
Are there any reputable places where I can buy a USB Type-C cable that supports everything? It probably wouldn't be cheap, but I'd love to just stop having to check to see if my cable is the issue.
I keep dreaming of the day when every peripheral (display, speakers, keyboards...) and component (RAM, SSD, HD, GPU, CPU) will come with just a single connector (like RJ45 or fiber optics) and they all will be connected to a switch (yes, like a layer 2 or layer 3 switch). They will communicate with each other through packets, like regular TCP/IP connections.
You have computers A and B with a single display C. In the morning you use computer A and in the evening you want to switch to computer B? Easy: go to the web interface of display C and set the IP address of computer B. That's it. Same for every other peripheral and component. The GPU wants more RAM? Easy: it just communicates directly with the RAM through the switch.
I know, it will be a security nightmare. I know, fiber optics (I guess this is the fastest network cable today?) cannot reach RAM speeds. In the end we are just sending bits over the wire (or wirelessly), and we have a gazillion types/revisions of cables and sockets. There will come a time when network speed will reach RAM speed (RAM is, I guess, after CPU registers and caches, the fastest component?) or close to it.
I don't understand it. Maybe because I'm missing something at a fundamental electrical level.
High end displays can. 8k60Hz at full (8-bit) color depth and chroma (4:4:4) is around 64Gbps. The article notes that with some features of the spec to reduce bandwidth, this enables "8K144 HDR with Display Stream Compression and no chroma subsampling" with room to spare.
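The back-of-the-envelope version (active pixels only, so real wire rates run higher once blanking, deeper color, and line coding are included, which is how 8K60 lands near the ~64Gbps quoted above):

```python
# Uncompressed display bandwidth, counting active pixels only.
def display_gbps(width, height, hz, bits_per_channel=8, channels=3):
    return width * height * hz * bits_per_channel * channels / 1e9

print(f"4K60:  {display_gbps(3840, 2160, 60):.1f} Gbit/s")   # ~11.9
print(f"4K120: {display_gbps(3840, 2160, 120):.1f} Gbit/s")  # ~23.9
print(f"8K60:  {display_gbps(7680, 4320, 60):.1f} Gbit/s")   # ~47.8
print(f"8K144: {display_gbps(7680, 4320, 144):.1f} Gbit/s")  # ~114.7
# 8K144 HDR only fits in 120 Gbit/s with Display Stream Compression,
# as the article notes.
```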
It's a common complaint of even current generation displays & docks that dual display 4k 60Hz docks are few and far between, and good luck getting even one display at 4k 120Hz on docks. My experience there is mostly miss, rarely hit.
Yep I’ve been itching to upgrade my monitor, but 4k 120Hz is still as good as it gets without compression. This is a DisplayPort limitation which should improve over the next year or two as new GPUs are released
I mentioned in a previous version of this discussion that PCIe 5 NVMe is specced to >14GB/s, and I'd really like to have external drives capable of that.
Cable noob here: can someone explain how this is possible? It's faster than a max speed cpu, so is this done by multiplexing things in parallel over like 10+ wires or something at a low enough voltage where there is minimal induction between them?
USB v4.1574787547zXQ 120gb mode will be enabled if you buy a rare cable captured from Hades by Hercules himself and your laptop has a chipset certified by Intel, Huawei and WWF.
Side question: does anyone know where I can find a USB display cable for under $30? (Don't remember the version.)
That's even worse than I thought. So SuperSpeed USB 10Gbps is known not only as "USB 3.1", but also as "USB 3.2"? And SuperSpeed USB 5Gbps and 20Gbps are also both known as "USB 3.2"? So when someone says "USB 3.2", what are they referring to?!
You can't tell without that Gen suffix, because it got retroactively renamed so that manufacturers could stick "USB 3.2" label on their old USB 3.0 devices.
USB is no good, and HDMI is no good. UEFI is also no good, and Unicode also is no good.
I should quote Steven J. Searle:
> The sad fact of the matter is that people play politics with standards to gain commercial advantage, and the result is that end users suffer the consequences. This is the case with character encoding for computer systems, and it is even more the case with HDTV.