Apple's Pro Display XDR takes Thunderbolt 3 to its limit (fabiensanglard.net)
231 points by WithinReason on Nov 24, 2023 | 170 comments



The article states the wrong resolution for the Apple display, and it's an interesting mistake because these days there are actually two versions of 6K in consumer-marketed computer monitors: the one used by the Apple display (6016x3384) and the slightly larger one used by the Dell U3224KB 6K that came out earlier this year (6144x3456). In fact, an interesting thing people found out when they use the Dell 6K display on Intel MacBook Pros running macOS 10.15 through 13.6 is that the Mac cannot do Display Stream Compression at the Dell's native 6144x3456, hence the Mac can only drive the monitor at 30 Hz instead of 60 Hz. However, if they can fool the Mac into thinking the display is 6016x3384 (same as the Apple display), DSC magically works and they get 60 Hz on the Dell (at the expense of sacrificing some screen real estate). Apple probably hardcodes the 6016x3384 resolution somewhere in their OS code. Thankfully, people report that this problem has been fixed as of macOS 14.1, but the bug existed for four years.
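For a sense of scale, here is the raw (uncompressed) pixel data rate each 6K variant needs at 60 Hz and 10 bits per component; a rough sketch that ignores blanking overhead, which adds a few percent more:

    def raw_gbps(width, height, refresh_hz, bits_per_component):
        # 3 components (RGB) per pixel, active pixels only
        return width * height * refresh_hz * bits_per_component * 3 / 1e9

    print(raw_gbps(6016, 3384, 60, 10))   # Apple Pro Display XDR: ~36.6 Gbit/s
    print(raw_gbps(6144, 3456, 60, 10))   # Dell U3224KB:          ~38.2 Gbit/s

Either way that's more than a plain 4-lane HBR3 DisplayPort link (roughly 25.9 Gbit/s of payload) can carry uncompressed, which is why DSC support matters so much here.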

Edit: this problem only seems to happen on Intel, not Apple silicon machines.


Thank you for pointing out the mistake. I had no idea; this is super interesting.

I have fixed the article and added a footnote to this comment.


> that the Mac cannot do Display Stream Compression at the Dell's native 6144 x 3456,

Can't, or won't? M1 MacBook Pros for some reason can't do 4K120 over HDMI unless you buy a specific USB-C-to-HDMI adapter and fool it into thinking it's DisplayPort (or something like that, I'm paraphrasing; you can find info if you search for cablematters DDC 4k120 m1).


There's no "fool it into thinking it's displayport". What you're describing is having the Mac actually literally emit a DisplayPort signal, and a separate device converting that to an HDMI signal. The USB-C HDMI Alt mode standard was never implemented by any real products, and all USB-C to HDMI converters are active adapters that consume DisplayPort signals and emit HDMI signals. Not all of those support HDMI 2.1, which introduced a drastically different signalling mode for HDMI in order to support much higher data rates (and also added display stream compression, further increasing the maximum resolution and refresh rate capabilities).


You're missing the point -- you have to use custom firmware on those adapters or Apple still only puts out 4k60

I went deep on this last night shopping for a cable


The custom firmware isn't actually all that interesting of a point, because slightly broken display behavior is extremely common if you look closely at anything other than normal everyday TV resolutions and refresh rates.

USB-C/DP to HDMI adapters often need to do some amount of EDID rewriting, because they need to be transparent to the host computer and the display, and so it's the adapter that's responsible for ensuring that modes that cannot be handled on both sides of the adapter are not advertised to the PC. When you layer that complexity on top of the existing minefield of ill-conceived EDID tables widespread in monitors, on top of the limitations of macOS (limited special-case EDID handling, little to no manual overrides/custom mode settings), it would be more surprising if there weren't some common use cases that theoretically ought to work but are simply broken. Applying the necessary EDID patch via adapter firmware is simply the easiest option where macOS is involved.

Even on Windows with a DP cable directly from GPU to display it's not all that rare to need a software override for EDID in order to use modes that ought to work out of the box (eg. I have a recent Dell monitor that cannot simultaneously do HDR and variable refresh rate out of the box).
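For what it's worth, the byte-level part of an EDID patch is the easy bit; knowing which descriptor or timing bytes to change is the hard part. A minimal sketch of the mechanics (the offset and replacement bytes are purely illustrative, not taken from any real monitor):

    def patch_edid_block(block: bytes, offset: int, new_bytes: bytes) -> bytes:
        # Each EDID base/extension block is 128 bytes and must sum to 0 mod 256.
        assert len(block) == 128
        buf = bytearray(block)
        buf[offset:offset + len(new_bytes)] = new_bytes
        buf[127] = (-sum(buf[:127])) % 256   # recompute the checksum byte
        return bytes(buf)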


That "some reason" is that a standard DP-to-HDMI 2.1 protocol converter can't negotiate beyond HDMI 2.0 link rates without the host computer knowing about and doing FRL training on the HDMI side. Completely unrelated to any limitations related to 6144 x 3456.


Couldn’t the adapter do that?


As I understand it, automatic fallback/limitation to HDMI 2.0 speeds was desired by VESA in the event of using an 18gbps cable or other signal integrity issue, so ultimately they chose to require the host to be more aware of HDMI for the converter to enable HDMI 2.1 speeds rather than requiring the converter to be smart.


Displayport to HDMI 2.1 adapters already seem quite complex: https://www.paradetech.com/products/ps196/

Is the problem expressing the maximum supported bandwidth end-to-end, i.e. not as a function of the DP channel quality that the source sees?


Yes, as a specific example, if the HDMI sink wants DSC, maximizing the quality (minimizing the compression ratio) fundamentally cannot be done without knowing the end-to-end bandwidth.


I can verify that on an M1-M3 Mac running Sonoma (14.1.1), the U3224KB supports 6144x3456 at 30-bit color (60 Hz). Under Ventura this did not work. Seems fixed now.


I got 6144x3456@60 on my M2 Studio Ultra from the day it (the Studio) arrived, back in... I want to say July (so not running Sonoma).


Yes, 6144x3456@60 was working, but not at 30bit, only 24bit. Sonoma fixes that part (for me)


What is an “M1-M3 Mac”?


A Mac which has the M1, M2, or M3 chips.


I still desperately wish the industry had been able to push through to an optical data + plain power interconnect standard ages ago instead of it falling apart. It's so pleasant to deal with fiber. The same OS2 or OM3 I installed over a decade ago for 10 Gbps is still fine for 40 or 100 Gbps. Cost was a complaint at one point, yet even without the enormous economies of scale a general standard would bring, price and performance lines have ended up converging anyway. 40G SR (so still good to 150m) modules are now at $40 or less, and even 100G is less than $100. Putting that onto simpler fiber instead of MTP/MPO with SFP56 remains much more expensive (a 50G-SR is still like $280), but that appears to primarily be a product of it being very new and not yet scaled, not that it couldn't have been years ago. And that still then runs to 100m with duplex LC.

Meanwhile, Apple wants $70 a pop for a single 1m Thunderbolt 4 cable, and that's only been increasing. What will Thunderbolt 5 be? Corning and I think one other briefly did optical Thunderbolt cables, but those were $500-1000. Whereas premade quality duplex OM4 with helical steel armor runs more like $2.20/meter.

Feels like we somehow ended up in yet another path-dependent technology evolution where decisions that saved a bit at the time have imposed major costs forever more :(. Man, it'd be so cool to just be able to run a screen and input boxes hundreds of feet away from a workstation for $60, or have an $8-12 cable be good for decades of evolution in bandwidth barring regular wear (and when it's that cheap, who cares even if it breaks after 5 years?), or be able to route displays/TVs and PCIe and whatever else around like any other networking with no compromises. Sigh.


> It's so pleasant to deal with fiber.

Consumers are absolutely brutal to cables. The minimum bend radius of copper cables gets violated all the time, but copper has a decent chance of continuing to work if bent back. Not so with fiber. Fiber would only work with some heavy armor.

Consumer fiber was actually tried in the past: Optical TOSLINK was briefly popular in the audio world. It was cheap relative to what you’d need for modern high speed fiber, but even that was too much to win out over copper. It faded away.

Even within professionally maintained data centers, direct attach copper is often preferred over fiber interconnects when it’s possible to get away with it.

Fiber is great when called for, but copper wins in practicality when you can get away with it. It’s been proven over and over again across industries.

> Meanwhile, Apple wants $70 a pop for a single, 1m Thunderbolt 4 cable, and that's only been increasing.

You picked the absolute most expensive cable as your benchmark. Look anywhere else and prices are lower and decreasing.

Also, you are wrong about Apple cable prices increasing. They’ve actually dropped the price of their longer cables.

But again, look to the overall market and you’ll see prices going down.


Have you played with the newest bend-insensitive fiber? You can wrap it around a pencil and an OTDR shows no or minimal loss. But I think you're correct about damage from average consumers. They'd destroy the ferrule in no time.


The failure mode with consumers is usually pulling cables against a sharp edge. Even a pencil has a larger radius than the edge of your desk or the corner of your computer case. You can buffer the radius by putting a thick jacket around the fiber, which is about the only thing that works.

But fiber still isn’t a free lunch. The longest high speed cables already use fiber internally, but they’re expensive because they need extra optics and transceivers inside. Moving the extra fiber hardware into the laptop and client device would make the cables cheaper but make the hardware more expensive. That’s not a trade off that most people would take as most people don’t actually need long cable runs.

The above comment was trying to use Apple premium cables as the reference for being too expensive, but opening Amazon shows plenty of Thunderbolt 4 cables in the $20-$30 range from other vendors. It’s really hard to imagine a scenario where forcing every cable to be a combination of fiber and copper would make things cheaper for us consumers.


B3/A3 bend-insensitive fiber can handle a 5mm bend radius. You can wind it around a pencil with no issue. It does fine on the edge of a desk. 5mm is less than a quarter of an inch. I can send you video of beating the crap out of it against the edge of a desk if you want.

I've watched tons of modern fiber and Cat 6 be abused. My dog dug up and chewed some. 0.5 dB loss. The copper next to it was a complete loss.

The fiber always wins.

As for pricing: once you get to 10 Gbps+, it's at best a wash.

Transceivers are 10-20 bucks. Cheap RJ45 10 Gbps SFP+ modules are 2-3x that price, and produce a ton of heat as well. This will only get worse.

25 Gbps fiber transceivers are now almost as cheap as 10 Gbps ones.

Meanwhile, higher-speed copper standards are much more expensive and produce even more heat.

Higher-speed fiber is dropping in price.

Higher-speed copper is not, because nothing drives it anymore.


Do third party thunderbolt cables suffer the quality issues that usb cables have? (counterfeits, different standards, unclear specs, etc.?)


Give my cat five minutes...


Pun intended? Cat 5?


:)

How did you know how they test for these specs?


>Consumer fiber was actually tried in the past: [...] It faded away.

Not everywhere. It's still not as popular as copper but it's not extinct.


To be fair, TOSLINK was a particularly bad standard and offered lower audio quality than similar or cheaper analogue devices. Its whole selling point was basically "it's fiber optic so that makes it better" while not actually being better.


It's literally the exact same digital signal that is sent over copper S/PDIF. The only meaningful advantage it offers is electrical isolation.


S/PDIF is also crappy in comparison to analogue audio of the time.

EDIT: Looking closer at the wiki (it's been forever since I saw it used), it's only when it's used for surround sound that it compresses the audio super poorly; for stereo it's just PCM and sounds fine.


Didn't stop manufacturers from making gold plated TOSLINK connectors though.


It fools the same people who somehow think TOSLINK audio quality is worse over a bit-perfect link.


No one's saying it didn't transmit the data bit-perfectly, but it re-encoded the media at a relatively low bitrate with a decently crappy codec, which is where the crappy audio came from.

EDIT: Looking closer at the wiki (it's been forever since I saw it used), it's only when it's used for surround sound that it compresses the audio super poorly; for stereo it's just PCM and sounds fine.


I wish I could dismiss that as a joke, but I have in fact seen some of those.


Lower audio quality? Nope. TOSLINK sends the actual bits that make up your audio digitally over the cable. If the receiving device does a shit job at converting the signal to analog, that is hardly the fault of the standard.


The codec they transmitted over the wire (regardless of the wire media) was crappy, not the wire itself, but in this case those imply the same thing since, as far as I'm aware, there was only the one format that would be transmitted over that connector.


I've used S/PDIF over coax and fiber (TOSLINK) to transport audio from TV (ATSC 1.0) and DVD, where you're just bitstreaming the data from the antenna or the disc. It's also fine for 2-channel PCM.

Dolby Digital (AC-3), DTS, and 2-channel PCM are fine for what they are. More channels in PCM would be nicer, as would newer, higher-bandwidth lossless codecs, but as a unidirectional signal, it's hard to add support for more stuff. It's not terrible for a standard from 1985 to not support full-bitrate audio on Blu-ray.


What codec? It's PCM.

You could also use that PCM stream to transmit Dolby Digital or DTS, but that's up to the device, not TOSLINK.


Okay, looking closer at the wiki (it's been forever since I actually tried it): for stereo it's full bandwidth; it's when it's surround sound that it's super compressed.


It's super compressed, but it's the same super compression that is on any DVD, so for any surround audio source a consumer would have, it was perfect.

The only people who would lose out on using TOSLINK was someone listening to a surround SACD (not exactly common).

It was never updated for the higher bitrates on Blu-ray, so that's when it fell out of favor.


Super compressed compared to what? Bluetooth?


Compared to pretty much any other output you'd have on an AV receiver 15-20ish years ago (so HDMI, XLR, or some other standard analogue like RCA or speaker wire). Possibly similar to the compression level of Bluetooth; I don't have access to any systems that'd be easy to compare against these days for TOSLINK.


XLR is one cable per two channels. You could conceivably run one toslink per two channels and get functionally the same as XLR (minus phantom power)


Balanced XLR is three conductors per signal:

- Pin 1: Ground

- Pin 2: Signal in Phase

- Pin 3: Signal flipped

Two channels over XLR are done in two ways:

- digital using the AES standard and 110 Ohm cables

- analog using two balanced lines and an XLR-5 connector, although this is not an official standard, to avoid confusion with DMX


I def could see doing that but don't think I've ever seen a system with multiple toslink for additional channels like I've seen done for XLR.


No ground loops is a feature.


It's still used to link audio interfaces with ADAT.


I've never even heard of a minimum bend radius, and I care about my kit so I do tend to read the docs that come with it. Where are such details given? I'm now curious, thanks.


I don't know about all cables, but Cat 6 has a generally accepted minimum radius of 4 times cable width and there's some standards to back that up.

You can also find datasheets for industrial cable that specify it for fixed installations and repeated flex applications: https://www.sab-kablo.com/fileadmin/user_upload/pdf/catalog_... (notably this has 5 times the diameter, so it wouldn't pass some standards)

Consumer cables, not just Ethernet, probably do have such specifications when produced in bulk, but the manufacturer that turns reels of 1000 metres of raw cable into finished cables might not include it in the end manual (just like they usually don't include the frequency response curves in there).


Alec from Technology Connections covered the topic of consumer optical standards - or rather, the lack thereof - in a video that's now 4 years old:

https://youtube.com/watch?v=CwZdur1Pi3M

It mostly boils down to the reasons others already mentioned: fragility, cost (optical transceivers have only recently become cheap enough to enable the use case of optical cables that behave like copper ones) and the requirement for copper wires anyway in order to carry power in addition to data.


Everyone else is talking about bend radius but as someone who has abused fiber (just your standard cheap 10Gtek patch cables from Amazon) that's not the problem, modern fiber is quite bend-resistant.

The problem is the ends. Inside each fiber plug is an exposed fiber. Dust is a huge problem if you expect users to toss the cables in their bag, and worse is damage to the exposed ends of the fiber, which can get chipped and then in turn damage the exposed fiber in the device you're plugging into.

It would be cool if instead of those active Corning cables you could just get an active Thunderbolt-to-SMF plug, and then bring your own cables. Then the expensive part (the conversion circuitry and optical transceiver) can be reused for different applications and you just replace the cheap fiber with whatever length you want (or use pre-existing fiber in your home and update the optical transceivers when a new standard comes out and not have to re-wire)

edit: thinking about it more, it would have been a lot cooler if instead of thunderbolt we just got a better Ethernet. Then we could use any medium we wanted to.


Would high speed optical work for connections that require regular unplugging?

https://en.wikipedia.org/wiki/TOSLINK and https://en.wikipedia.org/wiki/ADAT_Lightpipe are limited to about a megabyte/second, if I’m doing my math right. https://en.wikipedia.org/wiki/MADI#Sampling_frequency gets higher, at around 12 megabytes/second. That still is at least a factor of 100 away from this cable.

I think that’s because it isn’t easy to make a plug that works reliably for a gigabytes per second fiber cable.
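Rough payload math for those formats, assuming the common configurations (the actual line rates are somewhat higher because of framing and channel coding):

    def payload_mb_per_s(channels, bits_per_sample, sample_rate_hz):
        return channels * bits_per_sample * sample_rate_hz / 8 / 1e6

    print(payload_mb_per_s(2, 24, 96_000))    # TOSLINK / S-PDIF stereo: ~0.6 MB/s
    print(payload_mb_per_s(8, 24, 48_000))    # ADAT Lightpipe:          ~1.2 MB/s
    print(payload_mb_per_s(64, 24, 48_000))   # MADI, 64 ch @ 48 kHz:    ~9.2 MB/s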


You would need a new connector because dust can break the signal and a cable that needs a dust cap sounds like a horrible consumer product.


3M is on the bleeding edge here: https://www.3m.com/3M/en_US/data-center-us/applications/inte...

used in ai data centers, probably years away from consumer adoption...


Fiber doesn't bend well, which for a cable that will be moved regularly by ordinary customers, is game over.


>Fiber doesn't bend well

Eh? Typical min bend radius is like 10x diameter in static conditions, so for a 2mm fiber cable that'd be 20mm or ~0.8". That's not an issue at all with consumer usage, and if you're really worried is trivially solved by just building the thing up to the level of thickness Thunderbolt cables have already. That's why Corning for example advertised their old TB optical cables as "zero bend radius" [0], adding more polymer/armor around a fiber cable so that someone can do whatever with it and it naturally won't go out of spec isn't a big deal.

FWIW, anecdotally high performance copper doesn't like abnormal use either. 10g USB-C cables are cheap enough that I tried taking one and using a vice grip to really squash the thing at an angle, and it didn't like that at all in terms of working reliably afterwards. I doubt DP, HDMI and the like would do better, and those are plenty thick too. It never comes up though in normal use. And when cables are cheap vs stupid pricey, making a mistake no longer is such a big deal. If you use an $8 cable a bit too hard and it stops working, well, grab another one out of the drawer. If your $70 cable breaks that's a touch worse.

----

0: https://www.corning.com/microsites/coc/ocbc/Documents/CNT-00...


I’m with you and want more fibre, but people are brutal on hardware.

I’ve been involved in the replacement of two fibre optic cables at work, used for pulse oximetry in an MR scanner. It was very expensive, twice.

People force a bend and destroy them. It doesn’t matter what you tell people, they break them.


Was in healthcare IT as well: https://www.fs.com/products/41028.html?attribute=35025&id=61... for a couple of dollars you can get 2 paths of crush-protected, bend-insensitive high speed fiber that barely degrades signal quality when you knot it up. To get around the limitations of the LC connector, and wall jacks being regularly obliterated by equipment/tables being moved around, use recessed jacks; that way things just pass by instead of breaking the connector (applies to RJ45 as well). We had great success with this approach for a couple thousand locations.

Of course, the real problem 95% of the time is really around the device, not the technology the device uses. There is a misalignment of incentives on who can work on the device, whether standard parts are used, and whether the approved parts are the $2-type solution that will cost $800 in on-site contractor fees to replace again or the $4 solution which actually stands a chance of being used. Once the device is bought, nobody is going to be incented or allowed to do anything but fix it to the status quo. It's one reason I had to get out of healthcare IT - it was more often the system getting in the way of what patients and nurses needed to do than the actual technology itself, so solving things from a technology perspective felt like running on a treadmill and going nowhere.


I'd go like: "third time, I'm wrapping it in a kiddy pool noodle" for their embarrassment


In my experience they wouldn't even mind or be embarrassed, so long as said solution sounded like it would stay out of the way and let them use the device more conveniently. To them that's the whole point of IT: make it less painful to use the devices how they want to use them to do their job. IT isn't the savior that designs things they want, it's the cost center that makes the things they have to use less annoying to use.


Until someone figures out how to supply bus power over optical fiber, I don't see it as a viable alternative for typical consumer peripheral I/O applications, no matter how inexpensive and robust the cables are.


The most common use cases for the kind of long range high bandwidth connectivity that fiber is good for are networking and displays, neither of which typically carry power in consumer use cases. USB cannot be replaced by a purely optical connection, but Ethernet and DisplayPort certainly can, and tunneling USB alongside DisplayPort to split back out at the monitor doesn't present any power delivery challenges.


It's very common now for monitors to support usb-c or Thunderbolt input, and provide power over that link to charge a laptop (or tablet).


We have FTTH everywhere in India, and seeing how fiber is installed in all kinds of nooks and crannies, I can tell you that fiber is more resilient than you think. And frankly, I have experienced far fewer Internet outages with FTTH than I used to with twisted copper pairs (ADSL2+).


How thick are those cables versus standard consumer cables?


G.657B standard cables have a pulling (stress) bend radius of 5mm. So, you can abuse it quite a bit and it's fine. Also, while it is abused during installation, at rest it is not under so much stress and usually has a sturdy casing to make sure it stays above its min bend radius.


I don't know if this is a helpful comparison for you, but my FTTH cable is almost identical size and material-wise to Apple EarPods cable. Has been working fine for years now, though obviously I don't move the router that much.


Most GPON is quite thin. Thinner and tougher than standard fiber patch cables.


It bends just fine. There's tons of fiber cables for "ordinary users" including HDMI, DisplayPort, Thunderbolt, VR link cables, and, of course, toslink. You just end up with the extra cost and complexity of having transceivers built into the cable which is a waste especially at these speeds where copper is clearly a limitation.


I didn’t know about fiber HDMI, DisplayPort or Thunderbolt…!

Are these used for very long runs - as in a video source to a projector (via hdmi) hundreds of meters away, like in a stadium..?

Thank you!


Those are used in multiple cases:

- long runs that are not that long. HDMI does not like long cables at all; even an overhead projector in a classroom requires super expensive cables (ever wondered why it's still mostly VGA?)

- packing multiple displays in a single cable. fiber is so thin that with trunk cables you get a lot of strands in a single cable, capable of running a lot of displays

- just getting a really thin cable that can be run in existing conduits or hard to reach places

I did buy such a cable for home for the third reason, to run from the PC in the office to the TV in the living room. It runs on a bog standard OM3 MPO cable. The specific one I got comes from HeyOptics and their website already showcases a few usecases [1]. (not affiliated, just a great product that just works)

As for stadiums and more generally broadcast video, they're using SDI instead of HDMI. Those are indeed most of the time fiber, both for range and weight (think of the cameraman running along the terrain during a sport event, and their long tail of cables). When they use HDMI it's more in the control room.

[1]: https://www.heyoptics.net/products/8k-hdmi-mpo-optical-cable

(edit: fixed list formatting, I always forget this is not markdown)


8m or so is the limit for HDMI passive cables being 100% reliable.

If you want to go 15-30m--- using a video source from another part of your house, or to drive a projector in the middle of a classroom--- and you buy an "active cable", odds are it's fiber optic inside.

If you want to go hundreds of meters away, you'll get a purpose-built box that uses your own optical cables instead of a cable that hides the optical transceivers inside.


I have a 50 foot fiber HDMI cable to get 4k/120hz signal from the PC in my home office to the TV in the living room. Works great!


That's quite amazing. How thick is it? For metric folks, that's more than 15 meters!


It's fairly thin, quite a bit thinner than most of my traditional copper cables I think. Apparently it's nominally 15m, they also have a 20m one: https://www.monoprice.com/product?p_id=43328


Copper thunderbolt cables are only good for 2-3m, after which you need active cables or repeaters and fiber is easily the best option at that point.

HDMI and DisplayPort are good for a bit longer than Thunderbolt over copper, but not by all that much. HDMI 2.1 can only go to around 3m as well now.

So we're in a world where "long" is a mere 5 meters / 15 feet. This is why so many VR headsets are using fiber cables - it has to be long to enable the movement and logistics of connecting a PC to someone freestanding in a room, but modern video signals are just too hard to drive over copper at those not-really-that-long distances.


The Meta Quest link cable is optical and way more than 1m long.


Fiber cables are often more flexible than copper ones by virtue of just being a lot thinner.


Which exacerbates the problem of users easily exceeding the minimum allowable bending radius.


USB is still (as far as I know) a mess and that sucks.

But DisplayPort has great optical cables available! I was early in and they were very cheap then (no one trusted them yet): 50m v1.4 for $60. Turns out to be vastly longer than I needed, and I got a 20m cable for gaming from the roof. There are even DP 2.1 cables now.

A USB4 with optical transport would be divine.


Meta/Oculus makes a 5m optical USBC cable https://www.meta.com/quest/accessories/link-cable/

USB 3.2 Gen 1 though, not USB 4


There are optical USB-C cables available! The standard explicitly supports that use case.


This is my dream. To those worried about bending, I would be really curious to try one of these with various levels of sheathing to see how they hold up in reality.


I have one of these. My Dad died last August (just before his 102nd birthday) and he had bought himself a Pro Display XDR about 6 months before that. My brother is a PC person and I am a Mac person, so I got the Pro Display.

It is an amazing display and I love it, but I would never buy one for myself. It is obviously fine for programming, but for me it really stands out as something for consuming entertainment, even though I only get 4K content. It is capable of, I think, 7K with the right computer and has 10 bit color depth. When my Dad first bought it, I used it to play Apple Arcade games on my iPad Pro - that was fairly spectacular.

EDIT: my Dad had a Blackmagic video camera that I think had 8K resolution, and so he had a lot of fun with his setup.


I do visual work (graphic/UI design, photography, video editing) along with programming and there is no display with the resolution and color fidelity of the XDR at its price point. I got one shortly after release, and if it stopped working today I'd buy another one in the amount of time it takes me to click "Submit" on the Apple Store. It's just that good.

When I look at a high resolution scan of a large format negative on it, it feels like looking at it directly on a light table. It's insane. My only complaint is the local dimming, which shows its limit when you're doing fine white on black linework in a dark room. Hopefully we'll get a pro OLED display of that quality in the next decade which will solve that one issue.

The only other piece of hardware I've spent money on that comes close of giving me the same satisfaction is my Happy Hacking Keyboard, which I've used for over a decade now and I hope I will keep using until I cannot use computers altogether anymore (I have a few spares just in case).


Thanks for that. I used to be into photography, and I did just once try shooting raw images with my Canon and viewing/editing them.


I'm sorry for your loss, but I have to say that your post made me smile. How awesome that you and your brother got to enjoy dad at 101 being able to nerd out with video and high end displays.


Thanks!


Your 101 year old father was out shooting on an Ursa 12K? What a guy.


The frail elderly are a very prominent group in society because their needs are so great. Robust "old-old" adults tend to blend in because they are inconspicuous and they go about their business.

I think we're going to see a "silver tsunami" of robust elderly persons as millennials and Gen Xers age, simply because of the healthy lifestyle activities that have been a part of their lives.

I.e. don’t buy into Acorn Stairlift and Lifealert futures.


> I think we're going to see a "silver tsunami" of robust elderly persons as millennials and Gen Xers age, simply because of the healthy lifestyle activities that have been a part of their lives.

> I.e. don’t buy into Acorn Stairlift and Lifealert futures.

Trends in obesity–which is a huge driver of poor health in America–don't seem to support this hypothesis. I think the great majority of health and wellness activity in recent years has been concentrated among people at the upper end of the socioeconomic scale, which also drives perception since companies will spend a lot on advertising to attract people with money. Things in this country look very different depending on how far you are from the nearest Whole Foods/Equinox/Soulcycle/Sweetgreen.


As soon as semaglutide et al. become generic, obesity will be completely over, worldwide. It will be looked at as an awful period in history, like opium dens are now.


Better wait at the very least 20 years for the patents to expire, so I would bet good money on 50 years.

It's not a silver bullet anyway, but "become generic" is doing some extremely heavy lifting.


We can expect countries that don't give a damn about patents and IP will mass produce this and flood the global market to the same degree they have for boner-pills.


> robust elderly persons as millennials and Gen Xers age, simply because of the healthy lifestyle activities that have been a part of their lives.

You're probably leaving out the issues around Teflon, microplastics and antibiotics poisoning all food, air and water; the general increased stress and anxiety about the wars, economy, job market, environment, debt and unaffordable rent/housing; and the loneliness epidemic plaguing the West, all of which have already tanked their/our sperm count, so we can't be too sure they'll/we'll see much healthier retirements if these keep piling up.

Those with solid careers in tech in developed countries yeah sure, they'll be fine and happy, most likely retired early, house and debt paid off and focused on enjoying their hobbies instead of working the 9-5 grind. The rest, not so much.


This 101-year-old lived through a world war with rationing, would have been born into the Great Depression, saw the rise and fall of Nazi Germany, the Cold War and Cuban Missile Crisis, the Kennedy Assassination, the Nixon years, the oil crises and recessions, Gulf wars, 9/11...

On the healthcare front there was the proliferation of lead (in paint, toys, fuel, everything), smog from cars and coal burning, toxic fertilizers, the rise and fall of smoking, the discovery of HIV, polio outbreaks, and things like the Cuyahoga River fire (where rivers were so polluted they literally caught fire every couple of decades). The mining town my family lived in would just throw the arsenic and mine tailings into the lakes because they figured it couldn't hurt them there, and that was a common thing to do at that time.

Gen X and Millennials are not the only generations who have faced adversity. It's a rough moment now for sure, but it's not unique. We shouldn't fall into baseless optimism, but we also shouldn't neglect human strength and creativity. We have new problems, and we have new tools.


> We shouldn't fall into baseless optimism, but we also shouldn't neglect human strength and creativity. We have new problems, and we have new tools.

Thank you for this comment, it helps to contextualize two moods that I have, as one who has struggled with depression (not currently, but off and on):

When I’m in a low mood it’s easy to see and dwell on the new problems and discount the efficacy of the new tools.

When I’m in good spirits it’s easy to see the new tools and (temporarily) forget about the new problems.


>This 101-year-old lived through a world war with rationing, would have been born into the Great Depression, saw the rise and fall of Nazi Germany, the Cold War and Cuban Missile Crisis, the Kennedy Assassination, the Nixon years, the oil crises and recessions, Gulf wars, 9/11...

Sure, not stealing his thunder, but that's how selection bias works. Not everyone got to live to 101, despite maybe even living healthier lives. I know people in their 40s who already died of cancer. Life can always throw you a curveball.

>Gen X and Millennials are not the only generations who have faced adversity.

Fair point.


Living to 101 is definitely not representative.

My point really was that second one.

As I said, it's a rough time right now; we're going through a lot. But we've passed environmental reforms before, we removed lead from gas, we invented vaccines, we set standards for chemicals, we've cured a few people of HIV; there is good to find out there.


Yes, there is obesity and yes, there will be long COVID, but the health-positive initiatives (more women actually encouraged to work out; men not perceiving weightlifting as gay; marathons are a normal thing now; herpes zoster vaccines helping with long-term immunity against a probable cause of Alzheimer’s dementia; cigarette smoking as socially unacceptable behaviour) will tip the scales in favour of longevity towards making it to 100-120.


Just because you're old doesn't mean you're rich. China and HK have old ladies working out on the streets.


That was kind of my point. If you wanna enjoy your old age you also need to be somewhat wealthy or at least financially very stable.

The old people working in the streets till they drop in China and Korea do it because they have no wealth to rest on, not because they enjoy doing that kind of work so much.

It's doable to be young and poor, but being old and poor sucks.


He said robust old people. The Chinese old ladies are robust and impoverished.


Wow, it would have been neat to talk to the kind of 102-year-old person who is buying this kind of hardware. I'd love to know what he thought about the progress of technology and how he felt it had impacted society.


During its introduction, Apple made a big deal about using it for video production and how it could replace extremely expensive reference monitors, if I remember correctly.

Through that lens it seems like a useful product.

For everyone else it seems like a pretty amazing monitor if money doesn't matter. Its most useful quality is probably being 6K, so you have tons of screen real estate.


It doesn’t really though. There was hope it would be a dual-layer LCD device that could, but alas we’re stuck with $20k+ Sony monitors for that still.


In theory, there will be at least two manufacturers making 4K HDR OLED panels with 1,000 nits peak brightness (for a 3% window) in 2024: https://www.flatpanelshd.com/news.php?subaction=showfull&id=...

However, I've been reading about announcements like this for about five years now with close to zero products actually available to buy.

It doesn't help that Samsung decided that from now on all displays must be curved and ultrawide.


Not to presuppose anything intimate, but if your dad was 101 buying that monitor, he was basically buying it for his kids as much as himself ;) sounds like he was into neat stuff!


Your dad was shopping for cutting edge Apple tech at 101! That's super cool. I aspire to be like him and never lose my sense of wonder about tech. Sorry for your loss, but I also am happy to hear you got to enjoy many years with him.


Hopefully I’ll have my InfinityK display at 100 that can get passed on to my kids


I have the younger brother, the regular 5K Studio Display, and it's leaps and bounds better than anything else I've ever used. 60 Hz max is brutal for a gamer, but for programming it's incredible.


I just don't get why such premium pricing doesn't include ProMotion feature.

It means a lot for productivity use as well, like smooth cursor movement, smooth browser scrolling, app scrolling etc.


It doesn't work like that; the 40 Gbps is the actual bus bandwidth AFAIK. https://www.thunderbolttechnology.net/sites/default/files/Th... figure 7 says 5120x2880 @ 60 Hz, which requires 22.18 Gbit/s and leaves 18 Gbps of data bandwidth. (This figure is quite important, because it is one of the only two "official" sources which admit the raw data transfer limit of TB3 is 22 Gbps; the other is Dell at https://www.dell.com/support/kbdoc/en-uk/000149848/thunderbo... Otherwise you'll only see the 40 Gbps speed.)

What actually happens is much simpler: the blog post forgot to set the calculator to 10 bit (https://linustechtips.com/topic/729232-guide-to-display-cabl...). If they had, they'd see the required data rate is 38.20 Gbit/s, so the bus is nearly full. USB-C has separate USB 2.0 wires, so unless you have DP 1.4 for DSC you can only use those for USB data; there's no space for anything else.
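Back-of-the-envelope, the timing math looks roughly like this. This is a sketch assuming CVT-RB-style reduced blanking (an 80-pixel horizontal blank and a minimum vertical blanking interval of about 460 microseconds); the calculator linked above applies the exact rules:

    def approx_data_rate_gbps(h_active, v_active, refresh_hz, bits_per_component):
        h_total = h_active + 80    # reduced horizontal blanking
        v_total = v_active
        # grow the vertical total until the blanking interval lasts >= ~460 us
        while (v_total - v_active) / (refresh_hz * v_total) < 460e-6:
            v_total += 1
        pixel_clock = h_total * v_total * refresh_hz
        return pixel_clock * bits_per_component * 3 / 1e9   # RGB, 3 components per pixel

    print(approx_data_rate_gbps(5120, 2880, 60, 8))    # ~22.2 Gbit/s, the figure 7 case
    print(approx_data_rate_gbps(6016, 3384, 60, 10))   # ~38.2 Gbit/s, the XDR at 10 bit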


That's not really the full story when this is about displays. I currently have a display that requires DisplayPort 2.1 to get everything out of it, and with DSC and DP 1.4 I can get a "theoretical bitrate" that is higher than what they would normally allow. (Of course, this doesn't change what the link can do; it just allows you to do more with the same bandwidth.) On Thunderbolt 3 systems, the link is on HBR3, which with DSC allows 120 Hz, 10-bit at 7680x2160. No DSC support limits the refresh to 60 Hz and 8-bit (as then you won't have the higher rates available anyway, since DSC is mandatory with newer standards).

You can check https://tomverbeure.github.io/video_timings_calculator to see what is possible.
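A quick sanity check of that claim, under stated assumptions (4 lanes of HBR3 at 8.1 Gbit/s per lane with 8b/10b coding, typical DSC compressed-stream targets of 8-12 bits per pixel, blanking ignored):

    hbr3_payload_gbps = 4 * 8.1 * 8 / 10    # ~25.9 Gbit/s of usable bandwidth
    pixels_per_second = 7680 * 2160 * 120   # ~2.0 Gpix/s for that mode

    for dsc_bpp in (12, 10, 8):             # common DSC compressed-bpp targets
        needed = pixels_per_second * dsc_bpp / 1e9
        print(dsc_bpp, "bpp:", round(needed, 1), "Gbit/s, fits:", needed < hbr3_payload_gbps)

Even the mildest 12 bpp target comes in around 24 Gbit/s, while uncompressed 10-bit would need roughly 60 Gbit/s, so DSC really is doing the heavy lifting there.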


Thunderbolt 3 doesn't use those USB 2.0 wires; that was one of the big changes in USB4/TB4 to use them. Once you're in Thunderbolt 3 mode, all USB ports are provided by PCIe xHCI controllers by the TB3 device.

In actuality, the mode where the XDR display consumes 38gbps of uncompressed display bandwidth is an Apple-only mode requiring special Titan Ridge firmware that aggregates 6 lanes of HBR3. Contrary to the article, Alpine Ridge does not support 6k, which specifically is why the iMac Pro doesn't support 6k output. It requiring special firmware is also why this mode only works when directly connected to the Mac.

But yeah, it's annoying that everyone throws bandwidth numbers around without mentioning or even thinking if it's link rate or data rate.


I'm not sure about this mode requiring special Apple firmware, but it definitely does not use 6 lanes of HBR3: https://forums.macrumors.com/threads/caldigit-introduces-usb....

I don't think Thunderbolt ever uses anything other than 4 lanes per DP connection if the GPU doesn't limit the lane count, because Thunderbolt deserializes DP packets, transmits them over the link and serializes them back on the other end, so it can't negotiate the lane count on the host side.


Thunderbolt 3 uses the four high speed lanes of the USB C connector. The USB 2.0 separate wires are present and are definitely used.

Yes, it's USB4 which added USB packets to the bus, TB3 only had PCIe and DP.

That's rather interesting; what do you mean by aggregating? Thunderbolt could always carry 40 Gbps worth of DisplayPort packets, and AFAIK it was not lane-bound.


I mean, if you have access to the actual Thunderbolt spec saying otherwise then so be it. Personally, I sincerely doubt that the PCIe packets containing USB 2.0 data are specially routed over the USB 2.0 wires like you say since that runs counter to literally everything public about how Thunderbolt 3 works.

Maybe the spec itself doesn't define a maximum bandwidth allocation for DisplayPort packets, but the DisplayPort stream has to come from somewhere and be output somewhere else, and with one single exception, the actual implementations support two HBR2 streams or one HBR3 stream, with each stream obviously being individually capped to 4 lanes at the physical DP interface.

(AFAIK the special mode combines one 4-lane HBR3 stream with a second 2-lane HBR3 stream)


What...? No. There are no PCIe packets; it's completely separate. The USB 2.0 wires are not part of the Thunderbolt specification; they're part of the USB-C connector specification. USB-C is a physical connector with four high speed lanes, a separate pair of wires for USB 2.0, one for negotiating things, and one "extra". Here is a TB3 dock with a USB 2.0 port: https://www.bhphotovideo.com/c/product/1512891-REG/belkin_f4...

This is the same reason you see some adapters using DisplayPort alternate mode have a USB 2.0 port only -- in this case, it's DP packets which occupy the four high speed lanes much like Thunderbolt above.

Yes, Titan Ridge must connect via DisplayPort lanes and you are right, it definitely looks like it only takes five lanes, wow. https://support.lenovo.com/us/en/accessories/pd029622-displa... says two 3840x2160@60Hz monitors and a 2560x1440@60Hz monitor are supported, which is 30.71 Gbit/s -- a third 4K would require 37.62. https://www.dell.com/support/manuals/en-us/dell-wd19tb-dock/... explicitly says it's five lanes. https://www.dell.com/support/manuals/en-us/wd22tb4-dock/dell... even TB4 is?? Wow!


My understanding is that your average thunderbolt 3 equipment, when running in thunderbolt mode, did not directly pass through any USB traffic. Instead a dock with USB 2 or 3 ports had to contain a USB controller that connects back over PCIe. This was very common, lots of docks have them.

Here's a good breakdown: https://www.reddit.com/r/UsbCHardware/comments/mjz2pu/usb4_a...

"Titan Ridge, however, would disconnect the USB 2.0 and USB 3.1 hubs immediately upon entry into TBT3 mode."

"USB4 (and Thunderbolt 4) don't do this for the classic USB 1.1/2.0 wires of D+ and D-. When a hub is operating in advanced USB4 mode, classic USB 1.1/2.0 signals still ride through a normal USB 2.0 hub"

"disabling PCIe also means disabling the way that all Thunderbolt 3 docks get to USB 1.1/2.0/3.2 devices at all"


Does the MacBook Air with M1 (or M2) actually support 10bpc on the 6k Display? It is never mentioned in the specs about the Macbook Air, AFAIK. My guess is that it does not. The M1 from 2020 also did only support 8bpc on the internal display.

For the MacBook Pro with M1 Pro it is explicitly mentioned in the tech specs that it supports 6k with 10bpc.

This page claims that even the M1 MacBook Air from 2020 supports 10bpc on the external 6k monitor: https://support.apple.com/en-us/HT210437

But the actual technical spec page for the machine never says it, AFAICS.

https://support.apple.com/kb/SP825?locale=en_US

> Simultaneously supports full native resolution on the built-in display at millions of colors and: One external display with up to 6K resolution at 60Hz

My impression was that Apple implemented Thunderbolt themselves (and got rid of external chips) and that at least the first generation M1 machines lacked the 10bpc feature in the GPU and/or the Thunderbolt part.


Do you have ideas for how to test it and find out definitively in person? I haven't found a way to confirm one way or another.

The Graphics/Displays page in System Information unfortunately says nothing about colors, only resolution (https://github.com/shurcooL/home/assets/1924134/af4b19a3-b85...).

When looking at a 16-bit PNG of a white to black gradient, I'm not able to visually spot any banding even when zooming in. It's fairly easy to spot steps when looking at a 8-bit PNG version of the same gradient. But the same happens on the built-in display despite it supposedly having only 8-bit color ("support for millions of colors").
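If it helps, here's a small sketch for generating that kind of banding test image yourself (assumes numpy and Pillow are installed; the filenames are arbitrary, and whether the 16-bit version actually reaches the panel at more than 8 bpc still depends on the viewer and the OS color pipeline):

    import numpy as np
    from PIL import Image

    w, h = 3840, 400
    ramp = np.linspace(0.0, 1.0, w)                             # horizontal 0..1 ramp

    grad16 = np.tile((ramp * 65535).astype(np.uint16), (h, 1))  # 16-bit grayscale
    grad8 = np.tile((ramp * 255).astype(np.uint8), (h, 1))      # 8-bit grayscale

    Image.fromarray(grad16).save("gradient16.png")              # Pillow writes a 16-bit PNG here
    Image.fromarray(grad8).save("gradient8.png")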


Yeah, that's definitely my experience, too. It is difficult to actually get color depth information. Several other Macs, especially the older ones, tell you the color depth there.

You might want to try out SwitchResX and see what it says about the screen. In some menu, there is color depth information about the screen. But I don't know if it is actually accurate and where the application gets this info from.

https://www.madrau.com


> Let's remove 20% due to 8b/10 encoding

Thunderbolt 3 uses 64b/66b encoding (unlike DisplayPort 1.2 alt mode), so there's more bandwidth left over for non-display protocols.


Thank you for pointing out this mistake. So the 40 Gbps value advertised by Intel is not the pre-encoding rate but the post-encoding rate. This means the TB3 pre-encoding rate is 41.25 Gbps; minus the 64b/66b encoding overhead we get a post-encoding rate of 40 Gbps (before TB headers).

This leads me to question my USB maths. E.g.: USB 3.2 Gen 1×1 advertised as 5,000 Mbps, but is that pre-encoding or post-encoding?

Is it `5000 Mbps pre-encoding -> 8b/10b -> 4000 Mbps` or `6250 Mbps pre-encoding -> 8b/10b -> 5000 Mbps`?

Same question for USB 3.2 Gen 2×1 which uses 128b/132b (I double checked this time :P!).


USB-IF defines their data rates in terms of link rate. So 5 Gbps after 8b10b encoding is 4 Gbps. Makes sense, since fastest transfers never exceed 500 MB/s over USB3. Same applies for the 128b/132b variant, except it's only a ~3% speed reduction, so it's much harder to see that in practical testing.


USB 3.2 Gen 1x1 is a 5 Gbps link rate. With the 8b/10b coding, that provides 500 MB/s of throughput.

USB 3.2 Gen 2x1 runs at 10 Gbps. After the 128b/132b coding, this provides about 1212 MB/s.
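The encoding overheads in this subthread, as one quick calculation (link rates are the advertised signalling rates, real-world throughput is a bit lower still once protocol framing is added, and the 41.25 figure assumes TB3's two 20.625 Gbaud lanes):

    def payload_gbps(link_gbps, data_bits, total_bits):
        return link_gbps * data_bits / total_bits

    print(payload_gbps(5, 8, 10))       # USB 3.2 Gen 1x1, 8b/10b    -> 4.0 Gbit/s  (~500 MB/s)
    print(payload_gbps(10, 128, 132))   # USB 3.2 Gen 2x1, 128b/132b -> ~9.7 Gbit/s (~1212 MB/s)
    print(payload_gbps(41.25, 64, 66))  # Thunderbolt 3,   64b/66b   -> 40 Gbit/s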


came here to comment this


One thing I'm jealous of as a Windows user is that HDR "just works" in the Apple ecosystem.

I was just watching a YouTube video describing how to edit HDR videos in DaVinci Resolve, and step #1 was basically: "Get a Mac".

There is simply no monitor equivalent to the Apple Pro Display XDR available from any other vendor, short of the ludicrously expensive Sony mastering monitors that are made to order.

There are no 4K OLED HDR monitors that can hit 1,000 nits, for example. Meanwhile my local electronics store has an entire row of TVs with that spec! There are 6K and 8K monitors... but they don't do HDR, or do it so badly that it's actually making things worse.

Even if there was proper HDR display for PCs, or if I simply forked over $5K for the Apple XDR monitor... it wouldn't work. Not in practice, not for video editing or for photography.

There's something fundamentally broken with the entire Windows and Linux ecosystems that it has been a decade of widespread HDR support everywhere else, but in the PC world it's just crickets chirping and wolves howling in the distance.


Excited to upgrade to a Mac with Thunderbolt 5, which should allow for 6K at 120 Hz with 10-bit color over a single cable. All this with an OLED panel is just about peak display for me (though miniLED would be fine too).


Add a nice integrated iPhone quality webcam, and I think it would be endgame display. If there is one company that could do this, it would be Apple.

Although I have a 32-inch LG Ergo 4K at home, and I think the 16:9 ratio is just not enough for having 2 windows side by side. You'd only get about 1500 pixels per window on a Retina 6K.

I would've liked it to be a bit wider. Or maybe I'd go for a smaller 27 inch, and put my laptop screen as a secondary screen next to it.


Can we please stop having displays be 'special'. I should be able to plug hundreds of displays into my laptop with any combination of USB hubs and have them all just work.

Displays shouldn't need allocated bandwidth - they should give the best display quality possible given the available bandwidth.

Nothing moving on the display - no bandwidth used. Just a little animated gif? Just a few kilobits used. Full screen HD video? Gigabits used. Gigabits not available? Frame rate and/or quality drop.

This is the exact behaviour of plugging in hundreds of USB ethernet adaptors. Why should displays be different?


I upvoted to counteract the downvotes. The question is genuine and needs a thoughtful reply.

OP, the tech exists. However, contrary to expectations, to have multiple displays attached over a bandwidth-constrained connection, the display tends to have all the special bits in it (contrast that with your "please stop having displays be 'special'").

To support no bits moving when the image is static, the display must incorporate a framebuffer, and once you add a framebuffer to the display, it stops being a dumb display. E.g., you can add smarts to it and expose higher-level primitives for "display acceleration" and reduce the bandwidth required further… and very quickly the display is just a computer with memory and a video accelerator (aka a graphics card) connected with a cable.

This is what RDP, X display forwarding, and VNC accomplish. The basic complexity is not reduced but moved elsewhere. However, the function gained is very useful, so they exist and it's a competitive landscape!


> the display must incorporate a framebuffer

I don't think there are any consumer displays sold today that don't include a framebuffer...


The really cheap ones that don't support VRR (variable refresh rate) only buffer a few lines of pixels and rely on the sample-and-hold LCD panel to act as a framebuffer between frames


You have a similar experience like you described above with those USB video adapters that don’t run DisplayPort. They use lossy compression and - as a byproduct - have increased latency compared to a DP/HDMI output.

Displays aren’t special. It’s just that they’re moving huge amounts of data hundreds of times per second, so they use a specialized protocol to do so.


Besides the dynamic bandwidth allocation, you pretty much described how USB 4 and all versions of Thunderbolt work.

Personally I'd hate to see my display bandwidth drop when I copy files to a flash drive so I'm glad no monitors support that part of your suggestion. Input lag = horrible UX.


> I'd hate to see my display bandwidth drop when I copy files to a flash drive

Well… if you connect the flash drive to the monitor and not the computer, that should be expected. OTOH, I'd love to have faster data rates when my monitor image doesn't change much (such as when I'm writing code or letting the computer move lots of data between external drives).


You're limited by the cost per PHY. Instead of adding a 40G port to your monitor, it would cost almost the same to add it to your PC and avoid the bandwidth contention. Most monitors only support 10G which is dirt cheap.

Also USB4/thunderbolt already lets you use the same PHY for either a monitor or a flash drive


The real reason? Because display protocols don't work that way. If they did, it would require that your displays retained state more than they do today.


FWIW: pretty much all displays have one or two screen buffers today... You need at least one to enable overdrive. And you need a second one for low frame rate compensation.


You’d need an RDP-style protocol to do that, and at the cost of the increased latency this brings.

And now think about what this will mean when you’re actually doing RDP on top of it.


Displays don't work this way. LCD panels do not have memory, they need to be periodically refreshed at least a few times a second[0], and so any sort of interframe compression capability requires having extra memory in the controller and scaler. This was never done because displays have always had allocated high bandwidth channels since the dawn of console televisions. If you needed to get interframe compressed video into them, you plugged in a decoder box of some kind (e.g. your cable box, a computer with streaming software, etc) and that device would spit out the fully decompressed image.

If you're just using normal HDMI or DisplayPort cables there's no bandwidth to share, and since that's the vast majority of display inputs[1], we haven't even had intraframe compression until very recently when people started wanting extremely high resolution and high refresh rate displays. The only reason why this seems incongruous now is that we also want to shove this video data through a USB-C cable. USB shares bandwidth across multiple devices based on demand, so why doesn't video over USB? Why don't we just standardize "MPEG over DisplayPort" today so we can shove a video wall of 4K displays over a single cable? Well, a few reasons...

- Computer users expect lossless reproduction. Most lossy codecs only do well on pictures, and absolutely murder text and graphics - which is what most computer users are actually watching. For various technical reasons involving an Nvidia driver bug[2], one of my three displays actually already uses a compressed input - specifically Miracast - and text 'twitches' every few seconds at every GOP[3] boundary. I hate it.

- Lossy compression codecs add latency. This is not merely an artifact of the encoders being slow, some codecs also have 'algorithmic latency' - as in you get no output until you give it a minimum amount of input because the codec needs data to reference so it can remove redundancy.

- Transient loss of USB bandwidth is extremely difficult to debug. Just as an example, Windows can't help me track down the rogue USB device in my setup that unplugs itself at 3PM sharp every day. It just says something about "the last USB device you plugged in", as in, "I can't explain what USB topology is so I'll throw all that bookkeeping work onto you". Now imagine that instead of a minor annoying pop-up, it's a device diverting bandwidth away from my display to itself. The OS developers aren't even going to flash a pop-up for that, they're just going to have the screen glitch out and hope for the best, in that I'll blame the display manufacturer rather than the computer. Neither party wants to have to deal with a tech enthusiast plugging in more displays than their cabling or USB topology can handle and not understanding what limits they hit.

[0] Judging based off the minimum refresh rates in variable refresh rate mode

[1] Desktop systems outright don't support USB-C video altmodes at all, outside of special motherboards or add-in cards for Thunderbolt that give you DisplayPort injection almost by accident. The only monitors that support USB-C video are Apple displays; every other monitor company ships DisplayPort or HDMI. This is why Apple users have to carry around adapters and dongles all the time, to the point where Apple actually had to walk back their "single cable future" and put HDMI back in their laptops.

[2] I have three monitors, but only two optical DisplayPort cables to go to my other room. I used to use a DisplayPort MST hub to get my two smaller (1080p) monitors on the same cable, but for some reason my 1080Ti can't read the EDID data off one of the monitors when it's behind an MST hub. I worked around this with custom resolutions - i.e. manually inputting all the display timings - and it worked until a driver update last year. Now, if I ever plug in that second monitor to the hub, the mouse cursor (which I suspect is a hardware overlay) gets stuck on my primary monitor and shows corrupted image data, making my computer unusable.

[3] Group of Pictures - the smallest seekable unit in an interframe compressed video stream. Keyframes - i.e. frames that do not reference prior frames - are the start of a new GOP and take up significantly more bandwidth as a result.


> The only monitors that support USB-C video are Apple displays; everyone other monitor company ships DisplayPort or HDMI.

Eizo has mass-marketed USB-C models.


Samsung, dell…


ASUS…


Everyone has USB-C monitors.


Is Thunderbolt 3 like the rest of the USB-C protocols in that dedicated pins are used for USB2? If so, the bandwidth doesn't need to add up -- the cable carries the Thunderbolt protocol and USB2 separately.


https://www.etechnophiles.com/thunderbolt-pinout-1-2-3-4/

Based on the D+/- pairs I believe you’re correct. On Titan Ridge and later add in cards USB 2 is expected to be passed through via a motherboard header as the controllers lack a dedicated USB 2 chip.


With Thunderbolt the USB controller is built in to the display, with full bandwidth available to each USB port.

But if the dedicated USB2 pins were being used, it would just be acting as a USB hub, and the total bandwidth on all ports would be limited to 480 Mbps?


No. TB3 was still stuck with only using PCIe-USB3 controllers over tunneled PCIe. The other replies are right on the host side, where newer TB3 controllers pass USB2 through from the chipset. But that is only used for non-TB connection modes. Same with TB3 Titan Ridge docks. They are wired to the USB2 pins, because in legacy mode (DP Alt mode + USB3) the USB2 will be physically on those pins together with the 2 high speed wire pairs used for USB3.

Only USB4 started using those USB2 wires on USB4 connections.


So I test this theory daily: I run 2-3 full 4K desktops, at 30/60 Hz when I can get it, but it's a desktop under Linux. KDE is ass, so is GNOME; not much deals with the refresh rate well.

I would like, one day, a desktop that works under Linux. I'm currently trying GNOME after KDE; still a mess.


The display still supports USB 3 even without DSC, because all USB data gets sent over PCIe in Thunderbolt 3. The USB controller in the Titan Ridge doesn't downgrade the ports to USB 2.0, but because the leftover 1.8 Gbps is much lower than the USB 3 bandwidth, Apple says it doesn't support USB 3 on models without DSC.


That Al Gore setup is really awesome, especially for 2007.


> That Al Gore setup is really awesome, especially for 2007.

Yeah, the OP links to an image of Al Gore in front of 3 displays with the caption

Climate central: Gore in his Nashville home office, where he wrote his new book. Mind map software and huge Post-it notes help him order his thoughts [0]

I did some medium-searching but couldn't come up with a name for the mind mapping software Gore used.

Also interesting is how the 3 displays present in an aesthetically pleasing way because they are identical (resolution, size, bezels, etc).

[0] https://fabiensanglard.net/xdr/al.webp


> I did some medium-searching but couldn't come up with a name for the mind mapping software Gore used.

Purely a guess, but probably OmniGraffle back then.


Those are the 30” Apple Cinema Displays, if I’m not mistaken. I dreamed of having one of them. 3 is just preposterous


When I was in Uni I got a remote side gig developing some software. I spent my whole first paycheck on a 30" Cinema Display, replacing a rig of 3 hand-me-down CRTs. Never regretted it for a second, that thing was pure luxury.


It's not totally the Thunderbolt controller; the Vega and GCN (RX 580, etc.) Radeon cards in older Macs don't support DSC. The 16-inch MBP had RDNA cards (which do).

For example, the Vega MPX cards on the Mac Pro don't do it, but the 5700 etc. cards will give you the added port bandwidth.


Would people be happy if Apple refused to support incremental improvements? Sorry, the sticker says Thunderbolt 3, you can't use fast USB devices here. You have to wait for the next product cycle to get a new sticker.


I have one; it's amazing obviously, but the backlighting is not 100% perfect. Looking forward to the next one of course. I even got it to work with my PC via a Belkin VR cable (https://www.belkin.com/support-article/?articleNum=316883). It even has USB support.


From the article:

> "But why is the 16-inch MacBook Pro able to run USB 3.1 (10Gbps)?"

USB 3.1 Gen 1 is 5 Gbps, not 10 Gbps.

(My Thunderbolt LG Ultrafine 4K also has USB 3.1 Gen 1 ports on it, so I should know...)


I think they mean USB 3.2 Gen 2x1, which does support 10Gbps throughput.

Apple's tech support page about their Thunderbolt 4 cable (https://support.apple.com/en-om/HT210997) also states "USB 3.1 Gen 2 data-transfer speeds up to 10Gbps".

I'm a little confused why Apple's machines don't support USB 3.2 Gen 2x2 when they support USB 4 and Thunderbolt. I guess they couldn't figure out how to get faster USB out of their chipset? They list the same limitation on their iMac USB 3 ports (https://support.apple.com/guide/imac/take-a-tour-imac-apd2e7...).


> "I think they mean USB 3.2 Gen 2x1, which does support 10Gbps throughput."

They explicitly say USB 3.1 Gen 1 in the article, and the linked technical document [1]

> "I'm a little confused by Apple's machines wouldn't support USB 3.2 Gen 2x2 when they support USB 4 and Thunderbolt."

Yeah, it's weird, but Apple has never supported Gen 2x2, AFAIK. 2x2 means you have two 10 Gbps channels running in parallel on two sets of pins, but USB 4 does a similar thing to get 40 Gbps and they do support that. Shrug.

[1] https://fabiensanglard.net/xdr/Pro_Display_White_Paper_Feb_2...


Apple's got nothing left to offer here; Thunderbolt died with their relationship with Intel. It's a corpse still running.


Does this support HDCP? At what maximum distance will it support an encrypted connection?


Why not use a HDCP stripper (often marketed as a port splitter) and an optical cable? That'll solve both of your problems


I'm really waiting on a new version of XDR to get one.


Normally I find Fabien's articles to be a lot clearer, but I came away from this one not understanding what it was really talking about.

Does the monitor have XDR? Does it auto-negotiate extra bandwidth on newer MacBooks?


XDR means nothing. It's an Apple buzzword.

With DSC, you have at the minimum a fixed amount of bandwidth gain because blanking intervals can always be compressed, and DP is packetized without strict timing limitations. That should be enough to guarantee the additional bandwidth.


You really wanted 4K over that? Silly human.



