The author is correct that media DRM is tied to GPU vendors in the field right now.
But hardware-backed DRM can be so much more invasive than just that. I have no doubt the long-term goal of MS is to have a Windows version of Play Integrity[0], i.e. total control over everything that happens on your device. Just to give an example of what could happen if this becomes reality: https://en.m.wikipedia.org/wiki/Web_Environment_Integrity
This tech, extended to browsers, could easily mean that sites could refuse to serve you if your machine is running any software the big corporations haven't approved. An easy example of that would be adblockers.
Unless we get lucky with secure world compromises like the Tegra X1 bootrom exploit[1], or get really good at passing legislation that forces companies to give you all the private keys to your own machine, the future of personal computing is looking grim.
> The author is correct in that media DRM is tied to GPU vendors on the field right now ... hardware backed DRM can be so much more invasive
I expect mjg59 to know what they're talking about but like you say, I wonder the same thing about the strength of (what you call) Media DRM v Hardware-backed DRM.
GPU vendors have quietly deployed [hardware-based DRM] ... [which] works just fine on [boards] that [don't] have a TPM and will continue to do so.
Work fine? Even if a section of GPU's vRAM is out of the reach of the OS (here, to implement DRM), wouldn't TPM / DICE be needed to establish trust / measure GPU's firmware?
No, the GPUs have their own hardware root of trust (RoT) that measures the firmware. Modern GPUs are basically parallel computers with their own RAM, boot-up sequence, BIOS, operating systems (drivers and firmware together are basically an OS), compiler toolchains, debuggers, sub-drivers and so on.
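Conceptually, the measurement chain is just a rolling hash: each boot stage hashes the next image before handing control to it, so tampering anywhere shows up in the final value. A minimal sketch of that idea (Python, purely illustrative; not any vendor's actual scheme):

    import hashlib

    def extend(measurement: bytes, component: bytes) -> bytes:
        """Fold the hash of the next boot component into the running measurement."""
        return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

    # The boot ROM starts from a fixed value and measures each stage before running it.
    measurement = b"\x00" * 32
    for stage in (b"bootloader image", b"gpu firmware image", b"driver blob"):
        measurement = extend(measurement, stage)

    # The device can later sign this value; flipping one bit anywhere changes it.
    print(measurement.hex())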
One which needs to be opened to users/owners instead of locked away. A price-doubling 100% sales tax on Universal Machines which lock owners out, as video cards (and their firmware) do, should make products which are not fundamentally GNU-ideals friendly unaffordable to the average consumer (and therefore no longer economically viable). Siemens can still sell their $5MM machine for $10MM to BASF or whatever, because BASF can afford to borrow double to pay the tax, but Cletus and Dorothy will not be buying Sony PlayStations and Apple iPhones because $2,000+ isn't worth it.
The GPU is a completely separate computer running proprietary software. "Operating systems" do not operate anything anymore. They are just some user app, to be sandboxed very far away from the real action.
> "Operating systems" do not operate anything anymore.
Not entirely fair. There is still a kernel and a privileged userspace layer. That hasn't changed. The OS implements a common API that abstracts over ISAs and other finicky hardware details that are under constant short-term churn.
It's just that peripherals themselves have become so incredibly complex that many of them now require their own embedded systems in order to operate. The hardware was always a black box; it's just that now it contains an entire embedded OS.
BS. Either we're privileged and can copy their precious content, or they're privileged and we cannot.
The current status quo is they sit above us in the truly privileged hardware modes while we are isolated, virtualized and sandboxed for their safety. It's not our computers anymore, they're just allowing us to use them.
Not what I meant. For example, on most mainstream linux distributions systemd fulfills the role of privileged userspace layer that I was referring to there.
> truly privileged hardware modes
The presence of a hypervisor doesn't imply paravirtualized hardware. Neither does the presence of an entire OS on modern GPUs imply a reduction in kernel responsibilities. Ring 0 is still ring 0. The OS is still managing and abstracting hardware in the same way that it always was.
That doesn't mean that these other things aren't concerning developments. Particularly having an entire unauditable shadow OS running on the CPU is an incredibly dystopian scenario that almost seems unbelievable. But technical accuracy is important when discussing these things.
> The OS is still managing and abstracting hardware in the same way that it always was.
Not at all. The OS is not "managing" anything. It has no direct access to the real hardware. Only the firmware does. The OS is just talking to the API the firmware presents.
They're not our devices anymore. They're intel's, nvidia's. They dictate how we use them. The hardware's just sitting there, waiting for the right electrical signals to come in. But the OS is not the one sending those signals. Their firmware's in charge of that. It's the middle man between the OS and the device we paid money for. If the firmware doesn't like the tune we're singing, it shuts us down.
There are completely separate computers inside these things. They don't run our code, they only run signed code. Whoever has the keys to the machine's code owns the machine itself. And it sure as hell ain't us.
"Managing" and "talking to an API" are not mutually exclusive though.
Yes, firmware has continuously become more complex. Yes, if you go back far enough (quite a long ways) there wasn't any.
Peripherals have always been a black box that increased in complexity over time. That increase in complexity does not imply a decrease in management complexity on the part of the kernel. Far from it! Modern device drivers are far from simple.
> They're not our devices anymore. They're intel's, nvidia's.
This is arguably true, but it is also a rather separate topic of discussion.
> They dictate how we use them.
That's largely only in theory. Now if you had said that Apple or Samsung were dictating how we use our phones I would have been inclined to agree. But I don't think gating certain features in the CPU or GPU for the purpose of market segmentation qualifies as dictating how I use my device. I don't like the practice, but I can't deny that I am able to use the APIs provided by the device in an arbitrary manner without it phoning home to the manufacturer or otherwise authorizing the specifics of their use.
> But the OS is not the one sending those signals.
Depending on how you define "sending those signals" and where you consider the boundary between sender and receiver to be you could reasonably argue that the OS never did that to begin with, or alternatively that it has always done so and still does. It's really quite arbitrary and depends entirely on where you consider the boundary of the device to lie.
I purchase a peripheral. It is a black box that implements some device or manufacturer specific API. The kernel has a device driver that abstracts over this and provides a generic userspace API that will (hopefully) remain relatively stable for multiple decades. That's the extent of the contract and that hasn't changed at all.
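To make the shape of that contract concrete, here is a toy sketch (Python, every name invented for illustration): the device-specific quirks live behind the driver, and userspace only ever sees the generic interface.

    from abc import ABC, abstractmethod

    class InputDevice(ABC):
        """Generic interface userspace programs against; stays stable for decades."""
        @abstractmethod
        def read_event(self) -> dict: ...

    class VendorXMouse(InputDevice):
        """Driver hiding one vendor's quirky wire protocol behind the generic API."""
        def read_event(self) -> dict:
            raw = self._read_raw_packet()
            return {"dx": raw[0], "dy": raw[1], "buttons": raw[2]}

        def _read_raw_packet(self) -> tuple:
            return (1, -2, 0b001)  # stand-in for the device-specific transfer

    def user_app(dev: InputDevice) -> None:
        # Userspace neither knows nor cares whose hardware is underneath.
        print(dev.read_event())

    user_app(VendorXMouse())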
The device driver situation is already nearly unmanageable. Imagine how much worse it would be if the kernel needed to manage every last minute hardware detail down to the model and even sub-model variants. For example, for every USB mouse and keyboard, past and present. And that's before we even consider things like the firmware for the USB controller on the mouse, which in all likelihood is its own modularized unit from an entirely different manufacturer. But we're going to need to account for every last detail of that ourselves if we fully commit to the "all opaque firmware bad" route. After all, for the kernel to "truly" be in control of the hardware I suppose it will need to manually manage every last pin that falls under software control.
Technical accuracy and nuance is really quite important here. There are many different nefarious things happening at once. Conflating them only serves to confuse the discussion and leads people to (wrongly) believe that there's no need to worry about those weirdos ranting and raving in the corner.
> That increase in complexity does not imply a decrease in management complexity on the part of the kernel.
Complexity is not the point. Control is. The operating system should be in complete control of the system, and it isn't.
Complexity is part of the reason for that. The actual hardware is exceedingly complex, so manufacturers simplify it with firmware that presents a more convenient API.
That's convenient but it means we are no longer in control of the hardware. We merely interface with the convenient abstraction presented to us. It's that abstraction which actually drives the hardware, not our "drivers".
And that obviously becomes a mechanism by which to control us. Access to perfectly good hardware could be denied by the firmware for unacceptable reasons such as market segmentation or copyright enforcement.
> But I don't think gating certain features in the CPU or GPU for the purpose of market segmentation qualifies as dictating how I use my device.
BS. I want to copy stuff. It's not letting me. It's that simple. Some nonsense about "protected video paths".
The hardware is working and able but a fundamental computer operation cannot be performed because the firmware doesn't want to. Computer says no.
> The device driver situation is already nearly unmanageable. Imagine how much worse it would be if the kernel needed to manage every last minute hardware detail down to the model and even sub-model variants.
If that's the cost of maintaining control, we should pay it gladly. Better than growing comfortable with the manufacturer's convenient abstraction which also conveniently allows them to control what we do with "our" machines.
> There are many different nefarious things happening at once.
There is exactly one thing happening here: corporations usurping control of our devices to protect their interests and profits. The means by which they do so are far less important, they are merely details.
These details are irrelevant in the grand scheme of things. It's all about control, about giving you less of it, the minimum amount of it. The exact mechanism by which they do it is irrelevant.
It's always some abstraction, some indirection, a little bit of clever cryptography. Maybe there's an even more privileged hidden OS running on the CPU which can access everything while we can't. Maybe there's some signed firmware running in a completely separate computer in the hardware and that computer acts as a middleman and gatekeeper. It doesn't matter. Our goal should be to take over the functions those components are doing, whatever it is that they do. They should be running our code, doing our bidding.
> Conflating them only serves to
confuse the discussion and leads people to (wrongly) believe that there's no need to worry about those weirdos ranting and raving in the corner.
What else is new? Stallman has been warning everyone about exactly this for nearly half a century already and people still treat him like some lunatic religious zealot despite the cyberpunk reality we live in today. Even I made that mistake at some point in my life.
If they won't listen, they'll suffer the consequences. They'll end up living under the control of corporations. Might as well remove the word "hacker" from this website's name because everything it ever stood for is over.
In my opinion, Stallman's mistake is he's way too nice about it. Always speaking softly and being reasonable about everything. Always getting bogged down over precise wording and irrelevant details. GNU has an entire glossary page dedicated to precise wording.
Meanwhile, the entire industry has worked around his ideas by isolating his free software and maintaining control with firmware. To have a truly "freedom respecting" computer with no firmware blobs, you gotta get one from literally decades ago. Because these days everything has firmware which you do not control. If you're lucky. If you aren't, you get something that's literally locked down to the point you have no choice whatsoever. What good is free software if you can't run it? It's worthless. It's worse than worthless: one day you wake up and you realize you were working for free for the corporations who are now profiting off of you while denying you the control you wanted.
It's all very simple. Free computers are subversive weapons. They have the power to literally wipe out entire sections of the economy. They have the power to defeat judges, armies, nations. They are quite literally the most important invention of mankind.
Naturally, corporations and governments will do everything in their power to control what you can do with a computer. First, they reduced computers to toys which could run all programs, except the ones they didn't like. This sort of "computer" is what we are discussing right now. Computers where you can do everything except copy their precious content. They are currently in the process of reducing computers to toys which refuse to run all programs, except the ones they like. That's the mobile landscape. Does it matter that hardware remote attestation is the mechanism by which they're doing it? Not much.
I can barely find the words to describe how disgusted this status quo makes me feel. I know what they're doing and I know they're succeeding. It makes me sick. Like I'm witnessing something great be destroyed due to greed and fear. I feel sick.
If that makes me the weird fellow raving in the corner, so be it. I'll keep raving in every thread about the subject until the day I get banned by dang. There's no point to this site if they win anyway. What good is Hacker News if you can't hack?
I always said there should be a hefty sales tax (50%? 100%? 200%?) on the final sale of any product containing even a single Universal Machine which has artificial designs/locks that prevent the owner from replacing any and all firmware/software with versions he has authored, and/or which lacks documentation of its design and interfaces complete enough to enable a knowledgeable and capable owner to author his own software/firmware. This should apply to PCs, phones, watches, microwaves, televisions, CPAP machines, automobiles, toasters... everything which contains a Universal Machine. Uncontrolled [by owner] Universal Machines are a national security concern which has the potential to turn grave at any moment.
A "tax" like this is essentially equivalent to a fine, and a fine is a price
Also, companies can just price the additional cost in, blame the government for the price increase, and mislead consumers about the tradeoff being made. That's harder to do with a ban.
Yep, and you can come in and make a fully open and compliant competitor product, because your closed and non-compliant incumbent is forced to charge a price which should give you enough margin to succeed.
I am admitting that yes closed beats open at money extraction/harvesting from customers, which is why you only ever see closed hardware. The whole idea is to kneecap business models which depend on handcuffing owners with digital locks. This is economic lawfare, I am not hiding that. We The People are not animals on a farm to harvest dollars from occasionally, as if they were milk and methane.
Yeah, but you know what's even better than a tax? Not being able to release those products onto the market at all. I really doubt products whose "value-add" is lock-in would be able to survive black-market dynamics. But Apple, Google, Microsoft, John Deere, and the rest can totally afford extra taxes, and they probably even have the market power to just foist those increases onto their customer base without losing it, because that base is effectively captive under the current regulatory regime; that's also the kind of market power that means you don't have to compete on price. I can consistently get cheaper laptops by seeking out ones that don't include Windows licenses, but this clearly hasn't affected the laptop market much.
Also, let's say I'm gonna undercut someone on price for electronic devices. Unless I'm starting from a place of great personal wealth and don't take any capital at all, this needs investment, which means that an obvious solution to any scenario in which I'm meaningfully harming the bottom line of one of these incumbents could just be buy it out from under me, which is indeed how this generally plays out in the real world
If we are serious about regulating monopolies, we have to understand that remedies that rely on raising operating costs are simply always going to be ineffective
So the idea is to ban the practice for smaller players without the scale to eat the costs?
No thanks, an outright ban is necessary. This will not prevent manufacturers from doing business no matter how they may whine about it, and frankly if this does somehow kill their business it should
The idea is to make it almost completely commercially unviable to sell locked down DRM hardware to small players, and only somewhat harder for XYZ Healthcare to buy the multimillion dollar GE MRI machine unless the MRI is fully open and compliant (XYZ can borrow and amortize, but Joe can't do that to buy playstations and cars).
Wait so you're saying that it's important to allow predatory business models to continue in the industries that do the most harm through constant consolidation to support ballooning costs?
The healthcare system's consolidation and scale, which allow it to deal with massive extra costs, and the degree to which that system is beholden to predatory technological models, are if anything a great motivating example of the potential benefits of a ban.
I have trouble understanding your use of the term DRM. Media DRM makes sense: the copyright holders want to "manage" their rights digitally. How is that relevant to Play Integrity or WEI? Whose right is being protected or managed? If I have an Android without Play Integrity there are certain apps that will not run, but I don't see any rights being managed here: an app developer has the right to refuse service just like I have the right to refuse running an app.
In fact I see no relationship between DRM and Play Integrity other than a tenuous connection that both are about controlling what a user cannot do on their device. If this is what you mean, then you have made the same mistake as FSF by conflating unrelated technologies.
Ultimately, DRM is untenable without users also being locked out of their own devices.
Consequently pressure to support more effective DRM will always translate into pressure to restrict what users can do with their devices.
Furthermore, the only defense against this is large open device market share: once closed devices comprise most of the market, DRM proponents can announce they'll stop supporting open devices, creating a downward spiral that further decreases the availability of open devices.
This is an FSF level understanding. Android devices are fully open and you can reflash them to whatever OS you want. Some remote servers won't give you service if you do that, but nothing is locking you out of your device. As Android dominates the global market, you already live in that world where most devices are open.
>Some remote servers won't give you service if you do that
This is exactly my problem. Before ideas like this surfaced, the demarcation line between who controls what was purely based on ownership. The machine that I own acts only on my behalf and in my best interests, and the server that you own does the same for you (or at least for PCs this has always been the case).
TPMs, attested bootchains and whatnot trample on this whole concept. It's like your very own hardware now comes with a built in Stasi agent that reports on your conduct whether you like it or not. It bothers me on a visceral level and I'm constantly wondering if it's just me.
It's not just you but what people who hate remote attestation tend to forget is that it's a sword that cuts in both directions. Servers can remotely attest to you, not just the other way around. Signal is an example of an app that demands a remote attestation from the server before uploading your sensitive data.
Attestation is just a tool. It can be used for all kinds of things and doesn't privilege one side or another. The average app developer doesn't truly care what device you use, they just want to cut out abuse and fraud, which are real problems that do require effective solutions.
Ultimately, trade requires some certainty that both sides will act as they promise to act. Attestation matters more in the direction of individuals attesting to companies, because individuals already have ways beyond technology to hold companies to account if they break their agreements, such as the legal system, whereas the legal system is largely ineffective at enforcing rules against individuals due to cost.
> Attestation is just a tool. It can be used for all kinds of things and doesn't privilege one side or another.
It privileges the side that designs and uses it. By and large that's going to be the corporations, not individuals or anyone acting to maximize individuals' interests.
> The average app developer doesn't truly care what device you use, they just want to cut out abuse and fraud, which are real problems that do require effective solutions.
I don't doubt that. But the price of attestation, if it's not properly isolated from the hosting OS (like Microsoft's completely unrealistic attempts at bringing the whole OS into the trusted computing base, kernel, applications, and all), would be a homogeneity of computing that I don't think is necessarily worth the benefits.
The good news is that such proper isolation is not only possible but even desirable (it keeps the trusted computing base small), and if done well could actually replace annoying half-measures such as "root detection": Who cares if my phone is rooted, as long as my bank's secure transaction confirmation application is running in a trusted, isolated enclave, for example?
Fair points. I was aware of this anti fraud angle of WEI/attestations before.
From this point on this is more of an emotional argument than a technical one, but I feel like the negative effects far outweigh the positive ones. Giving MORE power (be it technical or political) to big tech companies is tipping the scales in their favor so much that we will be even worse off than we already are.
But if you work in anti-fraud and are fixated on solving this problem as effectively as possible, I can imagine not caring much about this if I were you...
Fully agreed on attested bootchains. General-purpose level OS-wide attestation is indeed a blight on open computing: It's ineffective because it implies a gigantic trusted code base (what are the odds that the entire Windows kernel is completely free of vulnerabilities?), and conversely it does tie you to somebody else's more or less arbitrary kernel build.
Almost complete disagree on TPMs. A better comparison than a spy would probably be a consulate (ok, maybe an idealized one, located underground in a Faraday cage): Their staff doesn't get to spy on you, but if you ever do want to do business with companies in that country and need some letters notarized/certified, walking into their consulate in your capital sure beats sending trustworthy couriers around the world every single time.
To torture that analogy some more: Sure, the guest country could try to extend the consulate into a spy base if you're not careful, and some suspicion is very well warranted, but that possibility is not intrinsic to its function, only to its implementation.
By that same logic evil is not inherent to attested bootchains either. When used to verify that the computer loaded the OS that the end user expected it is a very powerful security tool. It is only bad when the keys aren't under the control of the device owner.
You're mixing up the authentication and attestation parts of secure boot here.
You can absolutely install Linux, run secure boot (e.g. to protect you against "evil maid attack"), use your TPM to store your SSH keys, and live a happy and attestation-free life.
You can also do other things, but if you don't want to, why would you?
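To make the attestation-free use concrete: sealing a secret against the measured boot state means an evil-maid-modified bootloader changes the measurements and the unseal simply fails, with no third party involved. A toy model of the idea (Python; this is not the real TPM API, just the concept):

    import hashlib, os

    def pcr_extend(pcr: bytes, event: bytes) -> bytes:
        # PCRs can only ever be extended, never set: new = H(old || H(event))
        return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()

    def measure_boot(components) -> bytes:
        pcr = b"\x00" * 32
        for c in components:
            pcr = pcr_extend(pcr, c)
        return pcr

    disk_key = os.urandom(32)
    good_state = measure_boot([b"shim", b"grub", b"kernel"])
    sealed = (good_state, disk_key)   # toy stand-in for sealing with a PCR policy

    def unseal(sealed_blob, current_pcr):
        expected, secret = sealed_blob
        if current_pcr != expected:
            raise PermissionError("boot chain was modified; refusing to release key")
        return secret

    unseal(sealed, measure_boot([b"shim", b"grub", b"kernel"]))           # ok
    # unseal(sealed, measure_boot([b"shim", b"evil grub", b"kernel"]))    # raises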
Attested boot chains aren't normally being used to attest a whole general purpose OS. They attest up to a small hypervisor that allows partitioned worlds to be created and chain attested, and then sensitive computations are done inside that.
> It bothers me on a visceral level and I'm constantly wondering if it's just me.
It's not just you.
It disgusts me so deeply I wish computers had never been invented. A wonderful technology with infinite potential, capable of reshaping the world. Reduced to this sorry state just to protect vested interests. They used to empower us. Now they are the tools of our oppression.
While I don't agree with the FSF on even close to everything regarding trusted computing, I think for a fair discussion you'd have to at least steelman their arguments here:
I think it's fair to assume that in a world in which almost every device supports attestation and makes it available to any service provider by default, without giving users an informed choice to say no or even informing them at all, service providers are much more likely to provide access exclusively to attestation-capable clients.
That, in turn, has obvious negative consequences for users with devices not supporting attestation (whether out of ideological choice, because it's a low cost device and the manufacturer can't afford the required audits and security guarantees etc.): Sure, these users will always be able to just refuse to transact with any service provider requiring attestation.
But think that through: We're not only talking about Netflix here. At what availability rates of attestation will decision makers at financial institutions decide that x% is good enough and exclude everybody else from online banking? What about e-signing contracts for doing business online? What about e-government services?
I am excited about the new possibilities attestation offers to users (in that they will be able to do things digitally that just weren't economically feasible for service providers before, since the providers often have to cover the risks of doing so), and at the same time very wary of the negative externalities of a world in which attestation is just a bit too easy and ubiquitous.
In other words, the ideal amount of general purpose attestation availability is probably high, but significantly below 100% (or, put differently, the ideal amount of friction is non-zero). Heterogeneity of attestation providers can probably help a bit, but I'm wary of the inherent centralizing forces due to the technical and economical pragmatics of trusted computing.
The ideal amount of attestation on a general purpose computer which is owned by me is zero. Any nonzero amount implies that control of the device has not actually been turned over to me. It implies not only the slippery slope to which you refer but also things about back doors and opportunity for dystopian political regimes and much more.
When it comes to financial or legal matters (and this includes online banking) a small dedicated hardware element for signing fingerprints is all that's ever been required. Anything more is an overreach.
> back doors and opportunity for dystopian political regimes
No, this is a misunderstanding of what a TPM is.
A TPM is a secure element inside your computer, similar to the chip in your credit or debit card. That's it. Without you using it (i.e. your OS or an application you installed asking it to do something), it's exactly as dangerous as a blank chip card in your house that you don't use and didn't open any account for.
If you don't want anybody to talk to it, don't install applications or OSes on your computer that do things you don't want. You have full control over that! Not running software that's not acting in your own best interests is generally good practice anyway, TPM or no TPM.
> [...] a small dedicated hardware element for signing fingerprints is all that's ever been required [...]
You might be happy to hear that that's exactly what a TPM is, then!
I am fully aware of what a TPM is. I was speaking about trusted computing - ie the "general purpose attestation capability" that you referred to above.
As you say, a TPM alone can't do much of anything and doesn't pose much of a threat. Of course expanding the acronym - Trusted Platform Module - is a bit of a giveaway. They were always fully intended to serve as the root of trust for much more nefarious things.
> the only thing I’ve ever seen TPMs used for is full disk encryption and user authentication.
Aren't all device attestation schemes underpinned by authenticated boot which itself is underpinned by a TPM? This is certainly the case for Android - AVB is implemented on top of secure boot on all the devices I've ever owned (and Play Integrity, if I had ever permitted it to run, on top of that). Do I have some misunderstanding about the stack?
> Conversely, DRM is alive and well on almost universally TPM-less devices.
You mean software DRM I assume? Because the only TPM free hardware backed DRM that comes to mind is GPU based encrypted streams where the GPU does the decoding and final compositing locally. And even then the TPM-equivalent exists, it just isn't accessible to the end user.
SGX can be used to do various interesting things without attesting the state of the broader system, but none of the examples that immediately come to mind feel much like DRM to me.
> comments in this thread end up dead
Thanks for letting me know. I guess I should email them?
You think there's no value in your laptop being able to attest its state to your phone in order to give you confidence it hasn't been tampered with? That's something that would be entirely under your control.
So don't normalise manufacturer locking. We're not going to prevent the bad thing from happening by arguing against the hardware that enables the bad thing - we're going to need to argue against the bad thing.
When this remote attestation business started, people tried to minimize its impact by saying only apps that really needed it would use it. Such an absurd argument. Everyone is going to use this technology. It will literally become the default.
Everyone loves cryptography and wants it working in their favor. Everyone. It's great for us when it protects our messages and browsing from surveillance capitalism and warrantless government espionage. It's extremely bad for us when it becomes the policy enforcement tool of corporations and governments.
Remote attestation means either we run the software which does their bidding and protects their interests and bottom line, or we don't participate in society or the economy. The only way it could get worse is if the government starts signing software as well. One day even the goddamn ISPs will refuse to link to our hardware if it fails attestation.
It's literally the end of free computing as we know it. Everything the word "hacker" ever stood for, it's over.
Meh. I didn't reflash my phone. I didn't root it. I didn't do anything to modify its system files whatsoever.
I just installed KDE Connect, and an open source keyboard. Banking apps refuse to run because of those (because my keyboard might see my keystrokes!!!). They don't even need a failed hardware attestation to refuse you service.
So even if you don't try to modify your device, your device might still end up like half a paperweight. I either can't do banking, or I can't use the functionality I want.
The ability for someone with a news article or a game to only have you experience it if you pay their fee or watch their ads, preventing you from copying the content off your device or modifying it in some way that is unauthorized (removing ads or otherwise modifying the behavior to circumvent protection mechanisms) is pretty obviously the exact same idea -- not some mere metaphor -- and is a protection of the exact same "right" conferred by the exact same laws as allowing someone with a movie to only have you see it if you pay their fee or watch their ads... I am honestly having a difficult time understanding your confusion here :/.
You are still talking about DRM in the context of copyright. If someone has a news article or a game, they have copyright on that article or game and they use DRM to protect their copyright. All these are applications of DRM.
Applications like Play Integrity could be quite different: say a bank can refuse to move money if your instructions to move money comes from a device deemed not trustworthy by Play Integrity. That's like a bank can refuse to let you into their branch if you are dressed in swimwear. A game can also deploy this tech for anti-cheating purposes; really no different from a real-world casino refusing a customer who is known to be good at card counting.
And this is the root cause you fail to understand: the idea of copyright contradicts the idea of information freedom. You should be able to make a copy for your own purposes, so that when you go back, the information is still the same and has not been manipulated, and you should be able to actually share this information if it's important. For example, a news story about corruption that has been taken down.
Also, why the hell do you believe that the same copyright rules that apply to a movie, which can take millions to make and stays relevant for years, should apply to a news article, for example? It's madness.
Information freedom is merely an ideal, not a right. It is an ideal held by techno-optimists. But there is no legal basis for information to be free. Indeed I agree with you that the idea of copyright contradicts the idea of information freedom. And guess what, copyright is in our constitution, and information freedom is not.
Furthermore, there is also no legal basis in differentiating copyright by the budget involved to produce the work.
> an app developer has the right to refuse service just like I have the right to refuse running an app.
In this case it feels like an app developer having the right to punch[0] you in the face just like you have the right to refuse being punched in the face :-P.
Not GP, and don’t have their patience anyway. But while I see them as real computers, they aren’t any that I enjoy using, so I care relatively little for them.
In most well designed systems the only keys that are useful are held in HSMs that won't export them to anyone, so you can't easily do that. You could at best sign a few things with the keys if you were able to compromise HSM credentials, but, once you were caught your access would be revoked along with anything you signed.
Consider a benevolent cryptographer who is able to break modern asymmetric cryptography, but refuses to use it for petty personal gain, and is fully aware of the dangers of publishing it (which is why this cryptographer put it in dead man's switches instead, with recipients randomized over nearly all power blocs, political groups, companies, ...).
The cryptographer never implemented it on daily compute devices.
Perhaps this cryptographer would be willing to risk a low communication round release of private keys corresponding to public keys in ROM or burnt in eFuses etc... but only if the public key dump is sufficiently large and encompassing.
From the perspective of the cryptographer we are all whining wankers, and we should just collect all the public keys as a wishlist.
The cryptographer cares naught about "liberating" hour-long advertisements for the militaries or intelligence agencies etc. The cryptographer does wish sovereign compute for fellow humans, a primordial requisite for effective democracy.
====
While I understand the average programmer would ascribe an incredibly low probability to the above, the absolute absence of such a comprehensive public key dump is not in proportion to the probability considered.
> the future for personal computing is looking grim
I don't know. They could lock up the hardware stack as much as they want, in the end it's pixels being pushed to arrays. It's extremely hard to prevent these pixels from being intercepted. You'll have pirate groups just going deep in the hardware (opening the monitors and soldering and hacking and whatnots) and eventually tap these.
As for personal usage: I've got hardware from the eighties still working fine.
Instead of:
movie2025-WEBRip1080p-x265.mp4
people shall download:
movie2025-WEBRip1080p-DRMfree-x265.mp4
And people shall just play that on their DRM-free hardware, either brand new or old.
For example people can still buy brand new CRT (!) screens today. Not just CRT screens but also brand new CRT PCBs to drive either new or old CRTs. It's 2025 and people can still buy brand new CRTs. That's kinda rad.
And if worse comes to worst, if it's really impossible to go "tap" into the pixels being sent to a DRMed monitor (which I don't buy for a second), there's still the analog hole. Pirates are just going to use old (non-DRMed) gear to rip DRMed content analog-style, and then they'll just process the result with some AI models to get it back to near perfection.
Heck, the day's probably not far off when I can use, say, two handcams from the 90s to film a movie at the movie theater and then use an AI model to give back a near pristine movie file (as in: one where it's impossible for the layman to discern it from the original).
> This tech extended to browsers could easily mean that sites could refuse to serve you
That's already the case: some content is geo-blocked. People use a VPN or just fire up Frostwire or qbittorrent.
Even a Raspberry Pi 5 goes a long way: when are these going to play the DRM game and make the future look grim, instead of bright?
I don't doubt there are really deeply sick, evil people out there thinking about how they can ruin our collective future, but I also know that they'll encounter people who have systematically owned their sorry arses.
We're not concerned about DRM because it will (or won't) stop us from redistributing and playing content. The stated goal of DRM (blocking copyright infringement), and DRM's general failure to meet that goal, is the least interesting part of the story.
We're concerned about DRM because of what it does accomplish. DRM creates a vertically-integrated market wherein every layer of the stack is authoritatively controlled by a colluding oligopoly of vertically integrated hardware+media corporations (Apple, Amazon, Facebook, Comcast, etc.).
The greatest problem with DRM is drivers. NVIDIA hardware only works well in Linux because it's important to NVIDIA's business. Even so, there are longstanding issues that would have been fixed decades ago if kernel devs were allowed to collaborate. Instead, DRM (and copyright in general) demands that the driver dev team be siloed away from the kernel devs. This way, NVIDIA can use the exclusivity of its CUDA implementation as an anticompetitive advantage in its hardware business.
Copyright is, fundamentally, a wall between would-be collaborators. DRM is an implementation of that wall, but instead of isolating people, it isolates software. The wall DRM provides is not used to monopolize the distribution of content: it is used to construct moats in our software ecosystem.
There's a reason I prefer the experience of torrenting a Netflix rip over streaming Netflix on my Roku: the entire hardware+software stack is superior. I can actually sort and navigate my library. I can decode&render with my faster GPU. I can adjust the audio delay. I can adjust subtitle placement & font. I can mix the audio so that dialogue is actually audible. I can do frame interpolation with SVP (again using a better GPU than whatever your "smart" TV has onboard). I can seek forward&backward quickly without changing bitrate. I can let the credits play without being interrupted by an ad. The list goes on...
I don't want a goddamn CRT. I want modern hardware. The more we let corporations abuse us with DRM, the less compatible that hardware will be with real software.
The issue isn't preventing piracy, it is defending GPU market segmentation. In the old days you could flash Quadro firmware to Geforce cards and unlock features or modify clocks. The common thread is artificial scarcity.
Yes, you can never "plug the analog hole" completely, but you can definitely lock stuff down to the point it's impractical for 95% of people.
For instance, imagine some sort of audio / video fingerprint system that resides in Intel and/or nVidia's GPU drivers. Content gets played through the on-GPU HEVC / h.264 decoders already. Doesn't seem like a huge stretch to add a fingerprint authentication system to that stage.
Have a list of content IDs that are protected, and require a valid license to play.
Yes, your source file is unprotected (video camera in front of monitor), but all of your devices are unable to play it.
Yes, your ancient, circa 2024 desktop PC will still play it, but your new 2030 model TV implements this fingerprint system as well so you can't just cast this file to your 100" display in your living room.
This is to say nothing of other forms of content (applications / games / web pages) that actually could require attestation / DRM HW / always-on internet to run.
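A toy version of that hypothetical check, just to make the scenario concrete (Python; everything here is invented for illustration, no shipping driver is known to do this):

    import hashlib

    def fingerprint(decoded_frames: bytes) -> str:
        # A real system would use a robust perceptual hash; a plain hash is a stand-in.
        return hashlib.sha256(decoded_frames).hexdigest()

    # Hypothetical vendor-maintained blocklist of fingerprints for protected titles.
    PROTECTED = {fingerprint(b"frames of some blockbuster"): "Some Blockbuster (2030)"}

    def allow_playback(decoded_frames: bytes, licenses: set) -> bool:
        title = PROTECTED.get(fingerprint(decoded_frames))
        if title is None:
            return True               # unknown content plays freely
        return title in licenses      # known content needs a valid license

    print(allow_playback(b"frames of some blockbuster", set()))                        # False
    print(allow_playback(b"frames of some blockbuster", {"Some Blockbuster (2030)"}))  # True
    print(allow_playback(b"home video", set()))                                        # True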
I was thinking of someone hacking a capture device that sniffs the output matrix of a display in order to capture the video, with a line-in tapped from the speaker drivers for the audio. Way out of reach of most people, but only a very small number of people need to have the wherewithal to do it to keep the pirate scene going, especially if they live in countries that don't care about your DRM laws. The analog hole exists so long as people don't have DRM directly implanted into their eyeballs.
As I understand it, that's common now - cheap HDMI splitters do the HDCP negotiation on the first port, and then the unencrypted digital video and audio signals are cloned to both ports, ready to be captured.
Oh yeah, I had to buy one of these (I called it the HDCP defeater) because my receiver was otherwise unable to forward the negotiation between the Roku and the TV fast enough. I would turn on the TV and the screen would blink on and off for several minutes before the HDCP handshake managed to win the race. In theory those devices might be defeated with newer versions of the protocol, but the part that drives the matrix of pixels can never be encrypted until you have DRM built directly into your eyeballs.
Now I don’t really follow the Windows world, but I thought the goal of the newer TPM stuff was to be able to provide a trusted boot chain the way Apple does. I’m under the impression that some of the earlier versions allowed the TPM module to be a separate piece of hardware from the CPU and thus exposed a hardware attack path where someone could snoop or man-in-the-middle.
If you have a full trusted chain you can certainly use that to ensure that the DRM isn’t being tampered with. But I kind of doubt that’s the main reason behind all of it. There are enough good reasons they may want better security on the hardware outside of that it seems justifiable that they might push it.
I’m not arguing it’s good or bad, I just don’t think it’s 100% about DRM and the rest is a smoke screen.
> to provide a trusted boot chain the way Apple does
Your flaw is assuming that Apple's only doing that for your security and has no ulterior motives. But iOS apps are disabled and Netflix reduces to a lower resolution when you disable System Integrity Protection on a Mac (among other things?). The trusted boot chain is clearly a DRM enforcement tool in addition to being a security feature.
Deploying some sort of TPM remote attestation for DRM requires every component from every vendor to play nice, so I don't think you'll ever see that rolled out for Windows.
I would guess that the actual push for TPM is to have 'better' BitLocker, and Passkey support.
In practice the default BitLocker+TPM configuration isn't that great (no user entropy/pin, dTPM is basically worthless).
I have no actual understanding of how TPM is involved in Windows Hello/WebAuthn/Passkey or whatever, but at a glance it would seem that biometrics without a TEE is a very weak link.
I figured it’s more about ensuring the kernel and boot loading and OS are 100% unmodified by attackers/malware.
If that helps with bitlocker or passkeys or whatever that’s great. But I assume at its base it’s a pure integrity play.
I would think that would also let you know the public key stuff used to communicate with hardware authentication like a fingerprint reader is secure too, but I don’t know how that stuff works well enough to know if that’s true.
TPM can measure the Secure Boot state for later reporting (attestation) but when it comes to DRM, that’s not a terribly interesting bit of information, knowing the firmware and kernel are valid, when the configuration of the OS and installed applications is really the important part.
As far as I know there’s no real scalable way for that to work in the Windows ecosystem.
That makes sense to me. It just doesn’t seem that useful for DRM, seems like kind of a reach.
Especially in modern systems where the graphics card could do all of it and so the host PC never has access to the decrypted data or keys in the first place.
DRM is a government sanctioned desecration, by corporations, of your private property rights, by its very definition.
Whether it’s in the GPU, CPU, TPM, or any other part of computing property you ostensibly own, is an utterly irrelevant distraction, the root is the unholy alliance of government and capital power.
No, if anything, the fact that governments allow businesses to only "license" you digital content, i.e. not give you the option to actually acquire property rights in it, is. DRM is just a technical implementation detail downstream of that.
This is a much more accurate statement than the hate on the TPM. As the article describes, it is the GPU that has its own separate memory space that it can show on the screen without the CPU being involved at all.
I expect next generation workarounds will involve virtual GPUs.
If that worked it'd have been done over a decade ago.
The remote server is handshaking cryptographically with the GPU itself, which identifies itself using certificates and keys tied at the factory. You can't emulate such a GPU unless you find a way to steal the keys.
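The handshake is essentially challenge-response against a key fused in at the factory. A bare-bones sketch of the shape of it (Python with the `cryptography` package; all of the details are illustrative, not any vendor's actual protocol):

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Factory provisioning: the device key never leaves the silicon; the vendor
    # vouches for the matching public key (in practice, via a certificate chain).
    device_key = Ed25519PrivateKey.generate()
    vendor_known_pubkey = device_key.public_key()

    # Server side: issue a fresh nonce so a recorded response can't be replayed.
    nonce = os.urandom(32)

    # Device side: sign the nonce plus its reported state with the fused key.
    reported_state = b"firmware-hash:abc123,protected-path:engaged"
    response = device_key.sign(nonce + reported_state)

    # Server side: verify before releasing the content key (raises if forged).
    vendor_known_pubkey.verify(response, nonce + reported_state)
    print("attestation accepted, content key released")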
I have to wonder A) What does DRM realistically accomplish for the media companies? And, B) How are these DRM schemes actually being defeated? I do occasionally don my pirate hat* and have never had an issue finding what I want at the quality I want within an hour of an episode/movie being released to streaming. That would seem to indicate that these efforts at DRM are failing to have any noticeable effect at all.
[*] Jellyfin & and the -arr daemons are far more usable and stable then wading through the various streaming services interfaces, so I'll download episodes even though I do actually pay for the streaming services.
DRM is really about control. It's a technical trick that, thanks to the DMCA's anti-circumvention clauses, becomes a legal trick to dictate exactly who can play the content and how, much more tightly than what copyright and consumer laws allow by default.
For example, without DRM you couldn't effectively sell separate licenses for computer screens and TVs, because users could just connect their computer to a TV.
DRM allows negotiating everything about distribution, up to who pays who for having a button on the TV remote.
Those who control the DRM have a veto power over everything, and have it viciously enforced internationally thanks to it being tied to copyright.
What are you talking about? You can connect your computer to a TV just fine. No, lost sales are not 'just a convenient excuse', the sales they lose to piracy are far more numerous than the ones they'd gain with this fictional system that relies on people being willing to throw away money for no reason. 'It's about control' is a favorite element of conspiracy theories but corresponds to no real-world corporate need.
DRM has likely had a big impact in shifting the casual consumer conversation to "hey, they're gonna crack down on account sharing" from early-2000s style "here's a straight-up copy I made for you." And this helps prop up the "they'll get a Netflix account to binge the same three shows over and over" part of the business model. The cumulative monthly cost adds up, but it feels cheaper than forking over a few hundred bucks for a few box sets plus a disc player.
None of the hurdles stop 100% of people. But every hurdle causes some people to stop bothering.
In many cases, downloading torrents and watching on a laptop/PC has a better UX than using streaming services.
For example, it's impossible to watch 4k content on popular streaming services if you use Linux, and even with macOS/Windows you need a specific combination of hardware + OS + browser, if a service even offers it.
To be fair, UX isn't only about the point of consumption. 4k torrents don't grow on trees (luckily, 1080p is good enough for my own tastes), and for old or less-popular movies, it's often tough to find seeders, or they all upload at 100 kbps or only have half the file or something dumb like that. (At least on the public trackers I'm aware of: I have no clue what goes on in the super-duper-exclusive private trackers that some love to boast about.)
So I'd put accessibility and consistency as important parts of UX that torrenting can often miss out on. For the common person who is using Windows/Chrome, macOS/Safari, or a gaming console, those parts can easily be more important.
Of course, these methods start to shine when legitimate methods are even less accessible. For instance, U.S. sports streaming is an absolute mess with multiple networks, regional blackouts, etc., on top of buggy apps, so that you sometimes can't watch a game legally for any price. People have widely picked up illegal streams as an alternative, usually preferring familiar platforms like YouTube if the streams aren't taken down quickly enough.
On private trackers you can sometimes even see 4k bluray remuxes up before the bluray is even available in your local area, due to different release windows around the world.
As far as I’ve seen, they pretty much grow on trees as far as films are concerned. TV shows are a very different story though and outside of hugely popular series are far more inconsistent.
Without giving them any further instructions, ask all your non-technical friends and family members to (1) watch a popular movie on Netflix/iTunes/Amazon/Google Play and (2) torrent the same movie. Report back on how many are able to successfully do 1 vs 2. That'll tell you how good the respective user experiences are.
It has much better UX, but much worse accessibility.
You have to learn how to navigate an ever-dwindling list of trackers and probably a VPN, which is already too tall a hurdle for the overwhelming majority of people. Time is often worth the price of a 4K Roku and a subscription, even though that's still a technically inferior experience at the end of the day.
Piracy has two very hard problems: privacy and moderation. Moderation requires authority; authority requires trust; and trust relies on identification. I think we might be able to resolve this by replacing moderation with curation, but that's going to take a lot of ground-work that I'm too ADHD to do myself.
There's always an analog loophole. Even if the OS is unable to access the memory storing the decrypted data, you could always just plug the output of the machine into a capture card and capture the decrypted stream that way.
I suppose some monitors and TVs have "features" to cryptographically handshake with the GPU and ensure a secure link, but at some point the data must be decrypted and decoded to be displayed. This doesn't seem like much more than a speed bump for a motivated individual.
The end goal is DRM all the way to the screen. No capture cards will be allowed.
It's a cat and mouse game, but I wouldn't discount these efforts as a mere speed bump. Screen-enforced DRM will make things much harder. A motivated individual with the right tools and hardware-hacking know-how may be able to jailbreak a screen to record stuff, but that puts it out of reach for most people.
It doesn't matter at all how out of reach it is for most people. As long as one kid in Russia can do it, the torrent is available for everyone in the world just as soon.
This has already been shown with videogame DRM like Denuvo. It's so hard to crack that only a handful of people know how, and yet they end up racing each other so eagerly every time a new game comes out that it's usually done in under 24 hours. Unless you can beat "so secure that only a handful of people in the world can crack it", the situation will always be the same.
Denuvo has pulled back into the lead lately; it's taking a very long time for cracks to appear, if they ever do. For example, Dragon's Dogma 2 came out in March and still hasn't been cracked. Avatar: Frontiers of Pandora hasn't been cracked for a full year.
DD2 is a single player game, those generally don't maintain their active player counts forever. It peaked at 228,285 concurrent (not total) Steam players which are pretty good numbers.
Even if we do go by concurrent players, Black Myth: Wukong had one of the biggest launches in Steam's history with a peak concurrent of 2.4 million players, and that hasn't been cracked either after five months.
Apparently the people doing this kind of work have been disproportionately in Eastern Europe and what's going on in Ukraine has so disrupted that part of the world that they currently have bigger problems.
So then you're waiting for either that region to stabilize or demand for cracks to cause people somewhere else to get into the game, and in the interim you effectively have a temporary supply chain issue.
But it's hard to give credit for the ravages of war to the DRM pushers and it's not at all obvious that they've secured any kind of permanent advantage.
Not quite. The problem is that when you involve hardware, things are exponentially harder. When you tie it with content streaming, it's essentially a losing battle.
Hardware: makes cracking much much harder and out of reach for a lot of people. Even the people that can do it are going to be drastically slowed down due to this.
Streaming: means you can block specific device keys once you know they are compromised (the hacker managed to mod the TV to be able to record from it).
Back in the day when piracy was quite literally just copy and paste it was a very active scene.
But cracking Denuvo takes real skill, and there's no financial reward in it.
Back in the 90s bootleg DVDs and CD-ROMs had organised crime making money from it.
I took a cursory look at breaking it and it seems rather trivial in retrospect, just annoying at best since you have to rebuild the executable’s imports, relocations and section headers, along with removing the giant bloat sections that they add (seriously, when the main .text section of a game is 4MB, and then their extra obfuscation sections end up being over 250MB, something is ridiculously wrong.)
With how good modern screens are, and how good cameras are (and how easy both are to hack), you could always play back the video and capture the photons through the air.
There was something called Macrovision back in the VHS/DVD days that tried to defeat digital/analog conversion, and I'm sure visual techniques could be devised...
But I imagine someone with a good OLED and a good mirrorless camera (or even a cell phone nowadays) could make a pretty good 4K replication of any media that displays.
It would probably be better to carefully tap into the signal lines going to the LCD panel, and record and decode that data to then reconstruct a video. That assumes the cable going to the panel isn't itself encrypted, with the board on the panel doing the decryption (although I have never messed with a panel like that). Even then, the panel still has to drive the rows and columns of the display, so the data going to the column and row drivers is still in the open.
Even if we were to assume the column/row driver chips only accepted encrypted data, they still have individual traces coming out of them. The pitch of the traces is super tiny, so tapping them would be a massive pain, but it's still doable.
Although you can get devices that strip the encryption from an HDMI signal these days, so it's kinda moot; it's not exactly something anyone would need to do.
Especially when you add HDR to the mix, I think it's still extremely difficult to get a high quality screen recording, if only because it's so hard to get the exposure right.
You'd think so, but I've already run into a situation where DRM broke our screen capture for a live talk recording and I simply set up a camera to record the screen.
With a little bit of work (display a few calibration targets and build a quick and dirty LUT to match your display) you can get really convincing results.
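For what it's worth, the LUT step is less exotic than it sounds. A minimal sketch, numpy only, with made-up patch values and per-channel 1D interpolation (a serious grade would use a proper 3D LUT):

```python
# Quick-and-dirty per-channel 1D LUT: display known grey patches,
# photograph them, then map captured values back to the intended values
# by interpolation. The patch values below are invented for illustration.
import numpy as np

displayed = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255])   # what we showed
captured  = np.array([6, 30, 58, 88, 120, 155, 190, 228, 250])   # what the camera saw

# 256-entry LUT: for every possible captured value, the value we should have seen.
lut = np.interp(np.arange(256), captured, displayed).astype(np.uint8)

def correct(frame: np.ndarray) -> np.ndarray:
    """Apply the LUT to an 8-bit frame (H x W x 3)."""
    return lut[frame]
```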
It was good enough for the moon landing. The video feed from Apollo 11 was in a special format that had to be decoded onto a particular monitor. There was a camera pointed at that screen to rebroadcast the feed globally.
Yeah, I guess it really depends on what your standards are. It's certainly getting better, but I have trouble imagining anyone would consider this a good solution. At minimum, if this was what pirates had to resort to, then I would think the DRM has done its job, in that it very significantly degraded video quality for the pirates.
I'd imagine they do this via huge (non-consumer level) cameras as well as by professional editors and graders who spend countless hours on the process.
But that doesn't really contradict your point. I don't know. I've never seen a good screen recording but I don't download pirated films so perhaps I've never seen an instance of someone really trying to get it right.
The cameras you can buy used for a couple thousand dollars have essentially the same sensors as huge cinema cameras, if not better in this application assuming you'll take stills.
Professional editors and color graders have to lower the dynamic range, because there is basically nothing that can get as bright as, say, the sun, and because basically no display can sustain peak brightness across the whole screen, which introduces EOTF transfer curves, reducing the peak brightness and thus the dynamic range.
You're right about pirated films, but that's because they're typically recorded in a run of the mill cinema while it's playing, not in controlled conditions in front of a carefully calibrated screen-camera combination taking a photograph of every frame.
Although it can only trip on certain devices or media players (mainly Blu-ray players, including the PS3 onwards), I did read an idea that suggested placing Cinavia detection inside an OS's kernel, in a secure enclave, to make it system-wide.
> The end goal is DRM all the way to the screen. No capture cards will be allowed.
Sure, but the closer you get to the eye ball, the bigger the loophole is.
It's not common anymore, but _way_ back in the day, some releases were made *in the projection booth* with a semi-pro camera on a tripod pointed at the screen. (look for old NFO files with `TS` or `TeleSync` in them to get an idea of when this was common-ish)
The analogue loophole will remain open until there's a HDMI to optical nerve technology that we're all forced to get at birth.
> The analogue loophole will remain open until there's a HDMI to optical nerve technology that we're all forced to get at birth.
This is kind of a pointless tangent, but you might not have to go that far. It's probably hard to get a recording of the Apple Vision Pro for instance.
> It's probably hard to get a recording of the Apple Vision Pro for instance.
I hadn't actually thought about that! For 99.995% of my time on this earth, "screen" meant "flat, glass, viewed from some distance". I guess it's time to spend some time thinking about what new ways to exploit the analogue loophole are...
I wonder which part would be harder: designing something to fool the "am I on a head? Where are the eye balls looking?" bits or the optics needed to re-combine the stereo?
Likely not an issue. Convincing consumers to strap a brick to their face has proven to be a persistent challenge, which even Apple has not been able to overcome. However, there is also a nontrivial percentage of the population who medically cannot use VR/AR. This population is large enough that there is a market for "2D Glasses" for removing 3D effects from movies in cinemas. Releasing a title as a VR exclusive means excluding this population from your sales figures entirely.
Yes, this is why I called it "kind of a pointless tangent". :)
However, the reason I think it's only "kind of" pointless... it is in fact true that as far as I know, there is no way to pirate any of the "immersive" TV shows Apple has released. You can't watch them on any other VR headset, or even watch some 2D version on a flat screen.
Which means there are videos out there in the world right now which are immune from the analog loophole, at least as far as I know. It's a very small subset of all the content that has been produced, and it will stay that way, but it does exist.
And no one I know was ever happy about accidentally downloading a telesync release. They were at best a stopgap for the FOMO crowd before a proper dvdrip inevitably took their spot on the trackers. Yes, you can make an actually high quality analog hole rip, but it takes much more effort.
There are many USB capture dongles with chips that ignore DRM, easily available for cheap at popular online stores. Nobody has to go as far as jailbreaking screens.
In this case the piracy model might change into something like the software cracking scene where groups with specialized skills and equipment would be the ones doing the uploading. Regular people wouldn't be able to make copies with a capture card to send to their friends but popular films and shows would definitely still be released by those groups.
There is no analogue loophole, that's like 15 years behind the curve. Cinavia closed that a long time ago and meant that licensed devices like Bluray players, even TVs, can detect cammed recordings even those cammed in movie theatres.
Of course you can try to play them with hardware that doesn't follow the rules. But there's a finite number of vendors, so that isn't necessarily easy.
It doesn't detect the act of recording live, it detects that a piece of media was obtained via recording. So, you can still point a camera at the screen and obtain a video file without any disruption to the original signal. However, that file won't play properly on Cinavia-enabled devices.
It's not clear to me how widely Cinavia is actually deployed. The Wikipedia article hasn't really been updated in over a decade, and that's where I'm getting my info from.
However, the detection and enforcement can theoretically be done by any device or software that has access to the audio signal. The monitor, the GPU, the playback software, the operating system, etc. could each individually decide not to play the file, making it not work. Some of those can be bypassed in various ways, some can't. But rather than computers, smartphones, commercial media players/receivers, and televisions/projectors seem the most likely places to target for enforcement, and those would affect most people.
Nevertheless, I do wonder how real this actually is. Again from the decade-old Wikipedia article, it seems like Cinavia was meant to target both recording devices and playback devices. However, the Aurora theater shooting happened not long before the article stopped getting meaningful updates, and I wonder if public safety concerns stalled its deployment. Also, the article mentions that people were finding ways to remove or neuter the signal. I also didn't encounter any problems with what I assume to be protected media (a 4K movie and a 1080p TV show), either when recording my screen with my Android phone or when playing it back on that phone and with VLC on my Windows computer with an nVidia graphics card.
In practice, only licensed Blu-ray players (hardware and software) implement Cinavia. While there was an idea/fear that an OS can implement Cinavia system-wide, that did not happen (yet).
Depends who makes your computer, phone or TV and the licensing etc. The tech is perfectly capable of stopping that. The device detects the Cinavia watermark and simply silences the audio after a few minutes.
I don't know whether streamers use it but it was widely deployed in the era when movie piracy revolved around making pirated Blurays. For instance the PS3 would silence the audio on a burned Bluray that had a theatre or TV cammed title protected by Cinavia on it.
A lot of this is about catching the fat head though. People who play videos using some hacked up VLC on Linux don't bother the studios, they're long tail and don't make a revenue impact. They're after the ordinary people who want to watch pirated stuff on a regular home cinema system.
Yeah, but pirate groups are getting the original streaming service's compression without re-encoding (so-called "WEB-DL"), even of 4k content. There's a weaker link somewhere.
WV L1 keys / PR SL3000 keys require breaking into the TEE to steal those decryption keys.
Ever wondered why netflix 4k web-dls take a while for less popular shows?
Netflix apparently monitors these more tightly and blacklists keys that are used to download. Then the group needs to buy some new device; the old one is burned.
I think there's some kind of watermarking going on, so once a rip is released to the public they can trace it back to which device keys were used to decrypt it.
Watermarking would require a separate version of each encoded file for each target device, which is not amenable to efficient CDN-ing.
It's quite easy to grab the encrypted media files, as they go over the wire - do this from two devices and compare what you get. (you don't need to strip the DRM to see if the two files are identical)
They wouldn't necessarily need to serve different data to each client when they control the whole playback stack, they could get clever by including duplicate frame data with subtle differences and making each device key only able to decrypt one of the variants. Repeat that throughout a show to add additional bits to the signature until it's uniquely identifiable.
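To make that concrete, here's a toy sketch of that kind of A/B segment scheme; the bit-per-segment mapping is invented for illustration, not how any actual service does it:

```python
# Toy sketch of A/B segment watermarking: selected segments exist in two
# variants (A and B), each device key can only decrypt one variant per
# segment, and the resulting A/B pattern spells out an identifier.
# Everything here is invented for illustration, not a real DRM scheme.

def assign_variants(device_id: int, n_marked_segments: int) -> list[str]:
    """The A/B pattern a given device ends up with."""
    return ["B" if (device_id >> i) & 1 else "A"
            for i in range(n_marked_segments)]

def recover_device_id(pattern: list[str]) -> int:
    """Read the identifier back out of a leaked rip's A/B pattern."""
    return sum(1 << i for i, variant in enumerate(pattern) if variant == "B")

pattern = assign_variants(device_id=0b101101, n_marked_segments=16)
assert recover_device_id(pattern) == 0b101101
```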
But they don't control the playback stack, once the attacker has the keys. The attacker brings their own stack, decrypting the data with their own software.
Watermarking was a problem when Widevine L1 was first introduced. Pirates seem to have found a way to scrub the watermark from their releases. Either that or someone is burning a _lot_ of cash on playback hardware judging from the rate of 4K WEB-DL releases.
It doesn't need to be a lot, just replaced at the same cadence as the latency from initial release to key revocation. Even if it's all in-house at Netflix and the watermark is sufficient to identify the specific device key, not all releases are made instantly after a title becomes available on the platform, and it still has to be downloaded, verified, and have the watermark extracted before the key can be revoked.
If that's just a total of a single day, 365 cheap netflix devices per year certainly isn't out of the question, especially with the number of people involved in the many ripping groups.
Depending on the bit size of a watermark, device-based watermarking should be easy to defeat using a quorum of devices to agree on bit values. It should only take around log2(n) attackers to remove an n-bit watermark.
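Roughly what that quorum attack looks like, assuming the mark is a small additive perturbation that differs per copy (toy numbers only):

```python
# Toy sketch: given several differently-watermarked copies of the same
# frame, take a per-pixel median so that no single copy's perturbation
# survives into the combined output. Sizes and noise are made up.
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(4, 6), dtype=np.int16)  # pretend frame

# Three devices, each receiving its own small additive watermark.
copies = [np.clip(original + rng.integers(-2, 3, size=original.shape), 0, 255)
          for _ in range(3)]

# Per-pixel median across the quorum of copies.
combined = np.median(np.stack(copies), axis=0).astype(np.int16)
print(np.abs(combined - original).max())  # residual is small; per-copy marks are averaged away
```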
Yes, DRM is a perfect example of the "Smart Cow" problem [0]. This is so obvious that, as you say in (A), it's quite unclear why media companies still bother with DRM.
The only beneficiaries of DRM seem to be hardware vendors, and even for them it's unclear if it's a net benefit, since it makes everything more expensive.
What advantage does this have over just criminalizing the underlying infringement regardless of DRM?
Also, how does criminalizing it actually help anything, since the difficulty is in the scale of it happening and the difficulty of detecting it rather than the severity of the penalties, and imposing draconian penalties on random kids only turns the public against you?
I think it plays differently before a jury. Juries can easily understand copying files and could potentially invalidate. But it's different when lawyers get to move the conversation to scary hacker garble about technical skill and intent. Evidence of intent is the real value.
Juries are far less incompetent than they're made out to be, not least because both sides get to describe what's happening.
And you can't even get evidence of intent from this anyway because DRM circumvention tools don't actually come with a ski mask and a set of lockpicks. You install a tool called "video downloader" which supports a hundred sites and 10% of them have some kind of DRM which it automatically strips in the background, you may not even be aware that it's happening when you use it.
DRM has definitely made pirating more difficult, and that is good enough for media companies, even though it is not enough to stop all forms of it. Also as others have pointed out, often it has more business/legal meaning than technical meaning.
One example -- it has made creating pirated videos almost inaccessible to most people. In the past, if all other methods failed, you could always just record your screen with a common recording application. That's not possible with GPU-enabled DRM, which is enough to stop a casual consumer from sharing a movie with their friends (even at a less than ideal quality).
> have never had an issue finding what I want at the quality I want within an hour of a episode/movie being released to streaming.
That's because you are consuming mainstream/popular media. You often won't find recordings of a lot of performance art (ballet, concerts etc)* and I-am-not-going-to-name-it-content because there is a lot less demand.
* an interesting exception is that a lot of content released via Blu-ray gets decrypted, ripped and torrented.
> A) What does DRM realistically accomplish for the media companies?
Control publishing rights, platforms, software and hardware that is used for the consumption of said media.
The publishers control the DRM, which then needs to be licensed by television makers, software writers, and so on. That gives them control over how it is presented, how it is sold, and how it is consumed, and it forces everybody to agree to their terms.
It is a power thing. They want to have power over other businesses. DRM laws help them do that.
> How are these DRM schemes actually being defeated?
Well I don't follow DRM piracy stuff, but at a high level the people that want to consume the media must be able to decrypt it to enjoy it. So if you buy one of these DRM devices and figure out how they work then you can decrypt anything that is compatible with them.
And you only need to decrypt it once, since digital media can be copied an infinite number of times.
> It is a power thing. They want to have power over other businesses. DRM laws help them do that.
This is the argument for repealing them, which is why you rarely see them making it out loud.
Instead they come up with some rubbish about making it marginally more difficult (spoiler: it's still easier to pirate stuff than use legal services and the only thing actually preventing everyone from doing it is that some people want to follow the law). So it's good to knock those fake arguments down when you see them and leave no excuse to keep the bad laws that ought to be repealed.
Accepting their actual motivation like it's a legitimate reason to keep those laws is like saying the reason we should keep doing the stuff Snowden revealed is so the intelligence agencies can spy on the elected officials regulating the intelligence agencies.
IIUI it's mostly a question of a mess of contractual language and incentives. Rightsholders license content, and in their licensing contracts they require a certain level of DRM for certain products. So streamers, etc, implement the DRM to comply with those contracts. Nobody at any level has an incentive or leverage to change the contracts, so the DRM continues.
That and also various principals are under the impression that DRM is possible, therefore they should implement it because it protects their IP, and protecting their IP is a fiduciary duty, therefore they must if they can.
That's not how copyright works. The copyright owners want to maximize earnings, and the licensees/distributors also want to maximize earnings -- they are typically for-profit businesses, and as such they have a fiduciary duty to maximize earnings. If they think that DRM will help them, they'll want DRM.
Of course for music DRM has proven to be pointless. People want to stream music, not buy music, and preserving media across so many media obsolescence events has been such a pain that streaming is the only manageable solution for most people -- consumers don't want to make and manage copies anymore.
The same should apply to movies and such, but maybe not -- it's not clear yet.
This is not how corporate fiduciary duties work (courts have repeatedly ruled there is no explicit responsibility to maximise profits or minimise taxes; Swedish aktiebolag are a notable exception there), though it is a common misinterpretation of them.
> B) How are these DRM schemes actually being defeated?
1. Disable video hardware acceleration in browser (preferably FF)
2. Open OBS studio
3. Record screen while streaming service of your choice is running.
Still works in modern OSs like Windows 10.
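If you'd rather script it than click around in OBS, a minimal capture sketch with the mss library does the same thing, with the same caveat that it only sees what ordinary software can see (output filename is arbitrary):

```python
# Minimal screen-capture sketch using the mss library. This only works
# when decrypted frames are visible to ordinary software, i.e. the
# weaker software-DRM tiers.
import mss
import mss.tools

with mss.mss() as sct:
    monitor = sct.monitors[1]        # primary monitor
    shot = sct.grab(monitor)         # one raw frame
    mss.tools.to_png(shot.rgb, shot.size, output="frame.png")
```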
You're technically not circumventing the DRM decryption routines when you do this since the pixels displayed on screen have already been decrypted (just like recording cable to VCR post-decryption), so the legality of it is towards the lighter grey end compared to ripping DVDs. IANAL though.
That only works with the weaker tiers of DRM which are typically only allowed to stream low resolutions. As mentioned in the OP article, the stronger DRM tiers never make the cleartext visible to software and those are mandatory for high quality streaming.
Not to say the stronger tiers never get broken but it's a lot more involved than just recording them with OBS.
That device makes no mention of stripping HDCP and I can't find any evidence it does that on the maker's website. What makes you think it will strip HDCP?
> I can't find any evidence it does that on the maker's website.
I believe that's intentional (it would be illegal to import if it was advertised).
> What makes you think it will strip HDCP?
I've got 6 in use for lecture/talk recording without worrying about HDCP. Especially useful for presenters casting to chromecast or presenters using macbooks with DRM software (as blackmagic SDI converters don't support HDCP at all)
For example Macs and RedHat systems require HDCP if you've used Spotify or Apple Music since the last reboot (which applies to most speakers).
Some Chromebooks can't provide direct display out but cast to a ChromeCast instead, which also always requires HDCP.
We've also had talks on media and cultural studies which use a clip from e.g. a Netflix or Amazon Prime show as part of their talk. HDCP is almost guaranteed in this case.
If your HDMI chain signals that it can't handle HDCP, some computers will obey that (and downgrade or stop playback). But most broadcast HDMI tech can't even signal that HDCP is unavailable, so you'll get HDCP by default.
That's why every major venue, university or event has HDCP killers stockpiled. For 1080p60 that used to be cheap Chinese HDMI splitters; nowadays it's mostly these Hagibis cards. If they're really fancy they'll have an HD Fury with an HDCP removal license, but those cost ~$600.
Netflix limits FF and Chrome on Windows to 1080p. On Linux it's even worse: 720p.
And up through Dec 2023, FF and Chrome on Windows were limited to 720p. That's right, it wasn't until 2024 that Netflix on Chrome on Windows supported 1080p... That's what, 15 years after 1080p monitors became common?
The sort of person who can set up -arr daemons isn't going to really be on the radar of anyone pushing DRM. Those skills are so rare people will pay for them. The point is that there is a huge market of people who barely know what an internet is but want to watch media. As long as they can't figure out how to get pirated content up and running quickly then DRM is doing its job.
Pirated content represents a relatively small and motivated community. There'll always be something like it, so the question for rightsholders is how to manage the size and visibility of that community.
I'm not convinced you're making a point. Physical skills are also something of a rarity. Not as rare as technical administration, but manual labouring tasks are demanding and not for everyone. There are a lot of unfit, old, young or weak people who I wouldn't want to see hauling dirt because they aren't up to the challenge and wouldn't do a good job. Similarly, there are a lot of clueless individuals who I would not want setting up -arr daemons professionally.
> How are these DRM schemes actually being defeated?
Stripping the more advanced forms of DRM usually relies on compromised device keys which can and will be revoked if it becomes known that they've leaked, so the details are deliberately kept very quiet. If you've ever experienced a device suddenly losing the ability to play 4K Netflix, it may have been because its keys were revoked.
DRM implements the same problem Copyright does, but in a different place. To explain that, here's some backstory:
Copyright defines art as a good (instead of a service), and demands everyone play along. An artist can use their copyright to monopolize both the distribution and the derivation of their work. Effectively, this places a wall between any would-be collaborators, because collaboration is derivative. In a world without copyright, you could collaborate with the work of Disney by making derivative work. With copyright, however, Disney can demand you stop that work by monopolizing its copy. By abusing this demand, Disney can entrench itself as the only Mickey-Mouse compatible corporation.
In the software world, collaboration of work requires source code redistribution. Because of this, the social incompatibility that copyright is founded upon translates into literal software incompatibility; including proprietary software platforms and libraries. For example, Microsoft Office has entrenched itself as the "industry standard" for rich text and spreadsheets by leveraging the incompatibility of its data formats. While collaboration isn't impossible, Microsoft is granted a legally-enforced anticompetitive advantage from its copyright monopoly.
NVIDIA uses the copyright monopoly of its CUDA implementation to sell more hardware. It is able to do this because the hardware and software engineers are both part of the same vertically-integrated corporation. Because of copyright, AMD's software engineers are not allowed to collaborate with the CUDA developers, and AMD drivers cannot be made CUDA compatible.
This is where the story gets to DRM: Apple, Amazon, Facebook, Google, and others are all vertically integrated hardware-media-advertising corporations. Each of them wants to abuse their respective copyright monopolies (their media businesses) to sell hardware, just like NVIDIA does with CUDA. To accomplish this, they told us the exact reverse story: Digital Rights Management.
The story of Digital Rights Management says that hardware needs to be incompatible in order to enforce the copyright monopoly. See what they did there? Now any anticompetitive advantage that we get in our hardware and advertising businesses was all just from us doing whatever it takes to support those poor starving artists!
I can hear you asking yourself, "But where is the hardware incompatibility?". That's the extra sneaky bit on top. Unlike having a clear winner and a loser like NVIDIA and AMD, hardware-media-advertising corporations are all winners. Each one of them benefits from the other using DRM. All of their moats intersect into one giant ~~swamp~~, I mean lakefront development.
Here's an example to chew on: App Stores. Both Google and Apple have their own separate incompatible app stores. Sure, it's a loss to Google when a popular app only works on iOS, but that's a two way street. The important part is that they have a moat at all: when the little guys try to make a competitive alternative, they drown. There is plenty of room for two players at this game, and the intersection of moats guarantees there will never be a third. Even when Apple's moat starts to flood Android Island, what's left standing will be worth more than a drained swamp.
There are some technical details missing here. I get that decrypting the video on a GPU makes it harder to screen capture, but can't you still emulate the GPU in software, or directly capture the digital video output? The GPU still has no unique hardware private key, right?
Capturing the digital video output is supposed to be prevented by HDCP encrypting the signal, but in practice that's pretty well broken. That is a (slowly) moving target though, each time they roll out a new HDMI version (e.g. for 4K) they get to enforce a new version of HDCP which needs to be broken all over again.
I don't think the version of HDCP attached to HDMI 2.1 has been broken yet but that's kind of a moot point because no current video formats require more than HDMI 2.0.
It's hilarious to imagine the meeting where they finally convinced themselves they could put worthwhile lasting encryption in consumer devices with a 10 year+ installation lifetime.
I suspect bad encryption still does exactly what they intend, because it means there is no simple one click solution built into an OS or browser to download streaming media for later watching or sharing with friends. For example, a lot of regular modern OSs have the ability to rip and share an unencrypted audio CD in a simple intuitive way with no shady pirate software to install.
It's a legal hurdle, not a technical one, that prevents the 'above board' software suppliers from adding this feature.
Pirates clearly are able to extract the 4K video and upload it to torrent sites, but the average media consumer would rather pay a Netflix subscription fee than deal with the shady underworld of those sites, with the virus-installing and crypto-mining popups, warning letters from your ISP, etc.
They've managed to make it hard enough that the number of people that do it is insignificant to their bottom line.
Torrenting hasn't been the most popular form of piracy for a while: many subscribe to a couple streaming services and use pirate streaming sites to fill in the gaps [1]. This is so prevalent that even entertainment industry talent use pirate sites for both series [2] and sports [3]. Takedowns mean that sites change from year to year but FMHY-style curation makes casual piracy easy: one can always find a site with 1080p content (unsure about the bitrate though) and great UX.
Hilarious ... if you don't understand modern DRM, yes.
Modern games console security shows you can easily build DRM that lasts 10+ years. Xbox One came out in 2013 and was never properly breached during its entire lifecycle, Xbox X/S replaced it and have also not been breached. Microsoft figured out how to make strong DRM ~15 years ago on devices they design and manufacture. There's nothing wasted about that effort given that it lets them subsidize the console costs and block cheating.
All the HDCPs are broken by those cheap Chinese splitters which downgrade it to 1.4 (allowed by the specs for some reason) and 1.4 is thoroughly broken. At least that was the case last I checked.
The parts involved in the protected audio/video path do have their own encryption keys and hardware support outside of anything touched by the OS. In fact it's a major part of what the Intel Management Engine does if you do not have the "advanced" license for remote management, and AFAIK it's why the AMD PSP on normal AMD CPUs has closed source firmware. Both are responsible for setting up the protected media path and both are interrogated by DRM modules to set up encryption.
The details don't seem clear, and I don't know that there's necessarily a unique key rather than stuff being batched, but basically yeah there's a cert chain back to a "trusted" source
How does the remote streaming server know a key is an authentic hardware GPU that hasn't been compromised, and not something you just generated in software, to enable software level decryption of the media?
It seems like you'd need some central SSL-like certificate authority to verify and revoke credentials, implemented universally in the same way by all GPU manufacturers... surely there is no such thing?
At least for HDCP, that's exactly how it works. From the HDCP 2.2 spec [1]:
> Device Key Set. An HDCP Receiver has a Device Key Set, which consists of its corresponding Device Secret Keys along with the associated Public Key Certificate.
> Public Key Certificate. Each HDCP Receiver is issued a Public Key Certificate signed by DCP LLC, and contains the Receiver ID and RSA public key corresponding to the HDCP Receiver.
> The top-level HDCP Transmitter checks to see if the Receiver ID of the connected device is found in the revocation list.
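In code terms, the transmitter-side check described above boils down to something like this (a conceptual sketch; the field names and the signature helper are stand-ins, not the actual HDCP wire format):

```python
# Conceptual sketch: verify the receiver's certificate was signed by the
# licensing authority (DCP LLC), then reject it if its Receiver ID is on
# the revocation list (SRM). Field names and verification are stand-ins.
from dataclasses import dataclass

@dataclass
class ReceiverCert:
    receiver_id: bytes   # unique ID baked into the sink device
    public_key: bytes    # RSA public key of the receiver
    signature: bytes     # signature by the licensing authority

def signature_is_valid(cert: ReceiverCert, authority_pubkey: bytes) -> bool:
    ...  # RSA signature verification over the certificate body (omitted)

def allow_output(cert: ReceiverCert, revocation_list: set[bytes],
                 authority_pubkey: bytes) -> bool:
    if not signature_is_valid(cert, authority_pubkey):
        return False                  # not a licensed device
    if cert.receiver_id in revocation_list:
        return False                  # keys known to be compromised
    return True                       # proceed with the HDCP session
```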
Thanks, that clarifies my confusion about how this could be realistically implemented. I couldn't see a practical way to verify every device on every connection via a central authority without massive scaling and reliability issues, but maintaining a small revocation list that can be cached everywhere media is distributed from seems quite practical.
There doesn't need to be a central CA, you just need to establish trust with the DRM vendor. The GPU vendors coordinate with Microsoft to make Playready work, Android devices have certs that can be validated by Google for Widevine, Apple just does their own thing.
Making your own GPU sounds intriguing. You could hook up a small ARM computer to the PCI slot and implement a GPU in software. A very slow GPU obviously, but fast enough to decrypt the video frames. I'm not sure if you'll be able to write a driver for it that will seem legit to Windows.
Yes, you can get around it by playing the video in a virtual machine and capturing it from the host. For Widevine videos playing in a browser it is also as trivial as disabling hardware acceleration in the browser's settings.
Doesn't the article forget to mention that TPMs allow trusted boot and remote attestation? It sounds to me like that could very well be used to make software DRM more effective (by making sure you run a DRM-friendly OS, for example).
But so does a USB-connected security dongle. Does that make USB "complicit in enforcing DRM"?
TPMs are really just embedded Yubikeys. Unless your UEFI/BIOS "conspire" to supply them with boot measurements, and your OS in turn conspires with that to carry these measurements forward and provide them at the application layer, TPMs can't harm your freedom.
TPMs are a much more "freedom neutral" technology than people generally assume in these discussions.
The TPM is already provided with boot and OS measurements for secure boot purposes, which would allow DRM to confirm you use an approved OS kernel, so I guess the computer is already conspiring. And the conspiracy could be enforced by video distributors in exchange for the privilege of having HD content.
It could, but why? They've come up with a solution that avoids having to place any trust in the OS at all, so why introduce additional complexity and fragility?
If they don't have a GPU that implements this then the decrypted material would be available to the OS, which is precisely what the streaming media platforms want to avoid.
It's because they don't trust the OS not to leak the decrypted bytes, but they could, thanks to the TPM (the OS would have to have a memory chunk not readable from admin/root, etc.). In this scenario the OS is part of the DRM.
Mandatory where? Most of the devices people are streaming video to these days aren't even PCs!
Macs haven't had TPMs for a while now (I think Apple never really used it and dropped it even before the Apple Silicon switch), but of course they have their proprietary equivalent.
Yes, but Windows isn't where the vast majority of people watch Netflix, so I don't see the incentive for media DRMs suddenly also making TPMs mandatory.
>The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry.
This sounds 100% on-brand for the FSF. The FSF's primary public-facing persona has peculiar computing habits so far removed from the mainstream that it's likely he has absolutely no clue how the real world works.
In fact by his own statement he has to rely on volunteers to update his website.
It's disappointing to me because the FSF could be so much more influential today, but the cult of personality around RMS has really destroyed their public credibility among "normies", the most important demographic to convince.
When the FSF finally realizes that a political organization such as theirs needs a public face with charisma and social skills, it will be too late.
“Normies” are never going to care about the stuff the FSF is interested in. I don't think you can extract the philosophy from the eccentric personalities that created it; they're one and the same.
And how are they best convinced? Besides personal benefits like bribes, public opinion (re-election) and consumer habits (company profitability) seem to matter significantly. Please do add the options that I am forgetting.
No matter who is re-elected, there's a preset window for law and policy, which perhaps only public outrage (and opportunist politicians) can shift. Outrage is a high bar (except perhaps on Twitter).
If you believe that normies deserve computing freedom (this doesn’t seem to entirely be consensus in the scene), it ought to be a goal to explain the benefits of it in a way that they will understand. Some may still not care, but my experience is that a good part actually does. If nothing else this is good leverage to influence change for one‘s own interest.
The benefits are incomprehensible to "normies" and they have no power to effect change. They're just going to use whatever software gets put in front of them. All the progress - which has been substantial, free software is basically everywhere and does everything - has come from highly motivated and technical individuals who are anything but normal.
That follows a basic pattern for any effective change, normal people pretty much always just whinge and achieve nothing. They're lucky to even be allowed the pittance of political power that is voting, historically speaking.
Most people just want to be able to access media easily with no effort, which they already can do with cheap streaming subscriptions. They have no interest in owning the rights to use it forever, or in downloading or copying it. They wouldn't want to take the time to figure out how to do that, even if they legally could, when they can already just click and play.
I think if you want people to care, you need to find a real world case where they are being blocked from doing something they really want to do- the abstract philosophical arguments about freedom are total non-starters.
Possibly an alternative media supplier that was fundamentally less hassle, faster, and more reliable because it didn't have these systems could get people to switch. But good luck getting the digital rights owners to let you put their content on your platform.
Or maybe convince people they can get higher quality media that way. I have a newish Mac with an amazing HDR screen, but few of the streaming sites are willing to stream the HDR content to my device.
I think that's a misunderstanding of what the FSF stands for overall, though. The FSF can never be a diplomatic negotiator for the benefit of free software; they are idealists, even when it serves against their own interests. Their whole shtick is not settling for half-baked appeasements, and so they're destined to be a pariah of the tech industry at large. Neither you nor I can stop them; it's entirely within their right to advocate for and practice simpler software.
The criticism in the OP certainly seems warranted, but it's less a sign of the FSF removing itself from the mainstream and more that the mainstream has abandoned free software.
> The FSF's primary public-facing persona has peculiar computing habits
You know, the FSF would probably argue that our computing habits are the peculiar ones. And unless you can tell me about the code your iPhone runs in detail, they're probably (albeit begrudgingly) correct.
There's no misunderstanding on my part; it's why I said that their ignorance is totally on-brand.
>more like the mainstream has abandoned free software.
Indeed, because free software development is largely driven by ideological purity rather than feature parity. Mainstream users see Free Software people as irrelevant kooks, and thus easy to dismiss, which is why Free Software has so utterly failed as a movement.
>You know, the FSF would probably argue that our computing habits are the peculiar one.
I'm sure flat-earthers feel that my belief that earth is an oblate spheroid is peculiar, too. Of what relevance is that to anyone?
>And unless you can tell me about the code your iPhone runs in detail, they're probably (albeit begrudgingly) correct.
We'll have to agree to disagree. The emacs developers don't even understand how large chunks of emacs work (per emacs-devel), for example. There's too much software out there for one person to keep in their head. This is not a reasonable heuristic.
> Indeed, because free software development is largely driven by ideological purity rather than feature parity.
This "ideological purity" didn't come out of nothing, it came out of the very practical issue of who is in control. People forget that RMS came up with the whole thing because he wanted to fix a broken printer and was denied the source code that could help him fix the issue.
He wasn't sitting in some ivory tower coming up with abstract philosophical questions; he was in some lab and had an actual practical problem he wanted to fix.
> Indeed, because free software development is largely driven by ideological purity rather than feature parity.
Ideological purity is a valuable thing. Look at Minix, hell, even look at the BSDs today. These are projects that have collapsed because of their feature obsession and ignorance of ideology. The differentiation of ideology is what makes free software uniquely successful - it is the feature.
> Mainstream users see Free Software people as irrelevant kooks, and thus easy to dismiss, which is why Free Software has so utterly failed as a movement.
Mainstream users don't think about Free Software at all. They certainly use it though. They rely on it, to provide and maintain the runtime their cell phone and iPad and router all depend on. It probably runs an RTOS on their grandpa's CPAP machine, it probably occupies the DVR for their cable TV and it's likely running on their games console and personal computer, too.
Free software is even more inescapable than proprietary software. If users cared enough to understand the difference, you and I both know they would accuse the businesses of being the irrelevant kooks. Not a single "mainstream user" I know would defend Apple or Google or Microsoft's business practices as software companies. No one.
> I'm sure flat-earthers feel that my belief that earth is an oblate spheroid is peculiar, too. Of what relevance is that to anyone?
As the other comment suggested, this is both an insincere response and one where you are the flat earther here. The FSF has reasons for holding the principles they do, and you haven't refuted any of their ideology. You are the guy lambasting Galileo, and when Galileo asks you why heliocentrism offends you, you are replying "because the mainstream clergy sees you as kooks." It's not a response at all.
> The emacs developers don't even understand how large chunks of emacs work
Nobody expects every kernel dev to understand the whole of the kernel; that's folly, and not what I was asking anyway. Nobody at Apple understands how the entirety of iOS works either, but that's not an implication that it's inherently insecure. What makes the FSF balk at Apple is the unaccountability: the lack of reasoning behind statements asserting the privacy and security of a system that sues its auditors.
If you have a more reasonable heuristic to suggest, I'm all ears.
>You are the guy lambasting Galileo, and when Galileo asks you why heliocentrism offends you, you are replying "because the mainstream clergy sees you as kooks."
I'm lambasting the people who think this fictional Galileo is a good public persona to lead their political movement, because this Galileo can't convince anyone of anything; he is almost entirely devoid of the skills one needs to advance a political cause, even if he might have written some good C code 45 years ago.
>If users cared enough to understand the difference, you and I both know they would accuse the businesses of being the irrelevant kooks. Not a single "mainstream user" I know would defend Apple or Google or Microsoft's business practices as software companies. No one.
I can see we have irreconcilable differences. I find this statement ludicrous.
I know lots of people who understand what free software is and choose to make a living selling proprietary software.
> I know lots of people who understand what free software is and choose to make a living selling proprietary software.
That's not what I asked you, though. Do those same people defend Microsoft and Google and Apple's business strategies? Do they respect what the apex of proprietary software looks like, replete with advertising, data collection, vaporware promises, removed features, integrated spyware and mandatory junk fees? Unless your friends are an LLM, I suspect they don't, because they've been burned before and know better. As no serious economist promotes laissez faire economics in the 21st century, laissez faire software is not healthy for humans either. The abuses are right in front of us, and the blame is simple to dole out.
It's for your own good that you stop replying to my comments if you're going to twist my words and avoid the topic. Free software isn't bound by the pragmatic demands of a market, and yes, that means that it can fail, but it can also end up displacing entire product categories as well. Anyone familiar with the past 3 decades of computing history knows this to be an irrevocable and proven fact. We would not be having this conversation on the internet if proprietary networking standards prevailed over open ones.
The flat earthers are the people dismissing the concerns of the FSF though.
(The Earth being round doesn't directly matter in practice to most people. It does have inevitable consequences though.)
Or perhaps a better example is anthropogenic climate change : here too the implications are extremely inconvenient for most people, so denial is rampant.
The FSF has turned into the crazy old aunt that insists you unplug the coffee pot after use in case it's bugged. It's taken me a long time to come around to the reality that they are holding Linux back at every juncture, probably still salty over the GNU/drama.
Modern TPM support in Linux and systemd now permits automatic disk unlock for LUKS encrypted volumes using a key stored in the TPM - some ~15 years after Windows could do it.
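For reference, the enrolment is basically a one-liner these days; a sketch of it (device path and PCR selection are just examples, and it needs root):

```python
# Sketch: enroll a TPM2-backed key for an existing LUKS volume via
# systemd-cryptenroll, binding it to the Secure Boot policy PCR (7).
# The device path is an example; adjust for your system. Run as root.
import subprocess

device = "/dev/nvme0n1p3"  # example LUKS partition

subprocess.run(
    ["systemd-cryptenroll", "--tpm2-device=auto", "--tpm2-pcrs=7", device],
    check=True,
)
# /etc/crypttab then needs the tpm2-device=auto option on that volume so
# systemd tries the TPM unlock at boot before falling back to a passphrase.
```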
I wonder what the TPM support is like in the HURD - ha!
The only complaint I have about the TPM is there is no standardisation in connectors, pinout, or bus type when it's not soldered onto the board. I have three motherboards with plug-in TPMs and each required a different, unique part that was difficult to source.
While I broadly agree, I think it’s worth pointing out that they have made some compromises for practicality, the inclusion of MP3 software before patents had expired comes to mind.
We have had "FDE" and secure boot with TPM in higher-than-commercial (defense) and the higher end of commercial settings for Linux, BSD, and illumos since TPM 1.2 was available, and I'd have to dig in some places to confirm but probably before Windows did in actual practice anywhere (let alone officially).
Yeah, Debian/Ubuntu, Fedora, etc didn't have this, but as the saying goes: you get what you pay for. Although enough of the Gentoo users (the real Gentoo users) had such a thing around that time too, if they wanted it (and they tend to put together what they want).
Some essential context: if you think the "Linux community" is elitist, wait until you see the niche commercial (and higher) players. I'm probably an example of such, to be fair.
> there is no standardisation in connectors, pinout, or bus type when it's not soldered onto the board. I have three motherboards with plug-in TPMs and each required a different, unique part that was difficult to source.
It certainly has, and they have repented of killing Windows Phone; turns out that when one wants to push stuff like AI and the Xbox ecosystem, having 10% market share is way better than having none at all.
Then again, they have been so busy with Azure and Xbox profits that Windows development has turned into a mess of GUI teams fighting for resources, while the apps division couldn't care less, now filled with people who grew up using UNIX instead of Windows and see Web UIs everywhere.
Hence why Windows might be my main desktop, yet I eventually returned to the Web/distributed computing world, disappointed with how UWP/WinRT development turned out.
> It's disappointing to me because the FSF could be so much more influential today
I mean, open source advocacy already includes both business-friendly convenience-focused pragmatists and social-friendly, principled advocates of digital freedom who were essentially turned off by RMS's personality and/or approach.
Taken together, their work seems like it sets a reasonable ceiling on what FSF-- or any freedom-based organization-- could achieve.
If I'm wrong, I'd like to know what exactly the FSF could have achieved, in your opinion, that's above that ceiling, as well as the tactics they'd have had to use to get there.
It's been very clear to me for many years that the FSF is staffed by a bunch of out-of-touch boomers who believe that Microsoft is the end-all be-all of evil tech. That was probably true 30 years ago, but from their rhetoric, they've ignored how the computing landscape has changed. Namely, the ways smartphones are walled gardens that screw over people, often in the same ways Microsoft has. I've heard them mention in passing that Apple, Google, and Facebook are bad, but the volume of material directed at Microsoft overwhelms anything else. To the FSF, if it doesn't happen on a PC, it's not a priority. It still amazes me that they're hurt over Linux stealing their GNU name/tools/momentum, but hardly a word is written about how Google stole Linux to make Android, and how the Android ecosystem is a complete betrayal of free software's values.
The embedded politics of the “t” in “tpm” and “tee” are super interesting and revealing. They are “trusted” only from the perspective of the developer; to the user, they represent the complete lack of trust.
On the contrary, it gives me various ways to determine that my laptop is in a trustworthy state before I type a password into it, and it makes it possible for Signal to verify that the server it's communicating with hasn't been tampered with. It can be used in ways that hurt the user, but it can also be used in ways that benefit them.
Suggestion for a compromise: Make it mandatory for TPM vendors to provide a user option to wipe all attestation keys and rebrand them as “embedded security keys” (and maybe promise to never use them for DRM, which per TFA nobody is anyway).
I feel like untangling the attestation capability (which I do believe has non-user-hostile/non-zero-sum uses!) from the secure key storage one might ultimately help their adoption.
1. Companies offer service that people don't want to pay for, and blame piracy.
2. Someone realizes that they can eliminate piracy and make lots of money by offering good service.
3. Piracy slowly dies, because people prefer €5 monthly subscription over torrent.
4. Other companies catch up. The market gets fragmented. By the nature of the market, it becomes impossible for one company to offer clearly good service.
5. Piracy gets fashionable again because it's more accessible than having twenty €50 subscriptions, half of them with ads.
6. Companies offer service that people don't want to pay for, and blame piracy.
But afaik the TPM (or fTPM if no chip is present) is used to establish and restrict trusted access to the replay-protected memory block that the GPU (or other) DRM chain services depend upon to do their thing.
IMHO the author interprets the FSF statement overly restrictively in order to discredit them.
No, TPM isn't involved with PAVP at any point. Matthew is correct about how it works. This is a typical case where social activists are light years behind the curve and don't really know what they're talking about at all.
I've done a lot of work with Arm TrustZone, OP-TEE, and Arm Trusted Firmware. It's really nice. On Arm, the TEE is user-supplied, not vendor-supplied, so it gives an isolated execution environment for any sensitive code you might want to put in there. Hardware peripherals (TZC/SPU) allow you to designate certain bus addresses or memory ranges as "secure" during early firmware initialization, meaning Linux (or whatever OS you use) cannot read or write to them. Furthermore, unlike a TPM, the TEE isn't running in parallel on a co-processor -- it only runs when Linux yields control (cooperative scheduling), so it provides functionality without wresting away control.
TEE is nice, but it's a pretty different use case all around, and the two are actually quite complementary:
TEE is effectively an execution environment below ring 0, together with some hardware isolation as you mention. But by itself, solutions based on it can't hold any trusted key material, so can't be used in attestation contexts.
TPMs and other types of secure enclaves or secure elements include secure storage and can come pre-loaded with external root of trust keys, which allows attestation (and by extension trusted computing use cases), but also completely local useful things like enforcing a PIN retry limit on usage of a hardware-stored SSH key.
But since TPMs are by design self-contained and don't have any input or output capabilities, mediating user access via a TEE and some minimal OS providing a user confirmation UI can be very powerful (for example so that malware can't lock you out of your own SSH keys by just entering the PIN incorrectly repeatedly).
The author seems misinformed about the purpose of TPM to DRM schemes.
The purpose of a TPM, in this case, is not to provide encryption, but instead to provide so-called ‘authenticity’. A TPM with its attestation capabilities can allow a remote validator to attest the operating system and system software you are running via the PCRs which are configured based on it, with Secure Boot preventing tampering. [1] Google tried to implement APIs to plug this into the Chrome browser, which was later abandoned after backlash. [2]
In this case, the TPM can allow services like Netflix or Hulu to validate the hardware and software you are currently running, which provides the base for a hardware DRM implementation as stated in the article. Don’t be surprised if your non-standard OS isn’t allowed to play back content due to its remote validation failing if this is implemented.
TPMs also have a unique, cryptographically verifiable identifier that is burnt into the chip and can be read from software. This allows for essentially a unique ID for each computer that cannot be forged, as it is signed by the TPM manufacturer (in most cases Intel/AMD, as TPMs on consumer hardware are usually emulated in the CPU's TEE). If you were around for the Pentium III serial number controversy, this is a very similar issue. It's already used as the primary method of banning users in certain online video games, but I wouldn't be surprised to see it expand to services requiring it to prove you aren't a "bot" or similar if it gets wider adoption.
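To illustrate the "cryptographically verifiable" part: if you've already pulled the EK certificate off the chip (e.g. with tpm2_getekcertificate) and have the manufacturer's CA certificate, checking it is only a few lines. A sketch assuming an RSA manufacturer key; file names are examples:

```python
# Conceptual sketch: the TPM's endorsement key (EK) certificate is signed
# by the manufacturer, so anyone with the manufacturer CA cert can check
# it offline. Assumes an RSA CA key; an EC key would need a different
# verify() call. File names are examples.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

with open("ek_cert.der", "rb") as f:
    ek_cert = x509.load_der_x509_certificate(f.read())
with open("manufacturer_ca.der", "rb") as f:
    ca_cert = x509.load_der_x509_certificate(f.read())

# Raises an exception if the signature does not verify.
ca_cert.public_key().verify(
    ek_cert.signature,
    ek_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    ek_cert.signature_hash_algorithm,
)
print("EK certificate was signed by this manufacturer CA")
```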
There is a great article going more into detail about the implications of TPM to privacy from several years ago, which was the basis for this reply. [3]
I'm extremely familiar with the capabilities of TPMs (I've worked on deploying remote attestation services at multiple companies), but here's the thing - streaming vendors don't use TPM-based remote attestation. None of them. It doesn't happen. Could it happen? Yes, but it would buy almost nothing - remote attestation is something that's viable in enterprise environments where you can bind TPM identity to inventory entries, and not in the real world where you could just plug in a second TPM on a USB adapter and fake the measurements. And how would you prove the attestation came from the same device that has the reported GPU key? Remote attestation is only useful when bound to other hardware keys, and there's no way within current specs to perform binding between the TPM and the GPU - pirates could just pass the attestation query to another machine.
Depends on your definition. If you count video game anti-cheating software as DRM, the answer is yes. Apart from that, I’ve only currently seen TPMs used as a hardware identifier (in the same way a monitor serial is) for software licensing. The capability does exist however.
> GPU vendors have quietly deployed all of this technology
Citation or technical details needed.
Obviously it "makes sense" that for 4K HD content you "probably" want to offload the decoding into the GPU, but this is the first time I see this mentioned and there are no links to technical details.
In contrast, TEE / TrustZone and even the recent AVF with pVM - these are well documented technologies.
The Playready docs make it clear the implementation is either in TEE or implemented in GPU hardware, and x86 has no TEE, so. You can easily find driver changelogs describing it being enabled for different hardware generations.
Not really; AMD have PSP (which, okay, isn’t x86, but it’s on the die) and Intel, as you mention in your post, had SGX and have ME. Google use PSP TrustZone to run Widevine on Chromebooks, for example. PowerDVD used SGX to decrypt BluRay, which led to BluRay 4K content keys being extracted via the sgx.fail exploit.
You’re right though that PlayReady is usually GPU based on x86; on AMD GPUs PlayReady runs in GPU PSP TrustZone. On Intel iGPUs I think it runs in ME.
The lower-trust (1080p only) software version of PlayReady uses WarBird (Microsoft’s obfuscating compiler) but this is of course fundamentally weak and definitely bypassed.
Anyway, none of this takes away from your post, which I agree with. The FSF (and many HN commenters) have been whining about TPM in unfounded ways since the 2000s.
Not in general, Intel briefly had a program for allowing vendors to deploy apps on ME but closed it years ago. But yes, ME is involved in this for Intel iGPU.
It was a big deal when Vista was released, which coincided with a lot of generational change in home computers (watching Blu-ray on a computer still seemed to be a thing to expect, HDMI with HDCP was introduced, etc).
There was a lot of talk about protected media path in Vista, how it linked with HDCP, how it killed hardware accelerated audio (including causing considerable death blow to promises made by OpenAL), etc.
Even game consoles moved to software accelerated audio, as it turns out doing it in software, with CPU vector instructions, is fast enough while being more flexible.
This is also the way of the future for graphics: do away with any kind of hardware pipeline and go back to software rendering, but have it accelerated on the GPU as a general purpose accelerator device.
EAX and the like were actually that: software components running on a DSP inside the sound card, and the expectation was that in the future you would program them much like GPUs are programmed.
However, while audio accelerators came back, the protected media path business means they aren't "generally programmable" from major OS APIs, even though both AMD and Intel essentially ended up settling on a common architecture including the ISA (Xtensa with DSP extensions, IIRC); they are mainly handled through device-specific blobs with occasional special features (like sonar-style presence detection).
Integrated GPUs exist. Wouldn't it make more sense that the "high value" content should not be exposed to any external GPU? Then we can treat those integrated ones as part of the "TEE". That's my speculation, waiting for details.
This is the question I had about this. The reason this design works per the article is that the GPU memory is inaccessible to the OS, so the decrypted content cannot be stolen.
With a unified memory architecture, is the shared GPU memory inaccessible to the CPU?
With the proper MMU settings, yes, the CPU can definitely be denied access to some memory area. This is why devices like the Raspberry Pi have that weird boot process (the GPU boots up, then brings up the CPU); it's a direct consequence of the SoC's set-top-box lineage.
DRM shouldn't be illegal, but works protected by DRM should be ineligible for copyright protection unless a key is placed in escrow somewhere.
Basically, rightsholders should be able to choose enforceable legal protection or unbreakable technological protection, but not both. Copyright was supposed to be a two-way street, but DRM permanently barricades one lane.
The point is that they are already effectively making their own laws, because DMCA 1201 is a loophole that allows them to negate constitutional rights and fair use. So making DRM void copyright would still be an improvement over the current status quo.
But I agree, DRM should just be illegal in the first place.
Yeah, 95 years later we finally have Disney's Mickey Mouse in the public domain, but they still put the ears on every gadget to prolong it as a trademark.
TBH I can now see how the conglomerates, created by big fish buying up smaller studios, end up owning everything. They've divided the market among themselves, and now they are raising their prices. Meanwhile I cannot take a screenshot of my
favourite cartoon to create a meme, because of "copy protection". But I have the right to do it, you know? It's written into law in my country (Poland) that I can record or screenshot small excerpts, as long as I am doing some creative work with them or just keeping them to myself. THIS IS THE LAW HERE. And it's being ignored.
That makes some sense. I'd say using DRM should void copyright completely, keys or not. I.e., if they want to go out of their way with invasive anti-user measures, they should lose any legal protection against copying of that stuff.
It should also drive home the idea that DRM will be broken anyway and they'll be just left with nothing, so let them stick to copyright itself without all that DRM garbage.
> Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to avoid the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow. I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.
As to the first point... the TPM can't communicate with the GPU, but maybe the GPU could communicate with the TPM. The way that would happen is that the GPU would talk to the TPM directly, using `TPM2_StartAuthSession()` to start an encrypted session with the TPM then it would use `TPM2_ActivateCredential()` or `TPM2_Import()`/`TPM2_Load()`/`TPM2_RSA_Decrypt()` to decrypt a symmetric session key that the GPU would then use to decrypt the stream. I.e., the GPU would do the bulk crypto, but the TPM would do the key transport / key exchange.
That also addresses the second point: the TPM being slow is not a big deal because you'd only need it to do something slow once when starting the video playback.
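To make that division of labour concrete, here is a rough sketch (Python-flavoured pseudocode; the gpu_fw/tpm objects and their method names are hypothetical stand-ins, only the TPM2_* command names come from the spec):

    # Hypothetical sketch: `tpm` and `gpu_fw` stand in for a TPM command
    # interface and the GPU firmware; none of these method names are a real API.
    def start_protected_playback(gpu_fw, tpm, wrapped_content_key, encrypted_stream):
        # Slow path, run once per playback: an encrypted session with the TPM
        # (TPM2_StartAuthSession) plus credential activation
        # (TPM2_ActivateCredential) recovers the symmetric content key.
        session = tpm.start_auth_session(encrypted=True)
        content_key = tpm.activate_credential(session, wrapped_content_key)

        # Fast path, runs continuously: the GPU's own crypto engine does the
        # bulk decryption with the recovered key, so TPM throughput never matters.
        gpu_fw.load_key(content_key)
        for block in encrypted_stream:
            gpu_fw.decrypt_and_render(block)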
Of course, the GPU could just include TPM-like features to get the same effect, which really proves the point:
> The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.
Pretty much. All the DRM functionality can be in the GPU, and there might not even be a standard API like TPM 2.0 that anyone could use, so the result is even worse than if the GPUs used TPMs to implement DRM.
Though, if one were implementing DRM in the GPU or in the display monitor (why not) then the TPM 2.0 MakeCredential/ActivateCredential protocol is a very good fit, so one might as well use that, and even embed a TPM in the GPU and/or the monitor. If you do the bulk decryption in the monitor then the user doesn't even get to screenscrape (eavesdrop on) the connection between the GPU and the monitor. One could even implement just a small portion of TPM 2.0 -- everything needed to establish an encrypted session (`TPM2_CreatePrimary()` and `TPM2_StartAuthSession()`, but also `TPM2_FlushContext()`) and `TPM2_ActivateCredential()`, and maybe a bit more if attestation is required (`TPM2_Quote()` and `TPM2_CreateLoaded()`). What would one attest? I think one would use a platform certificate and its key as the signing key for a TPM2_Quote()-based attestation. The point would be to prove that the device is a legitimate GPU or monitor made by an approved vendor.
If you dislike DRM then TPMs are not the enemy. Particularly the TPM on any server or laptop is not the enemy. TPMs in GPUs or monitors might be, but Windows 11 requiring a TPM on the box has nothing to do with that, and again, the GPU/monitor could implement the ActivateCredential protocol internally w/o a TPM anyways.
How would the GPU verify it's speaking to a real TPM? You'd need to bake the full set of legitimate EK cert CAs into it somehow (charitably let's say that's a signed blob that the driver pushes in at startup), but that's still going to be a terrible user experience because you won't get media playback if your machine has a TPM that's too new or from too niche a vendor.
Right, and more to the point there's nothing special about a TPM design-wise. It's actually a very odd kind of chip that only really exists due to the unique political and market requirements of the PC industry. If you look at vertically integrated platforms like Apple's, or the games consoles, or smartphones, there's no TPM. There are subsystems that do similar things, but none of them follow the TPM design specs.
Even Intel abandoned it when designing SGX. SGX doesn't involve a TPM at any point.
So for a GPU vendor there's no reason to introduce the additional complexity of handshaking with a TPM. Blowing a private key into some eFuses at the factory is relatively easy, add a RAM encryption engine on top and you're already providing better security than what a TPM provides.
TPM is a missed opportunity. What I really want for security is a solid secure enclave scheme on the CPU itself so my SE code can blaze. The TPM is not programmable and is very limited, in terms of its API, its capabilities (e.g., number of keys loaded, number of algorithms supported, ...), and its performance.
My point in my above reply was to say that even if TPMs were used by GPUs then TFA's point would still stand.
Despite the bad press it's received over the years, SGX is a very solid design and works pretty well. Some of the papers presenting breaks turned out to be quite misleading when I looked closely at them some years ago. If you want a general purpose TEE then you could do worse than play with it.
Unfortunately it's not available on consumer hardware anymore, and in the cloud only Azure really supports it AFAIK. And you have to write apps for it specifically, and then you have to have clients that know how to do remote attestation and bind it to secure channels, and you have to program in a threat model in which rewinds are possible at any moment. This is very hard, and it turns out most people in the market don't really care about their data that much (are happy to share it with trustworthy institutions). So it never really took off. But the tech is decent.
Amazon has their Nitro secure enclave system that's pretty easy to use. IIUC it's based on isolating the code that runs in it onto a core set aside for just that, possibly only when it's needed. Having the SE be easy to use is a key thing. Not that the Nitro approach extends well to consumer hardware (it doesn't).
The problem with Nitro is that a TEE doesn't really work if the adversary makes your CPUs.
SGX works, conceptually, because of the division of labor between Intel and the people running the machines:
1. Intel can't break into your enclave even by subverting SGX, because it doesn't have access to the computers (isn't your cloud operator or network admin).
2. The people with access to the computer can't break into your enclave, because SGX blocks everyone except the enclave owner and Intel.
With Nitro, Apple's approach and a few others the logic becomes:
1. Amazon can't break into your enclave even if Nitro has a back door because Amazon don't have access... oh, wait.
SGX is conceptually sound because subverting it at the design level requires the CPU maker and the cloud operator to team up against you. This could happen, especially if you use a US cloud and the US government gets involved, but the bar is much higher. And of course you can always choose to run the hardware somewhere the USG can't get at it, requiring a coalition of those two governments or providers.
Governments as threats... that's way beyond what people who want DRM consider within their threat models. TFA was about TPM not being relevant to DRM.
For most public cloud users having to trust the cloud operator is just a fact of life. Even if the SE were strong up to but excluding collaboration of the CPU vendor and the cloud operator, the user would have to run most if not all of their code in the SE, which is one thing the SEs invariably can't do.
> How would the GPU verify it's speaking to a real TPM?
Option 1: as I said, the GPU could have its own, and yes in that case the EK cert would be known to the GPU (or it could have a platform-like cert issued by the GPU OEM).
Option 2: the platform vendor can teach the GPU the EK cert (or the public key for some primary key anyways).
Option 3: the GPU could learn it on first use.
> charitably let's say that's a signed blob that the driver pushes in at startup
That's what TPM vendors do as to the EK cert. Surely if they can do that then so can GPU and platform vendors. Indeed, some platform vendors ship with platform certs.
> but that's still going to be a terrible user experience because you won't get media playback if your machine has a TPM that's too new or from too niche a vendor.
What do you mean "too new"? Like, you replaced your TPM? That's a thing on servers, but not laptops.
As to "from too niche a vendor", as long as the platform vendor teaches the GPU what the EK cert is, or makes a platform-like cert that the GPU can use to authenticate the TPM, then it's good enough.
Anyways I suspect that MSFT and others don't mind an incrementalist approach. You have a system that can do it their way? Great, it will. You have a system that cannot do it their way? Fine, they'll do weak software DRM for now. There's probably no other way to get to their dream DRM-everywhere state.
> What do you mean "too new"? Like, you replaced your TPM? That's a thing on servers, but not laptops.
I buy a GPU in 2025. I buy a new motherboard in 2026 and plug the GPU into it. How does the GPU learn about the new EK CA? These are devices that can be moved between systems, you can't delegate this to the platform vendor or TOFU, the GPU would need to generate independent trust in the TPM.
One way I might handle this would be to have a TPM on the GPU itself. Then you can move the GPU about all you like, and it will work. The GPU would have to implement an API and protocol that allows the DRM site to do attestation via software running on the CPU, but that seems doable.
The other way would be accept that the GPU that the content is to be played on might not be the same as the device on which the TPM exists. You could have the GPU on a computer halfway around the world and use a TPM from another system to which the user account is registered on the DRM site. Not great, but as a form of account sharing and subject to account sharing detection, it's not bad.
Well of course. My comment was about how TPMs could be used but how still you were correct that TPMs aren't the FSF's enemy. I was exploring that space to further show that.
> I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is.
It is quite obvious: to force people to buy a new PC. TPM provides no added security value for the vast majority of users[1], but it is a convenient piece of hardware that has only started to become standard (fTPM) in PCs built in the last ~8 years, so it provides an excuse for Microsoft to declare computers older than that (which can run Windows 10) obsolete, using "security" as an easy scapegoat.
> TPM provides no added security value for the vast majority of users[1]
Yes it does. The vast majority of users aren't going to have their laptop stolen by the CIA/NSA and have their DIMMs popped and cryofreezed.
The vast majority of users aren't going to have the case opened and a special-purpose PCIe device installed to steal keys over DMA.
The vast majority of users aren't going to have a dTPM vulnerable to SPI sniffing as modern and not-so-modern processors have fTPM.
This is to provide some baseline level of protection of the user's data against theft and loss.
Are there attacks against TPM? Yep. In as much as there are attacks against SMS 2FA, but for the vast majority of people, SMS 2FA is an acceptable level of security.
If you're a CEO, well sure, you're going to want to do something better (TPM + PIN). I acknowledge that Windows 11 Home users don't have this specific option.
Everyone needs to level set on the type of attacks that are practical vs. involved and who the targets of those attacks are.
FDE (w/ TPM) is part of defense-in-depth. Even if imperfect, it's another layer of protection.
> The vast majority of users aren't going to have their laptop stolen by the CIA/NSA and have their DIMMs popped and cryofreezed.
That's kind of the point. The vast majority of users aren't going to have their laptop stolen at all, if they do it will 99% of the time be by someone who only wants to wipe it and fence it, and attempts to access data are most likely to be by unsophisticated family members who would be defeated by a simple password without any TPM.
Meanwhile there have been plenty of TPM vulnerabilities that don't require anything so esoteric and can often be exploited purely from software, so if a normal user was facing even so much as someone willing to watch some security conference talks, they're going to lose regardless. It's not clear the TPM doesn't make them more vulnerable to that, since it holds the secrets and is itself susceptible to attack, versus FDE with a boot key stored in some cloud service secured with the user's password instead of a TPM, which can rate limit attempts without being susceptible to physical-access attacks and can be revoked if the device is stolen.
Moreover, the more common threat to normal users is data loss, in which case you only want your laptop to be secure against your unsophisticated nephew and not the tech you want to recover your data after you forget your password.
> In as much as there are attacks against SMS 2FA, but for the vast majority of people, SMS 2FA is an acceptable level of security.
The current recommendation seems to be against SMS 2FA because the security of SMS really is that bad, so if you need 2FA, use an authenticator app or similar.
> That's kind of the point. The vast majority of users aren't going to have their laptop stolen at all, if they do it will 99% of the time be by someone who only wants to wipe it and fence it, and attempts to access data are most likely to be by unsophisticated family members who would be defeated by a simple password without any TPM.
True, any preboot password method (even fully software) will be sufficient to prevent data exposure when a laptop is stolen.
The whole TPM + secure boot thing is more to prevent evil maid attacks where a laptop is messed with (eg installing a bootloader that intercepts the password) and then placing it back in the user's possession so they can be tricked into entering the password.
That whole scenario is extremely far-fetched for home users. Laptops get stolen but then they're gone.
But it doesn't even do that. If I want to perform the "evil maid" attack why would I screw around with the bootloader? I'm just going to replace the entire device with something that captures the password & sends it to me remotely.
You're at an industry conference. I want the data on your laptop's hard drive. You leave your laptop in the hotel room. Which one is easier:
1. Go into your room and screw around with the boot loader to somehow give me unencrypted access to your laptop after you login next time.
2. Go into your room. Take your laptop. Put an identical looking laptop in place that runs software that boots and looks identical. Have it send me all of your password attempts over WiFi to my van in the parking lot.
I'm going with option 2 every time. I have your original device. I have your password. TPM, SecureBoot, or whatever is irrelevant at this point.
The attacker must be able to fake any pre-boot drive unlock screen and OS login screen to look exactly like the user's real screens but accept any password.
Legend goes that security oriented people will visually customize their machines with stickers (and their associated aging patina) and all kinds of digital cues on the different screens just to recognize if anything was changed.
MS chose to impose TPM because it allows encryption without interactive password typing (BitLocker without PIN or password which is what most machines are running). That's it. The users get all the convenience of not having to type extra passwords when the machine starts, and some (not all) of the security offered by encryption. Some curious thief can't just pop your drive into their machine and check for nudes. The TPM is not there to protect against NSA, or proverbial $5 wrench attacks but as a thick layer of convenience over the thinner layer of security.
> Legend goes that security oriented people will visually customize their machines with stickers (and their associated aging patina) and all kinds of digital cues on the different screens just to recognize if anything was changed.
Maybe I am mistaken, but I feel that the people going to such lengths to ward off an attacker and the people who’d want to rely on fTPM with Bitlocker over FOSS full disk encryption with a dedicated passphrase are two entirely separate circles.
> The TPM is not there to protect against NSA, or proverbial $5 wrench attacks but as a thick layer of convenience over the thinner layer of security.
I agree with you there, it is convenience, not security, but as such, should it be any more mandatory than any other convenience feature such as Windows Hello via fingerprint or IR? I’d argue only for newly released hardware, but don’t make that mandatory for existing systems.
Especially since I had one case where the fTPM was not recognized: no matter what I did, despite it being enabled in the UEFI and showing up in Windows 10 and on Linux, I could not install 11.
> the people going to such lengths to ward off an attacker and the people who’d want to rely on fTPM with Bitlocker over FOSS full disk encryption with a dedicated passphrase are two entirely separate circles.
Bitlocker + PIN/password (hence my mention of a pre-boot password) is a good combination that isn't any worse than any "FOSS full disk encryption". Beyond the catchy titles of "Bitlocker hacked in 30s" is the reality that it takes just as many seconds to make it (to my knowledge) unhackable by setting a PIN or password.
Adding the (f)TPM improves the security because you don't just encrypt the data, you also tie it to that TPM, and can enforce TPM policies to place some limits on the decryption attempts.
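Conceptually (and only conceptually; the tpm wrapper below is hypothetical, not a real library), the TPM-bound variant looks something like this: the disk key is sealed to this particular chip with the PIN as the auth value, and the TPM's own dictionary-attack lockout limits how fast anyone can guess:

    # Hypothetical `tpm` wrapper, purely to illustrate the idea.
    def provision(tpm, disk_key, pin):
        # Anti-hammering: after a handful of bad attempts the TPM itself
        # refuses further tries for a while.
        tpm.set_lockout(max_tries=8, recovery_time_seconds=600)
        # Seal the disk key so only *this* TPM, given the right PIN, releases it.
        return tpm.seal(disk_key, auth=pin)

    def unlock(tpm, sealed_blob, pin):
        # A wrong PIN bumps the lockout counter inside the chip; copying the
        # sealed blob to another machine is useless without that chip.
        return tpm.unseal(sealed_blob, auth=pin)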
> it is convenience, not security
It's convenience and (some) security by default. Not great security but good enough for most of those millions of Windows users. The security was the mandatory part, encrypting the storage by default. The convenience was added on top to get the buy-in for the security, otherwise people would complain or worse, disable the encryption. Whoever wants to remove that convenience and turn it into great security sets a PIN.
I don't mean to disagree, but I think it's worth pointing out that with today's tech, it wouldn't be difficult for an attacker to also scan the stickers and print them out on sticker paper using a color printer, all in minutes. And the technology for doing that is only getting better. Just a thought.
TPM means the system can boot and then do face login or whatever using the user's password in exactly one place.
This is as much as most users will tolerate. And it also means Microsoft account recovery can work to unlock a forgotten password.
The whole point is Microsoft don't want user devices to ever be trivially bypassed, regardless of how unlikely that is (probably more likely than you think, though).
These things are everywhere: they're used by small businesses, unsophisticated users, etc., but if anything ever happens because a disk got imaged at some point, the story that will be written is "how this small business lost everything because of a stolen Windows laptop", complete with a quote about how it wouldn't have happened on a MacBook.
"No one wants a preboot password though" - really? Doesn't strike me as particularly inconvenient, especially given the relative rarity of actual bootups these days.
I've been using bog-standard FDE for as long as I can remember. One extra password entry per bootup for almost-perfect security seems like great value to me.
It seems that you're looking at the wrong bubble here. Most people actually detest passwords and would rather use a different method if possible (this is why ordinary users turn on biometric authentication despite some here questioning its security). Adding another password will certainly make users - especially enterprises - complain.
Also, for technical reasons, Windows can't do the fancy single login/password screen (which assumes file-level encryption, which is how it is implemented nowadays to support multiple users [1] [2]). This is due to Windows software expecting that everything is an ordinary file (unlike Apple, which doesn't care about that aspect, and Android, which has compartmentalized storage). Even if we had an EFS-style encryption here, it would be incompatible with enterprise authentication solutions.
> this is why ordinary users turn on biometric authentication despite some here questioning its security
That's part of the reason. Another part is BigCo spamming the users asking for biometrics or whatever the current promotion-driver is, making opting out hard to find, and using their position of authority to assert that it's "more secure" (for your personal threat model no less, nice to be able to offload thought to a corporation).
The less expensive option of the newer Trezor wallets, with a "login PIN" as an optional alternative to a password, also works and seems to be the best option that I have seen so far.
The more recently released Trezor wallets are still new, though, and the YubiKey 5C will probably be used in many places anyway, just because it fits on a keyring and needs no USB-C cable.
Every phone has it these days. Doesn't seem to be a big deterrent? Laptops also need a password to log in.
In fact, in many cases a preboot password is safer, because the comms between the TPM and the OS can often be sniffed. And if the TPM hands off its keys without requiring any validation, it can be bypassed that way.
Again, not really something that consumers have to worry about, but it's no longer particularly difficult to pull this off.
The phones are using their TPM equivalent to do it securely, though -- there's not nearly enough entropy in a lock screen to provide robust security, but the boot-time unlock depends on both the screen lock and the hardware, and the hardware will rate limit attempts to use it to turn lock screen inputs into usable encryption keys.
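A rough sketch of that idea (hashlib is real; the secure_element object and its rate_limited_mix method are hypothetical stand-ins for the phone's security hardware):

    import hashlib

    def derive_disk_key(secure_element, lock_screen_code: str) -> bytes:
        # The hardware mixes the user's code with a per-device secret it never
        # reveals, and it enforces the retry/rate limit itself.
        hardware_bound = secure_element.rate_limited_mix(lock_screen_code.encode())
        # Even a 4-6 digit code yields a strong key, because offline guessing
        # would first require extracting the hardware secret.
        return hashlib.sha256(hardware_bound).digest()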
The vast majority of users either have no password on their computer, or if they have one it's a weak one (like their name, their birthday, etc.), or they have it written on a post-it attached to the monitor itself. Why do they need a TPM? Most of the time I set up a computer for a friend or family member they ask me to remove the password since they don't want to remember it.
The vast majority of users don't have that much important data to steal on their computer anyway: just some family photos and some movies downloaded from the internet. There is the case of credentials saved in the browser, but the most important stuff (such as banking sites) nowadays requires multi-factor authentication (such as password + OTP on your phone) to do any operation.
Why do they need a firewall? Why do they need ACLs?
Let's just go back to single-user operating systems with exFAT drives.
If an individual expressly defeats the point of any particular security mechanism, that's on them. But to paint this broad brush of "I know someone who does X which makes Y pointless, so Y must be meaningless for everyone else" is silly.
> The vast majority of users aren't going to have their laptop stolen at all
The vast majority of homeowners aren't going to have a house fire. The vast majority of drivers aren't going to have an accident. Etc. etc. etc.
It's insurance.
> The current recommendation seems to be against SMS 2FA because the security of SMS really is that bad, so if you need 2FA, use an authenticator app or similar.
This is correct. But SMS 2FA is better than no 2FA. The attacks you speak of are targeted attacks, where the victim and phone number are known.
> Any snake oil can be painted as defense-in-depth.
Depending on the implementation it's occasionally more secure. For me it's never "better."
A significant fraction of banks, retirement accounts, financial web services, ..., can fully reset your password using just the SMS "2FA", at most also requiring an e-mail verification. That turns the device into a single factor much weaker than a password (making physical attacks -- ex-lovers, nosy houseguests, ... -- much easier). There are a variety of easy methods for taking over a phone number temporarily or permanently for <$15, so for the ones without e-mail checks it's literally just a cost/benefit analysis for a crook.
Knowing how often SMS 2FA gets screwed up, I'd strongly prefer to avoid services offering it (especially those requiring it) even if there were no other downsides. Toss in the inconvenience of having to drive into town (many rural places I've lived), find a point of higher ground (many taller cities I've visited), or whatever just to get cell service, and the whole concept is a nightmare.
And so on. It's painful to use, usually much less secure, and rarely meaningfully more secure.
It's rubbish. The circumstances that would make it even theoretically useful are rare and in practice it doesn't even work then. There is no reason to pay good money so you can be insured against alien abductions under a policy whose terms won't pay out even if you somehow actually get abducted by aliens.
> This is correct. But SMS 2FA is better than no 2FA.
The alternatives to SMS 2FA don't just include no 2FA, they also include any of the better 2FA alternatives to SMS.
Choosing SMS is like saying we should all bottle our urine in case we need something to drink later. There's juice and soda in the fridge and a tap full of water right over there, don't be crazy.
> The attacks you speak of are targeted attacks, where the victim and phone number are known.
How do you mean? Anyone who can snoop SMS gets a list of usernames and passwords from a data breach, tries them all against a hundred services, when that user exists on that service the service says "we sent SMS to your phone number at xxx-xxx-4578" so the attacker looks for any SMS code to any phone number ending in 4578 in the last ten seconds. Even if they don't have the phone number from the data breach, most commonly there is only one matching message, if there are two or three they just try all of them, and now they've compromised thousands of accounts on a hundred services because SMS is such rubbish.
On top of that, the targeted attacks also work against SMS. If you know the target's phone number you don't need to be able to capture every SMS to compromise them using SIM swapping or any of the other numerous vulnerabilities SMS 2FA is susceptible to.
> It's not snake oil, however.
It's a proposed solution with negligible or negative benefits over known alternatives. That's snake oil.
> The vast majority of users aren't going to have their laptop stolen at all, if they do it will 99% of the time be by someone who only wants to wipe it and fence it, and attempts to access data are most likely to be by unsophisticated family members who would be defeated by a simple password without any TPM.
I've only met one person whose phone was stolen. They grabbed it while it was unlocked, and within minutes began scamming all the person's Instagram and other contacts asking for quick money for an emergency.
That's how it works now exactly because hardware security ("DRM") on phones is so good that grabbing phones whilst unlocked is the only way to beat it. For most of the history of phones, they would be pickpocketed or taken from bags, luggage, hotel rooms etc without you ever seeing the thief.
This is a huge upgrade, and nothing to sniff at. I also had someone try to grab my phone out of my hand and run off whilst walking on the streets in France. Unfortunately for him I can run extremely fast. Once he saw I was catching up and about to beat the crap out of him, he gently placed the phone on the road whilst running and gave it back to me. Before phone security got really good a guy like that would have been using the sneaky approach and then visiting a back room in a phone shop to reflash all the hardware IDs, but secure boots and the mobile security chips have got good enough that this is no longer feasible.
Depends which is more valuable, the phone or the potential scams. With no hardware security you'd just have a standard USB stick to root it and get the same access to the logins and contacts, or you'd take it right to an underground shop that did. And you could sell the hardware on top of that, making theft that much better.
Also, SMS isn't, because attackers often get access to the SMS network itself (see e.g. Salt Typhoon) in which case they can do automatic mass account stealing because they can see all the totally unencrypted SMS codes.
Not to mention LTT showed the ability to spoof and steal SMS directly for specific targets by abusing the trust built into the international phone system, something that is effectively impossible to block due to the inherent trust between cell companies at the moment.
> vs. FDE with a boot key stored in some cloud service secured with the user's password instead of a TPM
Without secure boot (backed by TPM), I can boot a small USB device that has LEDs on it to indicate to me that the target system has been infected to send me a copy of the target's password, after I already imaged the disk (or when I have another team member steal it or take it by force later).
If there's a UEFI password to access UEFI settings, I can reset it in under 20 minutes with physical access. Some tamper-evident tape on the laptop casing may stop me, if I haven't already had a resource intrude into the target's home/office to have some replacement tamper-evident sticker material ready. Very, very few places, even some really smart ones, make use of tamper-evident material. Glitter+glue tamper-evident seals are something I can't spoof, though.
It's not that hard to get into a hotel room. Often enough if a business books a hotel for you it's because they want access to your laptop while you're at lunch with another employee who so kindly suggests to leave your backpack in the hotel room.
disclaimer: all above is fictional and for educational and entertainment purposes only
> Without secure boot (backed by TPM), I can boot a small USB device that has LEDs on it to indicate to me that the target system has been infected to send me a copy of the target's password, after I already imaged the disk (or when I have another team member steal it or take it by force later).
Which is the same thing that happens with secure boot, because they just steal the whole device and leave you one that looks the same to enter your password into so it will send it to them.
Meanwhile if you're using tamper-evident materials then you don't need secure boot, because then they can't undetectably remove the cover to get physical access to remove your UEFI password or image the machine.
Thank you for prompting attention to the switcheroo.
This angle of attack is generally unheard of, but should be considered. I can think of some mitigations that can work.
Tamper-evident materials are well-known by the crowds that will target users. There are many criminals among us, so many that those who don't have criminal psychology have a hard time wrapping their mind around it. Given this, I am cynical, and every defense within reasonable cost should be leveraged.
> The vast majority of users aren't going to have their laptop stolen by the CIA/NSA and have their DIMMs popped and cryofreezed.
If you happen to have a Pro variant of Ryzen (there may be some Intel variants as well) then you can enable RAM encryption. The RAM will be encrypted with an ephemeral AES key on boot.
In my experience, FDE (Full Disk Encryption) is more of a hindrance than help to average users.
It just means that when something goes wrong, such as a forgotten password or a botched update, their data that would have otherwise been recoverable is now lost forever.
I'm not sure I know anyone who's had a computer stolen, but I know lots of people who have lost data.
Edit: I do know one person who had a computer stolen. It was a work laptop while they were in SF, and I'll concede that FDE probably does make more sense on a work-related computer. I was only arguing that it's more of a hindrance on personal devices that mostly stay in the owner's home.
> It just means that when something goes wrong, such as a forgotten password or a botched update, their data that would have otherwise been recoverable is now lost forever.
Not at all. You can get your recovery key back via a few different means (for 11 Home, OneDrive/printed/PDF, for enterprises, various ways) and boot into the Windows Recovery Mode environment to perform the same repair options one would have without BitLocker in place.
What is the argument here about the CIA / NSA or any other US federal 3-letter agency? If your device is secured via TPM or some other scheme that relies on an industry to secure your device, they aren't going to be doing "DIMM popping". They are just going to get the master keys from whomever issued them and use those to bypass whatever they need to on the device.
The point is that Microsoft's implementation on Win 11 Home ("device encryption", aka unconfigurable BitLocker) is sufficient for nearly all of their user base. If you're a target of a 3-letter agency, additional security measures are required.
I agree. TPM defends against the most likely threat that typical users are facing. And, where users that are individually targeted, the theft/robbery will more often than not be designed to appear "random".
Because TPM sniffers are now at a material cost of about $15 and can be acquired for under $200, more than a TPM is needed for data encryption, especially for users like a CEO. This is why a firm I used to work for encrypted the key that could unlock user data with both the TPM and a Yubikey.
Microsoft doesn't sell hardware. Why would they be incentivized to make you buy new hardware? Unless you're alleging that their hardware partners pushed for it, in which case there would likely be logs of communications that are pretty illegal.
There's a bit of a gold rush on to be in control of all of a user's auth, and TPMs are a precondition to maintaining that control.
The passkey protocol (i.e. webauthn) has an "attestation object" field which organizations like Microsoft can use to pass extra details about the authenticated users to the authenticating service. Which details will likely depend on that service's relationship with Microsoft. Unlike most channels between these parties, it's expected to be secured via TPM thereby excluding others (e.g. the user, or any pesky researchers) from the conversation.
It's pretty obvious from the recent design choices re: Windows that Microsoft is keen on monetizing user data--and who, in that business, wouldn't like a way to do it exclusively? i.e. to control a channel which neither the user nor your competitors can tamper with.
So they'd be incentivized to make you buy new hardware because new hardware allows them to bind your advertiser id to actual identity much more closely than is possible without that hardware (e.g. via cookies and IP addresses). The sale of details about your actual identity to organizations who only know you by your advertiser id is big business. The TPM helps them protect that business against competitors who don't have such low-level control over your device (Google, Meta, etc).
Microsoft does sell operating systems (and user data from those operating systems). Those operating systems are typically bundled / installed by default on computers.
It's in their best interests to have everyone using the "latest and greatest" for those features that weren't present (at least to the same extent) in prior versions.
This is rather contradictory. There's way less friction to selling Windows 11 licenses to existing hardware owners. Requiring a new PC only means fewer people will be running 11.
Not necessarily. I'd bet that the fraction of $ microsoft makes from selling windows licenses _retail_ is a rounding error away from zero compared to what they get selling bulk/volume licenses to corporate / OEM.
It's in microsoft's interest to make sure that dell/hp/lenovo ... etc have reasons to keep buying licenses to put on the new computers they're selling.
I suspect that TPM is about making the PC less open than it traditionally has been. For the majority of people on this site, that's going to cause a deathly-allergic reaction. For the majority of the population, there's some security advantages to having windows manage device security from POST.
> Not necessarily. I'd bet that the fraction of $ microsoft makes from selling windows licenses _retail_ is a rounding error away from zero compared to what they get selling bulk/volume licenses to corporate / OEM.
Corporate customers already have a VLK which will cover Windows 11 [Pro/Enterprise]. The hardware is the only cost for VLK customers -- Windows licensing is already covered under the existing Enterprise Agreement. EAs often have current version and current version - 1 covered, thus a VLK will entitle one to both Windows 10 and 11 as of today.
It would be odd to think that corporate customers haven't been using BitLocker w/ TPM since at least Windows 7, if not Vista. FDE has been a Corporate Security Checkmark(TM) since it became available.
> I suspect that TPM is about making the PC less open than it traditionally has been.
By traditionally, do you mean prior to 2006 as that is when we first saw and started using TPMs?
Not really; they get a cut on both ends. If they make you upgrade to keep using up-to-date Windows because of claimed security issues, they get additional sales they possibly wouldn't have otherwise.
I suspect Microsoft has numbers which suggest people rarely upgrade their OSes anymore; they're more likely to upgrade their hardware. Enthusiasts still will do whatever but these changes aren't targeting or caring about enthusiasts.
Microsoft makes Xbox and the Surface. They are one of the largest consumer hardware manufacturers in the space.
Anyways, Microsoft was clearly very irritated when everyone wanted to stick with Windows 7, perceiving that Windows 8 was worse in every way and that Windows 10 wasn't a significant enough upgrade to justify the effort, especially considering all the telemetry they added to the product.
It's very reasonable, given this, that they would seek to force the upgrade cycle to occur where it clearly otherwise might not.
> It's very reasonable, given this, that they would seek to force the upgrade cycle to occur where it clearly otherwise might not.
How is restricting which machines can run Windows 11 "forcing an upgrade cycle" on the software? It's clearly doing the opposite, by making Windows 11 upgrades less likely.
The real motivation people have for upgrading to Windows 11 is Windows 10 going out of support. And the EOL date is totally orthogonal to the TPM requirement.
On the consumer front, sure, but there are large contractual buyers who have requirements for TPM presence and several software policy systems can enforce it.
The OS requires minimum hardware. To force users to upgrade their OS, discontinue the old OS, and make a new OS version, which has greater minimum hardware requirements. Now the user is buying your software again.
They're also buying new hardware which benefits the PC maker. It's a mutually beneficial relationship that forces the user to both buy the software again, and buy new hardware. (You do pay for Windows when you buy a PC, it's a cost the manufacturer absorbs. You can often receive a discount when you order a new PC by not including Windows with it.)
From my experience it's actually the opposite. The PC is sold with Windows on it, purchased by the OEM. The OEM then loads crapware on the new PC before delivery because crapware companies pay the OEM to load crapware. As a result, it'd actually cost more to buy the device without Windows.
I've only ever seen one piece of x86 hardware that was sold with or without Windows in my lifetime. It was $15 cheaper at the time to buy the Windows version and install Ubuntu myself.
Ok, so the theory is that Microsoft is after the revenue from Windows 11 licenses? And the way they're achieving this is by forcing people who want to upgrade from Windows 10 to buy a new machine rather than install Windows 11 on their existing machines? If that was the motivation, there's a far more direct option available. Just charge for the upgrade.
For this theory to work, it would have to be that there's a significant population that a) wants to run Windows 11 instead of Windows 10; b) will buy a new computer to do that; c) would not pay the price of an OEM license for a version upgrade.
> If that was the motivation, there's a far more direct option available. Just charge for the upgrade.
That's a far more direct option, which also largely doesn't work. Corporate IT doesn't like doing in-place major OS upgrades. Consumers just plain won't, unless it's free and easy.
Sure, let's say that's true. The obvious implication is that these users actually don't care about whether they're running Windows 11 or not, and thus the Windows 11 TPM requirement is utterly irrelevant in their decision to buy a new computer.
I don't see how this supports the theory that this is all about revenue from Windows OEM licenses from forced hardware upgrades.
The theory, as I understand it, is that the wider ecosystem of OEMs is better at selling new hardware than Microsoft alone is at selling Windows upgrades. The users don't care what makes new hardware "new hardware", just that a dozen different companies are telling them that "new hardware" is exciting to buy for the holidays and "more secure" and "better". The TPM requirement on paper is an easy shibboleth for "more secure", so an easy thing to sell through the multi-channel telephone game of OEMs to ad companies to retail stores to mainstream zeitgeist. They don't have to just take Microsoft's word that Windows 11 is "better", they have "word on the street" and their pal who works "Geek Squad" at Best Buy and all those HP commercials on TV telling them they need a new Windows 11 machine for "more secure" hardware.
(I think it is gross that this is how Microsoft and the PC OEMs think is the best way to increase revenue together, but I think there's enough evidence that this theory is relatively accurate portrait of one of the factors for why Windows 11 is the way that it is.)
> Sure, let's say that's true. The obvious implication is that these users actually don't care about whether they're running Windows 11 or not, and thus the Windows 11 TPM requirement is utterly irrelevant in their decision to buy a new computer.
> I don't see how this supports the theory that this is all about revenue from Windows OEM licenses from forced hardware upgrades.
what on earth makes you think that "what the users actually don't [or do] care about" has any effect on what corporate IT does with their users' devices?
do you think corporate IT is going to say "oh ok" when a user says "i don't want to upgrade to Windows 11 or a laptop that has TPM"
Good grief. The GP was the one claiming that corporate customers don't like doing in-place major OS upgrades. I'm just accepting that assertion for the sake of argument, because it seems obvious that it will not have the effect that the GP claims.
But it seems that you're disagreeing with the GP. So let's say for the sake of argument that you're right about that. Just what is your theory for how the Windows 11 TPM requirement is leading to more Windows licensing revenue?
They do sell some PCs, but their market share is very low, and I can't imagine it's a significant part of their revenue. They definitely wouldn't bother slowing down Windows 11 adoption to sell a few more Surface Books.
About 1.9% ($4.706 billion) of Microsoft's FY 2024 revenue was from devices "including Surface, HoloLens, and PC accessories" (and not including Xbox hardware).
About 9.5% ($23.244 billion) was from Windows "including Windows OEM licensing and other non-volume licensing of the Windows operating system; Windows Commercial, comprising volume licensing of the Windows operating system, Windows cloud services, and other Windows commercial offerings; patent licensing; and Windows Internet of Things."
Compared to FY 2023, devices revenue decreased 15% and Windows revenue increased 8%.
I don't think it's illegal for hardware partners to ask Microsoft to give users reasons to buy new hardware. And of course they do this, they always have. The Wintel alliance has always been a symbiotic relationship between Microsoft and the hardware OEMs:
- Hardware guys make cool new hardware that incentivizes PC sales.
- Windows guys add driver and OS support in a timely manner so apps can utilize it easily.
And sometimes the other way around:
- Windows guys add some cool new feature that incentivizes PC sales.
- Hardware guys drive down component costs to compensate for the OS getting bigger and slower.
The problem for the PC industry is that in the last ~15 years or so this virtuous circle has broken down. Outside of Apple the hardware guys stopped coming up with cool new features that would shift units outside of gaming GPU upgrades, and gaming has anyway been dominated by consoles for a long time exactly because they have hardware DRM that works so game developers prefer it (also gamers when they want multiplayer without wallhackers). Intel struggled and AMD didn't really pick up the slack in any major way. Even Apple has struggled here - other than their proprietary CPU designs and rolling back some Ive-isms by adding more ports again, a modern MacBook isn't substantially different than the models they were selling years ago.
So that leaves the software guys to drive sales. Unfortunately for the PC OEMs Microsoft has well and truly run out of steam here. Their best people all left the Windows team years ago, and Windows isn't even a top level division anymore, being weirdly split between the Office and Azure teams.
A big part of the stagnation is driven by the web. Nobody writes Windows apps anymore except games, so there's no progress to be had by adding new Windows APIs outside of DirectX. Meanwhile the web guys are shooting the PC industry in the face with a policy of never adding features unless it's supported on every piece of hardware from every vendor, more or less, which makes competitive differentiation impossible, so nobody even tries anymore. There is no web equivalent of a driver since the Netscape plugin API was killed. They also move incredibly slowly due to the desire to sandbox everything. In the 90s the success of Windows was driven by some wizard-level hackers but as PC hardware matured clever tricks stopped being an important differentiator, and monopoly profits made them fat and lazy. It's clear that Nadella has zero confidence in the Windows org(s) ability to execute, hence why in the post-Ballmer years the rest of Microsoft has systematically divorced itself from them.
So - no hardware innovation thanks to the web, no major CPU upgrades thanks to Intel/AMD, no software innovation thanks to Microsoft. The PC industry is stagnant and desperate. What have they got left? Well, they have TPMs (really, TPM v2 because TPM v1 was kinda botched). And Windows doesn't really need it, but if Microsoft ties Windows upgrades to TPMv2 they can use the treadmill of security/support expiring on Win10 to drive one last round of hardware replacements that can give the industry an injection of revenue that can then maybe be spent on finding new hardware features to drive upgrades, seeing as Microsoft can no longer do it.
There's nothing illegal in any of this - nobody is price setting and it's not much different to prior eras when new Windows versions required more RAM.
File deduplication would reduce disk space usage by 40% on a typical consumer laptop, and works well in Windows Server. The reason it is not enabled in client windows is because storage sells.
Strategies change over time, including Microsoft's. TPM was previously envisioned as a broader physical storage for secrets, such as virtual smart cards. Microsoft no longer likes virtual smart cards, but TPM is still used for storing data for measured boot attestation. Also, at the time Microsoft was attempting to broaden support for TPM where it is restricted, such as China, which does not allow foreign TPM chips.
I'm embarrassed to admit that I don't actually understand what a TPM does. My vague and probably incorrect impression is that it performs some sort of encrypted verification of firmware or hardware modules? Can anyone expand on what this does? My impression would be that this is not useful for most users, and would mostly be a concern in industrial espionage situations. I have no confidence that I'm correct here.
It's a secure storage spot for crypto keys and a place to perform crypto operations for things like BitLocker and validating the device or OS for secure boot. If you know of the Apple Secure Enclave, it's a more generic version of that: a place from which even the device vendor (in theory; who knows what techniques the secret squirrels of the world have hidden away) cannot extract the actual key material, and can only request operations performed using that material.
The simplest and most obvious use-case is allowing you to encrypt your hard drive using a key stored in tamper-resistant hardware rather than having to rely on the user to select a passphrase complex enough to resist offline brute force attacks.
Oh, that's interesting. So in the TPM case, I could have an encrypted volume without needing a password? And if I removed that hard drive from the computer, there would be no way to recover it? But from the user's perspective, it would be transparent and they might not even know it's encrypted?
Yes, that's very common. In Windows 11 Pro (not sure about other editions) you can enable BitLocker and turn on auto unlock with no PIN. Though if someone steals the whole PC I'm not sure how effective that is. With a PIN set the TPM will enforce rate limiting to prevent brute force attacks, which should be more effective in that scenario. Most modern phones do something similar: user data is encrypted with a TPM key accessed using your lock screen code on boot-up.
It’s just a little cpu and some nonvolatile memory running a program. You can send it messages, and it will send back replies, but you cannot control which program is running on it. Of course this is vague enough that it could implement almost anything you want.
What makes it a TPM is the protocol it answers to. The TPM has a hardware RNG, and you can just ask it for some random numbers. That’s very simple. You can have it create encryption keys for you, since those are primarily just random numbers. You can ask it to _store_ a key for you, to be released to anyone who asks for it provided the TPM is in a certain state. What is this state? This is the really interesting part of the TPM.
The TPM has a number of registers that start off empty when the computer boots. At any point any program running on the computer can send a message to the TPM that asks it to incorporate an input into one of these registers. The input is a number, and the new value of the register is basically just the hash of the current value of the register and the new input.
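In rough Python terms, the extend operation works like this (a conceptual sketch of TPM 2.0 PCR semantics, using SHA-256):

    import hashlib

    def extend(pcr_value: bytes, measurement: bytes) -> bytes:
        # A PCR can only be folded forward: new = H(old || H(measurement)).
        # There is no command to set it to an arbitrary value.
        digest = hashlib.sha256(measurement).digest()
        return hashlib.sha256(pcr_value + digest).digest()

    pcr = bytes(32)                        # PCRs start out as all zeros at boot
    pcr = extend(pcr, b"firmware image")
    pcr = extend(pcr, b"bootloader")
    pcr = extend(pcr, b"kernel + command line")
    print(pcr.hex())                       # same inputs, same order => same value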
If the BIOS/UEFI computes a hash of its own code plus the bootloader’s code and measures that into a register on the TPM then the bootloader could check the TPM to make sure that it hasn’t been tampered with before it boots. It’s easier though if the bootloader hashes the kernel (and the kernel command line) that it’s going to run and measures that into the same register. The kernel can then hash the initial ram disk and measure that in. At each step of the process we can measure the next important part of the OS and incorporate its value into the same register and at the very end we will have a number. If that number is the same every time we boot up the computer then we know that the computer and the software have not been tampered with. We can even send that number off over the network as part of a Remote Attestation protocol. You might have all the laptops you supply to your employees do this so that you can know that they haven’t been tampered with. Or all of your cloud instances could do this for the same reason. (Of course the exact number that the TPM ends up storing changes after every OS upgrade, and you need to have some way of knowing what numbers to expect, so this is a fair amount of work.) Remote Attestation is not really of any use to the average consumer, but reliably detecting a hacked OS would be.
Going back to encryption keys, you could store the encryption key for your home directory in the TPM, locked to a specific value of a specific register. You would then not be able to unlock your home directory if the computer has been tampered with. An attacker who boots off of a USB drive can’t possibly arrange for the same value to end up in the TPM, even assuming that they know what value is required. It will do them no good to take the encrypted disk out of the computer and put it in another one, because the key doesn’t go with it. Rubber hose cryptography isn’t useful either, even if there is also a password for your account. This should be quite valuable to many, if perhaps not all, users.
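Continuing the sketch above, the logic is roughly the following (illustration only; a real TPM enforces this internally through a PCR policy on the sealed object rather than handing the comparison to software):

    # Release the home-directory key only if the chain of measurements produced
    # the same PCR value that was recorded when the key was sealed.
    def release_home_dir_key(stored_key: bytes, current_pcr: bytes, enrolled_pcr: bytes) -> bytes:
        if current_pcr != enrolled_pcr:
            # Tampered firmware, a modified kernel, or booting from a USB stick
            # all produce a different PCR value, so the key stays locked away.
            raise PermissionError("boot measurements do not match; key not released")
        return stored_key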
The TPM is a great thing, from Microsoft's perspective.
Because Microsoft have the Secure Boot code signing keys. And none of their users expect a "free software philosophy" that lets them use their own modified kernel, or DKMS to build new copies of kernel modules on demand - so you don't have to make users jump through any "machine owner key" hoops.
And a lot of your customers are big corporations who barely trust their own employees - and inexperienced users for whom forgotten passwords and suchlike are a big problem.
With the TPM, that corporation's shared PC at the reception desk can have an encrypted disk without all the receptionists needing to know the password, only their own passwords.
With the TPM you can remotely force a reboot to install updates, and the computer will fully boot afterwards - not get stuck at a disk encryption prompt. Ideal if your corporate work-from-home policy is for employees to remote desktop on a PC under their desk.
With the TPM, the PC can boot, unlock the disk and join wifi before any passwords have been entered - so a corporation's employees only need to remember their windows password, and if they forget it, helpdesk can reset it remotely. It's great for the user too, who doesn't lose their non-backed-up data.
With the TPM you can have a short, weak passcode to unlock your PC, without worrying about brute force attacks. That's great if you want a cell-phone-style experience - or if you find long passwords an inconvenience, rather than a badge of honour.
With the TPM a corporation can give a laptop to a service engineer - who'd really like to install some games to play when he's stuck in a hotel overnight for a service call, and who has unsupervised physical access - secure in the knowledge that it's very difficult for him to install unapproved software.
For a corporation that wants hardware-bound keys, the TPM is superior to things like Yubikeys, precisely because of its inflexibility. Why give people a second factor that keeps working when they move PCs and that's compatible with different platforms, if you never want them to move PCs or change platforms without going through you?
It just so happens that the majority of these only benefit large corporations and forgetful users, while most Linux users are quite happy remembering long unique disk encryption passwords thanks very much.
> while most Linux users are quite happy remembering long unique disk encryption passwords thanks very much.
Which brings something up: how do you get back in if you suffer a traumatic brain injury or something like that? I feel like a lot of software assumes the operator can do things like remember unique passwords for a long time.
Sure, I can do that NOW, but will I still be able to in my seventies?
Well, you could write down your password and give it to a trusted friend, a lawyer, or whatever so people can get into your documents if the worst should ever happen.
Personally I choose not to do that. My girlfriend sent those nude photos to me, not to my heirs or the executor of my estate. It's impossible to "get back in" without the password, and that's how it's meant to be. Of course if you've got no sexy photos, and lots of treasured photos of your family growing up, you might feel differently!
Yubikeys offer PINs and passwords, a physical user-presence button, fingerprint sensors, and NFC. You can use one key on different PCs, deal with PC hardware failures by moving the key, deal with key failures with a backup key, and it's compatible with Windows, Linux, OS X, Android and iPhone.
So they're a heck of a lot more flexible.
But in a corporate environment, you might not give a shit about Linux support, and you might think it's better if the user can't unplug the key and plug it into another PC, because corporate workers should only connect to corporate systems with their corporate-issued laptops, and corporate helpdesk will sort out any hardware problems.
Microsoft is just trying to match features with Apple, which does these sorts of things with the T2 chip. Home users probably don't care that much, but corporate users do.
That said, the root of all DRM is not the TPM or the GPU or whatever... it is Hollywood.
Devices with a dTPM were released in 2006, and BitLocker, which leverages the dTPM, was released with Windows Vista. Corporations have been using BitLocker with a TPM for nearly two decades at this point.
You're referring to first usage but I think the above is about first guarantee of what ALL products in the platform will have. Corporate purchases or BYOD, you can assume an Apple product has a reasonably secure way of storing the user's VPN key or whatever.
I'm convinced Microsoft is prepping to make Windows as locked down as the Xbox, so that they can have final approval over apps that run on the platform and skim the top off app sales.
Apple has shown that the game console model can work for non-gaming software, and Microsoft wants in on that third-party app cheddar.
My guess is that the b2b sales of Windows outnumber b2c, if not in volume then certainly in revenue.
Suddenly, enforcing company security policies centrally - without the client (laptop) being able to change them and still attest successfully when connecting to the corporate VPN - becomes a feature.
After all, it's not your computer, it's the company's.
I think Intune already uses the TPM for that kind of stuff, so "install this before we let you into Outlook web, and also we'll check you're not a year behind on Windows updates" is a thing.
Requiring TPM can actually benefit multiplayer video games because it introduces a secure way to identify hardware being used by cheaters. Right now everything being used by games is easily spoofed by cheats so cheaters just need to get a new account to continue cheating after being banned.
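One way this could work - a hypothetical sketch, where the hard part (verifying the endorsement key certificate chain back to the TPM vendor via remote attestation) is stubbed out as a boolean - is to ban the hash of the TPM's unique, factory-provisioned endorsement key rather than anything the OS reports about itself:

    import hashlib

    def hardware_ban_id(ek_public_key_der: bytes) -> str:
        # Every TPM ships with a unique Endorsement Key whose certificate is
        # signed by the TPM vendor, so its hash makes a stable per-machine ID
        # that survives OS reinstalls and fresh game accounts.
        return hashlib.sha256(ek_public_key_der).hexdigest()

    banned_ids = {hardware_ban_id(b"ek-of-a-banned-machine")}

    def may_play(ek_public_key_der: bytes, attestation_ok: bool) -> bool:
        # attestation_ok stands in for verifying the vendor certificate chain
        # and proving the key lives in a real TPM; without that step the ID
        # would be trivially spoofable, which is exactly today's situation.
        return attestation_ok and hardware_ban_id(ek_public_key_der) not in banned_ids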
Anti-cheat software is usually blocking playing in VMs or on Linux anyway.
Some monitors [1] have cheats like that built in now, too. They are much more limited than what cheats do today because they only have access to information visible on your screen (can't see other players through walls).
There are cheats that give you more information than you should have. These typically require access to the game process's memory space.
If you're cheating with a video capture card, it likely means you're letting a program rewrite your inputs to more accurately target player models. Do the same thing on the gaming machine itself via screen capture and you will likely be banned. With a capture card, a separate computer can process the video feed - e.g. finding enemy locations by searching for specific colours - and then write into a virtual USB mouse on the gaming rig to keep the player's crosshair on the enemy model. I'm not sure about specifics, but this kind of cheat is almost undetectable; it's really only mitigated by the cost and effort involved in doing it.
Players can take additional precautions on top of this, like only activating the aim assist while the shoot button is pressed, to make it entirely undetectable.
Encrypted monitors can be countered by a high-quality video camera mounted on a tripod behind your chair, or on a wall or ceiling.
Expensive, yes, but at that point you're already spending real money on a second computer with a GPU to do computer vision on the game video stream, so...
HDFury devices allow stripping of HDCP 2.2, and the vast majority of users currently don't have HDCP 2.3-compatible monitors/TVs, so that's not an option yet.
Anti-cheat is a lousy cover for something that's going to be much more lucrative when used to correlate the accounts of journalists and whistleblowers such that they can be silenced. It's censorship tech.
This here is a stronger motivator than any other mentioned in all the other comments posted. And "journalist" will include anyone who has the "wrong" memes on their machine.
This only matters for a tiny minority of video games, and even then only a small minority of multiplayer ones: for instance, it's not something I'm worried about if I only play couch co-op / split-screen multiplayer with friends.
Customers are typically unhappy when Microsoft refuses to fix critical bugs that only arise when running Windows on older hardware.
To the average user, "Windows installs without error and hardware appears to work" = "Microsoft supports running Windows on this hardware", even if the hardware is EOL and requires drivers that haven't been updated since Windows Vista.
Windows 10 to Windows 11 upgrades are free. You know what's not free? The Windows license on a brand new computer if it's bundled with Windows. And here's a friendly reminder that the vast majority of users don't know how to build their own computer and install an operating system, even if it truly has been made extremely simple nowadays.
As the article points out, the TPM is not in a good place, architecturally, to use for DRM: there's no path from the TPM to the screen that's not under OS (and thereby user) control.
Currently, no. But once (undetectable) OS modification is no longer possible, making the decrypted media unreadable is just a few API restrictions away.
In Android phones for example you cannot screenshot banking apps. And if you root (modify the OS of) your phone, banking apps refuse to work.
However, for the question at hand, that's irrelevant: a better (for DRM) solution exists today, and they're already using it.
I'm not saying that the TPM is incapable of being abused by manufacturers and OS authors, but the FSF really weakens their argument when they predicate it on something that's not actually true. Ex falso quodlibet (you may prove anything if you rely on a falsehood).
Hard disagree. All security requires a root of trust. If you don't have that, how can you ensure you're not running on a malicious hypervisor, that you haven't loaded any bad drivers, etc.?
You can only guess, and badly at that.
Because we don't have it, we get crap like kernel-level anti-cheat and various 'security' solutions made by companies of dubious reputation and technical ability, just because you refused to trust Microsoft.
And even if these companies are somehow not malicious, and can be trusted, they still often compromise the stability and security of the OS.
The amount of crap Riot's anti-cheat and CrowdStrike have caused is well documented.
It's the computer security equivalent of not trusting Big Pharma, and taking a random assortment of herbal medicine coming from god knows where, and containing god knows what.
Movie studios wanted a way of securing the content between the time the AACS encryption was removed and the HDCP encryption took over. Once the AACS layer was decrypted, the encoded movie was sitting in main memory and could be intercepted by any other application. The solution was to re-encrypt the data once it was pulled off the disc (I'm not kidding). Encryption would be done by the application; the graphics driver would pass the encrypted data along to the GPU, which would then decrypt and decode it in hardware, and the entire framebuffer would be HDCP-encrypted by the GPU before being sent out over DVI/HDMI.
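Roughly this data flow, as a toy Python sketch - the XOR "cipher" and all the names are stand-ins purely to show where the content is and isn't in the clear, not any real protected-path API:

    from itertools import cycle

    def xor(data: bytes, key: bytes) -> bytes:
        # Stand-in cipher so the sketch runs; real paths use real crypto.
        return bytes(a ^ b for a, b in zip(data, cycle(key)))

    SESSION_KEY = b"app-to-gpu key"   # negotiated between player app and GPU
    HDCP_KEY    = b"link key"         # negotiated between GPU and display

    # 1. The player app strips AACS and immediately re-encrypts for the GPU,
    #    so main memory only ever holds ciphertext.
    def player_app(disc_data: bytes, aacs_key: bytes) -> bytes:
        return xor(xor(disc_data, aacs_key), SESSION_KEY)

    # 2. The graphics driver just shuttles ciphertext; it can't read it either.
    def graphics_driver(buf: bytes) -> bytes:
        return buf

    # 3. The GPU decrypts (decoding omitted here) and HDCP-encrypts the frames
    #    before they leave over DVI/HDMI.
    def gpu(buf: bytes) -> bytes:
        frames = xor(buf, SESSION_KEY)    # plaintext exists only inside the GPU
        return xor(frames, HDCP_KEY)

    disc = xor(b"movie bitstream", b"aacs key")     # what's on the disc
    wire = gpu(graphics_driver(player_app(disc, b"aacs key")))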
The top comment pretty much sums up everything that’s wrong with DRM:
> Lest one get the impression that hardware DRM fares any better than software: Even 4K/HDR versions of streaming media start making the rounds on pirate sites within a day or two of release.
> As usual DRM fails to prevent piracy while hurting the experience of paying customers.
I remember the idea that DRM is not about controlling the viewers directly, but about controlling the makers of playback devices (both hardware, as in GPUs and TVs, and purely software ones). The point is not to make the bits uncopyable at all, but to prevent the makers of things like Roku or Chrome from making access too easy - skipping ads, let alone downloading.
Most viewers are not computer-savvy, even if they spend every day in an office facing a computer screen. If 90% of the audience doesn't know or bother to go beyond the legal distribution channels, and can't simply download the high-res media in one click, the DRM has worked.
It suffices to make pirating inconvenient enough for the uninitiated, and to let the advanced and determined minority pirate away - of course always threatened and stigmatized, to keep their operations low-key. A small number of pirates, imho, only improves the profits, because they brag about having just seen the new hot thing in all its glory, and thus induce FOMO in their audience.
Of course the legally-buying, technology-naive audience is inconvenienced. But they know no better, and the whole point of control is, well, making people submit to what they'd rather not, isn't it?
The DRM doesn’t really make pirating any more inconvenient for the (pirate) consumers (they get e.g. an MKV file without any of it). If there was no DRM on the legal platforms, it would be easier for pirates to release new stuff, but just marginally so.
If there was no DRM, ordinary viewers would still choose Netflix over torrents, and perhaps some more tech-savvy users would choose it as well (since many do want to support film makers, but are opposed to DRM). It would still be as hard to create a “pirate Netflix” as it is now, because of legal threats and because it’s tricky to monetize it.
DRM literally serves no purpose outside of some corporate politics bullshit.
No, with DRM, you can't make and sell a player that lets users skip ads, ignore regional limitations, etc. If you do, your key is revoked.
Pirating high-res videos already requires special hardware to remove HDCP. It's cheap now because HDCP is notoriously weak. A future standard may start needing a $500 device, or even a $5000 one.
Is it still relevant nowadays? I thought most people just went to online streaming, and you don’t need DRM to enforce all that stuff there.
> Pirating high-res videos already requires special hardware to remove HDCP.
That is true, and a new standard might make it harder for a few years, but:
1. The switch won't happen overnight, which means pirates could keep stripping the current HDCP while working on breaking the new one.
2. It’s possible to make piracy prohibitively expensive, but the standard would have to be really really complex. Like, “putting hidden watermarks with display serial number on the stream and revoking keys just for that display” kind of complex. I don’t think it’s feasible.
Only one of the purposes of DRM is to prevent piracy. There are others.
One of which is to prevent mainstream media player manufacturers from making a hardware or software player which can skip region coding/studio tags/anti-piracy tags/trailers/random adverts. Or even from having a generic "skip 30s" feature.
You want to legitimately be able to play our stuff so you can sell millions of units of your player to unsophisticated consumers? Agree to these terms, and this fee schedule, or you don't get a key to play them. Fuck us over, and we'll revoke your key. Lol.