After reading the article, and especially the remarks about this engine being copy-pasted from the Xbox DRM engine, does anyone still believe that Pluton, also copy-pasted from the Xbox, is about end user security? And not totally about MS finally having enforceable DRM on PCs?
Oh, and by the way, Pluton is now on the latest batch of Intel laptop chips, and has been on AMD's for a while. How soon until Windows requires it?
>does anyone still believe that Pluton, also copy-pasted from the Xbox, is about end user security?
I never did. The worst part is explaining it to people drinking the MS Kool-Aid. I'm an MS admin, so people at work love Win11, Intune, and all that max-lockdown shit. To me that's not what Windows is about; for me Windows is excellent because of the admin tools and backwards compatibility. But hey, that's just me.
Pluton will be another TPM thing: introduce it, wait 5 years, then mandate it. They have time.
The TPM end game is to have identity tied to a device on PCs, just like the monopolies already have on Android and iOS.
You know how Google and Apple dropped actual TOTP second factors for their own accounts and force you to sign in on another device to confirm sign-ins on new devices? Same thing.
Hell, even technical people can't figure it out. Everyone complains that it's fragile because what if their phone breaks, and those who think they know better think the answer is the dozen one-time-use emergency codes.
It's not their fault, though. Every website or service that offers TOTP, and most user-facing apps like Google Authenticator, scrupulously avoid telling you to save the seed value in the initial setup QR code.
That short random string is all you need to have working TOTP on as many different devices as you want, and to set up a new one any time you want; it's nothing but a simple, static, never-changing secret, exactly like a password.
You can wake up naked in a foreign country and be all set back up in a few minutes, without having to re-set up any sites or anything like that.
That is, IF you have previously saved all the TOTP initial-setup seed values right along with the passwords for those same accounts. If not, you can go do it right now.
When you enable 2FA on some site and it shows you a QR code (or however it gives you the code; it might be a regular URL, and sometimes they even display the string in plain text), save that string. If it's a QR code, save the QR code and read it with a regular QR code reader (probably just your camera app these days); it will contain a string, or a URL with the string in the query string.
That string is not just one-time use. You can save it and enter it into TOTP apps all over the place, all day, for the next n years.
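To make it concrete, here's a minimal sketch (Python, standard library only, with a made-up example URI of the kind a setup QR code decodes to) showing that the seed plus the current time is the whole trick:

    import base64, hashlib, hmac, struct, time
    from urllib.parse import urlparse, parse_qs

    # Made-up example of what a setup QR code typically decodes to.
    uri = "otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example"
    seed = parse_qs(urlparse(uri).query)["secret"][0]

    def totp(seed_b32: str, digits: int = 6, period: int = 30) -> str:
        # RFC 6238: HMAC-SHA1 over the current 30-second interval counter.
        key = base64.b32decode(seed_b32.upper() + "=" * (-len(seed_b32) % 8))
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
        return str(code).zfill(digits)

    print(totp(seed))  # the same 6 digits an authenticator app shows right now

Any app that knows that seed produces identical codes; there's no enrollment, no handshake, no per-device state.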
KeePass apps all support it now, for one example. You could save the string in a notes field, but they have a dedicated TOTP field now too. You paste it in, and now that password entry not only stores your name and password for that site, it stores the TOTP seed for setting up TOTP apps, and it also displays the current TOTP time code just the same way a TOTP app like Google Authenticator does.
It's all stored in the KeePass DB file just like the normal passwords, so to set up a new device, all you need is access to any copy of the KeePass DB file. Install any KeePass app like KeePassXC on a laptop, load the DB, and there are your working current TOTP codes for all sites. If you want a more convenient dedicated TOTP app rather than having to dive into KeePass, just copy the TOTP seed from KeePass into GNOME Authenticator or whatever. The different apps have different ways to supply the string when not taking a picture directly with the camera.

Some, like Google, hide it from direct access. Last time I used Google Authenticator, I think it had no usable export. It recently gained the ability to store the seeds in Google's cloud, but not in an ordinary Google Drive file that would be useful; it's just some internal magic whose only function is that, if you somehow manage to log in to your account on a new phone, it will pull the seeds down and start working on the new phone. It doesn't let you set up any other apps or devices, and Google has a copy of your seeds in a form they can read, even though you can't!
But the same seeds could be just as cloud-enabled by being inside a password manager DB, which is still sitting on a Google cloud server, but this time in a file that you own, and in a form that Google can't read but you can.
I'm a bit late, but FWIW Google Authenticator has a QR code export option: it generates a giant QR code (potentially multiple) containing all the accounts and secrets. It's designed for you to scan into Google Authenticator on another device, but you can also read the contents of that QR code yourself with various open source utilities to get the accounts and secrets (or just print a copy for a physical backup of them). Overall it's not a terrible way to go, though like you said, if you can save the original QR codes, that's a nicer way to do it.
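A rough sketch of what those utilities do, assuming the commonly documented otpauth-migration URI format (the function name is mine; fully decoding the payload needs the protobuf schema, which the open source tools ship):

    import base64
    from urllib.parse import urlparse, parse_qs

    def migration_payload(uri: str) -> bytes:
        # The export QR decodes to otpauth-migration://offline?data=<base64>.
        # The decoded bytes are a protobuf message listing each account's
        # name, issuer, and TOTP seed; this only extracts the raw blob.
        data = parse_qs(urlparse(uri).query)["data"][0]
        return base64.b64decode(data + "=" * (-len(data) % 4))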
It being a Win11 requirement. It failing and triggering Bitlocker on our machines. It's just shit :) No I don't have another solution. Let me complain.
Lenovo kept pushing UEFI updates through Windows Update on their shiny new X13s (with the Snapdragon and the Pluton chip in it), and every single update kept tripping Bitlocker.
FWIW, my old corpo HP would also trigger Bitlocker sometimes on random shit, such as upgrading the firmware of the docking station. But that was usually fixable either by unplugging USB devices while booting, or just trying many reboots until Bitlocker suddenly decided everything was OK.
People have been saying that for more than 10 years now, since the TPM was introduced.
Yet you can still install Linux on PCs sold with Windows, you can still install third party software on Windows not from a Store, you can still watch pirated movies downloaded from torrents.
You can even run an unregistered/unpaid version of Windows if you don't mind that it will not let you change the desktop background image.
Or you can recognize that app/game developers are starting to require Secure Boot enforcement if you want to continue to use their apps or play their games.
Let me tell you a secret: it's because the gamers are demanding that. The game companies couldn't care less if there are cheaters in the game, but it's the players who put huge pressure on the game companies to detect and ban cheaters.
Gamers don't want cheaters, but gamers also don't want malware. Some people won't care, others will. The real problem is that publishers don't give anybody a choice on this. They sneak these invasive anti-cheat measures into their games without asking, since they don't want to fragment their player base.
The reasonable, fair, common-sense pro-consumer thing to do is to split the online play in two: a non-anticheat server and an anti-cheat server. Players can opt-in to installing a rootkit/sharing their SSN/whatever if they want to play on the hardened server. This costs nothing, and makes all types of gamers happy.
But doing this has less upside for the publisher than forcing anti-cheat on everyone. The only risk is that they might get dragged through the mud by a handful of influencers peddling impotent rage to viewers who are just looking for background noise while sleepwalking on their Temu dopamine treadmill live service of the month.
> The reasonable, fair, common-sense pro-consumer thing to do is to split the online play in two: a non-anticheat server and an anti-cheat server. Players can opt-in to installing a rootkit/sharing their SSN/whatever if they want to play on the hardened server. This costs nothing, and makes all types of gamers happy.
This is a very good point! And I'd like to point out that there is an analogous problem in smurfing in online video games, with a corresponding solution, which is to require a semi-unique ID to play (e.g. a phone number which can only be tied to one account at a time, with a cool-off period when transferring between accounts). Valve does this for Dota 2, and smurfing is far, far less common than it is in League of Legends.
Some League players complain that they don't want to give their phone number to Riot (which is entirely reasonable given that it's a subsidiary of Tencent), but if enough people don't want that, then Riot could simply split the ranked queue into two: one where (soft, ie phone #) identity verification is required, and one where it isn't.
Riot won't do this, though, not because it wouldn't fix the problem (it would, as demonstrated by Valve), but because they profit from smurf accounts buying skins.
>but if enough people don't want that, then Riot could simply split the ranked queue into two: one where (soft, ie phone #) identity verification is required, and one where it isn't.
The phone number requirement is only there if you want to play Clash. Normal ranked play works flawlessly with no number.
The problem is also largely caused by publishers/developers wanting live service games instead of providing a complete product that users can then run themselves (with community-hosted servers). This makes the developer responsible for weeding out bad actors, and they will of course seek technical means rather than social means, which don't scale as cheaply.
> Let me tell you a secret: it's because the gamers are demanding that.
Citation needed.
Who are these gamers? I surely didn't ask for this, nor did any of the gamers I know, nor have I seen any such demand in gaming forums.
> The game companies couldn't care less if there are cheaters in the game, but it's the players which put huge pressure on the game companies to detect and ban cheaters.
The jump from this to "requiring TPM" is quite a long one.
Cheating in online games (especially ones that are free) is so absurdly rampant and disruptive that you can sell gamers just about anything if it can meaningfully deter cheaters. Every now and then a YouTuber will say “kernel-level anti-cheat is bad for [reasons]” and gamers will pretend to care about it until the video leaves the “For You” page.
I personally stopped playing CS because my friends started using an alt-launcher to avoid cheaters, which added a whole layer of complication that made the game undesirable. Ban waves aren't perfect, but in my limited experience cheaters weren't that rampant; in others' experience it became intolerable.
I haven't played Valorant, so I don't know about it, but what I can say is that other anti-cheats are definitely highly ineffective (VAC being one), with blatant cheaters going years without ever being caught.
Hell, blatant cheaters literally stream themselves cheating, and their own communities do not recognize the cheating till the streamer makes a mistake and selects the wrong scene. This also means that VAC-style methods of sending footage to random players are ineffective, as some streamers who are very obviously cheating do so in front of tens of thousands of people, and those people do not recognize the obvious cheating happening.
We also know game companies don't care about cheating, as Activision admitted in their lawsuit that they keep cheaters on a safe list so long as the cheaters have any semblance of a streaming audience.
It really doesn't even take that many viewers. Zemie, for example, is a straight-up cheater who runs a button-activated aimbot and wallhacks. He only averages a couple thousand viewers and is safe-listed by a number of game companies.
That's not the gamers asking, though. In this instance they're being taken advantage of because they have misaligned priorities, and they're being sold an over-the-top solution they don't need. You can still detect process injection, memory injection, sketchy inputs, HID fuckery, DRM cracking, host emulation and input macros without ever going kernel-level.
Truth be told, if the exploiter class of your game would even consider a kernel-level exploit, your game is fucked from the start. Seriously, go Google "valorant cheating tool" and your results page will get flooded with options. You cannot pretend it's entirely the audience's fault when there are axiomatically better ways to do anti-cheat that developers actively ignore.
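To illustrate the "sketchy inputs" point with a hypothetical sketch (thresholds and the helper name are invented; real systems combine many such signals server-side):

    import statistics

    def looks_automated(flick_to_fire_ms: list[float]) -> bool:
        # Humans are noisy; aimbots and macros are suspiciously consistent.
        if len(flick_to_fire_ms) < 20:
            return False  # not enough samples to judge fairly
        mean = statistics.mean(flick_to_fire_ms)
        stdev = statistics.stdev(flick_to_fire_ms)
        # Flag superhuman speed or superhuman consistency for human review.
        return mean < 100.0 or stdev < 5.0

None of this needs a kernel driver; it runs on input timings the server already sees.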
Go on Steam and look at the recent reviews for older but still popular FPS games. Gamers complain about cheaters constantly and will negatively review games because of it.
You're being disingenuous here, or just missing the point. The point being made was that gamers are demanding game developers stop cheaters... and that Secure Boot (and related ways to lock down the computer) is one of the primary tools they know to use to do that.
> The point being made was that gamers are demanding game developers stop cheaters... and that Secure Boot (and related ways to lock down the computer) is one of the primary tools they know to use to do that.
That's akin to saying that, since people want security on the street, mandatory strip searches as soon as you exit your home are fair game.
Asking for a result doesn't give a blank check for all the measures taken toward that result.
I agree, but it doesn't change the fact that it's one of the primary reasons they're doing it. And "strip searches on the street" may not happen, but "Stop and Frisk" certainly is/was. And it was very much done because people were complaining about crime and safety. And it was done regardless of whether or not it was right, or effective, or even legal.
You cannot "prevent" cheating; you can at best mitigate it. It's a balance.
There are plenty of ways to mitigate cheating in games, but the game industry is focusing on the ones where they don't bear the cost and only the customer does (and this view is in part due to the F2P model, where banning a cheater is useless, as it costs them nothing to create a new account).
Letting the game developer have complete control over, and spy on, the device playing the game is fine in a physical tournament where they provide the device, but it's insanity when it's the user's own device in their home.
> There is no technical way to prevent cheating in advance without secure boot.
I'm not really sure I buy this. I can't give a way that can guarantee no cheating, but I know, for example, that games like Genshin Impact run almost all the code (damage calculation etc.) server-side. Perhaps something that's an extension of GeForce Now might be the best "anti-cheat", technically speaking.
To run anti-cheat in that way, you need all game mechanics to be run server-side, and you need to not let the client ever know about something the player should not know - e.g. in a first-person shooter you need to run visibility and occlusion on the server too! Otherwise the cheating will take the form of seeing through walls and the like. This is going to boost the cost of the servers and probably any game subscription, and might lead to bandwidth or latency problems for players - just to avoid running any calculation that is relevant to game balance on player hardware.
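As a hypothetical sketch of what "run visibility on the server" means in practice (names are illustrative; line_of_sight stands in for a real occlusion query against the level geometry):

    def snapshot_for(player, entities, line_of_sight):
        # Before each network tick, drop entities this player cannot see,
        # so a compromised client has nothing left to wallhack with.
        return [
            {"id": e.id, "pos": e.pos}
            for e in entities
            if e.id == player.id or line_of_sight(player.pos, e.pos)
        ]

Running that occlusion query per player, per tick, is exactly the server cost being described.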
Well yeah, that's the correct way to run a server: don't send information you don't want the user to get.
But as you are pointing out, forcing intrusive client-side anti-cheat is cheaper, so this has nothing to do with preventing cheating and everything to do with reducing cost.
It's not just about cost. Theoretically yes, you shouldn't send information that you don't want users to get and abuse. However, in the context of games, this is not always possible because most games are realtime and need to tolerate network latency. There is no perfect solution - there will always be tradeoffs.
Ideally player A shouldn't be networked to player B if there is a wall between them, but what happens when they're at the edge of the wall? You don't want them to pop in, so you need some tolerance. But having that tolerance also allows cheaters to see players through walls near edges. Or your game design might require you to hear sounds from the other side of the wall (footsteps, gunshots, etc.), which allows cheats to infer what may be behind the wall better than a person could.
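A hypothetical sketch of that tolerance (offsets and margin invented for illustration): test visibility from points around the viewer's eye so a target streams in just before a corner peek; this margin is precisely the window a wallhack gets to exploit.

    # Check line of sight from small offsets around the viewer so targets
    # appear just before they become visible (no pop-in at wall edges).
    PEEK_OFFSETS = [(0.0, 0.0), (-0.5, 0.0), (0.5, 0.0), (0.0, 0.5)]

    def should_replicate(viewer_pos, target_pos, line_of_sight):
        return any(
            line_of_sight((viewer_pos[0] + dx, viewer_pos[1] + dy), target_pos)
            for dx, dy in PEEK_OFFSETS
        )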
> Or your game design might require you to hear sounds from the other side of the wall (footsteps, gunshots, etc.), which allows cheats to infer what may be behind the wall better than a person could.
Yes, and you cannot prevent this except at in-person tournaments.
Any output sent toward the player, even a faint audio cue, could be analyzed and used to trigger an action or display an overlay on the screen, and no amount of kernel-level stuff will prevent that, as you can do this outside the computer running the game.
The end state of your argument is the game runs entirely on hosted hardware and you pay for a license to stream the final rendered output to your monitor. This is already happening. Soon games won’t be able to be “bought” at all, you’ll just pay the server a number of dollars per hour for the privilege of them letting you use their hardware.
If all surfaces are fully opaque, maybe. The second particle effects and volumetric effects and all sorts of advanced techniques play a role in actual gameplay, no. And that’s only for this one type of cheating.
Back in my day we all played on private, community-run servers where you could easily vote to kick/ban folks, the server owner was your buddy, or you played with people you trust.
Now everything is matchmaking, private servers, live service, and that sense of community is gone.
It's very hard to gather full teams (usually 10 people) in small communities. Public matchmaking gives you the opportunity to start a game within a minute of clicking "play", regardless of how many people you have at hand right now.
Small communities still exist, it's just that vacant places are now filled with strangers.
A lot of things happened: 6th-gen consoles started a new way of using online games (no keyboard, no third-party chat/voice, no group chat out of game, no private servers), then the industry pivoted away from private servers to have more control over their games, then the whole F2P economy and then GaaS took any remaining agency out of players' hands.
The goalpost just needs to be moved further than is economically interesting for cheaters in general to reach.
Perhaps Secure Boot by itself isn't enough, but I would imagine it would be a relatively large bump when combined with a kernel-level anti-cheat. I presume such anti-cheats would, e.g., block debugger access to game memory, block capturing the game's screen contents, and block sending it artificial inputs.
What vectors remain? I guess at least finding bugs in the game, network traffic analysis, attempting MitM, capturing or even modifying actual data in the DRAM chips, and using USB devices controlled by an external device that sees the game via a camera or HDMI capture. All of these can be plugged or require big efforts to make use of.
>Perhaps secure boot by itself isn't enough, but I would imagine it would be a relatively large bump, when combined with a kernel-level anti-cheat
VALORANT also adds TPM to the mix alongside SB and a kernel AC, and yet it is still trivially easy to cheat in as long as you have a driver you can use. Granted, it needs to be signed (i.e. financially unreachable for a big part of the community), but if you're stubborn enough...
The real solution is letting players host their own servers and build their own communities of players they trust, but corps don't like giving that kind of freedom to users.
Gamers aren't demanding this. There are tons of ways to detect cheaters, the most effective one being human moderation. But no, companies won't do MaNuAl WoRk because it doesn't sCaLe, even though they have more than enough cash in the bank.
This absolutely happens already. The problem with finding statistical outliers is that plenty of legitimate players are outliers too. And if you're banning/segregating players for being outliers, you get a very angry player base.
Riot has a pretty in-depth blog post about their anti-cheat systems; they've had years to mature them on some of the most demanding competitive gaming platforms ever made. Requiring players to install kernel anti-cheat was very far down the list of possible solutions, but that's what it came to. It was either this or stop being free to play.
The server is all-seeing. If there is no way for the server to discriminate a cheater from any other player, then no player can possibly know there's a cheater on the server; thus either complaining about cheating is irrational, or the server-side detection is severely flawed.
> The server is all-seeing. If there is no way for the server to discriminate a cheater from any other player, then no player can possibly know there's a cheater on the server; thus either complaining about cheating is irrational, or the server-side detection is severely flawed.
It's impossible to tell in-game if a baseball player is using steroids, yet there's a laundry list of banned substances and players who got banned for taking them because the MLB believes it gives them an unfair advantage. It's called competitive integrity.
Since it sounds like you don't play games, at least not competitively, I'll clarify that "cheating" in this case isn't the obvious stuff like "my gun does 100x damage" or "I move around at 100mph" or "I'm using custom player models with big spikes so I know everyone's location" that you would've seen on public Counter-Strike 1.6 servers in 2002. Cheating is aim assistance that nudges your cursor to compensate for spray patterns in CS, it's automatic DPs and throw breaks in Street Fighter 6 that are just at the threshold of human reaction timing, it's firing off skillshots in League of Legends with an overlay that says if it's going to kill the enemy player or not. All of this stuff is doable by a sufficiently skilled/lucky human, but not with the level of consistency you get from cheating.
> It's impossible to tell in-game if a baseball player is using steroids, yet there's a laundry list of banned substances and players who got banned for taking them because the MLB believes it gives them an unfair advantage. It's called competitive integrity.
That's about meatspace, not video games, but we could go there and say caffeine or Adderall use is cheating, thus making anti-cheat a little more invasive…
And there's another difference: you're referring to professional sport. I have no problem with invasive anti-cheat for professional gamers; even better if the gaming device is provided by the tournament organization.
But we're talking about anti-cheat used for all players, akin to asking people playing catch in their garden, or playing baseball for fun at the local park, to give a blood sample for a drug test.
> All of this stuff is doable by a sufficiently skilled/lucky human, but not with the level of consistency you get from cheating.
That's the point: there's no difference for the other players between playing against a cheater and playing against a better player. Any Elo-based matchmaking will solve this; cheaters will end up playing against each other or against very skilled players.
You could argue that they could create new accounts or purposely cripple their Elo rating, but that is the exact same problem as smurfing.
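(For reference, the standard Elo update, with the constant K and many variants differing per game, is:

    E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad R_A' = R_A + K\,(S_A - E_A)

where S_A is 1 for a win; someone who keeps winning more than expected keeps climbing until the matchmaker pairs them with opponents who can beat them.)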
Many games have ranked ladders now, which are taken fairly seriously. Success at high levels of ladder play often translates into career opportunities, especially in League of Legends.
> Any Elo-based matchmaking will solve this; cheaters will end up playing against each other or against very skilled players.
Well, first, you're wrong, because cheating only makes them good at one part of the game, not every part of the game. In League of Legends, for example, a scripting Xerath or Karthus who hits every skillshot is going to win the laning phase hard. However, scripting isn't going to help if they have bad macro and end up caught out in the middle of the game, causing their team to lose. Most cheaters don't end up at the top of the ladder; they end up firmly in the upper-middle.
Secondly, you're basically saying "cheating is OK because they'll end up at the top of the ladder." Do you not realize how ridiculous this sounds?
Third, ranked and competition aside, playing against someone who's cheating isn't fun, even if you end up winning because they make mistakes that their cheats can't help them with.
You don't play competitive games, that's fine, but a lot of people do and they demand more competitive integrity than casual players.
> You don't play competitive games, that's fine, but a lot of people do and they demand more competitive integrity than casual players.
One little difference: I don't play competitive games with complete strangers on company-run servers.
I've played competitively on community-run servers, with players being screened by other players and the community able to regulate itself (ban or unban players).
The problem space is vastly different; you don't need intrusive ring-0 anti-cheat for this.
The whole kernel-level anti-cheat thing is a poor solution to a problem the developers made for themselves: they wanted to be the ones in charge of the game and the servers, so they needed to slash the need for human moderation. They also wanted to create a single pool of players and didn't want the community to split up and play how it wants.
> One little difference: I don't play competitive games with complete strangers on company-run servers.
People don't consider playing around with your friends to be competitive. You don't get to choose who else is competing in the game or what strategies they use. This is just an area that you are clearly not familiar with.
> The whole kernel-level anti-cheat thing is a poor solution to a problem the developers made for themselves: they wanted to be the ones in charge of the game and the servers, so they needed to slash the need for human moderation. They also wanted to create a single pool of players and didn't want the community to split up and play how it wants.
This wasn't self-made by the developer, it was demanded by the players. Competitive games have almost exclusively moved to online, skill-based matchmaking with a ladder system because that's what players want.
> People don't consider playing around with your friends to be competitive.
I didn't say friends. Please don't modify my argument to refute it.
> You don't get to choose who else is competing in the game or what strategies they use.
I, as a single player, no; but we, as a community, yes. It's the same for any game or sport: different groups run different tournaments with different rules about who plays and how.
> This is just an area that you are clearly not familiar with.
Please refrain from ad hominem, especially when you have no idea who you are talking to.
> This wasn't self-made by the developer, it was demanded by the players.
I don't know any players who asked for the disappearance of community-run servers or human moderation, nor any who wanted to lose agency over the way they play.
I'm not saying these players don't exist, but I don't make gross generalizations about players.
> Competitive games have almost exclusively moved to online, skill-based matchmaking with a ladder system because that's what players want.
They're not a hive mind; lots of them don't like matchmaking in any form, and even for the ones who wanted it, that doesn't mean developers have to remove other means of play, like server browsers and private servers.
You're basically ignoring the past 30+ years of the gaming and cheating industry. Everybody already logs behaviour, tries to find outliers, and has some systems to try to keep cheaters away from the general player pool. That's what gaming companies have been doing since at least the early Halo days. That has its own set of side effects, such as creating a horrible experience for the most talented and active players — also the ones most likely to stream and advocate for your game, to produce YouTube videos complaining about a bad experience, and to have a very influential profile in the community.
The state of game cheating has professionalized A LOT, it is extremely competitive and cheating companies produce extremely good quality tools compared to what we had 20 years ago. There is a lot of money to be made, we are at the point where you can just pay a cheap monthly subscription and you get access to actively maintained cheating tools. I know people working on the anti-cheat side, it is a really messy, highly dynamic (the bad actors are constantly adapting), complicated problem that isn't solved once and for all. We are far from the situation where just a few people are using some hacked-together software that will obviously be spotted as cheaters.
Game dev companies (at least US/European ones) have zero interest in developing or paying for kernel-level anti-cheat. That's a massive barrier of entry for the player base and they know this. It's also far from being cheap.
(Note: ignoring geopolitical factors, Chinese companies such as Tencent or Russian companies could definitely have interests in developing kernel-level anti-cheat for information gathering)
While there are solutions, I won't comment on Valorant; free-to-play games are a whole can of worms for which the companies have nobody but themselves to blame.
I will comment on a game I used to play, though: Escape from Tarkov. The game costs somewhere between $40 and $250+ plus tax, depending on what pack you buy. Banning cheaters in this game is literally a profit center: every time you ban a cheater and they re-buy the game, you make at least $40. The majority of cheating in the game was due to real-money trading; cheaters would make in-game millions quickly, sell them, get banned, and buy the game again at a profit.
The solution to this is brain-dead simple - more manual moderation (these cheaters are very obvious to spot). What the developers did instead just killed the game.
There are cheaters even on consoles, which are vastly more locked down than a PC.
Those technical shenanigans clearly aren't working; be ready to be disappointed if you thought a TPM would help against cheaters. Cheaters always find a way. What those games need is proper moderation.
Yes, that does cost money, but it's the only known thing that works in the long run.
This seems like the old “any imperfect solution is no better than doing nothing” argument. Moderation is expensive, hard to scale, and can only address problems after other users have bad experiences.
It’s like saying seatbelts are useless because some people still get hurt, so instead of seatbelts we need a lot more ambulances and hospitals.
Like any complex system, games have a funnel. These technical measures reduce (but not to zero) the number of cheaters. Then moderation can be more effective operating against a smaller population with a lower percentage of abuse.
Since technical measures like TPM are very heavy, better evidence is needed that they reduce the number of cheaters; personally, I don't buy it.
On the other hand, all the games/servers I've seen that are successful against cheaters have very good moderation.
Just see Valorant vs Counter-Strike. Similar levels of popularity, similar kinds of cheat concepts. One has a kernel-level anti-cheat and has few cheaters; the other doesn't and is overrun by cheaters.
Look at Counter-Strike with regular VAC-based matchmaking and then with kernel-level anti-cheat on FACEIT. One is overrun with cheaters and one isn't. It's the same game.
If your account gets flagged for ANY sort of irregular behaviour, you immediately get "upgraded" to requiring TPM and Secure Boot. Been there, done that: a crack for VEGAS Pro I used turned on test signing via the registry for whatever reason, and VALORANT REALLY didn't like that. Because of the PC I was using at the time, that was the end of my VALORANT career.
One thing that I do not understand is how an app can determine whether Secure Boot is enabled in any kind of secure way. The TPM and Secure Boot system is not designed for that.
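For what it's worth, a user-mode Windows app typically just reads the state the OS mirrors into the registry, e.g. (a Python sketch using the real SecureBoot\State key):

    import winreg

    # Read the Secure Boot state Windows exposes to user mode. This is
    # purely advisory: anything with admin or kernel privileges can lie
    # here, which is exactly the point above. Learning it "securely"
    # would take TPM-based measured boot attestation instead.
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\SecureBoot\State",
    )
    enabled, _ = winreg.QueryValueEx(key, "UEFISecureBootEnabled")
    print("Secure Boot enabled:", bool(enabled))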
If it's software your job requires, that's one thing. But games? Just play different games, or get a different hobby. You have a choice so exercise it.
Financially supporting games which do a thing you disapprove of is so counterproductive it defies rational explanation. You aren't "speaking out", you're joining the party and paying membership dues. How could you get so twisted around?
Brain damage, that must be it.
I said that people shouldn't play games with rootkit anticheat and you gave me that damned "first they came for" crap as though I am the one capitulating to the abusive practice. How else am I meant to take it?
And why is that? It isn't for DRM (the game is free). It is for anti-cheat, and it is great.
The libertarian-maximalist, I-can-do-what-I-want-with-my-computer crowd ignores the many use cases where I want to trust something about someone else's computer, and trusted computing enables those use cases.
How is it great? Vanguard is extremely invasive; since it has kernel access, you have to relinquish your PC to this Chinese-owned company at all times (whether you're playing the game or not) and just trust in their good faith.
And for what? Cheaters are more rampant than ever, now that they have moved to DMA-type cheats, which can't (and never will) be detected by Vanguard.
So you give away complete control of your PC to play a game with as many cheaters as any other game. I wouldn't call that "great".
I don't think you can argue that the number of cheaters using DMA is "just as many" as in a game with a less restrictive anti-cheat, where cheaters can simply download a program off the internet and run it. The accessibility of DMA cheats is meaningfully reduced, to the point that I would guess (only conjecture here, sorry) the number of cheaters is orders of magnitude smaller in an otherwise equivalent comparison.
Now, the number of DMA cheaters may still be unacceptably high, but that's a different statement than "the same amount as".
So it's not "giving up something for nothing"; it's giving up something for something. Whether that something is adequate for the tradeoffs required will of course be subjective.
I don’t know, the number of cheaters appears to be non-zero and present enough in my games. Why give any random game studio kernel level access to anything? There are absolutely server-side solutions, likely cheaper solutions because the licensing fees for the anti-cheat software aren’t cheap.
We gave up something real, but it has not been proven that we got anything. Maybe we got nothing; maybe we stopped a few of the laziest cheaters, but we still see tons of cheaters. The number of possible cheaters depends on the quality of the software, and no amount of aftermarket software will magically improve the quality of your game in a way that 100% deters cheaters. I'm positive that their marketing claims they reduce cheaters by an order of magnitude, but I have not observed them successfully catching cheaters with these tools.
You're right, a game with no anti-cheat or a bad one will have more cheaters. But as you said, it's about the tradeoff, and that's what isn't "great". It was for a period of two years or so, since the tradeoff was "lose all control of your PC by installing a rootkit, play a game completely free of cheats", which was compelling, but now that the game isn't sterile anymore it's hardly worth it, at least for me.
Is it so radical to want to be in control of your stuff? What are these use cases where we need to have third parties in control?
I don't really buy the gaming one; in every other domain where a community of people gathers to do a thing they enjoy together, it's on the community, not the tool maker, to figure out how to avoid bad behavior. If you don't wanna play with cheaters, then just play with somebody else.
You are in control. You can disable secure boot, you can install your own keys, you don't have to boot windows, you don't have to play games that demand invasive anti-cheat. Vote with your wallet.
Relying on the community to police cheaters is not an effective strategy for online skill-based matchmaking games. There's a reason game companies spend money and effort on anti-cheat and it's not because they're ignoring cheaper alternatives.
People who are concerned about this should realize: Microsoft will never create a situation where alternative operating systems can't be installed. They already went through the antitrust wringer on that issue. They don't even control what hardware vendors do, for the most part.
This requirement will only hit multiplayer games where cheating and security threats are rampant.
Also, if you have a PC with Secure Boot enabled, there are popular Linux distributions like Ubuntu whose bootloaders are signed with a Microsoft-trusted key. Or you can add a signing key to the firmware, depending on your hardware. And of course, most commercially available PCs will let you disable Secure Boot entirely.
(Most multiplayer games with anti-cheat software don’t really work on Linux anyway.)
> Microsoft will never create a situation where alternative operating systems can't be installed. They already went through the antitrust wringer on that issue.
They have shipped ARM Surfaces where alternative operating systems could not be installed, enforced with Secure Boot permanently on. Have they been through any such "antitrust wringer" in the past 10 years?
> Also, if you have a PC with Secure Boot enabled, there are popular Linux distributions like Ubuntu whose bootloaders are signed with a Microsoft-trusted key
Note that there's one key MS uses for Windows and one key they use for everything else. They actually advise OEMs not to install this second key by default ("Secured-Core" PCs), and some vendors, such as Lenovo, have followed the advice, resulting in yet another hoop to jump through to install non-MS OSes.
Even recently, a Windows update added a number of Linux distributions to the Secure Boot blacklist, resulting in working dual-boot systems being suddenly crippled. Of course, even _ancient_ MS OSes are never going to be blacklisted.
> You can in fact disable secure boot on the arm surfaces.
Not on all of them. I know for a fact you could not on the Surface RT/2.
This is despite the fact that people _do put in effort_. This is how I know, for example, that some Linux workarounds for "funny" ACPI interpretations had to also be "ported" to the ARM architecture in Linux's ARM ACPI support, because Windows is literally making the same "bugs" all over again. Except this time Windows hardware is in the _minority_, and there are plenty of ARM ACPI devices that do not require these workarounds...
> It was due to a bug/and or not being able to detect all manners of dual boot correctly.
Sure. Is it also a bug that they applied these blacklists automatically in the first place? Is it also a bug that the list of blacklisted bootloaders mostly comprises non-MS OSes, despite the well-known issues in many Windows versions?
> They actually advise OEMs not to install this second key by default ("Secured-Core" PCs), and some vendors, such as Lenovo, have followed the advice, resulting in yet another hoop to jump through to install non-MS OSes.
True, 3rd party not trusted by default is a "Secured-Core PC" requirement, but so is the BIOS option for enabling that trust [0]. On my "Secured-Core" ARM ThinkPad T14s it's a simple toggle option.
> Even recently, a Windows update added a number of Linux distributions to the Secure Boot blacklist, resulting in working dual-boot systems being suddenly crippled. Of course, even _ancient_ MS OSes are never going to be blacklisted.
Actually they are in the process of blacklisting their currently used 2011 Windows certificate, i.e. the Microsoft cert installed on every pre-~2024 machine, also invalidating all Windows boot media not explicitly created with the new cert. It's a manually initiated process for now, with an automatic rollout coming later [1].
It'll be very interesting to watch how well that's going to work on such a massive scale. :)
> True, 3rd party not trusted by default is a "Secured-Core PC" requirement, but so is the BIOS option for enabling that trust
As I said, yet another increase in the number of hoops, for no reason.
Before you say anything else: until this you could install _signed_ Linux distributions without even knowing how to enter your computer's firmware setup. Now you can't.
The trend is obviously there. First, MS forced Linux distributions to go through arbitrary "security" hoops in order to be signed. Then MS arbitrarily altered the deal anyway. Even mjg59 ranted about this. And the only recourse MS offers to Linux distributions is to pray MS doesn't alter the deal any further.
Maybe they will never make it outright impossible on x86 PCs, but they just have to keep making it scary enough.
And in the meanwhile they keep advertising how WSL fits all your Linux-desktop computing needs, while at the same time claiming they have nothing against open source.
> Actually they are in the process of blacklisting their currently used 2011 Windows certificate
No, they are NOT in the process, and that is precisely what I was referring to. They have not even announced when they are going to even start doing the process. All you quoted is instructions to do it manually. So I'll believe it when I see it.
And besides, just clearing the CMOS is likely to get you a nice ancient DBX containing only some GRUB hashes, with the Windows MS signature in DB. Not so much luck for the MS UEFI CA signature, as discussed above. So "recovery" will be trivial for Windows, not so much for anyone else.
The funny thing is that it's currently easier to run Linux on M-series Apple devices than on Qualcomm-powered Windows devices. My 8cx Gen 2 powered Dell Inspiron is a black hole of Linux support: Gen 1 and Gen 3 seem supported, but Gen 2 has a different device tree, breaking Linux support.
Hell, I can't even reformat it with a fresh copy of Win11 for ARM, because it isn't offered. The only way to download Windows for ARM is as a virtual machine file for Windows Insiders, then using third-party tools to crack that open and extract the OS.
People will keep saying it, because that ratchet only seems to go one way. Consumer access to general purpose computing is something we take for granted, but every year it seems like there's a bit less of it, and once we lose it we will never get it back.
I’m using Linux and LUKS but have never been convinced Secure Boot adds anything for me. It does sometimes add extra steps though, or block a driver from loading.
> What does that do for me to stop malware? Bitlocker is only protecting an offline system
LUKS also only protects an offline system. So why are you using it?
Oh, I think I know, if you are on Windows it's bad to use BitLocker because it's made by Microsoft and it doesn't protect against malware, but if you're on Linux of course you use LUKS, it's a sensible thing to do. Got it.
Back in my retail computer technician and sales days, it wasn’t uncommon for somebody to lose their Bitlocker keys, and encryption did what it was designed to do - make the data unreadable without them. Sometimes they didn’t even understand what they enabled.
To that customer, Bitlocker itself was a threat.
In my small sample size, I’ve seen that more often than lost laptops. I’ve also seen many more malware infections.
Tying encryption to the TPM, which is the default, makes it easier to lose those keys. With LUKS I choose my own password.
It’s an important implementation difference, especially if it is going to do it by default. Warning a person “you will lose all data if you don’t write this down” in big bold red text is sometimes not enough.
Does tying those keys to your MS account fix that failure method?
> Does tying those keys to your MS account fix that failure method?
Yes. Bitlocker recovery keys are escrowed to the Microsoft account. I've relied on this to recover data from a family member's PC when it failed and they had unknowingly opted in to Bitlocker (a Microsoft Surface Laptop running Windows 10 in S Mode).
>As opposed to just not encrypting their data at all and letting everyone who ends up with the drive have their data.
You are presenting a false dilemma where either Bitlocker is in use or the drive is entirely unencrypted; there are other ways to ensure data integrity in the face of physical compromise.
1. It's not a false dilemma, it's more of a question of how to handle the "average Joe" user that doesn't know how to store encryption keys. I don't like how this automatic encryption is implemented, by the way, but sending the keys to MS servers is not the worst idea ever.
2. Bitlocker can totally be used without a MS account and without sending keys anywhere and without TPM... But seeing how most people fail to RTFM we're back to point 1.
I mention that only because it's one avenue. I figured it was obvious, on a place like Hacker News, that malicious agents other than governments could also compromise the security of third-party-held keys; as always, security is a matter of difficult tradeoffs and anticipated threat categories.
Ah, thank you; I get it now: you don't need to worry about data theft because the drive was encrypted, so the only remaining problem is buying a replacement - a 'VISA' problem. I rather like that way of putting it; I might use it myself :)
Secure Boot makes persisting malware in the kernel fairly difficult.
Which IMHO made sense coming from Windows 7, where driver rootkits and bootkits were trivial. With today's main threat model being encryption malware, I would agree that it doesn't add all that much for most people.
It really doesn't prevent anything like that, not even remotely. First, to do any type of persistence that would be detected by Secure Boot, you already need unencrypted, block-level access to the disk drive, possibly even to partitions outside the system drive. There are a gazillion other ways that malware can persist if it already has this level of access, and none of them would be detected by Secure Boot. If you were able to tamper with the kernel enough to do this in the first place, you can likely do it on each boot even when launched from a "plain old" service.
I may be naive, but I still do. Skepticism is warranted, yet outright dismissal based on conjecture is its own brand of fallacious reasoning. Can Microsoft potentially benefit? Certainly. But that doesn't negate the possibility of genuine user security motivations and benefits for end users.
> Can Microsoft potentially benefit? Certainly. But that doesn't negate the possibility of genuine user security motivations and benefits for end users
It's important to ask which of the two motivations will allow them to lock users down and charge ongoing rent. One of them will, and that's what will always drive the decision.