What an interesting day when you see a site you've worked on for the past 2 (3?) years get posted to HN! Except I tried submitting this site years ago when I had just finished it, but HN didn't seem that interested at the time, and I don't blame them. It was very niche and video game related, and the site also looked a lot worse. It's come a long way, to the point where I collaborated with someone else on a redesign, which I think has done a lot of good for the project at large.
I originally created the site as a way to track which games would be supported on Linux, since at the time the Steam Deck was releasing and some games were starting to support it. It has since blossomed into a larger project, which some other tools even pull from! I would never have imagined that when I first started making this.
I do want to address something I see being talked about in the comments, which is that people say anti-cheats are snake oil, or useless. This is a big misunderstanding, and I feel like those more technically inclined should understand that anti-cheat is a "defense-in-depth" type of approach, where it is just one of many lines of defense. Some anti-cheats are pretty useless and don't do much, but some actually do try to protect the game you're playing. But, just like DRM, it can be cracked, and that's why it's more of a constant arms race rather than a one-and-done thing.
I'm writing a longer post about this for the future, but just know that without client-side anti-cheat, it would be far too easy for an attacker to cheat in these games. We're still a ways out from letting AI (see VACnet [1] and Anybrain [2]) determine server-side whether someone is cheating, so for now we have to rely on heavier client-side techniques and server-side decision making.
Also if anyone has questions about the site (or for me), I'll try to answer them here when I see them. If not, have a nice day!
I disagree with the on-client kernel stuff. Just like with any website, any checking MUST be server side. Kernel-level stuff not only makes clients inherently less secure and stable, but for cheat coders it's also only a matter of finding a vulnerable driver they can use to avoid being caught.
Empirically, it works. Look at Vanguard as an example. There's obviously a privacy tradeoff, but a lot of people would rather avoid cheaters than maintain tight control of their computers. It would be great if anticheat could all be serverside, but I'd love to hear a proposal for how to prevent aimhacking with serverside anticheat alone.
> There's obviously a privacy tradeoff, but a lot of people would rather avoid cheaters than maintain tight control of their computers.
I don’t agree. Instead, a lot of people allow the install because they have no say in the matter if they wish to continue playing the game. Even if it weren’t effective, I’m pretty sure most people would allow the installation of some form of not-yet-proven-to-be-dangerous malware if the alternative is cutting ties and accepting the sunk cost (be it in terms of in-game purchases, proprietary file format, etc).
Vanguard is a good example of blocking even honest customers from playing. You basically need a clean install of Windows, clean drivers, no third-party apps, and modern hardware to even launch the game.
Completely agree with your comment. It's something I've [0] been trying to critically evaluate, and the conclusion I came to was the same as yours: hopefully, one day, we'll be able to do this without installing stuff at the kernel level, but that day is a while away, and for now, kernel-level ACs do appear to be the best solution.
Another point I would bring up with the "community server" argument is that the argument is almost always volunteering others to be the admins because no one wants a 2nd job of moderating games. It's like any other internet forum moderator position, not usually taken because someone wants to, but because it's a necessity (or someone wants power).
That's why even community server owners want additional anti-cheat rather than spending their own time doing it. All those CS ones are examples too, running on community servers. I also remember back in the day community server ICCUP for Starcraft Brood War had their own anti-hack.
There's also the shift of games to the mainstream: more casual players who do not want to be mods. As well as the shift from 16v16 matches to smaller 5v5 matches, which means more matches, and more outliers to check.
There are DMA (direct memory access) cheats, and that's discussed in the article (under the section "Hardware cheats make this all moot, no?").
Not sure about KVM-like hardware cheats, specifically. You could obviously use an AI to simulate mouse movements, but I don't think that's particularly common.
DMA cheats are not detected. What happened is thousands of cheaters all bought firmware from the same guy, and Riot was able to determine via stats that this group of people with the same obscure "network card" had outlier stats, and they banned them all. DMA is by definition not detectable, but human idiocy is.
I imagine that having to buy special hardware means fewer people will do it, the types of dongles used for this are likely detectable in some way by kernel-level anticheat, and computer vision based cheats probably work better when you can inject contrasting color textures into the game.
I don’t think any system will stop someone truly dedicated, but the general idea is that each thing that adds a little more friction to cheating makes it less likely that the average player will encounter a cheater.
People buy DMA cards and DisplayPort/HDMI mergers to avoid hack detection. Another PC reads the memory of your gaming machine through the DMA card, creates your ESP overlay, and then the DP/HDMI is merged through a box. The DMA card runs custom firmware that pretends to be some benign peripheral like a USB hub or sound card.
There's also hardware aimbot/triggerbot that reads your video output then sends input to a device connected to your mouse.
It's not what your everyday cheater has in free-to-play games like CS or CoD, but there are games where it matters more if you're banned, and when cheat subscriptions can be $100-200 a month, the hardware cost isn't much.
> DMA cheats are not detected. What happened is thousands of cheaters all bought firmware from the same guy, and Riot was able to determine via stats that this group of people with the same obscure "network card" had outlier stats, and they banned them all. DMA is by definition not detectable, but human idiocy is.
If you just go and buy a card and use the normal firmware you're gonna get banned. Cheat creators make custom firmware to avoid that. It might be that Faceit is small enough to investigate cheaters thoroughly to get most of them, and with their reputation it might discourage most to even try. But I don't think that scales enough for big games unless you have Riot money.
Trying to force ever more restrictive and intrusive controls upon players won't solve cheating. The only way to "solve" cheating is with https://xkcd.com/810/. Use statistical analysis and server-side controls (fog of war, lockstep calculations) to force cheaters to play indistinguishable from top human players. If you can't tell the difference, does it even matter?
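Server-side fog of war is the one control here that is structurally immune to client tampering: if the server never transmits what's behind a wall, no client-side hack can reveal it. A minimal sketch of the idea (the names and the crude occlusion test are illustrative, not from any real engine):

```python
import math

def visible_enemies(player, enemies, walls, fov_range=50.0):
    """Server-side fog of war: only send enemies this client is allowed
    to see. A wallhack is useless if the occluded data never arrives."""
    out = []
    for e in enemies:
        dx, dy = e["x"] - player["x"], e["y"] - player["y"]
        if math.hypot(dx, dy) > fov_range:
            continue  # out of view range: don't transmit
        if any(segment_blocks(player, e, w) for w in walls):
            continue  # occluded by a wall: don't transmit
        out.append(e)
    return out

def segment_blocks(a, b, wall):
    # Crude occlusion test: wall is a vertical segment x=wx, y in [y0, y1].
    wx, y0, y1 = wall
    if (a["x"] - wx) * (b["x"] - wx) >= 0:
        return False  # both endpoints on the same side of the wall
    t = (wx - a["x"]) / (b["x"] - a["x"])
    y = a["y"] + t * (b["y"] - a["y"])
    return y0 <= y <= y1
```

The tradeoff is latency and server cost: the server must run visibility checks every tick for every player, which is part of why many games skip it.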
> the types of dongles used for this are likely detectable in some way by kernel-level anticheat, and computer vision based cheats probably work better when you can inject contrasting color textures into the game
If you've ever worked in broadcast or volunteered for conference, lecture or house of worship broadcasting, you'll know there's an entire industry of cheap undetectable HDCP-removing HDMI splitters and capture cards. It's an open secret that conference AV relies on shitty $10 chinese HDMI splitters to make HDCP "work".
Similarly, there's a countless number of devices that can present themselves as any other USB device. You can MitM e.g. a keyboard or controller and inject packets that are impossible to distinguish from the users' own inputs.
Some consoles only allow wireless controllers with encrypted protocols, but that can be circumvented too. Replacing the joysticks in controllers with hall-effect ones is a common mod. It's possible to attach another chip in between at this point to inject custom inputs.
You can use these injected inputs to e.g. compensate for recoil. But you can also run a simple classifier on the HDMI video to identify objects and players.
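A toy version of such a classifier is just color thresholding over captured frames; no game process is ever touched, which is why kernel-level anti-cheat can't see it. Everything below (the target color, the frame format) is made up for illustration:

```python
def find_enemy_pixels(frame, target=(255, 0, 60), tol=40):
    """Naive external 'screen bot' classifier: scan a captured frame for
    pixels close to a known enemy-outline color. frame is rows of
    (r, g, b) tuples; in a real setup they'd come from an HDMI capture card."""
    hits = []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if (abs(r - target[0]) <= tol and
                abs(g - target[1]) <= tol and
                abs(b - target[2]) <= tol):
                hits.append((x, y))
    return hits

def aim_point(hits):
    # Aim at the centroid of the detected pixels.
    if not hits:
        return None
    xs, ys = zip(*hits)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```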
Now sure, an anti-cheat could use statistical analysis to measure how quickly a player reacts, which would allow detecting such cheats. At this point it won't matter whether you're using kernel, userland or server-side anticheat though, as they've all got the same information available to them.
> Trying to force ever more restrictive and intrusive controls upon players won't solve cheating.
I think it's not about "solving" cheating, so much as making it sufficiently annoying to maintain working cheats that fewer people try. Just as in cybersecurity, no individual security measure will "solve" hacking, but in concert they reduce the impact by making it more difficult: the "Swiss Cheese Method" / defense-in-depth.
Reading through game cheating boards, it seems many hardware devices have been detected over time. It's an arms race. Here's a discussion of how anticheat started to detect people using HID-emulating devices by forcing a disconnection event: https://www.unknowncheats.me/forum/valorant/615373-vanguard-...
> Reading through game cheating boards, it seems many hardware devices have been detected over time. It's an arms race. Here's a discussion of how anticheat started to detect people using HID-emulating devices by forcing a disconnection event
That's a hack which only works for some devices in some specific state. At that point you're playing whack-a-mole, and you'll always lose.
> I think it's not about "solving" cheating, so much as making it sufficiently annoying to maintain working cheats that fewer people try
Annoying? I don't think you understand the hacker mentality. Breaking anticheat or DRM tickles the same nerve as CTFs or puzzle games. What you consider "annoying" is an activity others do for fun.
It's fun to break a system that's intentionally trying to keep you out. That's why I reverse engineer proprietary, obfuscated file formats and protocols. Whether that's Brother plotters, Blackmagic's input devices (which also function as license dongles), Apple video codecs (actually still WIP) or my landlord's wireless water meter so I can add Home Assistant support for it.
When kernel-level anticheat became a thing, I actually built a custom hardware aimbot out of an HDMI capture card and a custom Sandisk wireless clone that I was working on at the time. I've only used it once or twice, as I'm not a competitive gamer and don't actually have any use for it. The entire fun was in breaking the system.
> At that point you're playing whack-a-mole, and you'll always lose.
That's just sort of fundamental to society at some level though; we play whack-a-mole with all sorts of misbehavior until we reach some sort of acceptable equilibrium.
I totally get the hacker mentality, I have a fully disassembled HP printer under my desk with some bullshit DRM that I've been desperate to break for some time, but I think your last line is really the key: breaking the system is fun for a small portion of people who are able to do it, but it's their users/customers who will be annoyed when their accounts keep getting banned and they need to buy new hardware.
> fully disassembled HP printer under my desk with some bullshit DRM that I've been desperate to break for some time
With my Brother printers it turned out I could just remove the chips from the genuine toner cartridges, reset the counters, and hot glue them to the refurbished toner. Maybe that works for HP ink as well?
This printer will simply refuse to print without an always-on connection to their cloud, it's diabolical. Thought I might be able to get root via its crappy web interface but no luck, and it seems to use properly implemented TLS when talking with the verification server, so I've taken it apart to poke at some interesting looking points on the PCB.
People buy all kinds of stuff online, why not this device? Unless the game uses HDCP the hdmi rip is not possible to detect. And the usb controller could even forward the properties of the connected device. These devices exist as we speak
I think just purely off of the additional effort—a cheat that requires a second PC and specialized hardware is simply going to have fewer users than something you can download and run. Some portion of people won't care enough or will have some sort of other issue with the hardware setup. I think generally these things aren't about making it impossible so much as reducing the frequency.
> This is a big misunderstanding, and I feel like those more technically inclined should understand that anti-cheat is a "defense-in-depth" type of approach, where it is just one of many lines of defense. Some anti-cheats are pretty useless and don't do much, but some actually do try to protect the game you're playing.
As a serious player of many multiplayer games I disagree. All it takes is one cheat to circumvent the protections and soon enough every cheater will use that circumvention.
Meanwhile, I, the legitimate player, suffer from degraded performance, disconnections (looking at you, Amazon Games: you've not been able to fix your (most likely) Easy Anti-Cheat disconnection issue in 2 years!), or outright inability to play.
Perhaps the cheating situation would be worse without anticheats, but considering how rampant it seems to be in fast-paced or grindy games I play, I kind of doubt it.
Anti-cheat is DRM. It's added specifically to make modifications count as DRM circumvention and therefore copyright infringement. This isn't to protect the player, but is forced by big-suit investors to "protect their investment".
The best anti cheat is proper net code. Games rarely do this because it's expensive and difficult. Consumers will buy it anyways.
Anti-cheat on top is like calling an open window with a loud wiener dog guarding it "defense in depth".
I don't think the point is to argue anti-cheat isn't effective, the point is to draw a line in the sand and say, this is where it stops.
Take the analogy of enabling better police work by granting unlimited access to our private communications. No one doubts it would be effective, but the cost and the threat is too much.
This is the line we draw in the sand: get out of the kernel, anti-cheat has no business being there. The cost and threat are too great.
This acceptance is the same situation that brought us the Crowdstrike incident. It's unacceptable.
We fail as an industry and as a society when we accept these compromises.
Putting a government monitored streaming video camera in every bedroom and bathroom in the country to detect sexual assault would also be "defense in depth". But it would be a terrible thing to do, both because it's easily evaded (do your rape someplace else) and because of the intrusion. Any kind of defense in depth argument has to consider how easily bypassed the defense is and the cost it comes at.
Believe it or not, most people don't play video games against strangers. Anti-cheat is not of any value to them. Even for people who do play against strangers, even uncompromised anti-cheat doesn't stop many forms of cheating, like macro mice. Especially now, with all the success being shown at machine learning playing video games with nothing more than a video feed and the button inputs, the amount that anti-cheat can help is clearly quite bounded and getting worse over time.
And the cost? Anti-cheat comes at the cost of general-purpose computing, at the cost of being able to control the computers with which you trust your most intimate secrets. It's a civil liberties nightmare, or at least a prerequisite technology for many such nightmares. Opposition to anti-cheat is opposition to RMS's Right to Read dystopia (https://www.gnu.org/philosophy/right-to-read.en.html).
I don't think it's too far a leap to say that anti-cheat or DRM technology that comes at the expense of the availability of general-purpose computing is more of a problem for human rights than the farcical bedroom cameras I started with.
So when you advocate anti-cheating technology that locks users out of controlling their own computers, you're favoring an at-best incremental improvement which can still be evaded for a narrow application that most people don't care about... and this comes at the expense of imperiling the human rights of others.
Like with many things there is an asymmetry to the costs: Anti-cheat and DRM substantially fail if even a moderate amount of dedicated people still have a way to cheat. Yet the damage to people's freedom from the loss of general purpose computing is still substantial even when the lockdowns can be evaded.
If anti-cheat came at no meaningful cost the fact that it could be evaded wouldn't be a meaningful argument against it. But it's expensive to develop, intrusive, disruptive, and the more successful it is the more effective it'll be at being abused to deny people control of their computers in anti-social ways.
Could I persuade you to reconsider going over them? I'm not expecting an essay or anything but it would be interesting.
One thing that comes to mind for me is that most cheaters probably don't code the cheats themselves but buy them off Telegram channels or whatever (just a guess), and probably wouldn't want to install a whole operating system for them.
Cheating is a market, and most cheaters are not programmers themselves. But it goes deeper than that. Most players, including players who intend to cheat, are already using Windows. The portion of a game's player base that intends to cheat is usually small, and the portion of that also running Linux is even smaller. So programming cheats for Linux (however easy it may be) has almost nil payoff. I'm not going to claim it never happens; there are cheats for CS2 on Linux, for example, but that's an outlier and an exception to the rule.
> Could I persuade you to reconsider going over them? I'm not expecting an essay or anything but it would be interesting.
Sorry, I didn't say that because I was trying to withhold this information; I just didn't want to spoil my future blog post. If you don't want to wait for the post and just want to hear it, I'm down to give an overview of the reasoning.
As Starz0r said, one of the main reasons is that the market is just very small. I think it was CSGO that had basically no protection on Linux for years, and the developers just ignored it because the small number of players didn't make much of an impact.
One thing I don't understand and I would really appreciate if someone could explain this to me.
Why do we need separate anti-cheat programs? Can't operating systems simply have an option, when creating a process, that prevents all operations from looking at that process's memory (and maybe, if such a process is about to be launched, the user has to explicitly accept that by clicking a button)? Wouldn't that stop almost all cheats without needing separate anti-cheat programs, since I assume those programs have to use OS facilities to mess with the game anyway?
Cheats run on the cheater's machine, not on the other players' machines. Of course the cheater would always click accept because it's not an accident that the cheat is running on their machine.
It's not the cheat that has to be accepted, it is the game. The option prevents the cheats (or any other program) from being able to examine the game's memory.
You would need to get rid of kernel-level drivers for that to work, which right now would completely disable any security software. But if it's ever done, yeah, this wouldn't be such a bad idea to isolate apps. However, any security API would still have to allow read-only access, which would be enough for most cheats, and by design, blocking this type of access should never be possible, since antivirus/EDR will need it.
Indeed. Anti-cheat is interesting, because it's a case where you "want" to be able to "prove" that you don't have control over your own machine. Or rather other players want a sufficient level of assurance that you're not running certain kinds of software.
Well, I assume there are ways to prevent that (or make it extremely difficult at least)? E.g. look at Denuvo; nobody has been able to "open a hex editor" and disable Denuvo.
Not the recent Denuvo versions, and not in the past ~1 year.
That's not even the point though, I am not saying it is literally impossible to circumvent this, but as long as it is hard enough that it is not financially reasonable for the cheat makers, that's good enough.
Denuvo has been disabled many times; however, the amount of work required to modify all the generated injection points is tedious—it's a LOT of work. It seems that fewer and fewer people are willing to spend weeks or months of their lives cracking a single game.
Anti-cheat systems, on the other hand, are entirely different. If you only need to modify one variable in the game, it's much easier because, in most cases, that variable is frequently used. This means you can't add too much overhead to its use, and after all, it's just one variable.
Denuvo isn't just a flag on a process. It's no more relevant to your suggestion than encryption would be to a suggestion that audio files have an option to prevent them being copied.
> Can't the operating systems simply have an option when creating a process that prevents all operations looking at the memory of the process
Already the case for userspace programs, due to virtual memory
> those programs have to use OS facilities to mess with the game anyway.
Cheats today essentially are like drivers, they do not run as userspace programs. Hence, they can do literally anything on your computer. In terms of privileges, driver code runs at a level as privileged as the operating system. Hence the need for programs that run at the level of the OS kernel to catch the cheats.
> Already the case for userspace programs, due to virtual memory
Userspace programs can read other userspace programs' memory; it's part of the standard Win32 API[0].
> Cheats today essentially are like drivers, they do not run as userspace programs. Hence, they can do literally anything on your computer. In terms of privileges, driver code runs at a level as privileged as the operating system. Hence the need for programs that run at the level of the OS kernel to catch the cheats.
Some cheats nowadays do this, but they do this because of anti cheat programs. If there were no anti-cheat programs, they wouldn't have to do this.
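To make the privilege picture concrete: even plain userspace code can read raw memory at an arbitrary address within its own address space, and the OS APIs mentioned above (ReadProcessMemory on Windows, ptrace and /proc/<pid>/mem on Linux) extend that same capability across process boundaries to sufficiently privileged programs. A self-contained in-process sketch:

```python
import ctypes

# A process reading raw bytes at an address it knows. Here the address is
# within our own address space; a userspace cheat doing this to another
# process would use OpenProcess + ReadProcessMemory (Windows) or
# /proc/<pid>/mem (Linux) instead.
secret = b"player_hp=100"
buf = ctypes.create_string_buffer(secret)
addr = ctypes.addressof(buf)  # a cheat would find this by scanning memory
leaked = ctypes.string_at(addr, len(secret))
assert leaked == secret
```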
If I wanted to write malware, the first step would be to turn on the “make me immune to any anti-virus or endpoint detection software” option.
If you want to know why the OS doesn’t enforce this - https://slashdot.org/story/432238 you roll into HN’s other favourite topic of “why can’t I run the X of my choice on my OS?”
Unfortunately, injection-based cheating is not the most prevalent form of cheating within titles that do a great job of preventing it, such as Overwatch. Screen bots are often used outside of any monitored process, through HDMI streams and such. They can use game features, sprites, and colors to make aim and trigger bots that seem pretty natural. Additionally, the most prevalent and annoying cheaters are the ones that trick games into believing keyboard and mouse is a controller, which combines the sticky-aim features of controller input with the precision of mouse and keyboard controls. On consoles this is a dominant, persistent cheat that a larger percentage of gamers use, as opposed to the small percentage that inject code.
> It's not the cheat that has to be accepted, it is the game. The option prevents the cheats (or any other program) from being able to examine the game's memory.
It could if the hardware allowed such separation, but the x86 platform doesn't do anything close to that and allows reading memory of other processes in so many different ways in both userspace and kernel. Not to forget hardware being able to read memory via DMA that many use now.
- Run a hypervisor directly on the hardware
- Have the user-facing OS be a VM managed by that hypervisor
- Have the game process run under a second sibling VM
The hypervisor can then mediate hardware access and guarantee nothing from VM A can access VM B nor the other way around.
IIRC WSL2 enables such a mode: both the Windows OS the user sees and the Linux VM run under Hyper-V as sibling VMs.
And Xbox One and up do EXACTLY the above: each game runs in its dedicated VM (I presume that's what "trivially" enables Quick Switch/Resume via pausing/snapshotting the VM) and apps run in another.
Tangent: I somewhat wish MS would allow WSL2 on Xbox.
Without hardware support, once the attacker gets to the hypervisor, you can't trust the hypervisor, or the "guarantees" that such a tainted hypervisor provides.
You need hardware support for confidential computing (for example, AMD SEV) to be able to trust that the hypervisor can't just read/write all over the VM RAM.
Sure, security comes in layers. A trusted platform boot chain can validate the hypervisor much more easily than a whole hard disk, and existing x86 instructions can do the rest. The attack surface is also quite a lot smaller. It's already miles better than unfettered access from the very same OS and anti-cheats being privacy-invasive rootkits.
Hardware support for confidential computing is the cherry on the cake, but in this scenario the user is not trying to defend themselves against an attacker; the game is, from the user, a.k.a. the cheater.
In addition to the technical details mentioned there is also the "social" part:
Having anti-cheat lets the company show they are doing "something" against cheating and keeps law-abiding players from installing cheats.
You would need hardware support to do this effectively. Telling a piece of software “no one is looking at your memory” as the OS doesn’t take into account rootkits and hypervisors.
For software related cheats maybe, but keep in mind that keyboard, screen and mouse being processed by an entirely separate computer is also very viable.
No, but we shouldn't treat users' freedom as an anti-virus mechanism. Pretending that user acceptance will help prevent malware is extremely naive.
Leaving aside that most anti-cheats are useless and constantly teetering on the thin line between legitimate software and malware, not enabling anti-cheat solutions that support Linux on Linux is really an asshole move that almost certainly stems from an unmotivated or ideological hostility to Linux in general (I'm specifically referring to Tim Sweeney here).
Another offender is Ubisoft, or more specifically the R6 Siege team. Battleye works perfectly fine on Linux - in fact, other Ubisoft teams have enabled Battleye-Linux support for their games (ex: For Honor) - but for whatever reason, the Siege team refuses to do so, even though it's one of the most upvoted issues on the bug tracker [1].
BattlEye is generally broken even on Windows (though it happens to be working as intended right now). Cheaters generally use Windows, and switching to Linux will only happen when the Windows anti-cheat is considerably harder to break; with Proton/Wine, you even get to run the same version on both.
I agree that they teeter on the line, but hard disagree that they're ineffective. They're ineffective if you run your own servers and vet your own community, because then you don't need them, but that's not how most popular games are being played these days, whether you like it or not. Fall Guys was fundamentally broken; they added Easy Anti-Cheat and the problem pretty much disappeared.
The best anti-cheat I have experience with is Vanguard by Riot Games. I was running a Python script in the background for web crawling, left it on, and guess what? My account got banned. Support said Vanguard found a script running. I explained to them patiently that it was a web crawling script; still no use.
lol, I had something similar, but instead of a crawler, I had WinDbg open a BSOD dump file and the game automatically closed. I forgot the game was running in the background while I was trying to figure out what was crashing my system. It turned out to be a random network monitoring driver; after removing it, problem solved. But I ended up getting shadow banned. After 14 days, my account reverted back to normal. My guess: the game triggered a fail-safe and closed to avoid any injection or step-process read. But the fact that I was running a debugger to fix problems just tells me that some of these anti-cheats are trash. It still puzzles me how they do not implement a daily offset-reset randomizer with encryption + decryption bound to the device. Anyone who wants to partner up and start an anti-cheat service solution, let me know. =)
I'm not saying it is, I'm saying it could, because this is one of the most intrusive anti-cheats on the market. It can do and see basically anything on your computer.
Anti-cheat will not help if the game hasn't updated it for more than 8 months.
And one thing the devs could do without anti-cheat is automate analysis of e.g. headshot rate, movement speed, etc., but most games don't do that. If the average player makes 25 kills per hour in a game and someone makes 150 over longer periods, I don't need an anti-cheat to do something.
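A first pass at that automated analysis is just an outlier test over per-player kill rates. The numbers and threshold below are illustrative, and as the replies note, a real system needs many more signals to avoid flagging genuinely great players:

```python
from statistics import mean, stdev

def flag_outliers(kills_per_hour, z_cutoff=4.0):
    """Flag players whose kill rate sits far above the population mean.
    z_cutoff is a hypothetical threshold, not from any real anti-cheat."""
    rates = list(kills_per_hour.values())
    mu, sigma = mean(rates), stdev(rates)
    return [player for player, rate in kills_per_hour.items()
            if sigma > 0 and (rate - mu) / sigma > z_cutoff]
```

With a population mostly making 20-30 kills per hour, a player at 150 lands several standard deviations out and gets flagged, while everyone else passes.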
This is a common misconception. Some players are extremely good at video games, and they look like statistical anomalies/outliers when mapped across the full distribution of players.
Consider, for example, professional gamers. They spend countless hours practicing, and they can easily outcompete casual gamers who don't have the time to refine their skills daily.
Statistical anti-cheat is extremely weak in any game where legitimate human players can end up as outliers.
Extremely good players have old profiles that they have used for a long time, gradually getting better. Cheaters are either using a new profile, or an old profile with bad stats that then has a sharp uptick.
I've been playing the same 2 games for many years, I think I've gotten pretty good at them, and I've used multiple accounts; under your assumption I look like a cheater.
In a proper statistical analysis there are far more variables than what I outlined in my preceding two-sentence post. It would be naive to think that I would consider anyone a cheater based only on account age.
Smurf accounts are also bannable in plenty of games and I certainly support that.
Beyond that, the level of "good" we're talking about here goes way beyond dominating in a random match. Cheater stats are usually better than literally the #1 player in the world.
Take something like Battlefield, where on the public leaderboards the "top players" have a kill-to-death ratio in the thousands. That is so far beyond human possibility, yet they are still not banned because of this aversion to statistics.
I think what's naive is to assume that statistical detection methods haven't been investigated at length by the anti-cheat companies.
When a complete newcomer comes to a field and sees professionals not doing a simple thing, the right question isn't "why don't you just do this, duh", but "I thought this would work, why doesn't it?".
Newcomers definitely make naive assumptions, Chesterton's fence etc.
I'm not a newcomer though, I've worked on both cheats and anti-cheats going back more than two decades. I know how the sausage is made and it's not pretty.
The anti-cheat companies you talk about mostly sell a mass-produced product that works very similarly to anti-virus software. Games embed the anti-cheat module and its cheat definitions get updated. Statistical analysis requires both knowledge of the specific game and access to its database. Often also additional game programming to even store the crucial data. A bespoke solution. This can't be mass produced and is expensive, so most games don't have it.
So to bring it back to the newcomer question ("I thought this would work, why doesn't it?"), the answer is that game companies don't want to spend the money. [1] A classic answer to most annoyances in life, really.
---
[1] An interesting outlier is the online gambling industry, especially online poker. They spend way more money than non-gambling game developers and have much more sophisticated anti-cheat systems, including statistical analysis. It's also fun to see how techniques used to get around online poker anti-cheat detection slowly make their way into mainstream gaming with a delay of about 15 years or so. As a simple example, nobody serious was even running their code on the same system as the game client back in 2005, instead parsing the video signal and simulating HID inputs. [2] Took more than a decade to see popular cheats for regular games go to that length to avoid detection. Not because the cheat developers were less capable, but because the anti-cheats didn't warrant the investment.
[2] Thus taking the battle almost completely to the statistical analysis realm. Are your mouse movements random enough, with good jitter? Does your bot take believable micro breaks? Does your average performance, including reaction times, degrade at the end of a long session as you get more tired? Et cetera.
Picking out the statistical outliers is not that hard, but won't this have diminishing returns? As soon as cheaters learn that being too obvious gets you banned, they'll change up how they play. Eventually there won't be much difference between the really good players and the cheaters. Are some false positives okay here?
Many cheaters were already trying not to be obvious; most I've encountered playing various FPS games are not the typical spinbot in CS:GO. Instead they might play with only a wallhack, a triggerbot, or even no hack at all, and only turn on the big hacks halfway through a game if they're not winning or think someone on the other team is hacking as well. In some games they use bots to pad their stats when not playing.
AI detection is also coming to video games with anybrain.gg, but it seems like these could be countered with AI-enhanced cheats, no?
As an experienced player with an interest in anti-cheat, cheating, and security, it doesn't seem to me like statistics is the silver bullet you claim it to be, at least not as the only detection/protection. Statistics combined with conventional protection/detection methods is likely what Riot is doing.
It's definitely a cat and mouse game and no single method, including statistical analysis, is a silver bullet.
I'm definitely not advocating for doing less to counter cheaters. I'm just talking about how more could be done. As in, continue with existing methods and add new ones.
Also, yeah, many cheaters would start being more conservative and manage to evade detection. However, that is also a win. It's the aggressive, obvious cheaters that are the worst, because they make it obvious that the fight was unfair. If the cheater made it look plausibly legit, then the victim won't feel as bad.
I think the real answer is to sidestep all of the direct, deterministic solutions in favor of statistical ones. I am not 100% certain of this, but I believe there are some games, like EA's Battlefield series, that utilize a degree of statistical modeling to detect cheaters.
We reliably use statistical process control to automatically calibrate incredibly precise, nanometric-scale machinery for purposes of semiconductor engineering. Surely, with the extreme amount of data available regarding every player's minute inputs in something like a client-server shooter, you could run similar statistical models to detect outliers in performance. With enough samples you can build an extraordinarily damning case.
The only downside is that statistical models will occasionally produce false positives. But I've personally been "falsely" banned by purely deterministic methods (VAC) for reasons similar to others noted in this thread (i.e. leaving debugging/memory tools running for a separate project while playing a game). So, in practice I feel like statistical models might even provide a better experience around the intent to cheat (i.e. if you aren't effectively causing trouble, we don't care).
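To make the statistical angle concrete, here's a toy sketch of the kind of outlier screen being described. The player records, field names, and z-score threshold are entirely made up for illustration; a real system would use far more signals than K/D alone and much more careful modeling.

```python
import statistics

def flag_outliers(players, z_threshold=4.0):
    """Flag accounts whose kill/death ratio sits implausibly far above
    the population mean. Field names are hypothetical; a real system
    would combine many signals, not just K/D."""
    kds = [p["kd"] for p in players]
    mean = statistics.mean(kds)
    stdev = statistics.stdev(kds)
    if stdev == 0:
        return []
    flagged = []
    for p in players:
        z = (p["kd"] - mean) / stdev
        if z > z_threshold:
            flagged.append((p["name"], round(z, 1)))
    return flagged

# A synthetic population plus one "K/D in the thousands" account:
population = [{"name": f"p{i}", "kd": 1.0 + (i % 7) * 0.2} for i in range(100)]
population.append({"name": "suspect", "kd": 1000.0})
print(flag_outliers(population))
```

Even this naive screen trivially catches the leaderboard-breaking cases mentioned above; the hard part is the gray zone just above legitimate top-player performance.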
> like EA's Battlefield series, that utilize a degree of statistical modeling to detect cheaters
Battlefield started out using PunkBuster, one of the earliest kernel-level anti-cheats. With Battlefield 4, they used FairFight, a statistical server-side solution, alongside PB.
With Battlefield 1, they dropped PB, and operated with just FairFight.
And now, EA have decided to create their own kernel-level AC, called EA AntiCheat, and are implementing it on BF5 and BF1, largely because FairFight was not enough.
You could probably detect 90% of cheaters in Rust by detecting people who press DEL during in-game non-textual interactions. It probably would also have a relatively low false positive rate. It is, however, easy to evade once known.
But I think collecting all that data and using it sparingly is the best approach. You could combine it with headshot rate, etc., and really narrow things down relatively reliably.
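A toy sketch of what combining those signals could look like. Every signal name, threshold, and weight here is invented for illustration; a real system would calibrate against labelled data and act only on the strongest combined evidence.

```python
def cheat_score(del_presses, headshot_rate, kd_ratio):
    """Combine several weak signals into one integer score.
    All thresholds and weights are made up for this sketch."""
    score = 0
    if del_presses > 0:        # DEL pressed outside text input (common cheat-menu keybind)
        score += 5
    if headshot_rate > 0.6:    # far above typical rates
        score += 3
    if kd_ratio > 10.0:        # extreme K/D
        score += 2
    return score

print(cheat_score(3, 0.8, 15.0))   # strong combined evidence
print(cheat_score(0, 0.4, 1.2))    # normal player
```

No single signal is conclusive on its own, which is exactly why using the combined score sparingly (review queues, shadow flags) beats banning on any one trigger.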
In order to play some online games that require anti-cheat.
I avoid these titles myself. In fact, I don't run Wine, Steam, or game console emulators on my Linux workstation. I run Windows VMs for isolation and security.
You may have strong opinions on anti-cheat software and they may be correct, but it is required for playing certain online multi-player games, and people want to play those games on Linux too (especially the Steam Deck, I would presume). Ergo, people want anti-cheat software on Linux.
If Apex Legends, CS, and Valorant have taught me anything, it's that anti-cheat does not work. Once you start approaching a pro level, cheating becomes rampant.
The cheaters don't make them, they buy them. It really needs a multi-factor solution; the technical solution is not enough. Trying to buy cheats should be like trying to buy chemical precursors to illicit drugs. There should be a strong social stigma. Most cheaters have no problem with it because 'everyone else is cheating', justifying their behavior. There was a time when 'everyone else smokes' was a justification, but now it's mostly defeated. There should be real-world implications. Sign in with your phone number and two-factor auth, which is tied to a physical address. Cheating is a form of fraud. There should be legal implications.
> Trying to buy cheats should be like trying to buy chemical precursors to illicit drugs.
oh my. Seeing posts like yours makes me sincerely want to lobby to ban video games, at least if adding additional liabilities to distributing software or computing devices were actually a direction the games industry was promoting.
We need to stop letting stupid entertainment companies trample our rights to narrowmindedly maximize their profits.
IMO anti-cheats at this point are more of a PR tool than actual cheat prevention systems. Look at Vanguard: they marketed Valorant specifically focused on the anti-cheat to draw players away from CS:GO where many in the community think cheaters are rampant.
The only difference is that maybe you have a few fewer rage hackers that get caught by it, but anyone who really wants to cheat will still be able to; it's just a lot harder for players to see. All they care about is the public perception. If it looks like it has fewer cheaters, it's good enough for them.
The cost? You basically install malware from a Chinese company on your computer...
very interesting side-effect of the nature of multiplayer competitive games.
to me, competitive video games are far gone, like pro cycling, in terms of the lengths players go to feel "superior" to others.
<rant>
many of these games remain broken with other things while raking in insane amounts of money, so regularly maintaining anti-cheat inside the game, if at all, is probably very low in their backlog.
the third-party ones are then used to avoid having to think about this, but even these providers are more focused on attracting game publishers than doing something meaningful.
</rant>
personally, i think it should be possible for games that can be played in local multiplayer or with friends to have a way to play without anti-cheat. don't allow competitive modes in that case, but having the option would alleviate a lot of these issues.
Those idiots replicated the same firmware across all their modules. They are not the brightest bunch… or maybe they are, but greed blinded them. Anyway, it blows my mind that people go to such lengths to cheat in multiplayer games.
In the case of FPS games, it is gone: AI cheats are morphing average/bad players into god-like/very good players without even running on the player's system (external, via input-device man-in-the-middle or custom/modified input devices). FPS games implementing anti-cheat are doing so more to please Microsoft than anything else: to be sure it won't run on Linux-based OSes (like "secure" boot, in order to sabotage easy installation of "free" alternative OSes).
That said, you "may" have a chance at detecting it using game-related metrics on the server side, because an AI will very probably betray itself at some point; "AI"s are usually imperfect, like humans.
Elephant in the room: the more you put big brother in your system, the less you will be able to run really free operating systems. So much for your digital freedom.
Look at the abominations which are video game consoles.
It is obscene to have to pay a lot of money for completely locked, digital-jail devices. It should be illegal, period. They should be leased for cheap.
I don't know enough about 'real time' netcode for games. However I have read several HN articles over the years so I've got at least a basic understanding.
Why can't the servers distrust the clients? What should a 'client side anti cheat' actually prevent?
The way I think I'd tackle such things is to have multiple copies of each character model moving in different locations and different ways, such that trying to spy on the state of the game from one client's viewpoint yields mostly false data. New 'threads' would fork off of the existing threads and would only be culled when there are too many, or when they're about to cause a side effect that would be visible if they were real. In that way the server would be responsible for feeding misinformation to clients while maintaining the state of the true game as a secret to itself.
> Why can't the servers distrust the clients? What should a 'client side anti cheat' actually prevent?
There are two issues. One is the user seeing things that the server is hiding, such as enemies hidden behind obstacles, by going into "wireframe mode". The other is superhuman performance via computer assistance, or "aimbot hacks".
The first is a performance issue. The server can do some occlusion culling to avoid telling the client about invisible enemies, but that adds to the server workload. The second is becoming impossible to fix, since at this point you can have a program looking at the actual video output and helping to aim.
(You can now get that in real-world guns.[1]) Attempts to crack down on people whose aim is "too good" result in loud screams from players whose aim really is that good.
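For the first issue, a minimal sketch of what server-side visibility filtering might look like, on a hypothetical 2D grid world (real engines raycast against actual level geometry or use precomputed visibility sets; the grid and round-to-cell walk here are just illustration):

```python
def visible(walls, a, b):
    """Crude line-of-sight test: step along the segment from a to b
    and fail on any wall cell. `walls` is a set of (x, y) cells."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(1, steps):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        if (x, y) in walls:
            return False
    return True

def state_for(player, others, walls):
    """Only replicate enemies the player could actually see."""
    return [o for o in others if visible(walls, player, o)]

walls = {(5, y) for y in range(0, 10)}   # a wall segment at x == 5
print(state_for((2, 2), [(8, 2), (3, 4)], walls))   # [(3, 4)]
```

The cost the comment mentions is visible even here: the server must run this test for every player pair every tick, which is why FPS servers often skip it or use cheaper approximations.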
The only feasible solution is to have high-level players compete in physical tournaments or at verified centers, where the authenticity of the player is replaced with some authority. At a high enough level, there is no way to distinguish a really good player from a cheater.
But it's not really feasible to argue, since you need to be at such a high level in the first place to honestly engage in an 'is this player cheating' conversation. And it's on a case-by-case basis.
I've watched professional games in SC, CS and DOTA for decades and I definitely agree that pros are indistinguishable from a good cheater (not a rage hacker).
One of the issues around this is cheating among pros too. People that are actually good at the game, but use cheats to get even further ahead. These players are already statistical anomalies, and even from an experienced player's perspective you can't tell if they have an amazing game sense (many really do) or they're wallhacking, as an example.
Competitive games are unlikely to reach the market share necessary for a competitive gaming tournament if their casual scene is inundated with cheaters. Only a tiny handful of games even have a viable competitive scene.
But are cheaters even an issue in unpopular games that don't give out real money for tournaments?
I have never seen cheaters being an issue (even the few times people set up tournaments with prizes), which makes me think that this might be limited to very few games (in very specific genres)?
> But are cheaters even an issue in unpopular games
Yes. Every game has cheats. The cheat packages are pretty easy to adapt to new games and people pay money for them.
Why do people cheat? Because it’s fun! If you’ve never cheated it’s honestly worth trying. It’s hilarious. It also utterly ruins the game for everyone else in the lobby.
If games had reliable anti-cheat you’d be shocked at the percentage of lobbies that have a cheater. It’s wildly rampant.
I'm not talking about developer tools - cheats that come with the game, available in single player (and multiplayer if the host allows it).
But a lot of games do also have replays, accessible to everyone, that show every order given by every player, so catching a cheater who acted on information not available to them (because, for instance, they had buddies on the other team(s)) isn't particularly hard, especially in tournaments with a lot of eyeballs on those replays.
At scale it’s incredibly hard. Impossibly hard even. So hard no one has successfully solved it! Ever!
But what you’re describing is Valve’s Overwatch system for Counter-Strike. It’s a key component of the anti-cheat ecosystem. But cheating is still rampant in CS and one of the biggest complaints.
"at scale" assumes a popular game - and you end up by giving as an example one of the most popular FPSes ever ! Please give an example of a game with, say, less than a million of copies sold / given away ? (And ideally, not an FPS, we all know these have specific extra challenges involved.)
And "at scale" pretty much means that matches are not competitive, because the sums required for entering a tournament game and given for winning it are going to be too small, won't they ?
P.S.: And for non-competitive games, I would expect that this cheating issue (among others) would be aggravated if you insist on playing with total strangers you will never see again (also part of the scale issue) - maybe just avoid that?
But popular games are the ones people want to play, and are the ones you're claiming are immune to this. Look at this comments section: it's people talking about the top 3-5 games on PC right now, not the 30th entry in the trending FPS section.
Part of the appeal for cheating is doing it where it has impact - in popular games.
Also, I want to insist on one thing: some of the popular games listed are those that are online-only and/or removed the ability to host your own servers (and/or, even worse, have microtransactions).
I have zero sympathy for the kind of asshole that gave money to companies engaging in the despicable behaviour cited above. You were warned. You made your own bed, now lie in it!
I’m big into competitive Call of Duty. On that game (and any other shooter that uses a controller), the biggest undetectable cheat is auto recoil adjust. People call it a “Cronus” for the same reason people call it Kleenex. You download profiles for the gun you're using and it basically does the recoil pattern in reverse, turning every gun into a laser beam. It’s undetectable because it modifies inputs from a legit controller while appearing completely normal to the console/PC. No computer vision needed, and it’s destroying the integrity of the game.
In the future I kind of hope the handshake from controller<->console becomes a lot more robust, maybe working in a similar way to HDCP.
I don't think it will work. Nothing can prohibit users from desoldering the sticks and putting a microprocessor with a DAC in their place.
Actually, those kinds of mods are frequently performed by gamers, because lots of people want to replace the analogue potentiometer stick with a Hall-effect sensor plus microprocessor, which provides much more durability compared to the Alps potentiometer stick (and no one likes to play with a drifting DualSense or Joy-Con).
your point about "chronus" or auto recoil adjust cheats is a perfect example of how cheats evolve to bypass detection. By modifying controller inputs at the hardware level, it’s nearly impossible for traditional anti-cheat software to identify such exploits. It shows that as long as there is an incentive, people will find creative ways to gain an advantage, often blurring the line between legitimate skill and unfair advantage.
I think moving forward, a hybrid approach is essential—one that leverages both server-side logic to prevent information leaks and robust client-side monitoring that can detect anomalous behavior patterns. Perhaps more sophisticated machine learning models that analyze player behavior in real-time could help in distinguishing between legitimate skill and enhanced performance due to cheats. It's a constantly evolving battle, and staying one step ahead is always going to be a challenge.
Would love to hear more thoughts on how to effectively balance these aspects without compromising the player experience!
Cheating isn’t a binary thing, it’s a spectrum. The number of people who are willing to install a random script that they drop into a folder that lets them win every BR game is vastly higher than the number who will install a kernel-level driver, which is more than the number who will _pay for_ and keep updated a kernel-level driver. Currently, “expensive dedicated hardware that replaces the gaming mouse that I like using” is significantly less of a problem than “install rootkit”.
The performance issue you talk about has a little more to it too. If the server is 30 ms away from you and the other player, and the server runs at 30 Hz, there’s 90 ms between the enemy pressing a key and you seeing it. That’s before you add real-world networking conditions into the mix and have to start adding client-side prediction, which adds a few more ms to boot, or errors. But in order to do this prediction the client needs a little more state than is visible on screen: players that are around corners and about to appear, that sort of stuff. So the client needs that information in order to actually function, meaning it’s hard/impossible to tell the difference between good game sense (I know the reload time of this gun is X and that peeking lasts Y frames and they will appear here) and cheating (we’re 2 frames away from showing the player on screen but he’s going to be right here, so shoot here).
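The client-side prediction mentioned above can be sketched roughly like this, as a deliberately minimal 1D toy: apply inputs locally right away, remember the unacknowledged ones, and replay them on top of the server's authoritative state when it arrives. Real netcode also interpolates remote players, handles packet loss, and so on.

```python
class PredictedClient:
    """Minimal client-side prediction with server reconciliation.
    The 'game state' is just a 1D position."""
    def __init__(self):
        self.pos = 0
        self.pending = []        # (sequence, move) not yet acknowledged

    def local_input(self, seq, move):
        self.pos += move                     # predict immediately, no waiting
        self.pending.append((seq, move))

    def server_update(self, acked_seq, server_pos):
        # Drop acknowledged inputs, then replay the rest on top of
        # the server's authoritative position.
        self.pending = [(s, m) for s, m in self.pending if s > acked_seq]
        self.pos = server_pos
        for _, m in self.pending:
            self.pos += m

c = PredictedClient()
c.local_input(1, +1)
c.local_input(2, +1)
c.server_update(1, 1)    # server confirms input 1; input 2 is replayed
print(c.pos)             # 2
```

Note the point the comment makes: for this to work at all, the client must be handed state it hasn't displayed yet, which is exactly the state a wallhack reads.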
I think someday, almost all aimbots will be undetectable by anti-cheat systems.
Thanks to neural networks, we have made enormous progress in the computer vision domain. As a byproduct, it invalidates the method we use to separate machines from humans (image-based CAPTCHAs).
I guess aimbots will switch to CV-based systems to detect enemies rather than dumping game memory to find the enemy's position. This change will force anti-cheat systems to perform an automated Turing test, which is hard. (Telling the bot and human apart only by watching the replay is much more challenging compared to the above CAPTCHA problem. And we are currently losing at the CAPTCHA frontline, too.)
@Animats, you’re spot on about the two main issues—visibility hacks and aimbots. The concept of hiding enemy positions server-side through occlusion culling does present a performance challenge, but it’s essential to balance between ensuring fair play and maintaining server efficiency. And you're right; the rise of external programs that can interpret video output makes preventing aimbots significantly harder.
Delaying UI interaction until it has been verified by a server that runs at 20 ticks per second (60 is uncommon on servers unless there's no AI), with an RTT of 60 ms, means your hitmarker will take 110 ms instead of ~6 ms if rendered locally.
Apply that to every interaction that the server has to be authoritative about, movement, reloading.
Your game will be unplayable.
And if you want to combat aimbotting: your viewport and hit point would have to be server authoritative too.
Basically: unless it's Stadia or GeForce Now, this won't work.
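For reference, the arithmetic behind those numbers, assuming the worst case where the input just misses a server tick, and assuming the ~6 ms figure corresponds to roughly one frame on a 165 Hz display (my assumption; the original figure isn't broken down):

```python
# Server-authoritative hitmarker latency vs. local rendering.
tick_ms = 1000 / 20             # 20 Hz server: 50 ms between simulation steps
rtt_ms = 60                     # client -> server -> client round trip
worst_case = rtt_ms + tick_ms   # input arrives just after a tick: 110 ms
local_frame_ms = 1000 / 165     # ~6 ms: one frame on a 165 Hz display
print(worst_case, round(local_frame_ms, 1))   # 110.0 6.1
```

That 18x difference is per interaction, which is why applying it to movement and reloading as well makes the game feel unplayable.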
Not delaying UI interaction; though conflict resolution (there are at least two involved clients, each with its own lagged view of the other, and a server that knows its own truth) might change the outcome of events. THAT is the part of multiplayer netcode I know the least about, mostly because I don't think there is a perfect solution, but I am not a subject expert on what works well as an approximation.
That is what early Path of Exile chose to do, and players hated the rubberbanding. Nowadays everyone uses lockstep instead, because backtrack events feel worse than being blocked right when the issue happens.
My understanding is that it's popular now to use rollbacks in fighting games (in combination with delays so the rollback doesn't get too far). Perhaps something like that would be useful, though of course that would depend on the game (and how much data it needs to send between players).
I think the difference is fighting games are easier to simulate. Part of rollback is to rewind 6 frames and resimulate those 6 frames again with the new input. This basically requires you to be able to run your game at 6x speed consistently. It's also increased memory requirement, because you need to have the game state from those 6 frames ago in memory. These are also reasons you cannot do too many rollback frames without adding delay. I believe the Nintendo Switch never got the rollback update for BlazBlue: Cross Tag Battle because of performance reasons.
Fighting games have two (maybe 4 with assists) characters generally at 60fps. That's relatively easy to do. A worse case would be an RTS game: in a fight when each unit's attack needs to be calculated repeatedly. Valorant runs at 128 ticks/second. For the same latency compensation as 6 frames in a fighting game, you would need 13 frames, so you need to be able to simulate the game at 13x speed.
And rollback still has janky visuals when conflicts happen. The games I've played will let you choose between smoother visuals with more delay or rollback artifacts with less delay. Generally the default setting is the former.
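The rewind-and-resimulate idea described above can be sketched with a toy "game" whose entire state is one counter advanced by summed inputs. Real rollback snapshots full game state and reruns the actual simulation, which is exactly where the 6x-speed performance cost comes from.

```python
class RollbackSim:
    """Toy rollback: keep per-frame state snapshots; when a late remote
    input arrives for an earlier frame, rewind and resimulate forward."""
    def __init__(self, max_rollback=6):
        self.frame = 0
        self.state = 0
        self.inputs = {}          # frame -> list of inputs
        self.history = {0: 0}     # frame -> state snapshot
        self.max_rollback = max_rollback

    def step(self, local_input):
        self.inputs.setdefault(self.frame, []).append(local_input)
        self.state += sum(self.inputs[self.frame])
        self.frame += 1
        self.history[self.frame] = self.state

    def late_input(self, frame, remote_input):
        assert self.frame - frame <= self.max_rollback, "too late to roll back"
        self.inputs.setdefault(frame, []).append(remote_input)
        self.state = self.history[frame]           # rewind to the snapshot
        for f in range(frame, self.frame):         # resimulate the gap frames
            self.state += sum(self.inputs.get(f, []))
            self.history[f + 1] = self.state

sim = RollbackSim()
for _ in range(4):
    sim.step(1)          # 4 frames of local input
sim.late_input(1, 10)    # remote input for frame 1 arrives 3 frames late
print(sim.state)         # 14
```

The `max_rollback` cap is the memory/CPU trade-off the comment describes: more rollback frames mean more snapshots to keep and more frames to resimulate within one frame's budget.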
Sending copies of fake character data isn't a thing because there eventually has to be a flag that tells the client to not render that character that the client hack could simply read.
It should be clear that servers already do not trust the client; they do many checks, hence you don't see teleportation hacks in games like Counter-Strike or Valorant. There used to be cheats in the Counter-Strike games like "nospread", where you could have 100% pixel-perfect aiming, but that was because the client was trusted. Now, in most games with some randomness in bullet spray patterns, the random seed differs between the client and server, so something like "nospread" is no longer possible.
You might be thinking of "fog of war", that is, not sending data to a client unless the enemy player is close to visible, which is a thing. It's widely used and I'd say effective in MOBA/MMORPG/RTS games; however, in FPS games fog of war is many times more computationally expensive, which matters at the scale of games these days. It has been a thing for a long time in Counter-Strike with server plugins like "SMAC anti wall hack" or server-side occlusion culling; however, the implementations sometimes have not been perfect and require significantly stronger servers. https://github.com/87andrewh/CornerCullingSourceEngine
Riot Games also implements fog of war at scale in Valorant and has a blog post covering some of the issues they overcame. One thing you can see in the gif at the end of the blog post: even though fog of war is effective, it only reduces the effectiveness of wallhacks; wallhacks still provide a significant advantage.
https://technology.riotgames.com/news/demolishing-wallhacks-...
There would be no such flag. The clients would cull the characters that are out of position. Yes that's some client load for the culling, but it's probably less overhead than 'anti cheat'.
The important reason I suggested MULTIPLE clones of a character and only forking new paths off of existing characters in the world is that it should eliminate any information oracle about which of those is the real character.
There is a high level of server load for this as well: not only placing these fake characters, but making them move and act like real human players so they are believed, and then culling them (the server would cull them, not the client, to be clear; how would the client know to cull them without a flag?) only right before they become visible, while taking into account lag (ping), interp, packet loss, etc.
I could definitely see some games doing this as a one-off just to catch specific cheaters they are suspicious of, to confirm they are cheating (many third-party anti-cheats in Counter-Strike, and the first-party Valorant anti-cheat, do manual bans based on replay reviews). But since they already do fog of war, someone with a wallhack seeing an enemy player pop in for one frame before disappearing would make it ineffective on a wide scale.
Because the popular cheats aren't "the client says the player shot the enemy".
The popular cheats are "the client says the player just clicked at (1030, 534) on the screen", which is a totally valid move, except it's calculated by the cheat instead of the player.
The client needs to have more state info than the player sees in order to render accurately, for example to render an opponent passing through a window without lag. And there are also cheats that don't need to spy on the state, like aim-assist tools or HUD improvements.
* Pass through a window without lag - That's why the server is sending multiple copies of potential movements and paths through the level for each character, but terminating the ones that are about to reveal their effects (no longer be culled by walls / objects) when they'd send false information to non-cheating players.
* Aim Assist - what's that supposed to work with for the assist? I guess it might help someone target a player once they're exposed, or once they've locked on. For that I think that extremely top tier players might behave within fuzzing distance of tool assist, at least some of the time. Dodging might have similar issues. I could even see ML assisting inputs just based on frame-grabs off the screen video output. -- So I'm not sure what client side anti-cheat is supposed to do here.
Aim Assist falls into a category of cheats that are more or less undetectable and unavoidable over the internet: skill assists for something a computer does better than a human. How central these are to the game depends on the game. For a game like Chess, the impact (of consulting a computer to suggest moves) is devastating, but the online community survives. I think it's typical in such communities for truly high stakes competition to happen in person, and for the online scene to be seen as more of a social / practice scene. I like this solution: prevent theft by reducing the value of what can be stolen.
Games that turn heavily on aiming have a similar central security flaw in that it is hard to prevent cheating at the game's central skill. (Though I think in the case of aimbots, sometimes webcams are substituted for LANs, with some success.)
On the other hand, some games are practically cheat-proof. A puzzle game in which you submit actual solutions doesn't require any trust of the client at all. CTF games generally run along these rules - almost anything you can do to solve the puzzle (googling, teaming up, writing tools, bringing AI assistants) is considered fair game. What might be considered a cheat in another context is just advancing the state of the art.
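A tiny illustration of why submit-the-solution games need no client trust: the server only has to check the answer, not how it was produced. The subset-sum "puzzle" here is just a stand-in for any verifiable challenge.

```python
def verify_solution(puzzle, solution):
    """Server-side check of a submitted solution. The client can use
    any tool it likes; only the answer's validity matters."""
    numbers, target = puzzle
    if not all(s in numbers for s in solution):
        return False          # solution uses values not in the puzzle
    return sum(solution) == target

puzzle = ([3, 9, 8, 4, 5, 7], 15)
print(verify_solution(puzzle, [3, 4, 8]))   # True
print(verify_solution(puzzle, [9, 5]))      # False
```

This is the "reduce the value of what can be stolen" idea in code: when verification is cheap and complete, there is simply nothing left for the client to lie about.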
HUD improvements depend on the game. But as a simple example, I play a game where leading a moving target is a major skill; a HUD that gave you an aimpoint for a perfect intercept would be a pretty big cheat.
I think anti-cheat is one of those problem spaces where there is a danger of overemphasizing technical solutions to social problems. Technical solutions are nice, but there are also gaming experiences that are only practical on a private server, with friends, on the honor system. A wise friend once observed that removing griefers and jerks from a community also did a lot to address cheating. I think it is best thought of as a social problem first, though I agree it all depends on the context.
That monitor seems to dynamically do things based on data the game legitimately shows a player.
For some data, like the health bars, a skill / accessibility leveling feature might be to just let the user pick HOW the game displays that data, to customize the UI layout to their needs.
Enemy position highlight based on the minimap vs present location? Yeah, that crosses a clear line, but it's abusing some data the game probably shouldn't have told the player to begin with. What if the minimap reflected the known shape of the world, but only updated with the visible area (standard 'fog of war' mechanic)? Again, it might be within accessibility features to highlight enemies within sight, so I don't see too much issue if the minimap's render state is restricted to the immediate area + what the camera direction could see.
There is a fog-of-war mechanic, called 'vision'. In the game discussed in the monitor article (League of Legends), what is shown on the minimap is restricted to what your team can see at that moment.
The monitor is akin to having an experienced coach watch you play live. Is that also cheating? I think it is.
I also think it's impossible to detect, unless the player suddenly becomes extremely much better at the game. That's the best they can do to catch cheaters at chess. But chess is orders of magnitude easier to monitor, because the game state and input are small and simple.
When I first read about the monitor I realized that for many types of games cheating will become unstoppable. Although sad, the bright side is that it drove me away from online gaming even more, to the benefit of my overall health.
>Pass through a window without lag - That's why the server is sending multiple copies of potential movements and paths through the level for each character, but terminating the ones that are about to reveal their effects (no longer be culled by walls / objects) when they'd send false information to non-cheating players.
So the client must render multiple possible scenes to be prepared? They already have issues keeping a steady fps.
> So I'm not sure what client side anti-cheat is supposed to do here.
Anti-cheat will check other running processes to prevent it. Of course, you could have a totally external system for that, but it would be much more expensive. The goal is not to be perfect but to prevent most players from cheating.
>HUD improvements - like what?
Highlight items, show health percentages in games that don't, highlight barely visible opponents...
Not trusting the clients and redoing all calculations server-side would require massive processing on the server side.
Your idea then multiplies the load on the server.
We do have massive processing on all sides. And the actual processing done for the game stuff is not really that big. Mostly it goes to graphics, sounds and so on.
How much, though? I mean, you're basically doing vector work, and only when it's visible within range, and then when it's visible to the player and where the player hitbox is... oh, I see.
Opening the console and seeing rollback netcode pretty much says enough. Also, the game itself feels great to play even with 160 ms ping, which is only really possible with an architecture similar to Overwatch's.
The first rule of any software backed by a server, and especially of multiplayer games, is that you never trust the client. You could have a perfectly deterministic game where every action is validated on the server and still be defeated by running the game at half speed.
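The half-speed attack works when the server trusts client-reported timing, which is why the server has to measure elapsed time on its own clock. A minimal sketch of that idea (the speed cap and units are made up):

```python
# Sketch: server-side movement sanity check. The key point is that dt
# must come from the server's own clock, never from the client, or a
# slowed/sped-up client can fake how much time has "passed".
import math

MAX_SPEED = 7.0  # units per second; illustrative cap

def validate_move(old_pos, new_pos, dt):
    """Accept a move only if its implied speed is within the game's cap."""
    if dt <= 0:
        return False
    dist = math.dist(old_pos, new_pos)
    return dist / dt <= MAX_SPEED
```

A check like this catches teleports and speed hacks, but note it does nothing against aimbots or wallhacks, which stay entirely within legal game actions.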
Well, because real-time online multiplayer games need to be "FAST", I mean really fast.
Sure, if you're building a web platform today, you can check a user's token against a hash table in the database, but in games? You can't verify the damage numbers users send you, not fast enough.
You absolutely can do that and (almost) all games do that already.
Those types of cheats are DECADES in the past.
Today is all about
a) enhancing normal behavior with artificial precision, not making any "illegal" (from the game's perspective) actions.
b) giving the player information they aren't supposed to have but that is passed to the client for latency's sake.
Sorry, but I don't think you've worked on a multiplayer game in the past 15 years. Verifying damage numbers is a no-brainer. The programmers won't discuss "should we verify damage numbers" at all. It's the norm today.
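For anyone unfamiliar with why this is a no-brainer: the server simply ignores whatever damage number the client claims and recomputes it from authoritative state. A toy sketch (the weapon stats and multiplier are invented):

```python
# Sketch: authoritative server-side damage. The client's claimed number
# is received but deliberately never used; the server derives damage
# from its own copy of the game state.
WEAPON_BASE = {"rifle": 30, "pistol": 12}  # illustrative stats
HEADSHOT_MULT = 2.0

def server_damage(weapon, headshot):
    dmg = WEAPON_BASE[weapon]
    return dmg * HEADSHOT_MULT if headshot else dmg

def apply_hit(target_hp, weapon, headshot, client_claimed_damage):
    # client_claimed_damage is intentionally unused: never trust it.
    return max(0, target_hp - server_damage(weapon, headshot))
```

This is cheap because the server already knows the weapon and hit location; the expensive unsolved problems are the ones above, where the cheat's inputs are indistinguishable from a skilled player's.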
Then all the cheater would need to do is move close to the area of these ghosts and find out which one is the real player. It's also going to be very taxing for the server to create realistic ghost players that move around dynamically.
The reason multiplayer servers implicitly trust clients is because it's a cheaper and proven (less risk) solution.
Traditional anti-cheat can just be slapped on after the game is developed, in most games. If the game is very successful, you can then update it with extra paid protections provided by the anti-cheat tool.
The alternative is a local game engine that works with a partial game state, which is a challenge in itself. Even if you can make it work, you will still have to deal with people "modding" the client to gain an advantage, e.g. enemies painted red instead of camouflage.
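The "partial game state" approach is usually called interest management: the server filters what each client is allowed to know before sending updates, so a wallhack has nothing to reveal. A minimal range-based sketch (real engines also do occlusion/line-of-sight tests, which is the hard part; the view range here is invented):

```python
# Sketch: server-side interest management. Each client only receives
# enemies it could plausibly see; information a modified client never
# gets can't be leaked by a wallhack or ESP overlay.
import math

VIEW_RANGE = 50.0  # illustrative

def visible_enemies(player_pos, enemies):
    """Filter the authoritative enemy list to what this client may see."""
    return [e for e in enemies
            if math.dist(player_pos, e["pos"]) <= VIEW_RANGE]
```

Even with perfect filtering, the second problem in the comment above remains: a client that renders enemies in bright red is using only information it was legitimately sent.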
As someone working in AAA game development, I come across comments like these often, and they never fail to get under my skin. It’s like watching that infamous "Two idiots, one keyboard" scene from CSI—full of confidence, but completely detached from reality.
I don’t mean to sound harsh, but it’s tough to tackle this kind of misconception because it’s stated with such certainty that others, who also might not know any better, just take it as fact.
Here’s the thing: Multiplayer servers trust clients mainly for performance reasons. In AAA game development, anti-cheat isn’t something we focus on right from the start. It typically becomes a priority post-alpha (and by alpha, I’m talking about an internal milestone that usually spans about a year—not the "alpha" most people think of, which is usually closer to an internal "beta", while "public beta" is more like release candidate 1). During that time, the tech team is constantly working on ways to secure the game (make it work, make it correct, make it fast).
If we were to bake in anti-cheat measures from the very beginning of a project, it would force us to scale back our ambitions. Some might argue that’s a good thing, but the truth is, we’d also risk missing critical milestones like First-Playable or Vertical Slice. You simply can’t tackle everything at once—focus is a measure primarily of what you are not doing, after all.
Back when I was working on The Division, we had some deep discussions about using player analytics and even early forms of machine learning to detect "too good" players in real-time. This was in 2014, well before the AI boom. The industry's interest in new anti-cheat methods has only grown since then, I promise you this.
At the end of the day, games are all about delivering an experience. That’s the priority, and a solid anti-cheat system is key to ensuring it. Endpoint security is currently the best solution we have because it doesn’t bog down the client with delays or force awkward mechanics like rollbacks or lock-step processing. Plus, it lines up with the (very heavy) optimisations we already do for consoles.
Nobody in this industry wants to install a rootkit on your PC if we can avoid it. It’s just the best trade-off (for all parties, especially gamers) given the circumstances. And let's be clear—these solutions are far from cheap. We pay a lot to implement them, even if some marketing material might suggest otherwise.
Did the division have an anticheat when it was released? I remember it being really bad some time after release, like a few steps above most other games in both the number of hackers and their abilities (not just the usual aimbot/esp).
Yes, we did, but it wasn’t good enough (it was the machine learning system I talked about). We later added EAC as well, the situation improved but cheating was still rampant.
Makes sense: an ineffective AC and few server-side checks. I think the community consensus was that there was no AC at all. I played the Dark Zone quite a bit; it was kind of the first in the raid looter-shooter genre. Had a lot of fun with the jumping-jacks "bug".
It's really hard to tell if someone's cheating based on the things you can check, because it can look like low ping or just a slightly better-than-average player. In those cases, our genuine best players might accidentally trigger it (which has happened).
There are egregious examples of cheating, sure, but those people are always banned within the hour.
The real killer was the free weekends, it makes it so that there is no “cost” to cheating for a while since being banned on a fresh account has no meaning.
>It’s just the best trade-off (for all parties, especially gamers)
I fail to see how pimping out my PC to code that no one can verify is a good deal. The takeaway is, have a separate hardware to play games on and don't let it touch anything private?
> because it's a cheaper and proven (less risk) solution
I mean... didn't you just essentially say he's right? Things are done the way they are because of performance (aka "cheaper") and to meet project goals (aka "less risk")
Those aren't bad reasons at all, and it makes perfect sense, especially when you consider already locked-down platforms like consoles. But it seems to me, from what I read here, that the reasons are ultimately cost and risk.
Sure, but you'll need to do more than "read several HN articles over the years" (presumably completely unrelated to anti-cheats?) to get even a basic understanding of how anti-cheats work, as he went on to demonstrate.
I also just thought it was unintentionally funny, like a comedic setup for a stereotypically cocky HN user to comment with great confidence on something way outside of their field of expertise.
(not saying that's the case for mjevans)
<sarcasm> oh dang you should be a multiplayer engineer. Sounds like with barely even thinking about the problem space you've solved what thousands of extremely talented and knowledgeable engineers never could! </sarcasm>
Servers very much distrust the client. Obviously. That's literally rule #1. Don't trust the client!
Comments like yours are extremely irritating. Please don't behave this way with your co-workers.
Anyhow, there's all kinds of types of cheats for different kinds of games. There's a variety of mitigations for each kind. I don't think there's a multiplayer shooter on the planet that has fully solved aimbots. For however clever you think you are I promise the cheat makers are much, much more clever. :)
Touch grass buddy. In their first sentence they openly admit this isn’t their area of expertise. Hence them asking a question to people who know more than them. “Don’t behave this way to your co-workers” is much better advice for your comment than for GP’s.
Went outside with my dogs and successfully touched grass. It’s growing back in nicely after a week of rain.
“Why can’t you just” guys are extremely irritating. I implore OP to not be a “why can’t you just” guy at work. What is a WCYJGuy? Someone who has no knowledge of a domain but proposes solutions under the implication that there is a simple solution that they are oh so clever to have instantly discovered. It takes a lot of time and effort to explain “no you can not just” to someone who doesn’t have the pre-requisite knowledge.
[1] https://youtu.be/kTiP0zKF9bc
[2] https://www.anybrain.gg/