How do video games stay in sync? (medium.com/geekculture)
303 points by whack on May 28, 2022 | 164 comments



I've been working on my own realtime networking engine[0] and I think there are a few important points related to network syncing that are not mentioned in this article:

1) Bandwidth. The user's internet can only handle so much network throughput, so for fast-paced games (where you're sending data to each client at a rate of 20+ frames per second) it becomes important to optimize your per-frame packet size. This means using techniques like binary encoding and delta compression (only send diffs); a small sketch of the delta idea is at the end of this comment.

2) Server infrastructure. For client-server games, latency is going to be a function of server placement. If you only have a single server that is deployed in us-east and a bunch of users want to play with each other in Australia, their experience is going to suffer massively. Ideally you want a global network of servers and try to route users to their closest server.

3) TCP vs UDP. Packet loss is a very real problem, and you don't want clients to be stuck waiting for old packets to be resent to them when they already have the latest data. UDP makes a major difference in gameplay when dealing with lossy networks.

[0] https://github.com/hathora/hathora
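
To make point 1 concrete, here is a minimal sketch of the delta compression idea (TypeScript, all names hypothetical; a real engine would write packed binary rather than plain objects): only the fields that changed since the client's last known snapshot go on the wire.

    // A toy field-level delta: send only what changed since the last acked snapshot.
    type Snapshot = Record<string, number>;

    function diff(prev: Snapshot, curr: Snapshot): Snapshot {
      const delta: Snapshot = {};
      for (const key of Object.keys(curr)) {
        if (prev[key] !== curr[key]) delta[key] = curr[key]; // changed fields only
      }
      return delta;
    }

    function apply(base: Snapshot, delta: Snapshot): Snapshot {
      return { ...base, ...delta }; // overlay the diff onto the client's last known state
    }

    // A player that only moved on the x axis produces a one-field packet.
    const prev = { x: 10, y: 5, hp: 100 };
    const curr = { x: 12, y: 5, hp: 100 };
    console.log(diff(prev, curr)); // { x: 12 }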


> 1) Bandwidth. The user's internet can only handle so much network throughput, so for fast-paced games (where you're sending data to each client at a rate of 20+ frames per second) it becomes important to optimize your per-frame packet size. This means using techniques like binary encoding and delta compression (only send diffs).

Games like Blizzard's Warcraft III / StarCraft II and the Age of Empires article linked here in this thread (1500 archers on a 28.8 kbps modem) and oh so many other games approach that entirely differently: the amount of input users can produce is tinier than tiny. So instead of sending diffs of the game state, they send user inputs and the times at which they happened. Because their engines are entirely deterministic, they can recreate the exact same game state for everybody from only the timed user inputs.

Fully deterministic game engines also make bugs easy to reproduce and allow for tiny save files.

Negligible network traffic. Tiny save files. Bugs are easy to reproduce. When the game allows it, it's the only reasonable thing to do.
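
A minimal sketch of the lockstep idea (TypeScript, hypothetical names): nothing but timestamped inputs cross the network, and every peer advances the simulation with the same deterministic step function, so replaying the input log reproduces the whole game.

    interface TimedInput { tick: number; playerId: number; command: string }
    interface GameState { tick: number; gold: number[] }

    // Must be fully deterministic: same state + same inputs => same result on every machine.
    function step(state: GameState, inputs: TimedInput[]): GameState {
      const gold = [...state.gold];
      for (const input of inputs) {
        if (input.command === "mine") gold[input.playerId] += 1;
      }
      return { tick: state.tick + 1, gold };
    }

    // Replaying the input log from the initial state reproduces the exact game,
    // which is also why save files and replays can be tiny.
    function replay(initial: GameState, log: TimedInput[], upToTick: number): GameState {
      let state = initial;
      for (let t = 0; t < upToTick; t++) {
        state = step(state, log.filter(i => i.tick === t));
      }
      return state;
    }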


This presents a (relative) vulnerability to cheating. If every computer has the full game state but players aren’t supposed to be able to know some things there is the potential for hacks.

The most obvious version of this in StarCraft is maphacks that let you see through fog of war, although that’s far from the only thing.

Poker meets all the technical requirements here, but sending everyone the contents of all hands would be a disaster.


> Poker meets all the technical requirements here, but sending everyone the contents of all hands would be a disaster

I work in the gambling space. A few notes: gambling games don't ever rely on physics (even roulette, or a coin dozer type of game - everything is decided by a certified RNG; no regulatory body that I am aware of allows outcomes based on physics engines). This means there is far less data to keep state on (a hand of cards is a very tiny JSON blob to send). Games like poker etc. don't require "real time": if a player takes 4 seconds to decide whether they want to call/raise/fold, then an extra 200ms of latency isn't even going to be noticeable. So we don't really care if there is a bit of latency; these aren't FPS games.


Yep - even apparently physics-based digital casino games (think pachinko-style) are not allowed to use the real physics, that's really just faked as an animation to match the strictly controlled odds that can be easily verified by code inspection.


That should be considered cheating!

Games that pretend to be physics-based but are in reality driven by a probability engine behind the scenes.


People who gamble understand this, it's literally the law.


This comes up in Minecraft too, and there was a small arms race around it. For the unfamiliar - certain resources in the game are useful and valuable but also rare (diamonds) and require the player to spend a decent amount of time digging through the ground to find them.

But, since you have the whole game state, you can sift through the data and pinpoint these resources and acquire them quickly and with almost no effort. In multiplayer this is generally considered cheating and is called an "xray" modification to the game client. There are other variations of this hack that involve changing the game's textures to transparent images except for the specific resources you want to find.

Multiplayer server administrators don't like cheats, so they created countermeasures for this. The best example is probably Orebfuscator, which "hides" said valuable resources until the player is very close to them.

https://dev.bukkit.org/projects/orebfuscator


Can't you still gain an unfair advantage using Bayesian search theory where probability drops to zero at the "revealing radius"?

Or is the "revealing radius" somewhat randomized over time in a way that's invisible to the client?


I mean, if you can acquire or otherwise reverse-engineer[0] the game seed, you can also just find resources by loading a local copy and noting the coordinates of ore. For major servers, anti-xray plugins will be installed as due diligence, but most of the anti-cheat efforts are focused on detection, reverting, and banning.

Ultimately, if you have a big enough server to attract serious cheaters, you will (or at least should) have tools that can also detect suspicious behavior based on heuristics (e.g. seeing if a player mined straight to an ore block). Tools like CoreProtect[1] can help detect and revert this.

Ore obfuscation still works very well, however, for the majority of casual cheaters who just googled "hacked minecraft client" and installed the first result.

One ore obfuscation technique used in PaperMC actually sends intentionally fake data to the user to "muddy the waters"[2].

(I know a lot of this because I help develop a Minecraft server management tool)

[0]:https://www.youtube.com/watch?v=GaRurhiK-Lk

[1]:https://github.com/PlayPro/CoreProtect/

[2]:https://docs.papermc.io/paper/anti-xray


https://en.wikipedia.org/wiki/Mental_poker might provide a means by which you could have a match be verifiable by all parties after the fact, but not leak info during it.


Deterministic games suffer from desync and issues with input limitations. It is true that war3 does this, but it has some serious drawbacks.

It also makes the client easier to cheat on and gives one player (the host) a preferential ping.

Most competitive FPSs use a server-authoritative model instead of a replayable deterministic one because of this.

If you want to see the limitations, head into the old war3 map editor forums and look up the hacks using automated clicks between placeholder variable units just to move a few bytes of data between clients so they can persist character stats between games.


I never really got into SC2, but Warcraft 3 and AOE2 have pretty major problems with online gameplay due to using deterministic lockstep. Back in the day, it wasn't uncommon for people to lag out in Warcraft 3, which would freeze the game for the whole lobby until either enough time passed that you could kick them, or they stopped whatever was making them lag in the background.

My friends and I actually quit playing AOE2DE because about 1/3-1/2 of team games had someone lagging from the start, which makes the game slow and choppy for every other player (and this was despite the game having a benchmark you had to complete at a certain framerate to play matchmaking online). Spending the next hour in an unplayably laggy mess of a game just isn't fun.

I know Supreme Commander (supcom) also has problems with 1 player lagging causing everyone else to lag.

There's also the matter of it being much harder to actually program a deterministic game, especially once you try to multithread it (which is really necessary for a modern RTS, but a nightmare for determinism). Fixing all desyncs is very difficult (AOE2 and WC3 both still have desync bugs; in AOE2DE some people use them to cheat their way up the ranked ladder, desyncing whenever they're losing so it counts as a draw instead of a loss).

I've heard the team behind the upcoming Sanctuary RTS (an indie supcom spiritual successor in development) talked to a bunch of RTS industry vets and was told that it's not worth doing a deterministic simulation these days unless you want over ~10k units. AI War 2 [1] has a pretty interesting network model for multiplayer: they've got a semi-deterministic simulation and self-heal clients' simulations if they diverge from the host, which allows it to be heavily multithreaded and have 10k-100k+ units. You'd probably have to use dedicated servers for a competitive ranked mode if you went that way (and they'd be heavier to host, for a smaller per-game player count than e.g. an FPS or a survival game).

[1] https://wiki.arcengames.com/index.php?title=Category:AI_War_...


1) Bandwidth is pretty irrelevant now. Even players on cellular networks have megabits of bandwidth. I stopped spending a large amount of time optimizing for packet size while building the networking for Dota 2. Nobody is playing on a 14.4k modem anymore.

2) Server placement is still an issue. It's still ~200ms round trip from New York to Sydney for example. Fortunately, cloud infrastructure can make getting servers closer to your players much easier now. You don't have to physically install servers in data centers in the region.

3) Packet loss still occurs, but it is rare enough that the gap between using TCP and UDP is narrowing. Modern TCP implementations like Microsoft's are amazing at handling loss and retransmission. However, I'd probably use QUIC for game networking if I were to write an engine from scratch these days.


Having worked on a fairly popular .io game mostly played by kids on phones and chromebooks over wifi, I concur with everything you said.

1) We updated around 60Hz, and bandwidth was never an issue (everything was binary encoded and many values were hand compressed to the number of bits they needed; see the rough bit-packing sketch at the end of this comment. We didn't run any additional compression or ever feel the need to optimize further, and these were games with 100 players in an instance).

2) Probably the biggest key to success was global server placement. That mattered most, and we ended up renting servers in 10-20 regions around the globe to keep latency down for players. I didn't work on this part, but I know it was quite a bit of work, and also very experimental. Physical proximity of servers didn't always translate to lower latencies. Country borders could be surprisingly laggy in certain cases.

3) This is the one that really shocked me. As I said, players were engaging with the game in almost the worst conditions imaginable, weak chromebooks over wifi and all communication over websockets (which basically behave like TCP). Still, packet loss was not an issue. We prioritized low latency over smoothness, so our server just blasted out the latest state to the client ~60 times a second, and the client would display it mostly like a dumb terminal. This is approximately how you're supposed to do it with UDP, but dropped packets and resends are supposed to make it unworkable over TCP. But we just YOLO'd it with TCP and it worked great! I'm sure at some point in the past, before internet infrastructure got so good, it would have been a disaster, but it seems like, for most players, we've advanced past that.

Now, I know partially this just weeded out the people with bad connections, but I really don't think it was that many. Certainly my own experiments taking my laptop and checking out various wifi spots around town with wireshark indicated that modern infrastructure is just that good.

(Actually, in terms of improving latency, beyond making sure people were playing on local servers, the next biggest thing was just optimizing JavaScript. Both the client and server were written in it, and GC stalls were a huge problem. The eventual solution was to rewrite the server to not generate that much garbage in the first place and then just disable the GC. Reboot the server process after ~10 minutes between games. Another big JS issue was code getting de/reoptimized continually. The biggest issue was inconsistent numerical literals between float and integer. Once we figured out it was an issue, we became very disciplined about that.)
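
For the bit packing mentioned in point 1, here is a rough sketch of the idea (TypeScript, names made up; this is the concept, not our actual code): each value gets exactly as many bits as it needs instead of a full byte or float.

    class BitWriter {
      private bits: number[] = [];

      write(value: number, bitCount: number): void {
        for (let i = bitCount - 1; i >= 0; i--) {
          this.bits.push((value >> i) & 1); // most significant bit first
        }
      }

      toBytes(): Uint8Array {
        const bytes = new Uint8Array(Math.ceil(this.bits.length / 8));
        this.bits.forEach((bit, i) => {
          bytes[i >> 3] |= bit << (7 - (i & 7));
        });
        return bytes;
      }
    }

    // Position quantized to 10 bits per axis plus a 7-bit health value fits in
    // 27 bits (4 bytes) instead of two 32-bit floats plus an int.
    const w = new BitWriter();
    w.write(513, 10); // x in [0, 1023]
    w.write(300, 10); // y in [0, 1023]
    w.write(95, 7);   // health in [0, 127]
    console.log(w.toBytes().length); // 4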


I understand if you wish to keep it private, but what was the name of the .io game you worked on?


Another key requirement that must be considered is packet ordering. With games you care about the latest state, so discarding older out-of-order packets is a better strategy than waiting for retransmissions to put them back in order the way TCP would.
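
A minimal sketch of that strategy (TypeScript, hypothetical names): tag each datagram with a sequence number and simply drop anything older than what has already been applied, instead of waiting for retransmissions the way TCP does.

    interface StatePacket { seq: number; payload: Uint8Array }

    let latestSeq = -1;

    function onPacket(packet: StatePacket): void {
      if (packet.seq <= latestSeq) {
        return; // stale or duplicate: drop it, newer state has already been applied
      }
      latestSeq = packet.seq;
      applyState(packet.payload);
    }

    function applyState(payload: Uint8Array): void {
      // decode and apply the newest snapshot here
    }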


You only care about the latest state for some events. Only events which will soon be superseded by a later event should go over UDP. Move A to X, sent on every frame, fine. Create monster at Y, no.

If you find yourself implementing reliability and retransmission over UDP, you're doing it wrong. However, as I mention occasionally, turn off delayed ACKs in TCP to avoid stalls on short message traffic.

Reliable, no head of line blocking, in order delivery - pick any two. Can't have all three.


Why not use Valve's game networking stuff? Just curious.


There's another way to look at this. The more data you send per packet, the more that can be reasonably interpolated by the client in between updates. Diffs also become impossible, of course, in cases where you're using UDP. So for instance, imagine you're only sending visible targets to a player in updates, and then there is a brief stutter - you end up risking having a target magically warp onto the player's screen, which is obviously undesirable. Pack in everybody's location (or at least maybe those within some as-the-crow-flies radius) and the client experience will break less frequently. Of course, like you said, the bandwidth then goes up.


I’ve written a diffing algorithm using UDP. You tell it to diff against a previous packet with an id. Every so often you send a full key frame packet so they always stay in sync and have full game state.

It works really well and cut my network traffic down by a whole couple orders of magnitude.

The trick is to figure out update grouping so you can create clean groups of things to send and diff on. Ultimately delta compression doesn’t even care what the data is, so modern net stacks do some really efficient compression in this way.


> I've written a diffing algorithm using UDP. You tell it to diff against a previous packet with an id. Every so often you send a full key frame packet so they always stay in sync and have full game state.

Right. That's how video streams work, too. Every once in a while there's a complete frame, but most frames are diffs.


The key here though is that your server keeps the last N ticks of state (probably around 20) and calculates the diff for each player based on the last id they reported seeing. This way missing an update doesn't get you completely out of sync until the next full state sync, it just gets you a larger diff.
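
A minimal sketch of that server side (TypeScript, hypothetical names; the diff here is naive field comparison):

    type Snapshot = Record<string, number>;

    const HISTORY = 20;                          // keep the last N ticks of state
    const history = new Map<number, Snapshot>(); // tick -> full state

    function record(tick: number, state: Snapshot): void {
      history.set(tick, state);
      history.delete(tick - HISTORY); // bounded window
    }

    function packetFor(lastAckedTick: number, currentTick: number): Snapshot {
      const base = history.get(lastAckedTick);
      const curr = history.get(currentTick)!;
      if (!base) return curr; // client too far behind: fall back to a full keyframe
      const delta: Snapshot = {};
      for (const key of Object.keys(curr)) {
        if (base[key] !== curr[key]) delta[key] = curr[key]; // a missed ack just means a bigger diff
      }
      return delta;
    }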


There is a useful intermediate approach, send more entities but use an importance algorithm to control how frequently each has their data sent. Clients keep knowledge of more entities this way, but bandwidth usage/frame can be kept stable.
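
One common way to sketch such an importance scheme (TypeScript, hypothetical names; a priority-accumulator style, which may differ from what any particular engine actually does): every entity accumulates priority each tick and only the top few fit into each packet, so nearby or fast-moving entities are sent more often without bandwidth spikes.

    interface Entity { id: number; importance: number; accumulated: number }

    function pickEntitiesForPacket(entities: Entity[], slotsPerPacket: number): Entity[] {
      for (const e of entities) e.accumulated += e.importance; // important things grow faster
      const chosen = [...entities]
        .sort((a, b) => b.accumulated - a.accumulated)
        .slice(0, slotsPerPacket);
      for (const e of chosen) e.accumulated = 0; // reset whatever gets sent this frame
      return chosen;
    }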


Sounds like the recent changes Star Citizen made via the entity component update scheduler.


Diffs aren't impossible over UDP. The client should be sending to the server which states it has locally, along with inputs. Then since the server has a (potentially non-exhaustive) list of recent states it knows the client has seen, it can choose to make a diff against the most recent of those and tell the client which state the diff is against. Then the client and server just need to keep a small buffer of pre-diff-encoded packets sent/received.


Another trade off with your approach of sending non-visible entities ahead of time is that it makes wall hacks possible.

Anyone aware of any conceptual way to “encrypt” the location data so it’s only usable if the player has line of sight? I doubt that’s easy/possible but don’t even know where to begin searching for research around topics like that.


Here are some articles that address wall hacking - not quite what you're looking for, but still a great read.

https://technology.riotgames.com/news/demolishing-wallhacks-...

https://technology.riotgames.com/news/peeking-valorants-netc...


Not quite what I originally had in mind but interesting idea of storing the remote entity locations in the trusted enclave: https://lifeasageek.github.io/papers/seonghyun-blackmirror.p...


How does UDP work if you're also using delta compression? I would naively expect that the accumulation of lost diff packets over time would cause game state drift among the clients.


The simplest way I've done it: say client and server start on tick 1, and that's also the last acknowledgement from the client that the server knows about. So it sends a diff from 1 to 2, 1 to 3, 1 to 4, until server gets an ack for tick 3, for example. Then server sends diffs from 3 to 5, 3 to 6, etc. The idea is that the diffs are idempotent and will take the client to the latest state, as long as we can trust the last ack value. So if it's a diff from 3 to 6, the client could apply that diff in tick 3, 4, 5 or 6, and the final result would be the same.

This is done for state that should be reliably transmitted and consistent. For stuff that doesn't matter as much if they get lost (explosion effects, or what not), then they're usually included in that packet but not retransmitted or accounted for once it goes out.

This is different (and a lot more efficient) than sending the last N updates in each packet.


> The idea is that the diffs are idempotent and will take the client to the latest state, as long as we can trust the last ack value. So if it's a diff from 3 to 6, the client could apply that diff in tick 3, 4, 5 or 6, and the final result would be the same.

Can you elaborate or give an example of how this works?


Imagine the following changes each tick:

    1: x = 1
    2: x = 2
    3: x = 3, y = 5
    4: x = 4
    5: x = 5
    6: x = 6
    7: x = 7, y = 1
Diff from 2 to 4 would be "x = 4, y = 5".

Diff from 3 to 6 is "x = 6", which will always be correct to apply as long as the client is already on ticks 3~6. But if you apply it at tick 2, you lose that "y = 5" part. This can't happen in bug-free code because the server will only send diffs from the latest tick it knows for sure the client has (because the client sends acks).


Cool thanks, that makes sense! In my head I was thinking the diff from 2 to 4 would be something like "x += 2, y += 5", and 3 to 6 would be "x += 3, y += 0"... which of course wouldn't be idempotent and wouldn't allow you to apply the update to different client states.


You can extend it to practical use by imagining these terms as:

    entity[123].active = true
    entity[123].x = 4
    entity[123].y = 8
Then later...

    entity[123].active = false
And with special rules such that if `active = false`, no other properties of the entity need to be encoded. And if `active = true` is decoded, it sets all properties to their default value. Then you get a fairly simple way to transmit an entity system. Of course you'd want to encode these properties in a much smarter way for efficiency. But the basic idea is there.
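
A rough sketch of that encoding rule (TypeScript, hypothetical names, plain objects standing in for the packed bits):

    interface EntityState { active: boolean; x?: number; y?: number }

    function encodeEntity(id: number, prev: EntityState | undefined, curr: EntityState): object {
      if (!curr.active) return { id, active: false };                  // despawn: nothing else needed
      if (!prev || !prev.active) return { id, active: true, ...curr }; // spawn: defaults plus overrides
      const delta: Record<string, unknown> = { id };                   // otherwise: changed fields only
      if (prev.x !== curr.x) delta.x = curr.x;
      if (prev.y !== curr.y) delta.y = curr.y;
      return delta;
    }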


That is a fascinating use of idempotence, bravo!


If you get your data small enough to fit multiple updates into a single packet, you can send the last N updates in each packet.

If your updates are bigger; you probably will end up with seqs, acks and retransmitting of some sort, but you may be able to do better than sending a duplicate of the missed packet.


Exactly, you assign a sequence number to each update, have the client send acks to convey which packets it has received, and the server holds onto and sends each unacked update in the packet to clients (this is an improvement over blindly sending N updates each time, you don't want to send updates that you know the client has already received).

If the client misses too many frames the server can send it a snapshot (that way the server can hold a bounded number of old updates in memory).
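
A minimal sketch of that scheme (TypeScript, hypothetical names): resend everything unacked in each packet, and fall back to a snapshot when the backlog grows too large.

    interface Update { seq: number; data: string }

    const MAX_PENDING = 32;
    const pending: Update[] = [];

    function queueUpdate(u: Update): void {
      pending.push(u);
    }

    function onAck(ackedSeq: number): void {
      // drop everything the client has confirmed
      while (pending.length > 0 && pending[0].seq <= ackedSeq) pending.shift();
    }

    function buildPacket(): { updates: Update[] } | { snapshot: true } {
      if (pending.length > MAX_PENDING) {
        pending.length = 0;        // client is too far behind: resync with a full snapshot
        return { snapshot: true };
      }
      return { updates: [...pending] }; // every unacked update rides along until acked
    }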


You just described TCP


It's close but TCP will retransmit frames rather than packing multiple updates in a single frame.

It's common for people to build this kind of retransmission logic on top of UDP (especially for networked games), it's sometimes referred to as "reliable UDP".


It’s not TCP, it’s TCP without head-of-line blocking, which makes it much more suitable for real time games.


TCP forces sequencing across all packets, SCTP is a bit closer.


You don’t delta compress everything, only a significant part of the payload. Each diff is referential to a previous packet with a unique id. If you don’t have the previous packet, you just ignore the update.

Every 30 frames or so, you send a key frame packet that is uncompressed so that all clients have a consistent perspective of world state if they fell behind.

Using sentTime lets clients ignore old data and interpolate to catch up if behind as well.

It does work, I wrote one from scratch to create a multiplayer space rpg and the bandwidth savings were incredible.


How does UDP / packet loss work with delta compression? If you’re only sending diffs and some of them may be lost or received out of order, doesn’t that break delta compression?


The same way it works for video codecs which also send diffs over UDP. There are mechanisms to introduce redundancy in the stream, ask for retransmission, handle missing information.


I did my PhD on exactly this area (thesis on yousefamar.com, DM me if you'd like to chat about gamedev and netcode!), optimising for cost and easy setup to support indie devs.

Afterwards, when I tried to validate this as a product (libfabric.com) I realised that I'm trying to solve a problem that nobody has, except for a very small niche. The main problem that developers of large-scale networked games have is acquiring players. They use whatever existing service+SDK (like Photon) for networking and don't think about it. Once they have good traction, then they can afford scalable infrastructure that others have already built, of which there are many.


Good point about the niche problem. It reminds me of freemium pricing where for low volume the service is free but above a threshold (when customers can typically afford to pay) it gets expensive.

It'd be nice to have some global service to host your game servers, controlled by a single slider linking ping to price. With the slider at its cheapest end, there's only one server and it's in the cheapest datacenter. At the most expensive end, there are servers all over the world linking players as locally as possible.


Related interesting read:

1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond

https://www.gamedeveloper.com/programming/1500-archers-on-a-...


Speaking of networking older games, I think the "TRIBES Engine Networking Model" is also an interesting read. Managing ~128 players over the internet in a single game back in the late 90's was no mean feat. A lot of these kinds of optimizations are still greatly applicable even today!

https://www.gamedevs.org/uploads/tribes-networking-model.pdf


I'm a Tribes 1 and 2 vet. I think the largest games I played were still capped at 64 people. It was definitely some impressive network code, for sure. Latency was still a huge issue, but what really helped that game out was the fact that it basically required being able to accurately predict everything yourself. You constantly had to predict how high up you could jetpack, when to initiate the jetpack, where you should land on a slope, where you should jump from, where to shoot "ahead" to get "mid-air" shots (shooting people in the air as they're jetting past you at incredible speeds). This act of priming oneself to the game's fast-paced environment made the latency far more tolerable than it probably would have been.


This article explains what everyone figures out when making their own engine (basing it off guessing what other games do and reading 2 blog posts instead of diving into philosophical rabbit holes). It also misses that most games are made with frameworks/libs now that give the dev no control over which things require round trips to the server (I assume this is the explanation for why Fortnite took 2 years to fix weapon switch lag and LoL still has round trip bugs).

What immediately happens in practice with interpolation is that every second player has a bad network connection, so you get unevenly spaced movement packets from him and he just warps from here to there (via teleport or smoothed movement, neither of which looks good or is possible in single-player mode), among other problems. Interpolation also adds latency, which is already a constrained area.

Your game should already be designed to handle a constant rate of inbound and outbound packets, and is broken if it sends them on demand instead of packing them into this stream of evenly paced, constant-rate packets. If you can't send packets for a few frames you should be punished until you switch ISPs, as opposed to getting an advantage because you teleport somewhere on the other clients' screens. The idea of "right away" is a misconception here. Since you get packets at a constant rate (which should be high enough to be displayed directly on screen), interpolation is not necessary. 60 tick is literally nothing for a game with less than 100 players and little to no moving objects.

Of course, most gamedevs are not concerned with this stuff as the framerate drops to 10 in most games if 1-6 players are on screen, depending on how shit their game is. Also, client side damage is a mistake.


> also misses that most games are made with frameworks/libs now that give the dev no control over what things require round trips to the server (I assume this is the explanation for that; Fortnite took 2 years to fix weapon switch lag

I can't speak for riot/league but I've worked in unreal for close to a decade and the engine provides complete control over what requires a round trip. I won't speculate on the Fortnite weapon switch lag (although I did work for epic on Fortnite at the time), but these kinds of bugs happen in the same way any other bug happens - accidental complexity. You call a function that you know requires a round trip, but then 6 months later someone else calls your function but doesn't realise that there's a round trip in there.

> Since you get packets at a constant rate (which should be high enough to be displayed directly on screen), interpolation is not necessary.

This is just nonsense. There is no such thing as a perfect connection, particularly when you're communicating across the internet. Even in your perfect world situation it doesn't work - if both you and I are 16 ms away from the server and it's running at 60hz (which is a stretch too - many games are running at much lower update rates because of the expense of running these services), in the worst case you have over 60ms of latency to handle, which is 4 frames.

> Of course, most gamedevs are not concerned with this stuff as the framerate drops to 10 in most games if 1-6 players are on screen depending on how shit their game is

This is the sort of comment I expect on Reddit and not here. Most developers, myself included would do anything in their power to avoid that happening, and I can't think of a single game that drops to even close that bad that was released in the last decade.

> Also, client side damage is a mistake.

Client side hit detection is a trade off. On one end it allows for abuse, but most client interpolation systems (including the one that comes out of the box with unreal engine) will mostly negate that. On the other, it allows for local-feeling play in a huge number of situations.


> It also misses that most games are made with frameworks/libs now that give the dev no control over what things require round trips to the server (I assume this is the explanation for that; Fortnite took 2 years to fix weapon switch lag and LoL still has round trip bugs).

League of Legends is using a custom game engine built from the ground up, it's always been a really buggy mess though.


Fortnite is on Epic’s own engine so that seems a little unlikely.


Not everyone in the company has the same knowledge.


The teams are pretty darn interlinked and the engine is source available. Seems highly unlikely they aren’t able to discuss things if they’re important.


> Also, client side damage is a mistake.

I'm assuming you mean client side hit detection. As a person who lives in a region where a lot of games are 100 ping, that's absolutely necessary. Without it, players with latency above the tick rate would have to lead their shots to hit other players. While it causes some unfairness for the victim (e.g. getting hit behind cover), it's still the best way to do it, but it must be disabled above a certain threshold, preferably one where it doesn't completely ruin the experience for players with average connections playing on their closest server. That said, it is a band-aid and ideally you would just set up servers closer to players.


> ideally you would just set up servers closer to players.

That doesn't really solve the problem though. In my last apartment a ping to my router on WiFi was 15ms, and it had some nonsense hardware that caused spikes of 100ms+ every now and again [0]. Someone else in your household watching Netflix can cause buffer bloat, and in a multiplayer game you have both players connections (or 10 or 100 players depending on the game). Even at 10ms latency to the server, you have a worst case network latency of 50ms if the server runs at 30hz (plus network buffering plus client frame rates plus buffering plus render buffering...)

You also need the population to support having servers everywhere, and choose to put servers in places, _and_ you very quickly hit diminishing returns. Sure, you could locate in 3 cities in the UK to absolutely minimize latency, but the advantage of doing so is completely eliminated if one player is on WiFi, so the servers for all of Europe might as well be in Amsterdam or Dublin.

Completely agree otherwise!

[0] https://www.ispreview.co.uk/index.php/2018/08/intel-coughs-t...


I know this might oversimplify, or perhaps is obvious to many, but when I got into amateur game dev one surprise was realizing that real-time games are just turn-based games where you don’t control the turns.


If you mean the networking architecture, that seems indeed like an oversimplification. AFAIK lockstep synchronization isn't a good networking strategy, and most games' netcode will have some prediction and rollback components.


You're conflating game logic with engine logic here - whatever hijinks the engine pulls to make things seem seamless (prediction / replay etc.) is (or at least SHOULD be) orthogonal to the game logic. In game logic terms all games are turn based because there are no infinitely fast computers. The turns just happen at your game's tick rate which might be pretty quick.


> The turns just happen at your game's tick rate which might be pretty quick.

Except because of latency, it's like taking turns when you may be ahead of other players in turns or behind other players and the game does rollbacks to make the turns make sense once they've all arrived and how many turns ahead/behind you are is also dynamic.

It ends up having a very real effect on the actual game mechanics. In CS:GO a player peeking out from an obstacle has the advantage over the player waiting for them to pop out.

https://on-winning.com/csgo-beginners-guide-peeking/


Even with an infinitely fast client computer, the server shouldn't accept infinite ticks from one player before processing ticks from another.


If one player is lagging, this would lag every other player. It also allows an easy "freeze the world" hack if the malicious player blocks their network connection for the game.


Where did I make a distinction between engines and game logic? How can you say I conflated anything?

Anyway. I only have unfinished attempts at low-latency netcode and rollback, so can't say I'm speaking from solid experience. But I would doubt that engines implement rollback netcode for you. Essentially the game needs to be structured in a way to accommodate storage of game state as snapshots. And it needs to decide how to incorporate messages that arrive late.


Again you're conflating netcode with game rules. The players don't know of any rollbacks. That's not part of the game, just the implementation of the client.

The comment was that (surprisingly) all games are single threaded and feel very turn based. Even real-time games.


This was not about players. It's about the implementation.

I also don't know what you mean by "games are single threaded and feel very turn based". Games are usually not single threaded (insofar as most games run on multiple OS threads). If you want to say they feel single threaded, then it might be because the screen emits one frame after another, and you can influence the next frame by your action? But I don't know how that is an interesting insight and it doesn't seem to have anything to do with how games stay in sync.

You might have something interesting to say, but I wasn't able to learn anything from your post. Add to that the slightly accusing tone of it ("again conflating") I can't help but be annoyed.


Most game logic is indeed single threaded and most multiplayer games are single thread authoritative. Because the logic is single threaded it can feel turn based because it has to handle things in a time sliced way.


The game logic doesn't have to be single threaded but it should be deterministic I guess.

But anyway, this is like the most basic insight and doesn't touch what I said at all. It does not explain how games stay in sync, unless you go back to shitty 90's lockstep netcode.

As I said in my first comment.


that explains how it was possible for the developers of Diablo 1 to turn the game from turn-based to real-time in a single afternoon :)

yes, originally it was developed as turn-based and I often wonder if that's one reason why the animations are sooo satisfying. But could be that they simply had great animators.


I've never thought of it in that specific way before (although obviously when you're writing the game that's how it goes) and that's a great way to explain it. Thanks!


Seems a fine way to think about things. Essentially turn based games treat player actions as a clock tick?


They're real-time games but over the characters looms an all-powerful controlling entity (like a TurnBasedActionController object) that will only allow one character at a time to think about and execute an action. The others can do fun taunting animations but not much more.


I learned this from games that were ported to newer platforms with a faster FPS. For example, Dark Souls II ran at 30 FPS at release, then was upgraded to 60 FPS when ported to PC, but doubling the FPS also doubled the pace at which weapons break.


There’s an architecture to avoid this by decoupling the simulation tickrate from the renderer.

The really simple way is to just pass a delta milliseconds to each system so it can simulate the right amount of time.

But yeah, it was wild how DS2 at 60fps fundamentally altered a lot of things like weapon durability and jump distance.
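
A minimal sketch of that decoupling, in the spirit of the classic fixed-timestep loop (TypeScript, hypothetical names): the simulation always advances in constant steps no matter how fast the renderer runs, so things like durability and jump distance cannot drift with frame rate.

    const SIM_DT = 1 / 30; // simulate at 30 Hz even if rendering at 60+ fps
    let accumulator = 0;
    let previousTime = Date.now() / 1000;

    // call once per rendered frame, however fast the renderer runs
    function frame(): void {
      const now = Date.now() / 1000;
      accumulator += now - previousTime;
      previousTime = now;

      while (accumulator >= SIM_DT) {
        simulate(SIM_DT);           // always the same step size
        accumulator -= SIM_DT;
      }
      render(accumulator / SIM_DT); // blend factor between the last two sim states
    }

    function simulate(dt: number): void { /* advance physics, durability, timers by dt */ }
    function render(alpha: number): void { /* draw, interpolating between sim states */ }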


FromSoftware seems to be guilty of this even today. Elden Ring recently had a dog that did more bite damage at higher fps. I guess they're just ticking damage every frame the dog's head and the player's collider overlap.


I'm currently close to abandoning Bannerlord, a medieval combat game where less latency gives you a better chance in MP.


Isn't that how it's supposed to be? If you have a good connection, you get the game state earlier than someone with a worse connection.


for those unfamiliar, it's because the game's main loop looks like

  while (!exitRequested) {
    player.updateState();
    someCharacter.updateState();
    someOtherCharacter.updateState();
  }

you could in theory make these kinds of updates in parallel but then the entire game becomes non-deterministic chaos, and trying to deal with synchronising threads in a context like this is such a nightmare and I'm sure intractable performance-wise. did anyone even try this, ever?

bottom line, real-time or turn-based, a piece of code needs to execute before or after another, not at the same time.

the order in which things take their "turn" each frame becomes very important the more complex the game btw, so even the order in which things execute serially cannot be entirely arbitrary. usually for things that depend on other things to update their state in order to accurately update their own state. which is a lot of things in every game. for example, you wanna update the text on the GUI that says how much gold the player has. you'll update the text after everything that could have influenced the gold this frame has updated (i.e. at the end of the frame). player input state (keyboard input e.g.) is updated at the beginning of the frame before you make the player's character do anything based on input.

particular stuff can be parallelized or turned into coroutines that "update a little bit each loop" so as to not kill performance. like pathfinding, a character needs to go from point A to point B, he doesn't really need to find the whole path now. a partial path while a separate thread calculates the entire path can do. or just make him think a bit while the pathfinding thread finds a path, the advantage is characters thinking about their actions is also realistic :P


> did anyone even try this, ever?

I think it depends on the granularity.

Coarse-grained parallelism is already common: some things like AI (like your example), Physics, Resource Management, Shader Compilation, Audio and even Netcode are already commonly run in separate threads.

However you won't see the updateInputs(), all the updateState() and sometimes even render() running in parallel for two reasons: first because it's often cheaper and easier/simpler to run in the main thread than dispatching async jobs anyway. Second because each operation often depends on the previous ones, and you can't wait until the next frame to take it into account: you often want instant feedback.

However these things can in theory be run in parallel without becoming chaos. ECS is often very parallelizable: you can run multiple Systems in different threads, as long as the output of one System is not a dependency of another running at the same time. You could also process multiple components of a system in multiple threads, but that would negate ECS's main advantage: being cache-friendly by virtue of running in a single thread.


I think Bevy in Rust demonstrates this well. Your systems are structured in a dependency graph and, I think, automatically figure out what things can be done in parallel.


Naughty Dog developers did a talk at GDC 2015 where they explain how they parallelized their game engine.

https://www.gdcvault.com/play/1022186/Parallelizing-the-Naug...

>for example, you wanna update the text on the GUI that says how much gold the player has. you'll update the text after everything that could have influenced the gold this frame has updated (i.e. at the end of the frame).

Modern game engines are pipelined; you render the previous frame logic. In the talk aforementioned, they show a three stages deep pipeline looking like this

    [FRAME]       [FRAME+1]       [FRAME+2]
    ----------------------------------------------
    [LOGIC]       [LOGIC+1]       [LOGIC+2]
                  [RENDER LOGIC]  [RENDER LOGIC+1]
                                  [GPU RENDERING]
each stage is independent and doesn't require syncing. they call that "frame centric design".


This system introduces yet more lag, which is increasingly awful kinesthetically. It makes modern games feel... sticky, sluggish, unresponsive, imprecise. We've gone from instant feedback to ridiculous degrees of input latency.

You press a button. It takes 1 frame for the signal from your USB device to get polled through the OS into your engine. Then it takes 1 frame for the input to affect the logic. Then it takes 1 frame for that change in logic to get "prepared" for rendering. Then it takes 1 frame for the GPU to draw the result. And depending on your video buffering settings and monitor response time, you're still adding a frame until you see the result.

If you're running 60 frames per second, that's an abysmal 83 milliseconds lag on every player input. And that's before network latency.


Last of Us Remastered had +100 milliseconds of input delay. GTA5 was above 150ms. Modern games feel sluggish because the animations are getting more and more realistic. With old games, the input would break the player's current animation instantly; today's games don't allow that anymore, everything has to be blended together. The input may have to be buffered and analyzed for a few "logic" frames before having any effect on the "render logic" frames.


And then there's the output latency of LCD TVs. Seems less of a problem than it used to be, but some older TVs could easily add 50-100ms of latency (particularly if not switched to Game Mode, but even a game mode didn't guarantee great results)

But these days it's hard enough to convince people that 'a cinematic 30fps' really really sucks compared to 60 (or better), and there's an even smaller number of gamers/devs who seem to notice or care about latency issues.


Most people barely notice or mind, which is unfortunate. Elden Ring has so much input lag but most don't even notice.


Ooooh this is like that

  for (i = 0; i < n; i++) {
     a[i] = something;
     b[i] = something;
     c[i] = a[i] + b[i]; // can only do a and b at the same time because c depends on them
  }


  // prologue: handle a[0], b[0] before the loop
  for (i = 1; i < n; i++) {
    a[i] = something;
    b[i] = something;
    c[i-1] = a[i-1] + b[i-1]; // can do all 3 at the same time because no deps.
  }

  // epilogue: handle c[n-1] after the loop

An optimization I saw in a talk about how CPUs can do many instructions at once if they don't depend on each other.

I was unaware of how something like this could play into game engines at the loop level, thanks for the link I'll watch it asap.


I learned about system order the hard way.

A system would try to apply damage to a ship that was already destroyed.

It taught me that you often have to flag things and then do a cleanup phase at the end. So destroyed = true but don’t outright delete the entity until the end of the “turn.”
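
A minimal sketch of that flag-then-sweep pattern (TypeScript, hypothetical names): systems only mark entities as destroyed, and actual removal happens once, at the end of the tick.

    interface Ship { id: number; hp: number; destroyed: boolean }

    function applyDamage(ship: Ship, amount: number): void {
      if (ship.destroyed) return;               // later systems this tick just skip it
      ship.hp -= amount;
      if (ship.hp <= 0) ship.destroyed = true;  // flag, don't delete mid-tick
    }

    function endOfTickCleanup(ships: Ship[]): Ship[] {
      return ships.filter(s => !s.destroyed);   // the only place entities are actually removed
    }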


Seems reasonable. Although I would tweak it to say you can't control when each turn ends.


Glenn Fiedler’s Gaffer on Games is a worthwhile read for anyone who wants to dive into the technical details of networked physics:

https://gafferongames.com/categories/game-networking/


Can't read it, ran out of free medium dot com articles.



Thank you – TIL that’s a thing!


Open in Private/Incognito mode?




i was on mobile :(

but more to the point im trying to get self respecting developers OFF of medium


This is pretty well explained, and the visualizations make it understandable.

An example of a netcode that does "prediction" and "rollback" is GGPO, which is used in fighting games: https://en.wikipedia.org/wiki/GGPO

I believe a version of this is what runs in fightcade2 (see https://www.fightcade.com/), which is the best fighting experience I've ever seen. I can play against people all the way around the world, and it still works. Very impressive, and highly recommended to anyone in Gen X or Y who grew up on Street Fighter.
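
A very rough sketch of the rollback idea (TypeScript, hypothetical names; GGPO itself is far more involved): predict the remote player's input, and when the real input arrives late, rewind to a saved state and re-simulate the frames in between.

    interface State { frame: number; data: number[] }

    const savedStates = new Map<number, State>();      // frame -> snapshot
    const confirmedInputs = new Map<number, string>(); // frame -> remote input

    function predictRemoteInput(frame: number): string {
      return confirmedInputs.get(frame - 1) ?? "idle"; // common guess: repeat their last input
    }

    function onLateInput(frame: number, input: string, currentFrame: number): State {
      confirmedInputs.set(frame, input);
      let state = savedStates.get(frame)!;             // rewind to the saved snapshot
      for (let f = frame; f < currentFrame; f++) {     // re-simulate up to the present
        state = simulate(state, confirmedInputs.get(f) ?? predictRemoteInput(f));
        savedStates.set(f + 1, state);
      }
      return state;
    }

    // stand-in for the game's deterministic step function
    function simulate(state: State, remoteInput: string): State {
      return { frame: state.frame + 1, data: [...state.data] };
    }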


Unrelated to the topic, but at first the blog only contained an introduction, no content. Since others had been commenting, clearly I was missing something. I reopened the page a few times, and at some point (some JavaScript, I suppose?) the text started loading.

How is it possible for this medium service to be so spectacularly bad? We are talking about text here...

Edit: Did more testing. The page updates and loads the content up to 5s after the initial load. There is no feedback. What a miserable experience.


Carmack wrote a great article on this a decade or so ago, but now every time I search for "carmack + latency + network + games" I get millions of hits about his recent VR stuff. Thanks SEO. Anyone remember the original article?



The first one is it! Thanks!!


His #AltDevBlog posts are no longer available, maybe that has to do with it. I too could only find stuff having to do with render latency (in the context of VR).


Can always add a "-vr" term to your search.


Just out of curiosity: does anyone know any real-time multiplayer game that runs the physics simulations on the server side? AFAIK, only Rocket League is doing this (and they have some good talks about it on GDC).


If it's anything serious/competitive and has to have integrity without having trust between enemy players, the server has to run the simulations. Otherwise it would be extremely easy to cheat just by modifying the client code and memory.


Though you can have client-side physics engine running as an interpolation between authoritative server states.


Physics on the server? Nearly every shooter game. Server-authoritative is the way. As per the article, the clients only predict some physics objects (requires simulation) and interpolates & extrapolates others (does not require simulation). The clients have no first authority over the server’s simulation, other than their own player inputs.


Here is the GDC talk: https://www.youtube.com/watch?v=ueEmiDM94IE

Of particular note is the fact that they run their physics loop at 120hz to minimize error.


Minecraft does physics and lighting on the server, although the physics model is very simplistic.


Physics is also run on the client, but the server is authoritative and can correct or override client decisions.


Pretty much every serious online game does this.


For my game, King of Kalimpong, I run physics on the client and the server. The server is the boss but the client feels good.

I suspect most games where movement is primarily physics-based are doing this, but who knows, netcode tends to be very game-specific.


Multiplay Crackdown 3 (I worked on it) runs a fairly meaty physics simulation on the server side.


This reminded me of some of the stuff that went into TeaTime in Croquet-OS (https://en.wikipedia.org/wiki/Croquet_OS)


I don't know if you have seen, but the old Croquet team is back: https://www.croquet.io/


"Client side interpolation" is a term misused in the game world. It's really client side extrapolation. Interpolation is generating more data points within the range of the data. Extrapolation is generating more data points off the end of the data.

Interpolation error is bounded by the data points on both sides. Extrapolation error is not bounded, which is why bad extrapolation can produce wildly bogus values. So you need filtering, and limits, and much fussing around.


Well, no, because this article is kinda mostly wrong. Prediction is almost never used. Actual interpolation, however, is. As in, interpolation between 2 known points. What you're doing there is interpolating to handle the deviation between the server tick rate (say, 60 ticks/second) and the client's actual framerate (which can easily be >100fps). But you're always interpolating between 2 server-received positions; you're ~never making stuff up in the future.

What actually happens in an FPS game is all clients just operate in the past. They aren't in sync at all, and don't even try to be. Rather, when the client sends something like a shot to the server, the server "rewinds time" to where everyone was ping/2 ms ago for that client & evaluates the state at that point to see if the shot would hit or not.


Ah. I'm painfully familiar with Open Simulator / Second Life clients, which really do extrapolate (although it's called "interpolation" in the code), and not all that well. There's frequent "rubber banding", where the extrapolation guessed wrong and positions snap back on the next update. My contribution to this was a sanity check against a low pass filtered value to put a ceiling on the error.


Once upon a time (2011) I wrote a blog post about Supreme Commander’s netcode.

https://www.forrestthewoods.com/blog/synchronous_rts_engines...

SupCom was a pretty classic synchronous + deterministic system. It’s a pretty major PITA and the next RTS I worked on was more vanilla client-server.


There are lots of interesting (and challenging) topics in this area and those tend to be a great source of complexity. For instance, do we want to replicate the full world? Or will each client keep only partial world state? This decision impacts literally every other aspect, such as the programming model, security model, game play, etc.

I see good potential for better formalization in this area. At its heart, this is literally distributed streaming system engineering with an extremely wide, fluctuating variety of requirements, in the name of game play first. You don't need to be 100% accurate on physics simulation, but the result shouldn't diverge too much across clients within a very limited bandwidth. You want to depict character movement as accurately as possible for every player, but detailed player status should be shared with allies, not opponents. Synchronization of continuous and discrete states is fundamentally different. How can we build a single, general networking model to unify those kinds of requirements without resorting to ad hoc solutions?


Honestly this is all largely a completely solved problem space. The article is just way out in left field seemingly fully oblivious to how game netcode currently works, which certainly isn't with AI prediction.

Look at actual game engine docs like this one from Valve https://developer.valvesoftware.com/wiki/Latency_Compensatin... or this one from Halo https://www.halowaypoint.com/news/closer-look-halo-infinite-...

But tldr is the only thing a client ever predicts is their own inputs, which of course can't really ever end up wrong later on. There's no other prediction happening (eg, the position of other players is not predicted)

And then for anti-cheat/optimization purposes the server also only sends positions for enemies that could be visible soon, which is done by tapping into the same map chunking logic that would be used for asset streaming.

There's a ton of other great resources on this topic here https://github.com/ThusWroteNomad/GameNetworkingResources

But you'll find they all largely do the same basic thing. There's nuance in some of the rules and what state is replicated and what isn't (such as server side or client side ragdolls), but the general architecture tends to be the same. And without a fundamental shift in connectivity, seems pretty unlikely to change.
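
A minimal sketch of predicting only your own inputs, with server reconciliation (TypeScript, hypothetical names; not taken from any particular engine): apply input locally right away, remember it, and when the authoritative state arrives, replay the inputs the server has not processed yet.

    interface Input { seq: number; dx: number }
    interface PlayerState { x: number; lastProcessedSeq: number }

    const pendingInputs: Input[] = [];
    let predicted: PlayerState = { x: 0, lastProcessedSeq: 0 };

    function onLocalInput(input: Input): void {
      predicted = { x: predicted.x + input.dx, lastProcessedSeq: input.seq }; // instant response
      pendingInputs.push(input);                                              // keep for replay
      // ...and send the input to the server here
    }

    function onServerState(authoritative: PlayerState): void {
      // drop inputs the server has already applied...
      while (pendingInputs.length > 0 && pendingInputs[0].seq <= authoritative.lastProcessedSeq) {
        pendingInputs.shift();
      }
      // ...then replay the rest on top of the authoritative state (reconciliation)
      predicted = pendingInputs.reduce(
        (s, i) => ({ x: s.x + i.dx, lastProcessedSeq: i.seq }),
        authoritative
      );
    }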


I have worked on a number of commercial MMO games' servers and I don't necessarily agree with your characterization of "a completely solved problem space". Have you worked with your game designers and artists to find a sweet spot across latency, bandwidth, security, hit box and effect synchronization etc... just for one type of action skill? I have. It ended up with a very customized protocol which was nearly unusable for anything else. It might seem simple to do such things for one or two features, but it's actually death by a thousand cuts; it doesn't take that long until all that state synchronization and replication code invades the game-play and presentation layers and all the logic, and you have hundreds of these cases.

And I am not sure you had a good look through the resources that you posted... Many of them are pretty explicit about their use of their own ad-hoc solutions and the trade-offs that had to be made, which really supports my argument. (Yeah, I think I have already read and watched more than half of them over the last decade.) And this is not surprising if you have a good theoretical grounding in distributed systems, which have literally hundreds of impossibility theorems, while real-time games usually have quite tight latency constraints.


A great talk about this is the one on Halo Reach's networking: https://www.youtube.com/watch?v=h47zZrqjgLc

I haven't seen it in a few years (re-watching it now), but IIRC they talk about how they do forecasting of things like shielding and grenade throws but need to reconcile state afterwards.


This is also an interesting writeup by Valve: https://developer.valvesoftware.com/wiki/Latency_Compensatin...


One thing I don't see mentioned is basing movement of some things on a synced clock

on a desktop machine open this link in multiple windows and size the windows so they are all at least partially visible at the same time

http://greggman.github.io/doodles/syncThreeJS/syncThreeJS.ht...

they should all be in sync because they are basing the position of the spheres on the system clock.

A networked game can implement a synced clock across systems and move some things based on that synced clock.

Burnout 3 did this for NPC cars. Once a car is hit, its position needs to be synced, but before it is hit it's just following a clock-based path.
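
A minimal sketch of the synced-clock approach (TypeScript, hypothetical names; the clock sync itself is hand-waved): position is a pure function of shared time, so every client computes the same thing without any position updates at all.

    // offset between local time and the agreed game clock, estimated once via an NTP-style handshake
    let clockOffsetMs = 0;

    function syncedTimeSeconds(): number {
      return (Date.now() + clockOffsetMs) / 1000;
    }

    // an NPC car following a circular path: same time in, same position out, on every client
    function npcCarPosition(carId: number): { x: number; y: number } {
      const t = syncedTimeSeconds() + carId * 10; // stagger cars along the path
      return { x: Math.cos(t * 0.5) * 100, y: Math.sin(t * 0.5) * 100 };
    }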


One elegant solution to this problem is to keep 100% of the game state on the server and to stream x264 to clients.

I think if you engineered games explicitly for this use case, you could create a much more economical path than what products like Stadia offer today.


It doesn't solve the problem; in fact, it limits your options. And I may be wrong, but x264 doesn't seem to be the lowest-latency option.

There are essentially two ways of dealing with latency : input lag (wait until we know everything before showing player actions) and prediction/rollback (respond immediately trying to guess what we don't know yet and fix it later, what is shown in the article). Games often do a mix of both, like a bit of input lag for stability and prediction to deal with latency spikes. With video streaming, you limit your prediction options.


The article is about hiding the latency between other people's input and your display. While that happens, your own input only has a couple frames of latency.

The solution you proposed introduces latency between your own input and your own display. Unless the server is in your house or at least within 100km of it, that latency is going to make the game unplayable.


Unless the server is in my house, I get at least 20ms round trip (thanks bonded VDSL2), so I'm a frame behind already. Add input sampling, codec delays, and video output delays and I'm at least two frames, probably three. That's going to be not great.


Most online games are 100% authoritative on the server already. Streaming rendered video is absurdly inefficient and does not help with the problems.


Yeah, I've worked on AAA games with great networking stacks and the network traffic is orders of magnitude lower than streaming something like 4k@60fps to the client over x264.


It is inefficient, but it makes cheating much harder (but unfortunately still possible, it will never be possible to prevent it completely).


Keeping the server 100% authoritative but maintaining all rendering on the client with no interpolation gets you all of the same benefits and drawbacks as shipping video, and ease of implementation, but with dramatically lower network costs. It’s not really done however except for turn-based games, because you still have horrendous input latency. It’s also entirely the same defense against cheating, except I suppose a user could do things like edit assets locally but I don’t think anyone cares about that concern


Another related option that sidesteps a big chunk of perceptible latency is to send clients a trivially reprojectable scene. In other words, geometry or geometry proxies that even a mobile device could draw locally with up-to-date camera state at extremely low cost. The client would have very little responsibility and effectively no options for cheating.

View-independent shading and light simulation can be shared between clients. Even much of the view-dependent response could be approximated in shared representations like spherical Gaussians. The scene can also be aggressively prefiltered; prefiltering would also be shared across all clients.

This would be a massive change in rendering architecture; there's no practical way to retrofit it onto any existing games; it would still be incredibly expensive for servers compared to "just let the client do it"; and it can't address game-logic latency without giving the client more awareness of game logic. But... it seems potentially neat!


Then you have a server rendering essentially 4 or 8 or 32 or however many individual games, capturing and encoding them to a streamable format, and streaming them via a socket or WebRTC or whatever to the client. The client sees the action and inputs a control; the control has to get back to the server to be processed; the server renders the next frame for every client and sends them all back; and only then does the client see the result of their action.

That doesn't seem elegant to me. It seems like a way to have wildly uncontrollable latency, and to have one player's poor connection disrupt everyone else's experience.

I have a Steam Link hardware device to stream games from my PC to my TV over ethernet LAN, and even that can have issues with latency and encoding that make it a worse experience than just playing on the PC.


How is that elegant? In return for easy synchronization the server has to perform ALL computations. Sounds more like brute force to me.


It's very elegant depending on your goals.

It's the ultimate DRM for games: you can't crack a game's copy protection if you can't see its binaries. You can't data-mine the locations of valuables if you don't have the map data. You can't leak unreleased assets ahead of their marketing debut if you don't have a copy of the assets. You can't expose all the Easter eggs by decompiling if you don't have the code. With subscription and premium-currency models, those abilities can all be interpreted as lost revenue.

The markets for people buying $400 consoles vs buying a $20 HDMI stick and a $15/mo subscription are very different. After the colossal (and to me, surprising) rise of mobile gaming I think the latter might be where the real money will be 10 years from now.

They'll address the bottlenecks on the data center end. I'm pretty sure you can list a dozen problems that make it prohibitively expensive right now and for every one of them some Microsoft or NVIDIA engineer can tell you how they are working on engineering away that problem in a couple years.


Of course, it solves a lot of problems. But you have to pay for it by increasing the load on the servers by orders of magnitude. This makes it a brute force approach in my opinion.

This of course depends on your meaning of "elegant", but for me that would imply solving these issues without increasing the server load so much. Let the client decide as much as possible, but check the decisions randomly and in case of suspicious player stats. And DRM should be no problem for multiplayer games anyway: verify the accounts of your users, which applies to Stadia-like services just as it does to the "conventional" ones.

Solving the data-mining issue is another topic, and yes, giving the server more authority over things the player shouldn't be able to see might be the only way to deal with it. But maybe the server could hand out client-specific decryption keys when they are needed? That would be elegant, rather than just keeping all the content server-side.

Game streaming services will find their place, but they address mainly the entry hurdle and not the issue of game state synchronization.


Depending on how high the encode/decode/transmit overhead is, it might be a more efficient use of resources. Most game consoles and gaming PCs are idle most of the time. Centralizing the rendering in a data center is going to yield better resource utilization rates.

Mind you, they're still not going to be great rates, because the data center needs to be geographically close and so utilization will vary a lot in daily and weekly patterns.

Then again, maybe you can fill all those GPUs with ML training work when the gaming utilization is low...


Latency has a big impact on user experience in most multiplayer games, and players with low latency have an advantage in many of them. I am not sure how satisfied people will be with low-cost, low-powered systems made for game streaming in the long run, when other users have a much better experience and even advantages in gameplay.


You don't have to stream encoded video from the server. It is much faster and simpler to stream game state and have the game clients render it. Essentially, that's how all games with an "authoritative server" work.
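A minimal sketch of what "stream game state" means in practice (the message format here is made up for illustration): the server sends a small per-tick snapshot of entity state, and each client renders it with its own hardware.

```python
import json

def make_snapshot(tick: int, entities: dict) -> bytes:
    """Serialize only what clients need to render: a handful of bytes per entity
    instead of a compressed video frame per client."""
    snapshot = {
        "tick": tick,
        "entities": {
            eid: {"pos": e["pos"], "anim": e["anim"]}
            for eid, e in entities.items()
        },
    }
    return json.dumps(snapshot).encode()

entities = {"player1": {"pos": [10.0, 0.0, 3.5], "anim": "run", "hp": 87}}
packet = make_snapshot(tick=4821, entities=entities)
print(len(packet), "bytes for this tick")  # tiny compared to a video frame
```

JSON here is purely for readability; a real engine would pack this much tighter.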


But it's not just one video render. It is one video render per user. People have to build pretty decent rigs to get their single view of the game to render at speed. Multiplying that by the number of players seems so wildly expensive for a single machine that it's a non-starter after three seconds of thought.


This seems so obvious to me that I’m surprised Stadia doesn’t already do that (not having done any real reading on how it works; just making assumptions based on the marketing I’ve seen).

I just assumed it was a video stream with a touch control overlay.


That defeats most of what this article is talking about: hiding the perceived latency. And that's in addition to the other issues, like higher server CPU and bandwidth requirements and poor video quality.


Stream OpenGL rendering calls from the server to the client.


That works fine for simple 3D scenes. As one application, that's how MS Remote Desktop worked until recently: Windows streamed Direct3D rendering calls to the client.

However, I'm not sure this is going to work with modern triple-A games. The current-gen low-level GPU APIs were designed to let game engine developers saturate the bandwidth of PCI Express with those rendering calls. That's too many gigabytes per second for networking, I'm afraid.


Not all clients have the same capabilities with regard to 3D hardware. Virtually any modern device can decode H.264 without difficulty.


There's a very small chance that cloud gaming may one day work. Even if that small window of opportunity exists, the incompetent game industry will miss it.


I am not a real-time systems engineer and know very little about the tech, but do know a lot about network engineering. I was honestly kind of disappointed by this article because it seemed like the network and latency side of things was just “YOLO” and the solutions were basically “interpolate with AI” and that was it. I was hoping for more insights into solving the problems but instead feel like it was more “here are ways to make it appear that there’s no problem.”

Definitely open to being wrong on this opinion.


What a great article!

For those of you interested in cognition, our brains have almost precisely the same problem of temporal delay and prediction.

Each of many sensory-motor systems has about 50 to 150 milliseconds of jitter and offset. And the jitter and offset in timing depend on many factors: the intensity of the stimulus and your state of mind, for example.

How does the brain help “consciousness” create an apparently smooth pseudo-reality for us from a noisy temporal smear of sensory (sensory-motor) input spread out over 100 milliseconds or more?

It is damn hard, but the CNS plays the same games (and more) as in this great Medium article—interpolation, smoothers, and dynamic forward prediction.

Just consider input to your human visual system: color-encoding cone photoreceptors are relatively fast to respond—under 30 msec. In contrast, the rod photoreceptors are much slower integrators of photons—up to 100 msec latencies. So even at one spot of the retina we have a serious temporal smear between two visual subsystems (two mosaics). It gets much worse—the far periphery of your retina is over a centimeter from the optic nerve head. Activity from this periphery connects slowly over unmyelinated fibers. That adds even more temporal smear relative to the center of your eye.

And then we have the very long conduction delays of action potentials going from the retina to first base (the dorsal thalamus) and then finally to second base (the primary visual cortex at the very back of your head). That is a long distance, and the nerve fibers have conduction velocities ranging 100-fold: from 0.3 meters/sec to 30 meters/sec.

The resulting input to layer four of your visual cortex should be a complete temporal mess ;-) But it isn’t.

What mechanisms “reimpose” some semblance of temporal coherence to your perception of what is going on “out there”?

Neuroscientists do not spend much time thinking about this problem because they cannot record from millions of neurons at different levels of the brain.

But here is a good guess: the obvious locus to interpolate and smooth out noise is the feedback loop from visual cortex to dorsal thalamus. I mentioned the dorsal thalamus (formally the dorsal lateral geniculate nucleus) as "first base" for visual system input. Actually it gets 3X more descending input from the visual cortex itself—a massive feedback loop that puzzles neuroscientists.

This huge descending recurrent projection is the perfect Bayesian arbiter of what makes temporal sense. In other words, the "perceiver", the visual cortex, provides descending feedback to its own input that tweaks synaptic latencies in the dorsal thalamus to smooth out the visual world for your operational game-playing efficacy. This feedback obviously cannot remove all of the latency differences, but it can clean up jitter and make the latencies highly predictable and dynamically tunable via top-down control.

Quite a few illusions expose this circuitry.

for more @robwilliamsiii or labwilliams@gmail.com


Do you have any example illusions? Lately I notice that I read a lot of sentences wrong, and a second reading gives different words (though both readings were made up of similar letters). Perhaps my feedback loop is too strong on the prediction?


Yes, sure:

1. The Pulfrich illusion (link to Wikipedia) is the very first clear illusion of this type.

2. Many false movement illusions expose the inevitable failure of temporal error control.

My favorite is Akiyoshi Kitaoka's "subway" illusion, posted by @AkiyoshiKitaoka on May 9, 2021.

3. Have a look at my pinned tweet @robwilliamsiii


A colleague told me once that his manager went into a panic after realizing that their multiplayer game would be unplayable over the network… Unbeknownst to said manager, the debug builds had all this time had a 1-second delay built in, to ensure all code would be able to deal with real-world delays.


I'm guessing you're simplifying, but a fixed delay like this is a good way to fool yourself, since a system built to handle fixed latency can go to pieces in the face of jitter and packet loss.
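A sketch of a nastier test harness than a fixed delay (the parameters are illustrative, not recommendations): deliver each packet with random jitter and occasional drops, which also produces reordering, the way real networks do.

```python
import heapq
import random

class LossyJitteryLink:
    """Simulated one-way network link with base delay, random jitter, and loss."""
    def __init__(self, base_delay_s=0.05, jitter_s=0.08, loss_rate=0.02):
        self.base_delay_s = base_delay_s
        self.jitter_s = jitter_s
        self.loss_rate = loss_rate
        self._queue = []  # (delivery_time, seq, packet)
        self._seq = 0

    def send(self, packet, now_s: float):
        if random.random() < self.loss_rate:
            return  # dropped: the receiver never sees it
        delay = self.base_delay_s + random.uniform(0, self.jitter_s)
        heapq.heappush(self._queue, (now_s + delay, self._seq, packet))
        self._seq += 1

    def receive(self, now_s: float):
        """Return every packet whose (jittered) delivery time has passed."""
        out = []
        while self._queue and self._queue[0][0] <= now_s:
            out.append(heapq.heappop(self._queue)[2])
        return out

link = LossyJitteryLink()
for t in range(10):
    link.send({"tick": t}, now_s=t * 0.016)
print(link.receive(now_s=0.3))
```

Swapping a constant delay for something like this tends to surface the reordering and stale-packet bugs that a fixed delay hides.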


Any autonomous-vehicle engineers here? This problem seems similar, if not identical. Are self-driving cars the "clients" that predict the state of the "server" that is the real world?


It's very different. Games, at least FPS ones, don't actually try to predict the future at all. They just handle what are essentially sync collisions after the fact. This is why you'll find endless complaints from gamers about "dying behind a wall". Movements aren't predicted; they are instead compensated for by the server, which just maintains a history of where everything was in the past.
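A sketch of that "keep history and rewind" idea, commonly called lag compensation (the names here are made up): the server stores recent positions and, when a shot arrives, tests the hit against where the target was at the time the shooter saw it.

```python
class PositionHistory:
    """Recent (timestamp, position) samples for one entity, kept on the server."""
    def __init__(self, keep_s: float = 1.0):
        self.samples = []  # (timestamp_s, position) in time order
        self.keep_s = keep_s

    def record(self, t_s: float, pos):
        self.samples.append((t_s, pos))
        cutoff = t_s - self.keep_s
        while self.samples and self.samples[0][0] < cutoff:
            self.samples.pop(0)

    def position_at(self, t_s: float):
        """Most recent recorded position at or before t_s."""
        best = self.samples[0][1]
        for ts, pos in self.samples:
            if ts <= t_s:
                best = pos
            else:
                break
        return best

def server_handles_shot(target_history, shot_time_s, aim_point, hit_radius=0.5):
    # Rewind the target to where the shooter saw it, not where it is "now".
    past_pos = target_history.position_at(shot_time_s)
    dx = aim_point[0] - past_pos[0]
    dy = aim_point[1] - past_pos[1]
    return dx * dx + dy * dy <= hit_radius * hit_radius

hist = PositionHistory()
hist.record(10.00, (0.0, 0.0))
hist.record(10.10, (1.0, 0.0))  # target ducked behind a wall at t=10.10
print(server_handles_shot(hist, shot_time_s=10.05, aim_point=(0.0, 0.0)))  # True: "died behind a wall"
```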


I think it's a stretch. In games you only have to deal with out-of-date but accurate information. In autonomous driving you have fuzzy images to interpret.


I think the analogy would be more that cars, pedestrians, and obstacles to avoid are all peers, and the real world is the network.


With all this in mind, how does a monitor's refresh rate matter, like, at all in games like CS:GO (e.g. 60 fps vs 120 fps)?


It's crazy that Blizzard games had done this by 2001 (StarCraft 1, Diablo 2 PvP, etc.) and did it very well.


In the earlier discussion about how you should not use text pixelation to redact sensitive info, I wrote this about how, when re-developing The Sims into The Sims Online, the client and server would get out of sync whenever a Sim took a dump, because the pixelization censorship effect used the random number generator:

https://news.ycombinator.com/item?id=30359560

DonHopkins 3 months ago | parent | context | favorite | on: Don't use text pixelation to redact sensitive info...

When I implemented the pixelation censorship effect in The Sims 1, I actually injected some random noise every frame, so it made the pixels shimmer, even when time was paused. That helped make it less obvious that it wasn't actually censoring penises, boobs, vaginas, and assholes, because the Sims were actually more like smooth Barbie dolls or GI-Joes with no actual naughty bits to censor, and the players knowing that would have embarrassed the poor Sims.

The pixelized naughty bits censorship effect was more intended to cover up the humiliating fact that The Sims were not anatomically correct, for the benefit of The Sims own feelings and modesty, by implying that they were "fully functional" and had something to hide, not to prevent actual players from being shocked and offended and having heart attacks by being exposed to racy obscene visuals, because their actual junk that was censored was quite G-rated. (Or rather caste-rated.)

But when we later developed The Sims Online based on the original The Sims 1 code, its use of pseudo-random numbers initially caused the parallel simulations running in lockstep on the client and headless server to diverge, causing terribly subtle, hard-to-track-down bugs: the headless server wasn't rendering the randomized pixelization effect, but the client was. So we had to fix the client to use a separate user-interface pseudo-random number generator that didn't have any effect on the simulation's deterministic pseudo-random number generator.
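In modern terms, the fix amounts to something like this sketch (names made up, not the actual Sims code): two independent PRNG streams, one for the lockstepped simulation and one for client-only visuals.

```python
import random

class GameRandom:
    def __init__(self, match_seed: int):
        # Every peer seeds the simulation stream identically; it must only be
        # consumed by deterministic game logic, in the same order everywhere.
        self.simulation = random.Random(match_seed)
        # The visual/UI stream can be seeded with anything and consumed freely
        # (e.g. by a client-only pixelation shimmer) without causing divergence.
        self.visual = random.Random()

rng = GameRandom(match_seed=12345)

def simulate_dice_roll():
    return rng.simulation.randint(1, 6)     # same result on every client and server

def pixelation_shimmer_offset():
    return rng.visual.uniform(-1.0, 1.0)    # client-only, never touches the simulation

print(simulate_dice_roll(), pixelation_shimmer_offset())
```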

[4/6] The Sims 1 Beta clip ♦ "Dana takes a shower, Michael seeks relief" ♦ March 1999:

https://www.youtube.com/watch?v=ma5SYacJ7pQ

(You can see the shimmering while Michael holds still while taking a dump. This is an early pre-release so he doesn't actually take his pants off, so he's really just sitting down on the toilet and pooping his pants. Thank God that's censored! I think we may have actually shipped with that "bug", since there was no separate texture or mesh for the pants to swap out, and they could only be fully nude or fully clothed, so that bug was too hard to fix, closed as "works as designed", and they just had to crap in their pants.)

Will Wright on Sex at The Sims & Expansion Packs:

https://www.youtube.com/watch?v=DVtduPX5e-8

The other nasty bug involving pixelization that we did manage to fix before shipping, but that I unfortunately didn't save any video of, involved the maid NPC, who was originally programmed by a really brilliant summer intern, but had a few quirks:

A Sim would need to go potty, and walk into the bathroom, pixelate their body, and sit down on the toilet, then proceed to have a nice leisurely bowel movement in their trousers. In the process, the toilet would suddenly become dirty and clogged, which attracted the maid into the bathroom (this was before "privacy" was implemented).

She would then stroll over to the toilet, whip out a plunger from "hammerspace" [1], thrust it into the toilet between the pooping Sim's legs, and proceed to move it up and down vigorously by its wooden handle. The "Unnecessary Censorship" [2] strongly implied that the maid was performing a manual act of digital sex work. That little bug required quite a lot of SimAntics [3] programming to fix!

[1] Hammerspace: https://tvtropes.org/pmwiki/pmwiki.php/Main/Hammerspace

[2] Unnecessary Censorship: https://www.youtube.com/watch?v=6axflEqZbWU

[3] SimAntics: https://news.ycombinator.com/item?id=22987435 and https://simstek.fandom.com/wiki/SimAntics


sidenote, but:

> To keep that in perspective, light can barely make it across the continental united states in that time

that is not true


Seems correct to me, although they took the longest distance, from Florida to Washington:

2800 miles / c ≈ 15 ms



