Just a few months ago, I did an overhaul of the network code for DXX-Retro[1] -- a source port of Descent. Descent worked much better over the (laggy, lossy, bursty) net than DOOM, and -- if you're looking to mimic old school games -- is really worth studying.
Some quick technical commentary:
The bandwidth calculation in the article is predicated on sending updates at 60 Hz -- or what we'd in the Descent community call 60 PPS. Probably because the screen is refreshing at that rate? It's unnecessary. You want a high framerate for control of your own game, but you don't need it to see enemies smoothly. Remember, movies only run at 24 FPS. ;)
The highest I allow in Descent is 30 PPS, and really . . . it's seen as a luxury. 20 is generally fine. Sometimes I play games at 10, and there you can definitely tell -- even with the smoothing (it sends velocity and acceleration, too, and interpolates) -- but it's perfectly playable.
Which is something worth remembering. With old school games, "crappy but perfectly playable" is actually all they were able to achieve.
No, the physics engines aren't perfectly locked, and how tolerable this is will depend on your game. In a simple FPS, this really isn't a big deal. You just lag lead (compensate in aim, both for the motion of your enemy, and the fact that the data is old). :)
Some of how he's proposing to send the data is wasteful. He initially proposes sending "is weapon equipped" and "is firing" as 1 byte apiece -- and later concludes he can get those in 1 bit apiece. True. That's a savings by a factor of 8 right there. But I can do you one better -- don't send it every update. How often do those states really change?
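As a sketch of the idea (made-up names, not our actual code) -- pack the flags into one byte and only send an event when something actually changes:

    #include <stdint.h>

    /* Hypothetical flag bits packed into one byte -- illustrative names,
       not the real DXX-Retro structures. */
    #define FLAG_WEAPON_EQUIPPED 0x01
    #define FLAG_FIRING          0x02

    static uint8_t last_sent_flags;

    /* Call once per frame; emits an event packet only when a flag changes. */
    void maybe_send_flag_event(uint8_t current_flags,
                               void (*send_event)(uint8_t flags))
    {
        if (current_flags != last_sent_flags) {
            send_event(current_flags);      /* event packet, sent on change only */
            last_sent_flags = current_flags;
        }
    }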
In Descent, we have two classes of data: position and orientation data that's sent at a steady rate (10-30 PPS), and event data that's sent . . . whenever it happens. Equipping weapons is definitely the second type; it happens extremely rarely -- like, seconds pass between weapon switches. :)
One thing that may surprise you. We don't send "is firing" as a flag with every packet -- we send one packet per shot taken! Two reasons for this: one, it's actually lower bandwidth. Shots fire rarely -- our fastest gun fires 20 bullets per second, but the next fastest fires six. Then five, then four, then . . . one shot every two seconds. And you're not always firing, either. Sending one packet per shot saves bandwidth. But it also increases accuracy! We attach those shot packets to a position and orientation update, so -- even if the receiver has incorrectly interpolated your position -- the shot goes exactly where you intended it to. This is very important. :)
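Roughly, the piggybacking looks like this (a hypothetical layout, not the real Descent packet format):

    #include <stdint.h>

    /* Hypothetical wire layout: the shot rides along with the shooter's own
       position and orientation, so the receiver spawns it exactly where the
       shooter saw it, regardless of how it has been interpolating them. */
    typedef struct {
        int32_t pos[3];        /* position (fixed point)              */
        int16_t orient[3];     /* compressed orientation angles       */
        int16_t vel[3];        /* velocity, used for interpolation    */
        uint8_t weapon_id;     /* which gun fired                     */
        uint8_t shot_count;    /* shots taken since the last update   */
    } shot_event_packet;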
As a player (and a programmer -- but mostly a player), I'm concerned about the article's conclusions about the value of prediction, and the recommendation to avoid position popping by applying smoothing -- to quote the article, "it ruins the virtual reality of the game".
Ok, yes, it does. But the thing is, you have a choice here. You can present your players with something pretty and smooth that is fundamentally a lie, or with something jittery that is the best knowledge you have about where the enemy is. This is a fundamental tradeoff: verisimilitude or accuracy. You can't have both.
My players overwhelmingly prefer accuracy. To them, the avatars on the screen are targeting aids, and they understand that the data is old and a bit lossy and bursty sometimes, and they want the best data possible so they can take the best shot possible. :)
I suppose your mileage may vary by audience. Mine's been playing this game for 20 years, and "crappy but playable" is both normal and good to them. :)
But -- I can't imagine this would be different in another FPS -- taking a shot that you know is good on your screen, and having it not hit the other guy . . . that's rage-inducing right there!
Yeah, networked games are hard. For sure. And there are fundamental hard tradeoffs involved in engineering them. For sure. But it's an interesting problem, and also worth it. :)
I wrote the multiplayer code for Descent 2 and Descent 3 (as well as the graphics engine for D3). Although I can't remember all the details -- D2 was back in 1995(!) -- D2 had a significantly overhauled network layer compared to D1. Some examples: short packets, where position and orientation data was quantized down into single bytes instead of floats, and a lower packet rate (you could go down to 5 PPS, if I recall correctly). We were also the first game where, if the 'master' dropped out of the game, another player would become the master through a somewhat complex hand-off scheme. The master controlled things like notifying other players of new players, end-of-level logic, etc. We had to sweat every byte because we were trying to support 8 players with low lag over a typical 28.8k modem.
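Roughly, the short-packet quantization idea is this (illustrative only -- I don't remember the exact format we shipped):

    #include <stdint.h>

    /* Map a coordinate within known bounds onto a single byte, and expand
       it again on the receiving end. Illustrative sketch, not the D2 code. */
    uint8_t quantize_coord(float x, float min, float max)
    {
        float t = (x - min) / (max - min);      /* 0..1 within the bounds */
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        return (uint8_t)(t * 255.0f + 0.5f);    /* 8-bit representation   */
    }

    float dequantize_coord(uint8_t q, float min, float max)
    {
        return min + (q / 255.0f) * (max - min);
    }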
D3 changed the overall feel of the Descent series, mostly because we introduced a terrain engine and that had a cascading effect on the rest of the game. The speed of the ship, for example, had to be significantly increased because if we used the ship speed from D1/D2 then going out into the terrain felt like you were stuck in molasses.
Working on those games was incredibly fun. Ah, to be 25 again.
> "We had to sweat every byte ... 28.8 baud modem."
Yeah, that stuff really mattered in that era. Nowadays we've eliminated a lot of the compression and upped the packet rate -- a modern "bad" connection can handle 8 players at 10 PPS without compression, and the extra accuracy is necessary against pilots who dodge as well as some of the modern masters do.
> "if the 'master' dropped out the game another player would become the master"
We'd love to reimplement that. It was such a great feature! It was removed when one of the previous coders overhauled the networking code.
> "D3 changed the overall feel of the Descent series"
Yeah. It was a fun game in its own right, but it really shifted the emphasis from dodging to aiming. It didn't play as well in a 1v1 setting, but it had some really compelling team modes. I played CTF every night from about 2004 until my son was born in 2009.
EDIT: P.S. there's a Descent LAN coming up this month. If you still keep up with Gwar, he has all the info.
> As a player (and a programmer -- but mostly a player), I'm concerned about the article's conclusions about the value of prediction, and the recommendation to avoid position popping by applying smoothing -- to quote the article, "it ruins the virtual reality of the game".
I agree, predicting remote objects causes more problems than it solves. Any remote player input that affects the object causes noticeable position popping when the local client receives the update.
The Source engine has a better solution - instead of trying to predict remote objects, delay rendering by 100ms and interpolate the received updates. This makes the position updates appear smooth without any stuttering. However, now the client lags 100ms behind the server.
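In rough terms (a sketch of the general scheme, not actual Source engine code), the client keeps a small buffer of received snapshots per entity and renders from slightly in the past, interpolating between the two snapshots that bracket the render time:

    /* Buffer the last few snapshots per entity and interpolate between the
       two that bracket (now - delay). Sketch only. */
    typedef struct { double time; float pos[3]; } snapshot;

    #define INTERP_DELAY 0.1    /* 100 ms of added view delay */

    /* snaps[] is ordered oldest..newest, n >= 2. */
    void interpolated_position(const snapshot *snaps, int n,
                               double now, float out[3])
    {
        double render_time = now - INTERP_DELAY;
        int i = 1;
        while (i < n - 1 && snaps[i].time < render_time)
            i++;                                /* find the bracketing pair */

        const snapshot *a = &snaps[i - 1], *b = &snaps[i];
        double t = (render_time - a->time) / (b->time - a->time);
        if (t < 0.0) t = 0.0;                   /* clamp at buffer edges    */
        if (t > 1.0) t = 1.0;
        for (int k = 0; k < 3; k++)
            out[k] = (float)(a->pos[k] + t * (b->pos[k] - a->pos[k]));
    }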
The server has to "undo" this lag when the player wants to execute a request (like shooting). It does this by rewinding the world state by 100ms + client ping when executing the request. This makes it so that a client's actions appear to execute without any lag on their view (i.e. they don't have to shoot ahead of their target).
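A sketch of that rewind (made-up names, not Valve's implementation): the server keeps a short pose history per player and evaluates the shot against the poses from roughly (interpolation delay + shooter's ping) ago:

    /* Keep a short history of where each player has been; when a shot
       request arrives, test the hit against the poses from the past. */
    #define HISTORY_LEN 64

    typedef struct { double time; float pos[3]; } pose_record;

    typedef struct {
        pose_record history[HISTORY_LEN];   /* ring buffer of past poses */
        int         head;                   /* index of the newest entry */
    } player_history;

    /* Return the recorded pose closest to the requested past time,
       where shot_time = now - (interp delay + shooter ping). */
    const pose_record *rewind_pose(const player_history *h, double shot_time)
    {
        const pose_record *best = &h->history[h->head];
        double best_err = 1e9;
        for (int i = 0; i < HISTORY_LEN; i++) {
            double err = h->history[i].time - shot_time;
            if (err < 0.0) err = -err;
            if (err < best_err) { best_err = err; best = &h->history[i]; }
        }
        return best;
    }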
This causes some temporal anomalies, for example, players can shoot you around corners because they see your position 100ms in the past. However, most players seem to prefer this over having to constantly shoot ahead of their targets to compensate for prediction errors.
That's one way of doing it! I suppose it all comes down to what your players expect.
Mine viscerally hate lag, whether they can see it or not -- but especially if they can't. I think if I built 100ms of lag into the client, I'd have a mutiny on my hands. ;)
But I do remember having a pleasant experience in TF2. Source probably is a good source of inspiration, those Valve guys know their stuff. :)
A further thought: this is another of the fundamental tradeoffs in doing this sort of design work.
In a laggy environment, you can either have dodging work 100% correctly client-side (if you dodged it, it didn't hit you, no questions asked) . . . or you can have aiming work that way (if you hit it, you hit it, no questions asked).
You can have one or the other. You cannot have both. (And in a server-based setup, you generally get neither).
And I think which you choose (or which you choose to emphasize, if you give both up) will depend on what sort of game it is. In an aiming-heavy, mostly fast-weapon or instant-hit FPS (like maybe DOOM), you should pick aiming. In slow-weapon, slow-ship, combat-flight-maneuver-oriented Descent, the original developers correctly picked dodging.
Which highlights something else: When doing netcode for an FPS, your engineering decisions are game design decisions. Never overlook that.
> "When doing netcode for an FPS, your engineering decisions are game design decisions."
Of particular note: your engineering decisions directly affect which tactics are viable and to what degree. Can you wait for the last instant to dodge, or do you need to dodge a shot well before it reaches you? Can you kill a laggy player before they have a chance to react, or can they dodge your shots long after they appeared to hit? Can a player with a better internet connection control an area simply by virtue of their shots taking effect sooner (destroying an opponent, or at least wrecking their aim, before their shot gets counted as having fired)?
This came up in the Descent series with a change to the networking model between Descent 2 and 3. The top competitive communities didn't consider Descent 3 to be "real Descent" because the move to client-server (as well as physics changes) changed the gamescape so drastically.
[Disclosure: I'm married to Dove, and we have been playing Descent together for 16 years.]
In Battlefield 3 and 4, DICE chose to implement the latter option (aiming and hit detection are clientside, no questions asked). The tradeoff - and I'm sure you'd agree there is always a tradeoff no matter how we deal with latency - is that as the person being shot at, you'll frequently die some fraction of a second after getting behind solid cover, because you weren't behind cover just yet on your opponent's screen when he shot you.
This is certainly frustrating and frequently complained about, but I think it's the lesser of various evils for these particular games.
I forgot to mention -- about PPS -- most modern high-speed connections can handle a 30 PPS 8-player Descent game just fine, but there are a few players stuck in rural areas with very old connections, who can't. I'm guessing -- from the 8kB/s limit he set for himself -- that the author's audience sees a similar distribution.
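For a rough sense of scale -- the payload and header sizes below are guesses, not real packet formats -- the arithmetic is simple, and it shows why 60 PPS blows through an 8 kB/s budget while 10-20 PPS fits comfortably:

    #include <stdio.h>

    /* Back-of-the-envelope only: assumes ~30 bytes of game payload plus
       ~28 bytes of IP+UDP header per update, exchanged directly with 7
       opponents. Real sizes vary. */
    int main(void)
    {
        const int payload = 30, header = 28, opponents = 7;
        const int rates[] = { 10, 20, 30, 60 };
        for (int i = 0; i < 4; i++) {
            int pps = rates[i];
            int bytes_per_sec = (payload + header) * pps * opponents;
            printf("%2d PPS, 8 players: ~%d bytes/s each way\n",
                   pps, bytes_per_sec);
        }
        return 0;
    }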
The thing is, you don't have to make those connections symmetric. One of the features I'm working on for the next Descent Retro release is allowing players to set a PPS limit for themselves, based on their own available bandwidth, that will both reduce their PPS upstream, and request that their opponents reduce the rate of the incoming packets.
This means people with fast connections can have high-quality 30 PPS game interactions, and people with slow connections can still play with them. They have to put up with 8 PPS (or whatever), but it's preferable to not playing. :)
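The throttle itself is tiny -- a sketch with hypothetical names, not the actual Retro code: each player advertises a cap, and every sender limits updates to a peer at the minimum of its own cap and the peer's requested cap.

    typedef struct {
        int    requested_pps;    /* cap this peer asked us to respect   */
        double last_send_time;   /* when we last sent them a position   */
    } peer_link;

    int should_send_update(peer_link *peer, int my_pps_cap, double now)
    {
        int pps = peer->requested_pps < my_pps_cap ? peer->requested_pps
                                                   : my_pps_cap;
        if (now - peer->last_send_time < 1.0 / pps)
            return 0;                       /* too soon for this peer */
        peer->last_send_time = now;
        return 1;
    }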
If you're worried about popping, make sure you drop out-of-order position packets. In my experience, the 33 ms of player input between normal packets is negligible. You get popping in theory, but you can't see it. I haven't seen any for a long time, and the smoothing Descent does is both predictive and minimal. But a position packet arriving 100 ms late . . . that's a pop. When I inherited this code, ships did a lot of jittering; when I eliminated out-of-order packets, it pretty much all went away.
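The check itself is tiny (a sketch, not the actual code): stamp each position packet with a sequence number and drop anything that arrives older than what you've already applied.

    #include <stdint.h>

    /* Signed 16-bit difference handles sequence-number wrap-around. */
    static uint16_t last_applied_seq;

    int accept_position_packet(uint16_t seq)
    {
        int16_t delta = (int16_t)(seq - last_applied_seq);
        if (delta <= 0)
            return 0;               /* stale or duplicate: drop it, no pop */
        last_applied_seq = seq;
        return 1;
    }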
YMMV, of course. Descent ships are slow by FPS standards.
Please do not say such a thing without also including the footnotes: movies only get away with 24 FPS by:
1. having no interaction, so the audience has no way to feel the delay between frames -- as opposed to interactive media, where the delay is easily felt as latency between input and action on screen
2. having interpolation built naturally into the camera, i.e. motion blur comes entirely for free
It's all over UDP, but there's some reliability built on top of it. Most irregular events -- chat messages, weapon pickups, door openings -- are sent over and over, until they're acknowledged. Shots are an exception. By the time you can figure out the shot packet was lost, it would be weirder to send it than not to. 300 ms is a very long time in combat.
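A sketch of that resend loop, with made-up structures rather than the real ones:

    #define MAX_PENDING     32
    #define RESEND_INTERVAL 0.2     /* seconds between retries */

    /* Queue each reliable event and keep resending it until its id comes
       back in an ack. */
    typedef struct {
        int           in_use;
        int           event_id;        /* id echoed back in the ack */
        double        last_send;
        unsigned char payload[64];
        int           payload_len;
    } pending_event;

    static pending_event pending[MAX_PENDING];

    /* Call regularly; resends anything not yet acknowledged. */
    void resend_unacked(double now, void (*send)(const unsigned char *, int))
    {
        for (int i = 0; i < MAX_PENDING; i++)
            if (pending[i].in_use &&
                now - pending[i].last_send > RESEND_INTERVAL) {
                send(pending[i].payload, pending[i].payload_len);
                pending[i].last_send = now;
            }
    }

    /* Called when an ack for event_id arrives: stop resending it. */
    void on_ack(int event_id)
    {
        for (int i = 0; i < MAX_PENDING; i++)
            if (pending[i].in_use && pending[i].event_id == event_id)
                pending[i].in_use = 0;
    }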
I do display connection statistics to players -- they use lag to lead, and loss to know if the number of dropped shot packets will be tolerable. A lot of players don't like playing at anything worse than 0% loss, and will meddle with their net settings (or play another day) if they see any.
This is so cool. I remember playing Descent on the PlayStation -- it was the first game I bought. I didn't like it at first, because the guy who sold me the PlayStation said it was sort of like a flight simulator, but, to my great disappointment, you never get to fly outside the mines.
But I put hours into the game, even though it scared the crap out of me (the flashing and dimming lights, with the alarm telling you the mine is going to blow while you still haven't figured out where the exit is -- claustrophobically genius).
To see so much work going into it in the community is awe inspiring.
[1] https://github.com/CDarrow/DXX-Retro