The hardest part is that most game engines are not designed to be networked this way. Try to find an open source physics engine that natively supports teleportation, easing, and prediction; they aren't out there. UE4 is the first engine I've used that seems to have a very nice multiplayer API, and it's only been out for indie developers for a couple of months.
So the hard part is not devising the networking scheme; it's building a whole game engine (or thoroughly modding one) afterwards, at least in my experience.
I was working on a multiplayer racing game project (like GTA2), and my approach was to run two physics engines in parallel. One physics engine would always be authoritative and in sync with the server, but because of the lag it would always be a frame or two (or more) behind. The other would be predicting what was going to happen: every frame it would snap back to the authority, then apply the (predicted and user) inputs on top of that.
The actual position the user would see would be an average of the current predicted position and the previous predicted position (to prevent too much jitter/snapback).
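Roughly, as a sketch (Python, with a hypothetical `PhysicsWorld` standing in for a real engine; only the control flow matters):

    import copy

    class PhysicsWorld:                      # hypothetical stand-in for a real engine
        def __init__(self):
            self.car_pos = 0.0
        def step(self, throttle, dt=1/60.0):
            self.car_pos += throttle * dt    # real integration would go here

    authoritative = PhysicsWorld()   # in lockstep with the server, a few frames behind
    pending = []                     # local inputs the server hasn't echoed back yet
    prev_predicted = 0.0

    def on_local_input(throttle):
        pending.append(throttle)

    def on_server_frame(confirmed_throttle):
        authoritative.step(confirmed_throttle)
        if pending:
            pending.pop(0)           # that input is now part of the authority

    def render_position():
        global prev_predicted
        predicted = copy.deepcopy(authoritative)   # snap back to the authority...
        for t in pending:                          # ...then replay unconfirmed inputs
            predicted.step(t)
        shown = 0.5 * (predicted.car_pos + prev_predicted)   # average to hide snapback
        prev_predicted = predicted.car_pos
        return shown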
If you've got a better scheme to do networked physics I'm all ears :)
Back In The Day (just to show how ancient I am), I personally debugged the first peer-to-peer (vs client/server) networked simulation protocol on the DARPA SIMNET project: http://en.wikipedia.org/wiki/SIMNET
Interesting stories notwithstanding (anybody else here ever get to drive/fire a tank, and have fellow engineers get dosed with CS gas, as part of a software engineering job?), when the time came to standardize this research protocol as http://en.wikipedia.org/wiki/Distributed_Interactive_Simulat..., it didn't go well.
The work was supported by a DARPA small-business project, so the IP was left with the company in hopes of commercializing it and maximizing dissemination. The attempt to get the ideas incorporated into the standard was singularly unsuccessful. More's the pity, as I think it would have really helped make the simulations capable of simulating a wider array of physical phenomena.
The commercial uptake was equally unsuccessful. I experienced some culture shock when proselytizing (again, unsuccessfully) at game development conferences.
> UE4 is the first engine I've used that seems to have a very nice multiplayer API
I'm working on a triple-A UE4 title and I don't like UE4's replication system at all. It's a step toward making the networking concerns invisible, and that always seems like a mistake to me. If the protocol for keeping clients in sync were more explicit, it would be easier to tightly control what gets sent over the network, and when.
Interesting. After reading this Gamasutra article[1] on how online multiplayer can literally add years to your development time, I got a little scared. Will UE4 allow developers to avoid this setback?
(On the other hand, developers like Carmack and Michał Marcinkowski only took a matter of months to add it to their games, and they were among the first. So maybe it's not as big of a deal as it seems.)
I only skimmed it, but I didn't see anyone in there who took years. They were all on the order of months. Entirely possible I missed it.
That said, adding networked multiplayer to an existing game is very difficult. But designing a game around networked multiplayer is not nearly as hard. Single-player or local-only multiplayer games can get away with lots of assumptions that completely break multiplayer, and untangling all of those assumptions is the most time-consuming part of converting a game. If you do it "right" from the start, you never allow yourself to make any of those assumptions and everything just works. If you know you're going to build a multiplayer game, write it as a client-server architecture from day one, even if you know you're not going to play with other people for the first several months of development. Future you will thank present you.
"Implementing online multiplayer was very difficult," developer Casper van Est tells me. "People always warn you about it, and even with that taken into account, it turned out more difficult than we expected. We're a small two-person development team, so we already knew implementing it was going to take a while, but it ended up taking us several years."
Yes, definitely, but only if UE4's network model fits your game. It takes a lot of research, and I'm afraid some experience too, to design a real-time networked action game.
This goes for all unconventional features, though. I saw, for example, that Cubical Drift announced last week that they underestimated their needs and will have to swap out their graphics engine mid-game. Not fun stuff, I'm certain.
We had this problem too when making a racing game. We ended up making remote entities move independently of the physics engine (i.e. cars for remote players were just static objects that had their positions manually updated). This worked to some extent, but we had to do some fudging to make collisions work somewhat realistically.
The clients determined their own position, yes. It was not a peer to peer game and the server ran sanity checks on what the client was telling it (is their velocity acceptable for the car they have? did they accelerate too fast? are they in an area of the world that should not be accessible? etc) which worked pretty well. We deemed it was too resource intensive and too slow to run a physics engine on the server (it was a "massively multiplayer" game).
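The checks were along these lines; a sketch with invented thresholds and field names:

    from dataclasses import dataclass

    @dataclass
    class State:
        pos: float    # 1-D for brevity; the real thing would use vectors
        vel: float

    def validate_update(old: State, new: State, dt: float,
                        max_speed: float, max_accel: float, is_drivable) -> bool:
        # Reject client updates that violate the car's physical limits.
        speed = abs(new.pos - old.pos) / dt
        accel = abs(new.vel - old.vel) / dt
        if speed > max_speed * 1.1:     # 10% tolerance for float/timing jitter
            return False                # faster than this car can go
        if accel > max_accel * 1.1:
            return False                # accelerated too fast
        return is_drivable(new.pos)     # False if in an inaccessible area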
It doesn't sound like that at all. The person you're replying to just said that they were sufficiently frustrated with the physics engine's treatment (or lack thereof) of network phenomena that they bypassed it completely, telling the physics engine "this car is just a static object at this location."
That doesn't mean there wasn't some other process determining that position before it was handed to the physics engine -- either via a central server or something more distributed.
Speaking of the latter, I wonder whether consensus algorithms can be pulled off with a small enough bandwidth that you could incorporate them into a high-intensity game. Your consensus bandwidth is not the slowest-peer bandwidth but rather the median bandwidth, which can be much higher -- the question I'm asking is whether there's a good way for a slow peer to say, "hey, to improve my performance by reducing my bandwidth/latency, can one of you guys act as my server?"
Possibly, but what about collusion? This would be a problem especially with team games: one team has an extra player, and therefore 51%, so they come to the "consensus" that everyone on the other team is always dead.
Well, I don't think it's a huge problem. Maybe I'm wrong.
I feel like it's core to the idea of "let's go play a game" that we largely agree on its rules -- and if more than half of us deny those rules, they are going to go off and play a different game; we won't want to play with them. So one response is for the reference client to simply say "if I see too many totally crazy things I'm just going to disconnect and ignore those peers who were saying that for a while."
But is it really a problem? You've got to imagine that we've got this red-vs-blue team game with two clients: ref and hax. The only way Red will be able to do this is if the red+hax population is greater than 50% among the server-population, because all of the ref clients will reject bad physics.
So suppose we've got a game of 21 people. 15 of them use the hax client (~70% participation), and we'll just assume there are no low-latency peers for the moment. The red team gets 11 consistently; the blue team gets 10. Then, assuming team assignments are totally random, there's still only a 0.12% chance in any given game of the red team actually having 11 hax nodes and dominating the game. In the vast majority of games they'll have to play honestly. And that's with 70% of the peers trying to game the system. (It gets a little worse if we include low-latency peers. So let's assume there are 4 and they get distributed unevenly, 1 on red, 3 on blue. The consensus threshold is now 9. Assuming we lost 3 hax clients in the process, 16% of such games will be vulnerable to your attack. That's enough to make things frustrating.)
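(Numbers like these are easy to sanity-check; here's a quick hypergeometric calculation, assuming teams are a uniformly random split, which may not be the exact model used above:)

    # Probability that every red slot lands on a hax client, given a random
    # 11/10 split of 21 players, 15 of whom run hax.
    from math import comb

    def p_all_red_hax(players=21, hax=15, red_size=11):
        return comb(hax, red_size) / comb(players, red_size)

    print(p_all_red_hax())   # C(15,11) / C(21,11)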
Meanwhile, the hax-client may make things unplayable unless it behaves like the ref-client when it's not in the majority.
The basic point is that the red-vs-blue partition makes those attacks not a concern. But when that partition doesn't exist, then there's a bigger problem. So the more concerning thing for me is denial of service. I don't think I'll get 70% of the legitimate customers to try to hack the game, but I do think that IPv6 support could lead to one person having a block of 10,000 IP addresses being able to take the majority of peers in all of my games. Okay: they may not have enough red peers to win the games for red or enough blue peers to win the game for blue, but suppose that their goal isn't to win, but just to shut down the system. Suppose their modified client, instead of saying "everyone on blue suddenly dies", says simply "everyone suddenly dies". Now 99% of games become unplayable, all of my legitimate users rage-quit, and I'm totally screwed.
So the problem is that sign-ups must be relatively closed and everyone needs to be able to validate that independently. I'm not sure how to solve that in a distributed way without some web-of-trust thing going on.
Sounds like you never played Red Faction. I love that game but watching people fly around the map, clip through walls and shoot 1000 rockets a second (much to the dismay of my graphics card) wasn't the most pleasant experience.
Your articles were SUPER helpful when I added multiplayer to my game, you may be the reason I succeeded in it, alongside Valve documentation. Thank you so much for writing them!
The original article doesn't discuss scaling fast-paced games to large numbers of replicated objects and players under constrained network conditions, but I found the approach proposed in the original TRIBES model to be the most (only?) credible one so far. I don't see support for it in any of the popular, modern network libraries (RakNet, enet, lidgren, ...). They all seem to have taken the 'multiple reliable channels' direction, but that just doesn't seem to scale to many connected players the way the TRIBES model does.
I would love to hear from anyone who has had experience with that!
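For reference, the heart of the TRIBES approach, as I understand it, is to treat every outgoing datagram as a fresh budget and fill it with the latest state of the highest-priority dirty objects, rather than queuing reliable messages. A rough sketch (all names are mine):

    # Each tick: sort dirty objects by priority and fill one datagram's budget
    # with the most important ones. Skipped objects simply wait; when their
    # turn comes, we send their *latest* state, not a queued stale one.
    MTU_BUDGET = 1200   # bytes per datagram; made-up number

    def build_packet(dirty_objects, serialize, priority):
        chunks, used = [], 0
        for obj in sorted(dirty_objects, key=priority, reverse=True):
            data = serialize(obj)               # always the current state
            if used + len(data) > MTU_BUDGET:
                continue                        # not queued -- retried next tick
            chunks.append(data)
            used += len(data)
            obj.dirty = False
        return b"".join(chunks)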
In specific contexts like a 1v1 fighting game, you can get clever; GGPO is the classic example. Every move in the game has a specified "startup" time which is generally a) too fast to react to and b) consistent.
When you jump and throw a punch, you send the frame number along with your attack. My client speeds up your character's game state to match what your client experiences.
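Something like this, as I understand it (a sketch; the simulation API is invented, and real GGPO also rolls back and replays when late inputs contradict a prediction):

    class RemoteFighter:
        def __init__(self):
            self.frame = 0
            self.inputs = {}    # frame number -> input received for that frame

        def step(self):
            inp = self.inputs.get(self.frame, "idle")
            # ...advance one deterministic fixed-timestep frame using inp...
            self.frame += 1

    def on_remote_input(fighter, inp, input_frame, local_frame):
        fighter.inputs[input_frame] = inp
        while fighter.frame < local_frame:   # speed up to match our timeline
            fighter.step()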
Another reason [TagPro](http://tagpro.koalabeast.com) is an excellent game. Its implementation of this stuff is great: with a reasonable connection my ping is usually under 10ms, and the game is often won and lost by the smallest of margins, making this kind of thing very important.
Not to mention the excellent gameplay and mechanics - it's very simple to understand and learn, but very very difficult to play well!
Fascinating. I've been interested for a while in what kind of server-side software real-time online games (WoW, Call of Duty, etc.) run, but haven't been able to find much info on it.
It seems they would need to be optimized for many high-stress concurrent connections with as little latency as possible, so I'd guess they run C/C++ and/or Java? Do they use something like WebSockets, or raw UDP/TCP, or some other persistent two-way connection method?
There don't seem to be any publicly available libraries focused on this kind of thing, so I assume that they develop their networking stuff mostly in-house.
Anyone that knows about this stuff willing to share?
Unless it's in a browser, it's not using WebSockets. TCP where the overhead is acceptable or packet loss is not: turn-based strategy, slower-paced role-playing games. UDP where minimal lag is needed: FPSes, mainly.
WoW used TCP the last time I checked (the overhead of TCP is acceptable for an RPG in most cases). I believe Diablo 3 also uses TCP, with protobuf for serialization.
EVE Online has some very good tech posts and is generally quite open about its technology (although it's mostly about hardware and infrastructure). Valve also has some good articles on networking for Source games (someone posted the link elsewhere in this thread).
> There don't seem to be any publicly available libraries focused on this kind of thing, so I assume that they develop their networking stuff mostly in-house.
Interesting. Enet seems pretty barebones, but I guess that would be beneficial for games with a lot of variance in gameplay, given the parent post's explanation of the level of precision required.
Just a few months ago, I did an overhaul of the network code for DXX-Retro[1] -- a source port of Descent. Descent worked much better over the (laggy, lossy, bursty) net than DOOM, and -- if you're looking to mimic old school games -- is really worth studying.
Some quick technical commentary:
The bandwidth calculation in the article is predicated on sending updates at 60 Hz -- or what we in the Descent community would call 60 PPS. Probably because the screen is refreshing at that rate? It's unnecessary. You want a high framerate for control of your own game, but you don't need it to see enemies smoothly. Remember, movies only run at 24 FPS. ;)
The highest I allow in Descent is 30 PPS, and really . . . it's seen as a luxury. 20 is generally fine. Sometimes I play games at 10, and there you can definitely tell -- even with the smoothing (it sends velocity and acceleration, too, and interpolates) -- but it's perfectly playable.
Which is something worth remembering. With old school games, "crappy but perfectly playable" is actually all they were able to achieve.
No, the physics engines aren't perfectly locked, and how tolerable this is will depend on your game. In a simple FPS, this really isn't a big deal. You just lag lead (compensate in aim, both for the motion of your enemy, and the fact that the data is old). :)
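The smoothing mentioned above is essentially dead reckoning; a sketch (1-D, with invented field names):

    from dataclasses import dataclass

    @dataclass
    class Update:
        time: float   # when the sender sampled this state
        pos: float
        vel: float
        acc: float

    # Between 10-30 PPS updates, project the last received position forward
    # using the velocity and acceleration that came with it.
    def extrapolate(u: Update, now: float) -> float:
        t = now - u.time
        return u.pos + u.vel * t + 0.5 * u.acc * t * t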
Some of how he proposes to send the data is wasteful. He initially proposes sending "is weapon equipped" and "is firing" as a byte apiece -- and later concludes he can get them down to a bit apiece. True. That's a savings by a factor of 8 right there. But I can do you one better -- don't send it every update. How often do those states really change?
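The one-bit version is just flag packing, something like:

    # Pack several per-frame booleans into a single byte instead of a byte each.
    # (Better still, per the above: only send them when they change.)
    def pack_flags(weapon_equipped, is_firing, is_jumping, is_crouching):
        b = 0
        b |= int(weapon_equipped) << 0
        b |= int(is_firing)       << 1
        b |= int(is_jumping)      << 2
        b |= int(is_crouching)    << 3
        return b

    def unpack_flag(b, bit):
        return bool(b & (1 << bit))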
In Descent, we have two classes of data: position and orientation data that's sent at a steady rate (10-30 PPS), and event data that's sent . . . whenever it happens. Equipping weapons is definitely the second type; it happens extremely rarely -- like, seconds pass between weapon switches. :)
One thing that may surprise you. We don't send "is firing" as a flag with every packet -- we send one packet per shot taken! Two reasons for this: one, it's actually lower bandwidth. Shots fire rarely -- our fastest gun fires 20 bullets per second, but the next fastest fires six. Then five, then four, then . . . one shot every two seconds. And you're not always firing, either. Sending one packet per shot saves bandwidth. But it also increases accuracy! We attach those shot packets to a position and orientation update, so -- even if the receiver has incorrectly interpolated your position -- the shot goes exactly where you intended it to. This is very important. :)
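So a shot packet carries its own position and orientation snapshot; the layout below is invented, but the shape of the idea is this:

    import struct

    # One packet per shot: the shooter's exact position and facing ride along,
    # so the shot lands where the shooter saw it go, regardless of how the
    # receiver has been interpolating them.
    SHOT_FMT = "<6fB"   # 3 floats position, 3 floats forward vector, 1 byte weapon id

    def encode_shot(pos, fwd, weapon_id):
        return struct.pack(SHOT_FMT, *pos, *fwd, weapon_id)

    def decode_shot(data):
        *vals, weapon_id = struct.unpack(SHOT_FMT, data)
        return tuple(vals[:3]), tuple(vals[3:]), weapon_id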
As a player (and a programmer -- but mostly a player), I'm concerned about the article's conclusions about the value of prediction, and the recommendation to avoid position popping by applying smoothing -- to quote the article, "it ruins the virtual reality of the game".
Ok, yes, it does. But the thing is, you have a choice here. You can present your players with something pretty and smooth that is fundamentally a lie, or with something jittery that is the best knowledge you have about where the enemy is. This is a fundamental tradeoff: verisimilitude or accuracy. You can't have both.
My players overwhelmingly prefer accuracy. To them, the avatars on the screen are targeting aids, and they understand that the data is old and a bit lossy and bursty sometimes, and they want the best data possible so they can take the best shot possible. :)
I suppose your mileage may vary by audience. Mine's been playing this game 20 years and "crappy but playable" is both normal and good to them. :)
But -- I can't imagine this would be different in another FPS -- taking a shot that you know is good on your screen, and have it not hit the other guy . . . that's rage-inducing right there!
Yeah, networked games are hard. For sure. And there are fundamental hard tradeoffs involved in engineering them. For sure. But it's an interesting problem, and also worth it. :)
I wrote the multiplayer code for Descent 2 and Descent 3 (as well as the graphics engine for D3). Although I can't remember all the details because D2 was back in 1995(!), D2 had a significantly overhauled network layer from D1. Some examples: short packets, where position and orientation data was quantized down into single bytes instead of floats; lower packets per second (you could go down to 5 PPS if I recall correctly). We were also the first game where, if the 'master' dropped out of the game, another player would become the master in a hand-off scheme that was a bit complex. The master controlled things like notifying other players of new players, end-of-level stuff, etc. We had to sweat every byte because we were trying to have 8 players with low lag over a typical 28.8k modem.
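Quantizing a float down to a byte is simple once you fix the value's range up front; a sketch of the idea (the ranges themselves are invented):

    # Map a float in [lo, hi] onto one byte, as in D2's short packets.
    def quantize(value, lo, hi):
        value = max(lo, min(hi, value))
        return round((value - lo) / (hi - lo) * 255)   # 0..255

    def dequantize(b, lo, hi):
        return lo + (b / 255.0) * (hi - lo)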
D3 changed the overall feel of the Descent series, mostly because we introduced a terrain engine and that had a cascading effect on the rest of the game. The speed of the ship, for example, had to be significantly increased because if we used the ship speed from D1/D2 then going out into the terrain felt like you were stuck in molasses.
Working on those games was incredibly fun. Ah, to be 25 again.
> "We had to sweat every byte ... 28.8 baud modem."
Yeah, that stuff really mattered in that era. Nowadays we've eliminated a lot of the compression and upped the packet rate -- a modern "bad" connection can handle 8 players at 10 PPS without compression, and the accuracy is necessary with pilots who dodge as well as some of the modern masters.
> "if the 'master' dropped out the game another player would become the master"
We'd love to reimplement that. It was such a great feature! It was removed when one of the previous coders overhauled the networking code.
> "D3 changed the overall feel of the Descent series"
Yeah. It was a fun game in its own right, but it really shifted the emphasis from dodging to aiming. It didn't play as well in a 1v1 setting, but it had some really compelling team modes. I played CTF every night from about 2004 to when my son was born in 2009.
EDIT: P.S. there's a Descent LAN coming up this month. If you still keep up with Gwar, he has all the info.
> As a player (and a programmer -- but mostly a player), I'm concerned about the article's conclusions about the value of prediction, and the recommendation to avoid position popping by applying smoothing -- to quote the article, "it ruins the virtual reality of the game".
I agree, predicting remote objects causes more problems than it solves. Any remote player input that affects the object causes noticeable position popping when the local client receives the update.
The Source engine has a better solution - instead of trying to predict remote objects, delay rendering by 100ms and interpolate the received updates. This makes the position updates appear smooth without any stuttering. However, now the client lags 100ms behind the server.
The server has to "undo" this lag when the player wants to execute a request (like shooting). It does this by rewinding the world state by 100ms + client ping when executing the request. This makes it so that a client's actions appear to execute without any lag on their view (i.e. they don't have to shoot ahead of their target).
This causes some temporal anomalies, for example, players can shoot you around corners because they see your position 100ms in the past. However, most players seem to prefer this over having to constantly shoot ahead of their targets to compensate for prediction errors.
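The rewind step looks roughly like this (a sketch; the real Source implementation keeps a bounded history per entity and interpolates between snapshots):

    import bisect

    class PositionHistory:
        def __init__(self):
            self.times = []       # ascending timestamps
            self.positions = []

        def record(self, t, pos):
            self.times.append(t)
            self.positions.append(pos)

        def position_at(self, t):
            # Nearest stored snapshot at or before t (assumes non-empty history).
            i = bisect.bisect_right(self.times, t) - 1
            return self.positions[max(i, 0)]

    def resolve_shot(history, server_now, client_ping, interp_delay=0.100):
        shooter_saw = server_now - client_ping - interp_delay
        return history.position_at(shooter_saw)   # test the hit against this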
That's one way of doing it! I suppose it all comes down to what your players expect.
Mine viscerally hate lag, whether they can see it or not -- but especially if they can't. I think if I built 100ms of lag into the client, I'd have a mutiny on my hands. ;)
But I do remember having a pleasant experience in TF2. Source probably is a good source of inspiration, those Valve guys know their stuff. :)
A further thought: this is another of the fundamental tradeoffs in doing this sort of design work.
In a laggy environment, you can either have dodging work 100% correctly client-side (if you dodged it, it didn't hit you, no questions asked) . . . or you can have aiming work that way (if you hit it, you hit it, no questions asked).
You can have one or the other. You cannot have both. (And in a server-based setup, you generally get neither).
And I think which you choose (or which you choose to emphasize, if you give both up) will depend on what sort of game it is. In an aiming-heavy, mostly fast-weapon or instant-hit FPS (like maybe DOOM), you should pick aiming. In slow-weapon, slow-ship, combat-flight-maneuver-oriented Descent, the original developers correctly picked dodging.
Which highlights something else: When doing netcode for an FPS, your engineering decisions are game design decisions. Never overlook that.
> "When doing netcode for an FPS, your engineering decisions are game design decisions."
Of particular note: your engineering decisions directly affect which tactics are viable and to what degree. Can you wait for the last instant to dodge, or do you need to dodge a shot well before it reaches you? Can you kill a laggy player before they have a chance to react, or can they dodge your shots long after they appeared to hit? Can a player with a better internet connection control an area simply by virtue of their shots taking effect sooner (destroying an opponent, or at least wrecking their aim, before their shot gets counted as having fired)?
This came up in the Descent series with a change to the networking model between Descent 2 and 3. The top competitive communities didn't consider Descent 3 to be "real Descent" because the move to client-server (as well as physics changes) changed the gamescape so drastically.
[Disclosure: I'm married to Dove, and we have been playing Descent together for 16 years.]
In Battlefield 3 and 4, DICE chose to implement the latter option (aiming and hit detection are clientside, no questions asked). The tradeoff - and I'm sure you'd agree there is always a tradeoff no matter how we deal with latency - is that as the person being shot at, you'll frequently die some fraction of a second after getting behind solid cover, because you weren't behind cover just yet on your opponent's screen when he shot you.
This is certainly frustrating and frequently complained about, but I think it's the lesser of various evils for these particular games.
I forgot to mention -- about PPS -- most modern high-speed connections can handle a 30 PPS 8-player Descent game just fine, but there are a few players stuck in rural areas with very old connections, who can't. I'm guessing -- from the 8kB/s limit he set for himself -- that the author's audience sees a similar distribution.
The thing is, you don't have to make those connections symmetric. One of the features I'm working on for the next Descent Retro release is allowing players to set a PPS limit for themselves, based on their own available bandwidth, that will both reduce their PPS upstream, and request that their opponents reduce the rate of the incoming packets.
This means people with fast connections can have high-quality 30 PPS game interactions, and people with slow connections can still play with them. They have to put up with 8 PPS (or whatever), but it's preferable to not playing. :)
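Per-peer throttling like that is cheap to do; a sketch:

    import time

    # Each peer advertises the packet rate they can handle; we throttle our
    # sends to each of them independently.
    class PeerRate:
        def __init__(self, pps):
            self.interval = 1.0 / pps
            self.next_send = 0.0

        def should_send(self, now=None):
            now = time.monotonic() if now is None else now
            if now >= self.next_send:
                self.next_send = now + self.interval
                return True
            return False

    # Usage: a fast peer gets 30 PPS, a slow rural peer gets 8 PPS.
    peers = {"fast": PeerRate(30), "slow": PeerRate(8)}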
If you're worried about popping, make sure you drop out-of-order position packets. In my experience, the 33 ms of player input between normal packets is negligible. You get popping in theory, but you can't see it. I haven't seen any for a long time, and the smoothing Descent does is both predictive and minimal. But a position packet arriving 100 ms late . . . that's a pop. When I inherited this code, ships did a lot of jittering; when I eliminated out-of-order packets, it pretty much all went away.
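Dropping out-of-order position packets takes nothing more than a sequence number:

    # Stamp position packets with a sequence number; discard any that arrive
    # behind the newest one already applied. (Real code with a small sequence
    # field would also handle wraparound.)
    last_seq = {}   # player id -> highest sequence number seen

    def accept_position(player_id, seq):
        if seq <= last_seq.get(player_id, -1):
            return False            # stale packet: applying it would cause a pop
        last_seq[player_id] = seq
        return True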
YMMV, of course. Descent ships are slow by FPS standards.
Please do not say such a thing without also footnoting how movies get away with it:
1. they have no interaction, so the audience has no way to feel the delay between frames; as opposed to interactive media, where the delay is easily felt through the latency from input to on-screen action
2. they have interpolation built naturally into the camera, i.e. motion blur comes entirely for free
It's all over UDP, but there's some reliability built on top of it. Most irregular events -- chat messages, weapon pickups, door openings -- are sent over and over, until they're acknowledged. Shots are an exception. By the time you can figure out the shot packet was lost, it would be weirder to send it than not to. 300 ms is a very long time in combat.
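Resend-until-acked is only a few lines on top of UDP; a sketch (a real receiver would also drop duplicate event ids):

    import time

    # Keep resending each event until the peer acknowledges its id.
    # Shots are deliberately exempt, as described above.
    RESEND_INTERVAL = 0.1   # seconds; invented
    unacked = {}            # event_id -> (payload, next resend time)

    def queue_event(event_id, payload):
        unacked[event_id] = (payload, 0.0)

    def on_ack(event_id):
        unacked.pop(event_id, None)

    def flush(send, now=None):
        now = time.monotonic() if now is None else now
        for event_id, (payload, due) in list(unacked.items()):
            if now >= due:
                send(event_id, payload)
                unacked[event_id] = (payload, now + RESEND_INTERVAL)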
I do display connection statistics to players -- they use lag to lead, and loss to know if the number of dropped shot packets will be tolerable. A lot of players don't like playing at worse than 0% loss, and will meddle with their net settings (or play another day) if they see any.
This is so cool. I remember playing Descent on the PlayStation; it was the first game I bought. I didn't like it at first because the guy who sold me the PlayStation said it was sort of like a flight simulator, but apparently you never get to fly outside the mines, to my great disappointment.
But I've put hours into the game, even though it scared the crap out of me (flashing and dimming lights, with an alarm telling you the mine is going to blow when you haven't even figured out where the exit is: claustrophobically genius).
To see so much work going into it in the community is awe inspiring.
This problem was first described in Farmer and Morningstar's "The Lessons of Lucasfilm's Habitat" (1990). Habitat was an MMORPG running on Commodore 64 machines over 300 baud dial-up modems, and they referred to this problem as "surreal time".
Back then, they had latencies of 100ms to 5000ms. The original article here says they can get latencies of 200ms from 99% of Xbox connections. Not much progress in latency in 25 years. I can understand the LAN party nostalgia.
I've been using Bolt [1] for Unity3D for a fast-paced game, and it surprised me how far I got with almost no knowledge of the library. It's UDP all the way and super explicit about everything.
I also love the Source Engine networking. Another one I like is The Torque Game Engine. It has a very similar networking model, I think it may even be a little better (it's the engine that powered the original Tribes game).
I remember buying a license to the original TGE years ago and digging through the networking code, it was great. They've open-sourced the latest version of Torque under the MIT license, and I believe the networking code is nearly identical to the original code used for Tribes.