We built the exact same technology at Agawi (http://arstechnica.com/gaming/2012/09/report-cable-companies...). The only difference was that we did not have the Parsec API. We worked very closely with Microsoft and NVIDIA back then to make it work with fully headless Windows GPU servers, with H.264 encoding on both GPU and CPU. We could have reduced latency by distributing the servers, but we never got to the stage of distributing the GPU cloud. The business never took off; I was not on the business side, so I can't say exactly why. Probably latency, though there are 3D strategy-type games that you could stream. If you need, I can ask the business head of our team and he can elaborate.
One hard nut: the cost of provisioning datacenters with enough GPU capacity to meet the demand curve of a "hot" title, and still make sense in terms of inevitable idle time and depreciation.
Either you have insufficient capacity to meet day 1 load when everyone piles onto a hot new title, or you over-provision, meet that demand and have much of that hardware doing nothing during doldrum seasons (or when a title bombs).
Probably you need to figure out how to make GPU capacity useful when it's not rendering games, and sell that as a service as well (GPU-based machine learning?). It doesn't help that OSes have been prickly about letting processes share GPU resources; I imagine there are a fair number of thorny security problems, even with GPU MMUs.
This doesn't seem like something a cloud gaming start-up can really tackle; lots of capitalization, with lots of competition from entrenched providers, and no really compelling reason to put games in the cloud to begin with. The bigger (console) players probably realized that having consumers buy their own compute is not only cheaper and more resilient depreciation-wise, but also causes a nice platform lock-in effect once the customers have purchased a few titles.
This stuff always seems to be just around the corner, but there are some fundamentals that are always going to keep it from working.
1. Some games just aren't built to tolerate any added latency. I'm talking fighting games (one-frame windows), the majority of FPSes, and anything with a reaction-based rather than a prediction-based game model. These can be made network tolerant with time-rewind game-state resolution (which many good ones do), but that isn't a drop-in change.
2. The market that's interested in this tends to be much more demanding than your traditional consumer. Drop ~150 ms of packets at a critical moment? You've got a really frustrated customer, whereas a game designed around network degradation would handle it gracefully.
Kudos to them for trying, but you're not dealing with just technical issues; there are fundamentals of design that need to be adjusted, and that can't be done without direct developer involvement.
I don't disagree with you, but there is typically multi-millisecond latency in local setups as well, so some latency must be tolerated by basically every game. Carmack had an interesting point about this: http://superuser.com/questions/419070/transatlantic-ping-fas...
Sure, but as they allude to in that article, local latency is consistent (dropping a frame is considered a cardinal sin for anything competitive).
Game design is built with these latencies taken into account; if people were to see the raw network updates, they'd be shocked at how jerky and unplayable they are.
Generally, if you're building a game to be tolerant of network latency, you want a design that's predictive rather than reactive. The former tolerates latency well, since you're trying to guess where something will be in the future. The latter has latency in the feedback loop, which is incredibly painful without some complex (time-rewind) mitigation measures.
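For anyone unfamiliar, here's roughly what the predictive side looks like in practice. This is just a minimal dead-reckoning sketch with made-up names, not any particular engine's code:

```typescript
// Dead reckoning: extrapolate a remote entity's position from its last
// known state so the local view keeps moving smoothly while updates are
// still in flight. When the next authoritative update arrives, any error
// gets corrected (usually blended in so it isn't a visible pop).
interface EntitySnapshot {
  x: number;
  y: number;
  vx: number;        // velocity, units per second
  vy: number;
  timestamp: number; // seconds, sender's clock
}

function extrapolate(last: EntitySnapshot, renderTime: number): { x: number; y: number } {
  const dt = renderTime - last.timestamp; // how far ahead we're guessing
  return { x: last.x + last.vx * dt, y: last.y + last.vy * dt };
}
```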
The games you're talking about (competitive games, games where a single dropped frame is unacceptable) are a very small, niche part of the market. Look specifically at modern consoles: every console has built-in architecture that leads to a minimum input latency of 67 ms. The game can add more latency (and most do), and a lot of popular TVs add more latency on top of that. Here's a database of game input latencies; it shows that even a lot of successful games have latencies above 100 ms: http://www.displaylag.com/video-game-input-lag-database/
So if you're talking about the "highly latency sensitive" players, then by definition those are only PC gamers, and only a small fraction of those players. The addressable market of gamers who tolerate 100+ ms of latency is very large, definitely large enough to build a business on top of.
One thing you have to take into account is that the vast majority of modern console gamers have never experienced PC gaming @ 120hz on a CRT connected to a good old 15-pin VGA connector...
If you were to take a console gamer that was just fine playing a game at 100ms+ latency, and let them play the same game, while magically removing 95% of the latency, they would probably be amazed, and never want to go back to their previous setup with horrendous latency.
So the point I was making (a little poorly, on re-reading) is that latency is fine up to a point (about 200 ms) as long as it's consistent, with no jitter. Input systems -> rendering fall into that category: you don't have your TV dropping frames or delivering them late. You can adjust for a constant latency and "lead" it, which is why predictive game design works so well in multiplayer.
Latency-sensitive players are actually a large part of the market; any action-based multiplayer game falls into that category, including FPS games, which make up a large portion of game revenue. About 10% of the market generates 90% of the revenue, so missing certain use cases excludes a large chunk.
The normal packet jitter on my connection is between 2 and 20 milliseconds. You could stream games to me with a fixed network delay of 120ms (plus 80ms for input/rendering) and there wouldn't be dropped or late frames.
What's the 99th percentile for that? How about packet-loss?
None of this is new in gamedev; we've been building action games successfully since the days of 28.8k modems and 200 ms pings.
Maybe that makes me a bit of an entrenched player (which is a poor position to argue from on HN), but I've yet to see anything fundamental in these technologies that will address the latency issues in the way you can with an interpolated client-side simulation.
I'm working on a game with my own GL engine, and I found that locally even <5 ms of jitter is noticeable, if only because the jitter occasionally causes a "frame time overflow", leading to a skipped frame or a frame being displayed twice.
You set up a fixed delay that is large enough to contain your jitter. It doesn't matter if one packet takes 80 ms and the next takes 105 ms when you don't display either until the 120 ms mark. There will be no visible jitter.
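In case it helps, here's a rough sketch of that fixed-delay buffering. The names and the 120 ms constant are made up, and it assumes sender and receiver clocks are roughly in sync:

```typescript
// Fixed-delay playout buffer: every frame is released PLAYOUT_DELAY_MS
// after it was sent, so a packet that took 80 ms and one that took 105 ms
// are both shown at the 120 ms mark. The jitter is absorbed by the buffer
// instead of reaching the screen.
const PLAYOUT_DELAY_MS = 120;

interface Frame {
  sentAt: number;   // sender timestamp, ms
  data: Uint8Array; // encoded video frame
}

class PlayoutBuffer {
  private queue: Frame[] = [];

  push(frame: Frame): void {
    this.queue.push(frame);
  }

  // Called on every display refresh; returns the next frame whose playout
  // time has arrived, or null if we're still inside the delay window.
  pop(now: number): Frame | null {
    if (this.queue.length > 0 && now >= this.queue[0].sentAt + PLAYOUT_DELAY_MS) {
      return this.queue.shift()!;
    }
    return null;
  }
}
```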
These are good points, but I'd like to add that unless someone is dealing with extreme latency (i.e., >150-200 ms), jitter is typically the bigger problem.
If you have a consistent latency of, say, 100ms, you can account for that. People adjust. They can plan for being a little slower to react.
On the other hand, if you're playing a shooter and you're constantly bouncing from 20ms to 100ms, you're likely going to feel pretty unhappy. You'll be forced to constantly adjust for actions happening slightly too fast or too slow depending on the direction/severity of the jitter.
Networking trends in computers operate in a cyclical pattern. At some point, and probably soon since we're now overdue, we will be talking about 'private clouds' or 'home servers' and houses having one expensive, modular computer and a lot of tablets and televisions being driven off of it.
In that world, this sort of technology could be adapted to shipping video a short distance, say less than 10ms away. As a platform instead of a service these sorts of things might work as well as a port of the game to your platform of choice.
You throw your PlayStation 5 into a closet in the basement and you can play games anywhere in the house, or maybe at your friend's house a couple of blocks over.
Isn't this already happening? I stream games to devices around my house from my office, where I have my desktop and a small Debian server that acts as a media server, Mumble host, and the occasional Chivalry or CoD4 server for friends. I get that the server has a bit of a barrier to entry, but the game streaming part is just Steam. Granted, you need some kind of i386 or x86_64 device at the receiving end that can install Steam.
Yeah, it's technically here with media servers too but the barrier for entry is still higher than your average Joe will bear. The idea of an out-of-the-box solution has been tossed around here a few times I think; a little server that manages networking, storage and other services for your home. One that just works after a simple gui walkthrough for setup, and doesn't make us so dependent on cloud services that exist to sell advertising space.
I don't disagree completely. A certain subset of gamers (the hardware enthusiasts, pros) will NEVER use something like this.
But certainly over wired LAN it does offer near native performance for most games, past the threshold where many gamers would even be aware they were playing remotely. The real questions are: how good will the internet get, and how quickly will it get there?
While it's true that quite a few games (especially multiplayer/competitive games) have strict latency requirements, those are far from being all the games out there.
I think this model has some big features that make it worthwhile, even if it's not for everyone. Here are a few obvious ones:
- No download or even load time; jump straight into the game
- Can play on almost any device, even mobile
- Online co-op/multiplayer for local only games
- Easy to spectate friends (don't need good upload speed)
So if you're a gamer who plays a MOBA/FPS every single day, then sure, having your own rig is nice. But if you're someone who likes playing the latest single-player story game or some indie games, but doesn't want to invest in a gaming PC/console every few years, this model could be very useful.
1) You can simply have vector clocks, and resolve differences like anything else. Most of the time the game's prediction would be correct. Sometimes there would be a small adjustment.
Except you don't have access to game state in these cases so there's no adjustment that can be made.
You can only move forward in the frames you display. You can't walk back in time to evaluate a player's action in relation to their latency and then reconcile game state with all other clients.
Yes you can. When the server sends the "official" record of moves made, if the client detects any discrepancy with expected results, it resets the game to a slightly earlier time and replays the "real" moves to reach the current point. All this is done instantaneously, and only at the end is the interface refreshed.
> resets the game to a slightly earlier time and replays the "real" moves to reach the current point.
That's exactly what properly engineered network games do. However, we're talking about video streaming here, where you don't have access to any game state. It's not possible without the developer going in and adding specific support for your streaming service.
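For the curious, a toy sketch of that rewind-and-replay loop, mostly to show why it needs game state on the client (names and types are invented, and applyInput stands in for the game's own step function):

```typescript
// Rewind-and-replay reconciliation: when an authoritative state arrives
// for an older tick, reset to it and replay the local inputs the server
// hasn't processed yet. Only the final result is rendered, so the rewind
// itself is never visible. None of this is possible with a pure video
// stream, since the client has no state to rewind.
interface Input { tick: number; move: string; }
interface GameState { tick: number; /* positions, health, ... */ }

// Game-specific step function: advance the state by one input.
declare function applyInput(state: GameState, input: Input): GameState;

function reconcile(serverState: GameState, pendingInputs: Input[]): GameState {
  let state = serverState;
  for (const input of pendingInputs) {
    if (input.tick > serverState.tick) {
      state = applyInput(state, input);
    }
  }
  return state;
}
```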
Can you go into more detail with (1)? It's unclear to me how vector clocks could help with, say, a fast-paced FPS. The "small adjustment" might often be bringing a dead character back to life, if his bullet packet's vector clock claims he shot first. (Hack potential, too...)
There is always hack potential. All the server can do is make sure that the claimed move is legal (eg fits the rules of chess) and realistic (check a string of inputs to see if they probably have been made by a human).
As for the adjustment... Yes bringing a character back to life after a momentary error, in one place on the map, as seen by one person, is a small adjustment that happens rarely.
It doesn't defeat the purpose of streaming. You don't have to emulate everything in the game, just everything in your vicinity that can possibly affect what you see and hear.
That is how client-server games already work. The server has the full world state, and each client has a subset necessary to render the world for that player. Ideally the absolute minimum subset, so as to reduce the potential for cheating (wallhacks, etc.).
The purpose of streaming is to remove the need for the simulation and rendering at all. The client is a dumb terminal that just renders frames and records input.
The moment you start making the client smart again — making it aware of a subset of the world state, making it do its own rendering based on the input — you've just reinvented current client-server gaming.
That would be extremely complicated and would require additional support from every game. Streaming right now takes every frame, compresses it, and forwards it; it's as simple as that, and it works for everything.
The cost you're talking about only applies to physics and logic in multiplayer games; it doesn't apply to single-player games. And even then it is negligible compared to the biggest cost in gaming: rendering. Prediction requires rendering to be done on the client, which requires a hefty video card and defeats the purpose of streaming.
Yes, you're right, but in this case we have a central server farm AND clients can time out. The server can drop the vector clock of all connected clients for tick X once all clients receive the tick X updates or time out for tick X.
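If I'm reading you right, the server-side bookkeeping would look something like this. Just a sketch with invented names, not claiming this is what anyone ships:

```typescript
// The server tracks the highest tick each client has acknowledged and
// which clients have timed out for the current tick. It can discard the
// vector-clock history for tick X once every connected client has either
// acknowledged X or timed out.
function canDropTick(
  ackedTick: Map<string, number>, // clientId -> highest tick acknowledged
  timedOut: Set<string>,          // clients that timed out for this tick
  tick: number
): boolean {
  for (const [clientId, acked] of ackedTick) {
    if (!timedOut.has(clientId) && acked < tick) {
      return false; // someone still hasn't seen tick X
    }
  }
  return true;
}
```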
I'm very interested in streaming 3D content creation UIs.
A major issue with virtual filmmaking is that the datasets are huge, which makes it very, very difficult to work with a remote team, because you can't effectively get the dataset to people's machines even just to work on.
What I'd like to do is host our dataset AND the 3D content creation apps on our own hardware at Switch (in Las Vegas), and then have our employees and contractors throughout California access 3D content creation apps remotely, via something like your streaming approach.
I just tested this service with a full-blown 3D development environment (Unity3D) on an AWS instance, and it works amazingly well and is very responsive. Through Amazon's gigabit connection everything installs in seconds as well.
I had a fairly low resolution (800p) set, but I can totally see this being viable for CAD/3D development, etc.
We've heard of something like this before, and while we're very much focusing on gaming right now, our goal is to make Parsec general purpose enough to fit into a lot of different use-cases. I see no reason why it couldn't be used for what you're describing.
Good job!
I see that your first picture is from Rocket League; I usually play Rocket League with the Steam Controller :) It would be really nice if the Steam Controller could work with this.
Luckily, the video processing libraries provided by Intel/NVIDIA/AMD are cross-platform (Win32/Linux). The ARM-specific video processing code for the RPi is a different can of worms, but the rest of the client (window creation, polling for input, etc.) should be able to support Linux and X11 generally.
Hey, nice post! Quite a detailed write-up and an impressively optimized video pipeline. I'm curious how much end-to-end latency you experience on average?
On newer hardware (GeForce GTX 900+ series on the server side, recent Intel HD Graphics on the client), our system only adds 8-10 ms. On older rigs it might be somewhere around 20 ms. Of course, network latency goes on top of that.
Really cool stuff. I just read the article that inspired you to do this today, because I wanted to actually test it.
If I understand correctly, you also have a streaming server to install on the local network, so it's basically doing what Steam In-Home Streaming does? How does it compare to that in terms of performance?
Client side, we've already done it (we have a Raspberry Pi client that actually works well, but it still needs a bit of polish).
Server side I'm not sure. I know the Windows API for grabbing frames is really good, and I assume there's a similar API for Linux. Shouldn't be too bad. But you will likely see a macOS server first...
I wonder how well it would work with this setup if you were connected to it over a VPN. Presumably (despite the latency) it would consider it the same network.
This is really interesting, I did not know it was possible yet.
A future advancement that could be made possible by this technology is the complete elimination of "laggy hitbox" type effects: if the server that handles physics is also handling rendering, it can make sure to show a consistent picture of the environment to every player.
That would be equivalent to disabling lag compensation, which isn't possible anyway if you're just streaming video. Some older FPS games (Quake 1's original netcode, Halo) don't do lag compensation, and it's not pretty.
The "laggy hitbox" effect only happens because the client predicts non-player actions before the action data actually arrives. The alternative is to have actions "officially" occur when the packet hits the server instead of when the player actually acts, which means that players with high latency will be at a disadvantage. With streaming video it seems this problem is unavoidable.
All of the rendering would be server-side, so you won't even see the muzzle flash until the server is aware of it.
With client-side rendering, the client draws a muzzle flash as soon as you fire, and there could still be an enemy in the line of fire at the time the muzzle flash goes off, and the shot still not hit if the enemy has already jumped out of the way.
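To make the tradeoff concrete, here's roughly what lag compensation does on the server in a conventional client-server setup. This is a simplified sketch with made-up names, not any particular engine's implementation:

```typescript
// The server keeps a short history of world snapshots. When a "fire"
// packet arrives, it rewinds the hitboxes to roughly the moment the
// shooter saw the world and tests the shot against those, not the current
// ones. With pure video streaming there's no client-side view to rewind
// to, so shots can only be judged at packet-arrival time.
interface Hitbox { playerId: number; x: number; y: number; radius: number; }
interface Snapshot { serverTime: number; hitboxes: Hitbox[]; }

const history: Snapshot[] = []; // appended every tick, trimmed to ~1 second

function resolveShot(fireTime: number, shotX: number, shotY: number): Hitbox | null {
  if (history.length === 0) return null;

  // Rewind: pick the snapshot closest to when the shooter pulled the trigger.
  let best = history[0];
  for (const snap of history) {
    if (Math.abs(snap.serverTime - fireTime) < Math.abs(best.serverTime - fireTime)) {
      best = snap;
    }
  }

  // Test the shot against the rewound hitboxes.
  for (const hb of best.hitboxes) {
    const dx = hb.x - shotX;
    const dy = hb.y - shotY;
    if (dx * dx + dy * dy <= hb.radius * hb.radius) {
      return hb;
    }
  }
  return null;
}
```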
I've been wanting to do a project to let a group of friends play multiplayer in a certain turn-based 4X game by streaming from a remote server like this. The latency wouldn't need to be super low since it's turn-based, so a browser-based streaming client could work. The game has a hotseat multiplayer mode, so one person could take a turn, and then the server could go into a suspended mode while it waits for the next person to log in. The idea is to save the time of having to load up the game to take one turn, and of having to be at a computer that already has it installed.
Tried the AWS configuration and things went great (from Geneva to eu-east, symmetrical 1 Gbit at home). Indeed, the major source of frustration was the video codecs, not the networking.
As someone who does not live near any major datacenters, this kind of service just doesn't work for me. At all.
The absolute lowest ping I can reliably maintain from my house to a major datacenter is to the ones in CA, and it's around 60 ms. So I'm effectively attempting to play a game at 15 fps. Workable for some 4X games and a few single-player action games, but nowhere near pleasant.
Since my real-world pings to game servers are much closer to 100-150 ms (10 fps), with the occasional burst of packet loss (especially in the evenings), cloud gaming is unrealistic (PS Now basically says "No, you can't use this service").
I'm getting 60 ms averages from work (which is on fibre) to AWS East (where Parsec hosts their website), so there's no way this would ever be viable for me.
Seems like it would have limited appeal to non-casual gamers, if I'm honest. You have to be near a major datacenter with reliable broadband, and you could never play competitively in a broad set of latency-sensitive games. On the other hand, I could definitely see the appeal for those who want to play the latest Peggle or Parking Dash games.
You can't convert latency into FPS like that. You can have seconds of latency but stream at 60 fps, or 5fps streams with very low latency. The two numbers are independent.
The US is kind of at a disadvantage for this because distances are so huge, but in smaller European countries it's not uncommon to get <20 ms pings to game servers around the country if you have a decent broadband connection.
PS4 has been doing this for years through PlayStation Now and Gaikai tech. Remote play works phenomenally well, as does PS Now. You should go pick up a trial of the PC app (psnow.com) and try it out for yourself.
People laughed when OnLive said they could do it. And then OnLive went bankrupt.
As you admit yourself, you can't do anything about network lag and reliability. Even if you get video encode/decode down to 0 ms, this will not be viable for any game where the camera moves.
I've played CS:GO from NY to Virginia with Parsec (to an AWS machine) and topped the scoreboard consistently (in pugs).
Also did a three-hour raid night on the new Legion content in WoW with it; lots of dynamic camera movement there. Cleared Nythendra on Heroic as the 4th DPS.
May not be to everyone's liking, but worked for me.
At the end of the day it's just about how local you can make the data centers. If you have a few ms of ping, great bandwidth and hardware to encode the video, it's absolutely feasible.
While the speed of light isn't getting any faster, bandwidth and hardware encoding both are (relative to the speed at which we're increasing resolution and image quality).
And the really nice thing about it is that you can afford to spend an order of magnitude more on the hardware if it's shared around and played casually than if it just sits in your closet most of the time. And you can game on laptops or even mobile devices easily.
The experience could actually be fantastic, and hugely social: imagine reading a review with screenshots from the game, and clicking on one lets you start playing from that exact moment in under a second, just like you can send a YouTube link mid-video.
The trouble is, there are a hell of a lot of cities in the world. In Chicago, everyone could be gaming like this in ten years; but in rural New Zealand?
> The experience could actually be fantastic, and hugely social: imagine reading a review with screenshots from the game, and clicking on one lets you start playing from that exact moment in under a second, just like you can send a YouTube link mid-video.
IIRC, Gaikai was aiming for that exact application (live game demos) before they got acquired by Sony.
I think this may be a solid option for some people, and the number of people is going to continue to increase. AT&T is actively building out fiber to the home, which dropped my ping times to nearby servers from 22 ms to 5 ms (from interleaved VDSL2 to GPON; earlier I was on fastpath ADSL1 and had ping times around 7 ms, but much lower bandwidth).
I found some people on dslreports forums with Comcast's DOCSIS 3.1 service, and their ping times were around 15ms to a local server, which is still ok enough.
The upgrade process is going to be slow, but 4 years after OnLive died, there's a significantly higher number of people who have a good enough connection to use it. It sounds like there's a lot of improvement on the server side too, as well as in hardware-assisted compression/decompression. Monitors have also been pushed to decrease processing latency, and everything adds up.
You're right. I misunderstood the OP's "Even if you get down to 0ms" as referring to network latency, stating that encoding and decoding would still be a problem (which the Wii U shows it's not).
That said, 15 ms of network latency isn't too bad at all, considering that it takes your TV 40-80 ms to display a frame received over HDMI. Fortunately, with the advent of VR, display manufacturers have finally taken notice of this built-in lag. So that's getting better, too.
I do not support the idea one bit.
Sure, great stuff and all, if you don't mind having everything behind a bottleneck, in the hands of other people. I do.