Revised and much faster, run your own high-end cloud gaming service on EC2 (lg.io)
592 points by SG- on July 10, 2015 | 231 comments



I have followed the original instructions and after a couple of days of tinkering it is now my go-to service for gaming. I can play AAA titles on my Mac without them consuming precious SSD space, and the computer doesn't get anywhere near as hot as when I was running them in Boot Camp. The cost is quite affordable when you make your own AMI with Steam and everything preinstalled. Since booting the machine and setting everything up takes around 10 minutes, I also don't get tempted to play when I should be working. It is a much more conscious decision. I only had to get an Ethernet cable because Wi-Fi was too flaky. But now it is very solid with a 50 Mbit DSL line and an average ping of 60 ms to Ireland.


I, too, followed the original instructions and played The Witcher 3 and Ark: Survival Evolved on my Mac against an AWS instance in the Frankfurt datacenter. This was from Vienna, which meant an average ping of around 40 ms.

It was a fun little project to get it all working, and it was playable, but barely.

The added latency makes you feel a little floaty (like a bad VR experience, the kind that makes you sick). The Witcher looked good when you stood still, but not while moving. Ark's performance was abysmal on the AWS hardware, such that I had to turn down the settings.

Part of the bad experience is likely my ISP's fault - I had spikes of packet loss - though I think this is fairly typical.

The cost is mostly not in the spot instance (that /is/ cheap) but in the EBS storage that you need to keep around to attach whenever you launch a spot instance to play. The bandwidth cost is non-trivial for the amount of data you're streaming. I had about a 40 euro bill for the month under somewhat light use.

In the end, I can't say I recommend it. OnLive and Gaikai didn't do so well. That might have been their business models, but I believe the tech is equally to blame. It's a ridiculously hard thing to get right consistently.


Well, to be fair, ARK is not the best benchmark. It's still in early access and not optimized. I need to run it at Medium settings on my i7/GTX 970 rig. The devs know this and are working on it.


I used OnLive. The response latency, even if it's good for a while, isn't consistent enough, even on a good connection, for playing anything other than Lego Star Wars or turn-based games without screaming in frustration. You'll be hit with sudden input delays or stuttering that just screw you.


This was not at all my experience with OnLive. I played Just Cause 2, and even having a relatively terrible connection through Time Warner Cable, it worked well enough that I was able to play the entire game without any noticeable input latency. The only exception being at times that TWC was having severe network issues, but that was both rare and understandable. The OnLive service itself worked pretty flawlessly for me.


How much data did you use and what did you pay for it? Probably like 75%, right?


I used about a gig an hour which came to around 8 dollars for the month. Storage was 15. The g2.2xlarge was 9 for 112 hours of use (I was lazy and didn't always shut it down between sessions).


Seems like it'd be useful to write a script that detects idleness and issues commands to the AWS CLI to shut down the instance.
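A minimal sketch of that idea: a watchdog run periodically (say, from cron every 5 minutes) that stops the instance after enough consecutive idle checks. Everything here is an assumption for illustration, not from the article: the instance ID is a placeholder, "idle" is approximated as "no active streaming connections", and the thresholds are arbitrary.

```python
# Hypothetical idle watchdog, run every 5 minutes.
# All thresholds and the instance ID are placeholder assumptions.

def next_idle_count(active_connections: int, previous_count: int) -> int:
    """Increment the idle counter when nothing is connected, else reset it."""
    return previous_count + 1 if active_connections == 0 else 0

def should_stop(idle_count: int, limit: int = 6) -> bool:
    """With 5-minute checks, a limit of 6 means roughly 30 minutes idle."""
    return idle_count >= limit

def stop_instance(instance_id: str) -> None:
    """Issue the actual shutdown via boto3 (assumed installed and configured)."""
    import boto3  # imported lazily so the pure logic above runs anywhere
    boto3.client("ec2").stop_instances(InstanceIds=[instance_id])

# Example: six consecutive checks with zero connections trip the watchdog,
# at which point you'd call stop_instance("i-0123456789abcdef0") -- a
# hypothetical instance ID.
count = 0
for conns in [0, 0, 0, 0, 0, 0]:
    count = next_idle_count(conns, count)
print(should_stop(count))  # -> True
```

Counting "active connections" could be done by parsing `netstat` output on the instance, but any idleness signal would work with the same counter logic.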


112 hours is not light usage.


Oh, great news. When I checked last time there were no instances in Frankfurt. You of course have to use spot instances as well.


The guide advocates an EC2 security group that allows everything, plus disabling the Windows firewall. That's quite insecure, and unnecessary.

It's probably better, and not more work, to create a security group that only allows:

* UDP on port 1194 (OpenVPN server)

* TCP on port 3389 (Remote Desktop)

* ICMP (for ping)
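A sketch of that restricted group via boto3, using the older EC2-Classic style calls that were common at the time. The group name is a placeholder, and the wide-open CIDR should ideally be tightened to your own IP; this isn't runnable here since it needs AWS credentials.

```python
# The three allowed ingress rules from the comment above.
# For ICMP, port -1 means "all ICMP types/codes" in the EC2 API.
RULES = [
    ("udp", 1194),   # OpenVPN server
    ("tcp", 3389),   # Remote Desktop
    ("icmp", -1),    # ping
]

def create_locked_down_group(name: str = "gaming-locked-down") -> None:
    """Create the security group and authorize only the three rules."""
    import boto3  # lazy import: only needed when actually creating the group
    ec2 = boto3.client("ec2")
    ec2.create_security_group(GroupName=name,
                              Description="OpenVPN + RDP + ping only")
    for proto, port in RULES:
        ec2.authorize_security_group_ingress(
            GroupName=name, IpProtocol=proto,
            FromPort=port, ToPort=port, CidrIp="0.0.0.0/0")
```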


Aren't you leaving out all the networking done by the games that way? My college was famous for blocking multiplayer StarCraft games in a new way every week, and us students for finding a way around it every week. Kind of sad that Blizzard eventually sued the B.net replacement that was needed due to crappy filtering like that.


In most cases, this shouldn't pose any issues (though some games doing crazy things with networking could be affected).

These rules restrict the ports listening directly on the internet. The average gaming PC is behind a home router, has no ports reachable from the internet, and network games still work fine.

If your college was like mine, what they did was filtering outbound connections, which is different. Outbound traffic is unfiltered by default on EC2, and it's fine to leave it that way.


It's probably not a good idea to play any competitive multi-player games on this service anyways, considering the additional latency and higher possibility of interruptions.


Is OpenVPN even necessary? Open 3389 to only your IP address in the security group.


Yeah, using OpenVPN allows your client and server to reside on the same subnet, which I presume is necessary for Steam In-Home Streaming.


You need OpenVPN for the Steam IHS LAN discovery and UDP streaming. With my setup I still use Hamachi, which also does the job and was much easier to configure. The downside is that it is proprietary and every now and then has phases of extreme packet loss that make playing impossible.


For this setup, yes - a VPN is necessary to use Steam's inbuilt streaming feature.


The idea is that OpenVPN enables IP multicast to be tunneled over the public Internet, which Steam's streaming requires.


You're not playing over RDP; that would be terrible.

You're using Steam In-Home Streaming through the VPN.
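For reference, a hedged sketch of the server-side OpenVPN settings this implies: Steam's LAN discovery relies on broadcast/multicast traffic, which only crosses the tunnel if it is a layer-2 (tap) tunnel bridged onto the same subnet as the client. All values below are illustrative placeholders, not taken from the article's configuration, and the OS-level bridge setup is omitted.

```
dev tap0
proto udp
port 1194
# Bridge the VPN onto one shared subnet so LAN discovery can work.
# Gateway, netmask, and client address pool are made-up examples:
server-bridge 192.168.1.1 255.255.255.0 192.168.1.100 192.168.1.110
```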


We might just have seen the future of PC gaming DRM: you pay per hour instead of a one-off payment.

There's one problem though, and it's latency; even 50 ms will feel very laggy. We need more decentralized data centers! With a data center in each city you could get latency down to less than a millisecond.

I think the next digital revolution will be low latency, with a flood of new services coming with it.


> We might just have seen the future of PC gaming DRM: you pay per hour instead of a one-off payment.

While I would hate to be locked into an ecosystem that was pay-per-usage, this article is just amortizing the cost of hardware over time.

The PC gaming community is (I hope) fairly intolerant of any sort of lock-in. If Steam went the way of pay-per-hour there's enough competition that we'd see a transition to other services.

Unless Valve goes through some other route to give themselves an advantage (e.g., buy Xfinity Valve package and get unlimited access to Valve Streaming services, the rest of the internet is low-bandwidth/high-latency), there's not much to worry about.


> The PC gaming community is (I hope) fairly intolerant of any sort of lock-in

> Steam

I've got no problem with Steam, but it's a pretty locked-in platform already.


I don't think lock-in would be an issue. There are, however, other issues that I think will be very hard to solve. This is a quote from a salesman trying to sell me 3G Internet:

"Why on earth do you need more data than this!? You know downloading movies is illegal, right!? Oh, and you will also get a free iPad."

The problem is that people refuse new technology like fiber networks, because they do not see any use-cases beyond e-mail and Facebook.

My depressing thought about ISPs is that the only reason they use fiber instead of copper is that fiber is much cheaper over long distances.


On the other hand, immersive VR headsets à la Oculus are just around the corner, where extremely low latency requirements pretty much make streaming a dead end.

IIRC Carmack has written several posts on how they are chasing down and killing latency in everything from USB input to LCD front buffering.


I think they started with around 100 ms of response time and have managed to get it down to around 20 ms. Hopefully they will be able to get it even lower, so that one ms of network latency won't matter.

Imagine getting a VR headset like the Oculus where the only thing you needed to do was plug it into the network, and then have access to virtually all games available for an hourly fee!?


Even at 250 FPS, waiting for the next frame takes 4 milliseconds. If we can get data center latency down to 1 ms, it's no longer an issue in the context of VR, or gaming in general.


You realize that 99.9% of the world cannot reach an AWS data center in 1 ms, based on the speed of light alone? And that assumes routing takes no time, which is also false.
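A back-of-the-envelope check of this point: light in fiber propagates at roughly 2/3 of c (an approximation), so a 1 ms round trip bounds the one-way fiber run at about 100 km, before counting any routing, queuing, or encoding delay.

```python
C_KM_PER_S = 299_792        # speed of light in vacuum, km/s
FIBER_FRACTION = 2 / 3      # rough slowdown from fiber's refractive index

def max_one_way_km(round_trip_ms: float) -> float:
    """Upper bound on one-way fiber distance for a given round-trip time."""
    one_way_s = (round_trip_ms / 2) / 1000
    return C_KM_PER_S * FIBER_FRACTION * one_way_s

print(round(max_one_way_km(1.0)))   # -> 100
```

So a sub-millisecond ping really does require a data center within roughly city distance, as the parent comment suggests.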


OnLive tried it already.

It failed spectacularly.


They tried it in the early 2000s, when the network throughput and latency just wasn't what it needed to be. Now we have much better networks, more available cloud computing, and a populace that's more used to streaming things. I'm not sure people would accept it just yet, but I don't think OnLive would fail quite so spectacularly now.


Early 2000s? The company was founded in 2003, and I can't seem to find any articles mentioning them prior to 2009.


Yes, they didn't get going until then, and ended up running... 4 years total?

For strategy games and such, I'm sure streaming will always be viable. As we move into the VR realm, even a tiny amount of latency won't be acceptable. So we're always going to need local rendering for action games.


I used OnLive. OnLive's problem was NOT latency. It was shit content.


Yep. I was willing to pay for good games, but they had nothing new for years.


Games are going to have 100+ gigabyte install footprints later this decade, and terabytes when 8K gets here. Phones, tablets, vr headsets and cheap computers - especially cheap steam boxes - will have to rely on streaming.


I recently bought a 5 TB HDD for $130. I don't think storage is going to be the issue.


Does that HDD fit in your VR headset? Phone? Tablet? Their constraints are why streaming will matter. That and multi-day download time vs on-demand.


The GPU needed to play games on a VR headset doesn't fit in it either; that's why they plug into a computer. And for something that fits in a phone/tablet, a 64 GB microSD card is $30. 64 GB on something the size of a fingernail for $30. Ten years ago a 60 GB 3.5" HDD cost more than that. Storage expands and gets cheaper at a crazy fast rate.


Kind of depends on why it failed.


It is fucking expensive.

A solution like this is a clever hack, because it's taking advantage of an immense economy of scale provided by a company with really deep pockets operating on a multinational pool of servers that still on balance see a wide variety of different use cases, allowing for a lot of distributed load.

Imagine if every user was demanding that same level. As I write this, Steam is counting over 8 million players logged on. Now imagine trying to guarantee them all gaming-level real-time performance (fun note here: that GRID card used for the AMI costs $2,000 alone), and doing it all at a reasonable price.

Even our Amazon hack isn't doing that. 50 cents an hour doesn't sound like much, but the average gamer is putting in 22 hours a week. Some five million enthusiasts are regularly pulling 40. That's anywhere from $40-80 a month. Not to mention the cost of the games themselves.
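The arithmetic behind that $40-80 range, assuming the 50-cent hourly rate and a flat four weeks per month (the simplification the figures above imply):

```python
RATE_PER_HOUR = 0.50   # dollars per hour of streaming

def monthly_cost(hours_per_week: float, weeks: int = 4) -> float:
    """Monthly spend at the hourly rate, assuming a flat 4-week month."""
    return RATE_PER_HOUR * hours_per_week * weeks

print(monthly_cost(22))   # average gamer -> 44.0
print(monthly_cost(40))   # enthusiast    -> 80.0
```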

OnLive was trying to offer this for $15 a month. And originally, they were even footing the cost of the games, aiming for a Netflix/Gametap approach (the latter of which also failed, I might add).

Unsurprisingly, they went spectacularly broke.


Yep.

Something like this will only be economical if you plan on using it just for the occasional single-player game a few times a year.

If you use it to play games regularly at even modest frequencies (20+ hours per week), the 50c per hour will quickly accumulate into a recurring monthly bill of $40+.

At that point you might as well invest in a decent gaming desktop and stream from it instead. An i3 + 750 Ti based system can match the performance of this setup and would cost less than $500 all-in, which is about the cost of one year of streaming from this setup at 20 hrs per week. You'll get a much better experience due to the much lower latency, and you won't have to worry about penny-pinching on every session.


> An i3 + 750 Ti based system can match the performance of this setup

Are you sure about that?


Probably not in terms of pure compute power, my wording is definitely a bit off there.

However, the overall gaming experience with the local streaming setup will probably be similar to, if not better than, the remote streaming setup regardless of absolute hardware power, because an i3 + GTX 750 Ti can definitely handle most games at 60 fps in 720p (and a lot of them at 1080p, in my experience). So the comparison ends up being streaming 720p with 50 ms latency over WAN due to bandwidth constraints vs streaming 720p (and some 1080p) with <5 ms latency within a LAN.


The GPU alone costs $2k. The CPU they use costs $1.5k, and is split in two between you and another user.


Sorry, would you happen to have any sources on the claim that the CPU is split between only 2 users?

I had always assumed that their utilization rates would be much higher than 2 users per physical CPU, even for higher end instances like this one.

The Nvidia GRID GPU almost definitely is shared as well, since that's the entire purpose of the GRID SKU:

http://www.nvidia.ca/object/grid-technology.html

According to this, the K520 supports up to 16 concurrent users while sporting two GK104-based GPUs' worth of power. When fully utilized, I figure each user won't be getting much more power than a single midrange GPU, and I can't imagine why Amazon would choose not to keep them fully utilized either. If you have any sources saying otherwise, I'd love to see them as well.


You're probably right about the GPU.

Regarding the CPU, when creating an instance you can see this: "G2 instances are backed by 1 x NVIDIA GRID GPU (Kepler GK104) and 8 x hardware hyperthreads from an Intel Xeon E5-2670". According to Intel's product page (http://ark.intel.com/products/64595), the processor only has 16 hyperthreads, so 2 users per CPU. My reasoning may be wrong though; I'm not a virtualization expert at all.


From first-hand experience I have to say even 120 ms is fine when you are not playing competitively.


120 ms of latency in network communication of game state is different from 120 ms of input latency. Engines use client-side prediction to ensure that the game world responds to input and view changes in soft real time even while the server round trip is still in flight.

If your mouse cursor or terminal was continually an eighth of a second behind your input, you'd get pissed fast.


However, since the game state has to be rendered on the server, there would still be at least 120ms input latency.


Correct. Strictly speaking, 240 ms: 120 there, 120 back once rendered.


There was a post here sometime in the last year about a Microsoft Research tech demo which traded bandwidth for latency by sending all possible future frames along with the current screen, enabling client-side prediction to again eat one leg of the trip.

Anyone remember this?

e: Found it!

http://www.pcmag.com/article2/0,2817,2464341,00.asp

https://news.ycombinator.com/item?id=8210957 (comments on actual paper)

These are the relevant technical details from the PCMag article:

Microsoft's DeLorean system takes a look at what a player is doing at any given point and extrapolates all the possible movements. It streams a rendering of these from a server to a player's console. Thus, when a player decides what he or she plans to do, that scene—for a lack of a better way to phrase it—is already ready to go.


I don't think competitive is the right word (I'm leaning towards "engaging"). Even if I'm casually gaming, it can be a turn-off to notice timing issues. More so if I have to adjust my behavior to accommodate them.


Well, when I was younger I was very particular about these things (CRT vs. early TFT, etc.), so I would not say it isn't engaging at all. Just that you will probably get frustrated very quickly when trying to outperform players with direct access in something like an FPS.


I tried to play GTA V when the first article came out, and while it wasn't unplayable, the latency was frustrating at best. Driving was futile, and I have a much, much lower latency to AWS than 120 ms:

  > ping -c 4 sdb.amazonaws.com 
  PING sdb.amazonaws.com (176.32.102.211) 56(84) bytes of data.
  64 bytes from 176.32.102.211: icmp_seq=1 ttl=239 time=7.35 ms
  64 bytes from 176.32.102.211: icmp_seq=2 ttl=239 time=7.51 ms
  64 bytes from 176.32.102.211: icmp_seq=3 ttl=239 time=7.11 ms
  64 bytes from 176.32.102.211: icmp_seq=4 ttl=239 time=7.15 ms
  
  --- sdb.amazonaws.com ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3003ms
  rtt min/avg/max/mdev = 7.119/7.285/7.512/0.180 ms


7.5 ms of latency will not degrade your gaming experience, I'm pretty sure. If you're gaming at 60 fps, and move your mouse right after a new frame is displayed, the mouse movement won't be visible until the next frame, 16.66 ms later. And 60 fps feels smooth to me.

On my home internet connection, pinging, for example, google.dk gets me a response time of a few milliseconds, and a HTTP GET request for the root URL has the same latency. But if I do a HTTP GET request as part of the search, the latency is much higher.

I think the google.dk/com main page (and probably other heavily visited sites) are cached by ISPs, so that you don't necessarily reach Google when you ping or HTTP GET the root domain, but rather some network cache device between you and your ISP.

So be careful trusting that ping latency necessarily equals HTTP GET latency, or latency for some other request, to a server.


That is weird. It would be worth investigating whether you have any other bottlenecks that are increasing the latency. Monitors, for example, usually have a response time, from input to when you see the image, of around 10-20 ms.

Input > PC > amazon > PC > monitor

Maybe there's something in your PC that is adding to the lag, like a slow software render.

It could also be that the machine answering to the ping is closer due to anycast routing.

Can anyone come up with a practical way to measure the actual lag from input to screen render? For example, using a high-speed camera.


The monitor is http://www.samsung.com/us/computer/monitors/LU28D590DS/ZA which has a 1ms response time, never noticed a delay with anything before.

The decoding was in hardware, and I have dual AMD 7850s, so I don't think that was the issue (and I wasn't trying to run at the full 4K either).

There was a very noticeable input -> display lag; according to telemetry in Steam it was ~40 ms total, which is fine for a lot of games, but really noticeable and annoying for something like GTA. I mean, I've played Civ 5 over VNC before, and 40 ms would be a godsend compared to that, but it was still more than playable.


[deleted]


> 120ms = 8 FPS

Latency isn't FPS. You can have 120ms latency and 120 FPS; frame-rate and lag are orthogonal.


Latency will only be an issue if it's non-constant - most gamers can adjust for lag if it's expected.

I think the underlying idea here is consistency. If I point and click on something that was merely a "mirage" due to network effects, then this is somewhat of a bad experience.


No they aren't.

If you have a general latency of 120ms, then the maximum number of frames per second which react to distinct instances of input is 8.


> If you have a general latency of 120ms, then the maximum number of frames per second which react to distinct instances of input is 8.

If you have a round-trip latency of 120 ms, then the maximum number of action-result-reaction cycles per second is 8, but it's possible to have distinct instances of input to the information received each frame, whether or not that frame accounts for input from the previous frame. You can have distinct instances of input -- and frames that react to them -- as fast as you can show frames and humans can process them. The frames showing the reaction will be delayed by the latency plus human response time from the information they react to, but the frequency of those frames is pretty much unconstrained by latency.

Which is why, again, slow frame-rate and high latency are orthogonal (both create the perception of "slowness", but they are different and independent effects.)

The only real relation is that a low frame-rate can mask high-latency, as if the latency (including human response time) is less than time between frames, the latency becomes imperceptible. So, yeah, 8 FPS becomes the frame rate necessary to completely mask 120ms latency.


> If you have a round-trip latency of 120 ms, then the maximum number of action-result-reaction cycles per second is 8, but it's possible to have distinct instances of input to the information received each frame, whether or not that frame accounts for input from the previous frame. You can have distinct instances of input -- and frames that react to them -- as fast as you can show frames and humans can process them. The frames showing the reaction will be delayed by the latency plus human response time from the information they react to, but the frequency of those frames is pretty much unconstrained by latency.

Got it. That makes sense. I was putting the user more in the mindset of a webapp user, where almost all interaction is "action-result-reaction," but when I think about gaming, I am giving a pretty much constant stream of input, coming from a joint flow of muscle memory, my own desires for the outcome, and visual and audio input.

So, sure, I will take many more than 8 actions, and all of them will just be delayed.


I have played, e.g., GTA V with a latency of about 100 ms, and while it might not be enough for competitive gaming, for some casual rounds it was really OK.


120 ms doesn't limit the game to 8 FPS though; it means the image is at ~60 FPS and 8 frames in the past.

120 ms is perfectly fine for RTS and most RPGs or even MMOs... though MMOs would probably see another ~30 ms of latency on top, because of the connection from AWS to the game servers.


If it wasn't a direct-feedback game with mouse-look, I'm sure it wouldn't matter as much: point-and-click, click-to-move RPGs, RTS, etc. With mouse-look or aiming, or even timing-sensitive key presses to move, it'd be noticeable. But it could still be great for plenty of games, though not all.


I really appreciate a guide that takes you through the process, giving me a chance to understand all of the steps, before sharing the pre-packaged solution at the bottom.

I was a little surprised by the cost as well. At the rate that I'm gaming these days it would be like $10-20 per month, that's pretty damn good (price of games not included obviously).


When I see the 50 cent an hour price, I think of the old days when I'd pay 25 cents for one play of Pac-Man.

Say you allocate another 50 cents an hour for software; then you are at a $20-40 a month spend, which is getting into the range of a good video game habit. That could support sales of an AAA product and could probably also fund AA games.

Get your configuration right and you have a system for multiplayer games with pretty low latency behind the servers -- even if the video gets screwed up, the games will always stay in sync.

It is in no way an accident that the costs wind up like that, because everybody else is thinking "Can I replace $X spent on hardware with $Y an hour spent on EC2?"


This is impressive, but you should probably not use his AMI unless you generate your own unique OpenVPN certificates/keys.


Agreed. Only use the provided AMI for testing and learning how to create your own.


Big fan of this approach.

I wrote a simple script (https://github.com/zachlatta/dotfiles/blob/master/local/bin/...) to really easily spin up and down the machine I set up for game streaming.


Somewhere, someone at Valve is noticing this and pitching it around as a new service idea :O


It seems that the reason this makes sense is spot instance pricing; it wouldn't be economical with normal instances.

But don't they pull the instance out from under you if someone outbids you? Does anyone have experience with that?

And one more question: how is the performance? The OP shows a screenshot of a game running at 1280x800, but that might be because of the MacBook's resolution. Can it do full HD or 4K?
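On the interruption question, one way to gauge the risk before bidding is to look at recent spot price history for the instance type. A hedged boto3 sketch (not runnable here, since it needs AWS credentials; the instance type and product description match the article's Windows g2.2xlarge setup):

```python
def recent_spot_prices(instance_type: str = "g2.2xlarge"):
    """Return recent (timestamp, price) points for the given instance type."""
    import boto3  # lazy import; callers without AWS access can still import this module
    ec2 = boto3.client("ec2")
    resp = ec2.describe_spot_price_history(
        InstanceTypes=[instance_type],
        ProductDescriptions=["Windows"],
        MaxResults=10)
    return [(p["Timestamp"], p["SpotPrice"]) for p in resp["SpotPriceHistory"]]
```

If the history shows frequent spikes above your intended bid, expect your instance to be reclaimed mid-session; a stable history well below your bid means interruptions should be rare.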


Something I'm a bit worried about: I used to try to run performance-sensitive game servers on a Xen-based virtual machine, and no matter how many resources I dedicated to the VM, the Xen scheduler would give hitchy performance, sporadically introducing delays large enough to make playing the game a little painful.

Does anybody know much about EC2's hypervisor scheduling, or, in the case of large instances, whether it even runs under a serious hypervisor?


I'm not sure about specifics of shared machines, but when creating an instance there is an option to run on single-tenant hardware (obviously at a higher cost).


One of the more exciting possibilities afforded by DIY cloud game streaming is the ability to interactively share single-player gaming experiences with people, in games that otherwise do not support co-op. Games like FTL and XCOM: Enemy Unknown come to mind.

However, one thing I would be extremely wary of is running your Steam account from AWS or any other server environment. The last thing you want is to get flagged as a bot or VPN abuser and banned; Valve customer support isn't exactly known for being particularly understanding or responsive. Personally I would just load up a throwaway Steam account with a few games and use that.


> The last thing you want is to get flagged as a bot or VPN abuser and banned; Valve customer support isn't exactly known for being particularly understanding or responsive. Personally I would just load up a throwaway Steam account with a few games and use that.

I wonder if you could set up the tunnel going the other way so that the EC2 instance goes through your home network to access the internet?


Even with tunneling there's still the possibility of hardware fingerprinting. Unsure if it would be limited in scope to Steam Guard or not.


No need to DIY -- Steam already has game streaming built-in, so your friends can watch you play if they want. It works pretty well. This is separate from the Home Streaming feature, which is what's used/abused in the article.

Home Streaming also works surprisingly well, although it feels weird to play GPU-heavy games like Shadow of Mordor on my dinky little MacBook. With an Xbox controller, no less. (I have the "unofficial" Mac drivers installed, but I'm not sure if Steam uses those or has its own XB360 controller support built in.)


I'm talking interactive streaming, where your friends control the game with you, over the internet. For example, you could all share turns with each person taking control of the cursor at the appropriate time.

Steam's inbuilt streaming is fantastic for spectator use, it's just not interactive.


Am I ridiculous for wanting this to be a sort of on demand service with a small markup?


I would definitely be interested in such a service. I've basically stopped gaming these past 3 years and now my old rig is severely out of date. It is too much money and hassle to buy and build a new rig that I wouldn't be using that much anyway (and will be out of date in a few years!)

If anybody out there seriously starts working on such a service, let me know!


This solves so many problems for me. I love the idea of streaming games. I have driven down the cost of computing at home sharply over the past few years by using a Chromebook ($130!) and a Chromecast along with my phone. The last piece of this is scratching that itch for PC gaming.

My rig is about 7 or 8 years old and I've only upgraded the video card when I had to. I'm looking at building a new rig just to keep up, and a "cheap" one comes out to $800 for me. Something like this would be very desirable. I would love to be able to send this to my main big TV, using only my Chromebook to access it.


Not ridiculous at all. Seems like a logical next step from the failed business attempt of... I forget the name, but the company that provided a streaming service while locking games to their platform.


OnLive and Gaikai were the big two that I remember. Gaikai was acquired by Sony, and OnLive failed and parts of it were also sold off to Sony.


I wonder how much of this post is covered by patents now owned by Sony.


You're basically combining three parts: Steam streaming, Nvidia's hardware stuff, and OpenVPN. OpenVPN and all the high-level ideas it embodies definitely predate Gaikai or anything like that. Steam streaming and Nvidia's stuff are certainly going to be covered by their own patents. In theory the combination could be patented, but I'd argue "a network stream can be run over a VPN" is firmly "obvious to those skilled in the art"... network streams are network streams are network streams.

Sony et al probably have more specific patents for their own setups, but to the extent they cover this setup they'd either be obvious, or they'd be trying to sue you for doing stuff covered under nVidia or Valve's patents. This, alas, doesn't necessarily protect you, but would certainly raise some PR issues. Plus Amazon might have some questions as well, since they don't want people getting hit for using cloud services to do things; anything that smells like special cloud licensing just because you're doing X "but in the cloud!" is going to hurt their business model.

I can't guarantee they have nothing to sue over, but the costs/benefits would not seem to argue in favor of Sony suing.




OnLive and https://www.gaikai.com/ (acquired by Sony)


I'm actually really tempted to do just that - I reckon there are at least a few people who'd pay decent money to avoid building a new gaming machine.


The question is, what will you do different to succeed where they failed? Because otherwise, you're just gonna fail like they did.


I assume in their case they were investing in the infrastructure as well, which would be a significant cost; if one just resold this service on top of Amazon for cost +x% they wouldn't have that kind of capital exposure, not to mention no marketing / advertising if it's kept as a small side project.

The thing that I would worry more about when running a service like this would be liability for what users of your service do / say.


I guess you're not going to own the infrastructure; it's simply a cost of business which you can pass on to the customer.

On top of that, the hardware cost vs. performance ratio keeps improving, and hardware H.264 encoding is getting cheaper.


Do you realize OnLive failed on performance merits alone, even while incorporating tricks like video compression/decompression 'racing the beam' (to avoid waiting for the whole frame)? Even with all the highly custom magic, most games felt LAGGY. Here with this EC2 implementation we see 30-50 ms on top of network latency just to get a picture on the screen to begin with.

Almost no one will pay for laggy games at last gen resolution.


Some people are clearly willing to use it, especially for games that are a bit more casual and story-driven. And like I said, there's no huge risk where you're sitting on all this top-end hardware just to play games with this setup.


I wonder if part of OnLive's problem is that their primary competition (people buying XBox etc and playing from home) was highly giftable, both the initial hardware purchase and the individual games.


So far it only makes financial sense if you use spot instances, and they can disappear instantly.


Has anyone tried SoftEther VPN in place of OpenVPN for something like this?

http://www.softether.org/

I've been using it to set up a site-to-site VPN between my home network and my Azure VMs and in my experiences the performance has been quite good. They also claim to have higher max throughput than OpenVPN but I haven't yet verified those claims myself.


I tried using the prebuilt AMI. However, after installing and configuring Tunnelblick on my Mac, when I connect to the VPN I get: "This computer's apparent public IP address was not different after connecting to <<hostname>>". Now Steam cannot detect the Windows server. What am I doing wrong?


You can ignore that error message (or disable IP addressing checking). The OpenVPN connection isn't configured to forward all traffic over the VPN, so your public IP won't change.


Ah, okay. Strange though, Steam client won't recognize the server even when connected. Thought maybe the IP address error was the problem.


Double-check the OpenVPN log after you connect to the server. I had to add a flag to tshark in the provided up.sh script to make multicast forward properly.
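Not the actual up.sh (which does this with tshark), but the idea can be sketched in user space. This is a hypothetical sketch: it assumes Steam's in-home-streaming discovery uses UDP broadcast on port 27036 (which a routed OpenVPN tunnel won't carry), and the peer address shown is made up.

```python
# Hypothetical user-space version of the broadcast-forwarding hack:
# capture Steam's discovery datagrams on the LAN side and re-send them
# as unicast to the peer on the other end of the VPN tunnel.
import socket

def relay_broadcasts(listen_addr, peer_addr, max_packets=1):
    """Copy datagrams arriving on listen_addr to peer_addr as unicast."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(listen_addr)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for _ in range(max_packets):
            data, _src = rx.recvfrom(2048)
            tx.sendto(data, peer_addr)  # unicast copy over the VPN
    finally:
        rx.close()
        tx.close()

# e.g. relay_broadcasts(("0.0.0.0", 27036), ("10.8.0.1", 27036))
```

A real deployment would loop forever and relay in both directions; this just shows the shape of the trick.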


Would you be willing to post your modified up.sh script here, or somewhere? Would love to see what you did to get it to work (still no luck on my end).


Ah, my issue was I didn't have Wireshark installed, so tshark was not found.


I ran into the same thing. Not sure how to fix.


Same error on connect here, same problem with not being able to get the Steam programs to see each other.


I read the entire thing and I think it's cool that this is possible nowadays. However, I got to thinking: at $0.50+ per hour, if I play 6 hours a night, that's $3.00 a day on weeknights. If I only spend $8 on weekends, it adds up to $23 per week, which equates to $1,196 per year (23*52). Basically I'd much rather invest in a gaming rig. My CPU and GPU haven't required upgrades in years. At least if I invest in a gaming rig I actually have a gaming rig.

While I respect and find the technology fascinating and cool, it feels like leasing a car I'll never own versus owning one and even buying a new one every few years at a much lower price. For those who game a few hours a week, however, I can see this being a cheap alternative to a gaming rig.
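As a quick sanity check, that arithmetic (using the thread's rough $0.50/hr all-in figure):

```python
# Back-of-the-envelope yearly cost of playing on a ~$0.50/hr instance:
# 6 hours a night on weeknights plus $8 worth of weekend play.
rate = 0.50                   # $/hour, rough all-in figure
weeknights = 5 * 6 * rate     # $15.00 per week
weekend = 8.00                # $8 of weekend play (16 hours at $0.50)
weekly = weeknights + weekend # $23.00
yearly = weekly * 52
print(yearly)  # 1196.0
```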


6 hours a night on weeknights sounds like a heck of a lot! Owning might make sense for you, but you might be an outlier.

I would have to think most people with a job and family might struggle to find 3-4 hours a week to game. No offense intended by the above. I'm quite envious :).


Personally, I just don't like paying for something monthly, or hourly. Even if it's a slightly better deal, I just don't want the hassle of another monthly bill: looking for errors, new fees, the date the good deal goes away, the trust I hand over when giving out my credit/debit card. I know it doesn't make logical sense. As for a gaming computer, I would rather spend a couple of grand upfront and know I can patch it together for the next few years. Same with an automobile; I don't mind driving an older car. (My current car is a POS to some individuals, but it's never seen a mechanic, it's safe, and it gets me where I need to go. I do need to thank the engineers at Toyota, though. They made some pretty indestructible automobiles.)

Sometimes, I think about that quote in that movie Office Space, when Peter Gibbons said, "You know, I've never really liked paying bills."


Back in the day I would have been in the same place as you, but since our first son arrived I'm down to far less gaming time - at the moment I'm lucky to get six hours a week, at which point this becomes far more appealing.


Anyone wanna team up and build something like this?


I'm interested, have 5k in AWS credit, and I've owned cloudsteambox.com, steamboxhost.com, and hostedsteambox.com for about a year in anticipation of one day building something like this


I'd like to chat about it. I've been looking for a service like that and have tried the AWS GPU instance approach as well. My email is on my profile.


I'm interested and had started looking into it when a similar article was posted a couple of months ago. My email is in my profile.


Also interested. I'm adept in most languages and have a bit of experience setting up scalable AWS environments. Email in profile.


Potentially. I assume you already know of OnLive? Email is in profile - I think it is visible? If not reply.


When people say "email in profile", they're talking about listing it in the "about" section. The email field is private.


Gotcha, thanks!


Email field is not publicly visible


Another user explained, thanks, got it now.


Interested in building it. Let's get a communication channel open. Email in profile.


I'm down, email in 'about' in profile, thanks!


What are you thinking about building? E-mail in profile.


Didn't see an email...


My bad, thanks for following up. xhr@io2g.com


Definitely interested. My email is also in my profile.


I have a pet project doing something like this, hacked together with OBS, a Node.js RTMP server, etc. Email in profile.


How do we get in touch with you?


I'm in. Email in profile.


Sure, email in profile


I'm interested.


If something like Railgun (https://www.cloudflare.com/railgun) could be done with this, I'm sure latency could be reduced further: have small datacenters in various regions around the country, but only one big datacenter with all the beefy machines.


I'm interested.


Just curious how he got $0.11/hr for a "spot" instance of g2.2xlarge? Amazon's "on demand" pricing of that config w/ Windows on their website is $0.767/hr.


On-demand and spot prices are two different things. Keep looking at the pricing page and you will see. Spot prices fluctuate constantly and your instance is auto-killed when the spot prices rises above the maximum price you are willing to pay.


Thanks, I wasn't aware of the spot pricing (never scrolled that far down the very long pricing page :)

I'm not sure that I understand the economics of the spot option from Amazon's POV since it seems that a person who needed persistent servers could set their spot max price at the same price as an on-demand instance and always be ahead by doing it that way (with the caveat that there would be a short interruption if forced to switch to on-demand).

For someone who needs cheapest possible compute power with a flexible schedule, spot arbitrage makes sense of course.


> it seems that a person who needed persistent servers could set their spot max price at the same price as an on-demand instance and always be ahead by doing it that way

Not if you need persistence: the spot price can go above the on-demand price. So if you set your max spot price to the on-demand price your instance might get terminated.


If you need steady state you'll almost always be better off paying for reserved instances rather than a near hourly rate spot price.


> set their spot max price at the same price as an on-demand instance

People have played that game before and lost. Spot pricing can and has gone above the on-demand prices. It is normally a great way to save money, but there's no way to avoid the possibility of having your instance terminated. If that's not something you've built around then you have to stick to on-demand (or reserved).
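A toy model of the mechanics described above (the prices and bid are made up for illustration):

```python
# Toy model of spot termination: the instance runs only while the
# fluctuating spot price stays at or below your max bid, and you pay
# the going spot price, not your bid.
def billed(spot_prices_per_hour, max_bid):
    """Hours run and cost accrued before the first forced termination."""
    hours, cost = 0, 0.0
    for price in spot_prices_per_hour:
        if price > max_bid:
            break  # AWS terminates the instance here
        hours += 1
        cost += price
    return hours, cost

# Bidding the on-demand price ($0.767) doesn't guarantee persistence:
# a spike to $0.80 in hour 3 still kills the instance.
print(billed([0.11, 0.13, 0.80, 0.12], max_bid=0.767))
```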


Sounds about right. For example, Oregon 2b averages around $0.18/hour. Different AZs are drastically different. Some of them hover around the demand price while others are rock bottom.


Can you say what the prices in Frankfurt look like? They won't show unless you have an AWS account.


eu-central-1a: $0.0938

eu-central-1b: $0.1053


We really need a focus on low latency instead of bandwidth, but I guess that's even worse in terms of marketing than upload bandwidth. It's frustrating to know that <10ms latencies are easily achievable with current technology but ISPs just don't care. Lower latency even improves web browsing a ton, plus voice/video calls and basically any kind of realtime interaction. Then again, with 4K gaming becoming more popular, even today's bandwidth will not be enough.


I'm not the biggest fan of ISPs but that comment isn't entirely fair on them.

In Australia the speed of light actually matters for the distances light needs to travel - this is probably the same for mid-west towns not near an AWS centre too (not sure of their presence there, but I expect they are predominately on the coasts).

The speed of light is 300,000 kms per second.

Distance wise: Perth to Sydney approx 4,000kms or 13ms travel time, and Brisbane to Sydney approx 1,000kms or 3ms travel time.

I'd say it is highly doubtful to get sub 10ms from Brisbane to Sydney if you're going via even just a handful of routers. And Perth to Sydney? Forget it!
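The round numbers above check out, and in fibre the floor is even higher, since light in glass travels at roughly c/1.45:

```python
# One-way propagation delay over the distances mentioned above.
C_VACUUM_KM_S = 300_000
C_FIBRE_KM_S = C_VACUUM_KM_S / 1.45  # refractive index of a typical core

def one_way_ms(distance_km, speed_km_s=C_FIBRE_KM_S):
    return distance_km / speed_km_s * 1000

print(round(one_way_ms(4000, C_VACUUM_KM_S), 1))  # Perth-Sydney, vacuum: 13.3
print(round(one_way_ms(4000), 1))                 # Perth-Sydney, fibre: 19.3
# ...and a ping is a round trip, before counting a single router hop.
```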


If you want to be fair to ISPs, don't look at cases where they fail to do the impossible or very hard, look at cases where they fail to do the very easy.

The standard deviation of any ping test I do is at least 3ms due to how DOCSIS works, and I can't ping my next door neighbor (same ISP) through the internet in under 16ms. I've done traceroutes against several servers that are in the same city as I am in North Carolina and the only one where my traffic didn't first go to Atlanta or DC (or both) was to a server hosted by an ISP that has no physical presence outside of North Carolina. My cable modem and my ISP's CMTS each have out of control bufferbloat that adds hundreds of milliseconds of latency under load in each direction, which can't be entirely mitigated by my router's traffic shaping and AQM. There's little reason to believe that they've got any AQM further upstream given the large latency spikes I see even when my last-mile link is quiet. Disregard for latency is pervasive in the design of the ISP's network.


Sure, Australia is super huge and <10ms is a different thing. I live in Germany and there are ISPs where you can get <10ms in the whole country. Sadly, when I moved to Berlin the ISP situation got worse than in my previous, much smaller city and I am "stuck" with 30-40ms latency. I somehow care a lot about latency, but I can live with it ;) I have seen people with fiber connections get <5ms to basically everywhere in the country, so the tech is there; we just have to get the mainstream interested. At the current rate, ISPs will squeeze the last drops of bandwidth out of copper and cable connections before finally moving to fiber, but it's understandable: the mainstream demand isn't there, and getting fiber into all homes is a massive undertaking.


I'm ~2 light-ms from us-east-1, and my latency is 49ms. My latency is 11 times what it would be if there were a direct fiber run from my laptop to us-east-1.


Just for nitpicking: light doesn't travel at the same speed in fiber as in vacuum. AFAIK the refractive index of the core is typically ~1.45, so your latency is "only" around 8 times larger than the theoretical limit.


You're right, but it's even worse than that. The speed of light in optical fiber is closer to 200,000 km/s.
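Putting numbers on this sub-thread (the 49 ms ping and "~2 light-ms" distance are the figures from the comment above):

```python
# "~2 light-ms away" means a ~4 ms round trip at vacuum speed; in fibre
# (n ~ 1.45) the physical floor for the round trip is ~5.8 ms.
vacuum_rtt_ms = 2 * 2               # 2 light-ms each way
fibre_rtt_ms = vacuum_rtt_ms * 1.45
measured_ms = 49
print(round(measured_ms / vacuum_rtt_ms, 2))  # ~12x the vacuum limit
print(round(measured_ms / fibre_rtt_ms, 1))   # ~8.4x the fibre limit
```

Roughly in line with the "11 times" figure quoted above, and the fibre-adjusted ratio matches the "around 8 times" nitpick.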


Amazing idea! If this could be set up for multiplayer games without much trouble (lag, cheating, licenses), this could be The Next Big Thing :)


I can tell you do marketing for a living.


I wonder why this works better than the Steam In-Home Streaming. I could never get it to be close to 60fps. The video suggests 60fps.


I imagine having the equivalent of two GTX 680 GPUs probably helps here, those cards are no slouch generally, and these ones have been especially put together to handle streaming games.


I'm testing it now, but my guess is it's because you have your own allocated hardware for whatever purpose you desire, whereas in-home streaming is more like 'sharing'.


Has anyone tried this with Nvidia GeForce Experience and a Shield TV? I might try this instead of upgrading my aging desktop.


The ShieldTV comes with Nvidia GRID, which is basically the same idea, a big instance on AWS with a GTX980 streamed to the device, it works surprisingly well.


For some reason I haven't been able to get it to use any resolution other than 1024x768. Has anyone else had luck with this?

Else I would say that it runs super well! The input lat is ~15ms, ping is ~30ms and display lat is ~30ms, reported from the steam client. I have no trouble whatsoever playing FPS games at all, or any other game for that matter.


So you can install MS Windows on an EC2 instance without having to pay for a license? How does that work?


Amazon is paying for it and it is reflected in the price you pay. Windows machines are more expensive to spin up than Linux machines.


The Windows license is factored into the hourly billing.


Anyone tried to play competitive multiplayer games like CS or Heroes of the Storm with such a setup? I can imagine that streaming adds a bit of latency, which isn't a problem in singleplayer games, but could add too much lag for fast-paced multiplayer games. Any experiences?


I find it funny that you're putting CS and Heroes of the Storm on the same level. Considering the latencies already mentioned (25-50ms to the closest AWS data center), CS should only be usable for casual play while for your other title... most Dota clones are playable up to 250-300ms latency, if you're not spiking.


Dota 2 is absolutely unplayable above 150 ms. Last-hitting and denying get hard enough, but at an even higher level you MUST be able to react to in-air projectiles with abilities such as Black King Bar, Manta Style, Blink, Naga's Mirror Image, etc to disjoint the projectiles.

Anecdotal evidence: historically I have a 60% winrate on US West servers (~40-70 ping) and only 50% winrate on US East servers (~110 ping). I've played on EU servers and it is absolutely miserable at 200+ ping. If I had to play regularly on anything above 110 and I'd probably drop to 45% or 40% winrate.


I've been playing Dota 1 and Dota 2 for 10 years now and I can tell you that with enough practice anything under 250ms is doable if there are no spikes in latency.

After a certain number of games you get used to it and start anticipating. It's not perfect, that's for sure, but it's definitely doable. Or are you talking about 5k+ MMR? In that case it is trickier, of course, but I know even pros managing to do it :)


Your blanket statement is false. Even professional teams playing online tournaments regularly deal with 200+ ms (EU vs. NA, etc.). It's not ideal, but playable.


Not yet, but I'll spin up an instance in the next day or two and try the New Unreal Tournament and report back. https://www.unrealtournament.com/


That would be great. I might try it myself, too. Currently, I'm gaming on a MacBook Air 2013. Works alright, but I play Heroes of the Storm on low settings and it could indeed be a bit nicer :)


I have a VMware ESXi box at home that I use to run a Windows VM with a video card passed through. I've been using it to stream Heroes of the Storm with Steam's In-Home Streaming and I get a consistent 30 FPS (1920x1080) with no noticeable lag.

Admittedly, it's not exactly the same as the setup in this article since it's not streaming over the internet, but I would assume that as long as your latency is low and bandwidth is high, you'll have no problem with this setup.


I'm just wondering. Theoretically, you have an additional hop.

Game server <--> Amazon <--> Home

From a pure latency point of view, this is probably not an issue. However, whatever events you see on screen, the game server sees you reacting slower, because your keyboard basically has a delay of 20-30ms (Home --> Amazon).

Maybe, it depends a bit on the game you play, if this has any negative consequences or not.

Edit: This could be somewhat compensated by the faster connection of Amazon <--> Game server (just assuming that it's faster than a connection from home).


Have you tried to VT-d passthrough the Intel GPU? I was trying to figure out if this would work.


If all of your matches were against people hosted on the same cloud, I imagine the latency between game clients would be very low and then the latency would just be the same as your normal client <-> server lag.


True, but right now, I can't make this assumption. Cloud gamers are probably rare :)


I wonder how the hardware encoders and decoders compare to software implementations. They of course use less CPU, but also generally tend to compress worse and have higher latencies than software implementations. Is nVidia's hardware specially optimized for this use case?


Am I the only one getting stuck on the OpenVPN instructions? Are they copy-paste easy or do they need some know-how? All I get are errors about being unable to open /etc/ssl/openssl.cnf, and the --help output says -config is an unknown option. build-dh hangs.


Nope, same issue here. Started from scratch twice and build-dh is still hanging. Also getting the warnings about /etc/ssl/openssl.cnf... Anyone know what's up?
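If this is the classic easy-rsa 2.x problem, two things may be going on (this is an assumption; the path is the typical Ubuntu layout): its OpenSSL version check can fail to find a matching config and fall back to /etc/ssl/openssl.cnf, which lacks easy-rsa's variables, and build-dh may not actually be hung, since generating 2048-bit DH parameters legitimately takes several minutes. A sketch of the workaround:

```shell
# Assumes easy-rsa 2.x under /etc/openvpn/easy-rsa with a newer OpenSSL
# release than its whichopensslcnf check recognizes.
cd /etc/openvpn/easy-rsa
cp openssl-1.0.0.cnf openssl.cnf   # give easy-rsa a config it understands
. ./vars && ./clean-all
./build-ca && ./build-key-server server
./build-dh   # be patient: DH parameter generation can take many minutes
```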


This would work fantastic with Steam Link that they have coming out: http://store.steampowered.com/universe/link

It uses the same in home streaming feature of steam.


I was thinking the same, but you'd have to figure out how to create the VPN to the EC2 instance for multicast forwarding, since presumably the Steam Link won't be able to run OpenVPN.


Add another piece of hardware. Cheap off-the-shelf Linux-running hardware. A rebadged Netgear router that'll run OpenWRT is £7. Worst case, Raspberry Pi is £45 including case, cheap small SD card, PSU and an extra USB Ethernet adapter.


This can work very well for some applications. I have a startup doing something similar to this with the Second Life Viewer with good results. The most painful parts turn out to be in the plumbing around state and change management as you might expect.


What is the latency of a setup like this? Could I play an intensive FPS and be competitive?


No. The latency isn't just in the game like if you connect to a high ping server, there will also be some delay from when you move your mouse to when the screen changes. This is one of the reasons competitive fighting game players say never practice online. You're gaining useless muscle memory.


Click the button on this page and it will tell you the latency to every AWS datacenter:

http://www.cloudping.info/

For me (London) it looks like I could get a latency of 28ms if I hosted my rig in their Dublin datacenter, which sounds OK. (But obviously this would be on top of the ping to the multiplayer host, if you're playing online.)


Also, this would be input latency, which is much more significant of a drawback than normal in-game network latency.


According to that site, I can't get less than 44ms with Amazon where I live...

This site has a much larger number of providers:

http://cloudharmony.com/speedtest-latency-for-compute-limit-...

But of course most of them don't offer GPU servers.


nice website! thanks :)


Well, for multiplayer, at least the EC2 machine would have a reasonable ping with the other players, it would just be display/input latency.


From a pure network connectivity standpoint, you're essentially looking at a bare minimum of ~two frames of latency (about 30-40ms), ignoring additional latency added by video compression/decompression. So, probably not going to be useful for competitive FPS play, but for most games it'd be tolerable, if even noticeable.
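Where the "two frames" figure comes from, at a 60 fps stream:

```python
# At 60 fps each frame lasts ~16.7 ms; a capture/encode step on the
# server and a decode/display step on the client each cost at least one
# frame before any network transit time is added on top.
frame_ms = 1000 / 60
pipeline_ms = 2 * frame_ms
print(round(frame_ms, 1))     # 16.7
print(round(pipeline_ms, 1))  # 33.3
```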


Any latency between AWS <--> Home becomes hardware input latency, which nothing can compensate for; you just react that much slower. Furthermore, your input latency is variable, so you'll never be able to build reliable muscle memory.

The post suggests a ~60FPS limit, which isn't really competitive at all. 120-144FPS is the current standard for competitive play. And if you can encode at 144FPS and decode at 144FPS, you already have a GPU that supports competitive play, so cloud platforms become moot.


From my experience, with a ping of 60ms to Ireland, games like GTA5 or Crysis 2 were playable to the point where you would forget you were streaming. I tried to play Counter-Strike online, but even without any professional aspirations I felt kind of handicapped. For strategy games it worked quite well, though.


I think we have a new winner for the stupid and inefficient ways humanity is going to start using its precious infrastructure and resources.

Previous holder of the title was spending terawatts of power on bitcoin mining.


Cool experiment. I thought about trying this for streaming video or music to cheap devices in various places as well. For now, I just use my smartphone and WiFi as it's cheaper. :)


Does someone have experience hosting a dedicated server on EC2 24/7? How's the performance and is it cost effective? Or is it preferable to host on digital ocean/linode?


At my current employer, we run about 10k+ EC2 instances 24/7.

DO / Linode will mostly be cheaper and proooobably offer a better price/performance ratio. You will however get a lot less options (especially GPU wise)


If you want to get a lot of power, especially for gaming, you can forget DO.

Let’s take your average (okay, this is really beefy, but I couldn't find something lower at most dedicated hosters) gaming system: 8 Cores (16 Threads), 32GB RAM, 2TB HDD. At Digital Ocean, you pay approx. 320$/month for 12 Cores, 32GB RAM, and 320GB SSD

Let’s take your average small hoster, in this case Hetzner, as comparison: You pay 55$/month (plus 55$ setup) for 8 cores (16 Threads), 32GB RAM and 2TB HDD.

I’m just showing this as an example: using cloud hosters for stuff that runs 24/7 is not cost effective, especially if you always need the same performance. Use a dedicated server instead; hosters that specialize in that can provide far better offers.
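A rough break-even, mixing figures from this thread (the ~$0.50/hr all-in spot-session cost and the $55/month dedicated box are both numbers quoted in comments above, not official pricing):

```python
# Hours of use per month at which a flat-rate box beats hourly cloud.
spot_all_in_per_hour = 0.50   # rough figure from earlier in the thread
dedicated_per_month = 55.0    # e.g. the Hetzner box mentioned above
breakeven_hours = dedicated_per_month / spot_all_in_per_hour
print(breakeven_hours)  # 110.0 hours/month
```

Below ~110 hours a month, pay-per-hour wins; a 24/7 server (720 hours) is far past that.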


I agree on using dedicated hosting. However, average gaming systems aren't that powerful. I was digging for an above-average rig for a friend to play the latest games. The system I built him (Jackel) was 6 cores with 8GB of RAM and runs most of them very well. It only cost $700 with the Win7 license.

At those prices, it would be a better deal than hosting after around a year given it would continue working and have resale value.

Jackel Gaming Rig http://elitegamingcomputers.com/gaming-computers/#2


I think your thoughts on the average gaming system are skewed


Well, it’s the cheapest dedicated server that Hetzner and OVH provide, so I used it for comparison, and a lot of people have similar specs nowadays.


What games are you playing that need 32GB of RAM?


If you're going to be running a single, stand alone server 24/7 with high resource requirements your best bet is a dedicated server, not a VPS.


Yeah, I got quite annoyed when trying to run a gaming server on DO, even their 160$/month tier is worse for that than a dedicated server for 20$ a month from a competitor.


If you don't mind your instance crashing for no reason it's fine.


Has anyone thought of selling preconfigured cloud gaming services?


OnLive (now defunct), NVIDIA (ramping up), Gaikai (working with, or acquired by?, Sony), just to name a few. They don't give you your own VM to host and manage; they're subscription game streaming services, but it's the SaaS version of this.


Acquired by Sony, and the source for the Playstation Now service released earlier this year, which lets you stream PS3 games to a few devices

http://arstechnica.com/gaming/2015/01/playstation-now-review...

http://arstechnica.com/gaming/2015/07/hands-on-with-playstat...


There's also the Nvidia Grid for the Nvidia Shield (which you could plug into a 4K TV to play on, and play using a wireless controller):

http://shield.nvidia.com/grid-game-streaming

But maybe it's not quite what you want, since it's limited to their games. Never personally tried it though.


Grid is meh, but using the shield to play PC games anywhere in the house is a win. (I use the shield portable)


We're using Frame [1] to do that for Second Life Viewers right now[2].

Frame has a platform offering if you want to setup some other virtual world or game.

I guess I should do a Show HN, but it just hasn't been a priority with working out the kinks.

1. https://fra.me

2. http://brightcanopy.com


Oh, you mean OnLive? http://onlive.com/


https://www.gamefly.com/ recently launched on Amazon's Fire TV android box.


A lot of technically clueless people targeting casual games did actually. All of them failed or got acquired by even more clueless people.


This is crazy awesome. Since it uses h264, I wonder how well a Raspberry Pi would work as a client machine. Heck, you might be able to do a whole LAN party just with PI's.


Imagine the bandwidth required to stream a lan party :)


I thought of that, but it would be downstream intensive, so even at 5Mbps for HD, you could still get at least 4 people working on a simple 20 meg line.
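The bandwidth math for that, as a one-liner (5 Mbps per HD stream is the figure from the comment above):

```python
# Downstream bandwidth needed if every LAN-party player streams from EC2.
def lan_party_mbps(players, per_stream_mbps=5):
    return players * per_stream_mbps

print(lan_party_mbps(4))  # 20 -- just fits the 20 Mbit line mentioned above
```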


RPi is bound at the network interface, then. Even the onboard Ethernet runs over the USB hub.


Does Google Compute Engine have the instances needed for this as well? Their datacenter is centrally located, so ping times are much lower for me than the east or west coast.


GCE doesn't (yet) offer instances with GPUs, so no.


I did this about a year ago to run 3ds Max/V-Ray on an EC2 GPU instance via RDP. It worked OK-ish, but I found it quite clunky to mess about with the AWS interface to start and turn off an instance every time I wanted to use it.

Has anyone managed to script something where you just press a button/run a local script and it does all the work, including saving your image to EBS before you turn the thing off and stop paying for the instance?


I believe that is exactly what he does in the article.


I'm going to set this up to see how it compares to my Windows Steam on Wine that sits next to my native Steam on Linux with its smaller library.


How does the AMI handle the certificates? It seems like everyone would use the same cert and thus log on to any instance of the AMI.


I hate to admit it, but of all the places I thought cloud services could be usefully leveraged, this just wasn't one of them (keep in mind, I say useful, not necessarily best fit).

This is such a cool idea, makes me realize what other creative solutions are just lurking, ready to slap me across the face.


Anyone know what protocol (RDP, VNC, h264, ???) is underlying the Steam remote display?


This is actually a cost savings. Windows games are much cheaper than their OSX versions and they are available much sooner on Windows than OSX.


That might have been true 20 years ago with big box retail games, but on Steam, GOG, etc. everything is the same price and many games are multiplatform.


About 30% of the games are Windows only.


Their prices are the same though.


g2.steam instance type please.


This is really important as virtual reality begins to take center stage, but most people don't have the rigs to run it.


I'm pretty sure the latency would make VR headsets unusable with this setup.


In the VR context, latency is just a bandwidth problem (think instruction look-ahead), and bandwidth keeps increasing.


The context isn't just VR. It's the internet. Probably the best ping you can expect to get from an aws data center (for most people) is around 30ms or 40ms. You probably only get 20ms if you're lucky. According to John Carmack, the maximum acceptable latency for a VR system is 20ms.



