Very nice. A direct competitor to the entry-level DigitalOcean 512MB servers, which cost $5 a month. The great thing is that as you grow, you won't outgrow AWS, which is not necessarily true with DO.
You can't even log into the instance at all for that price. You also have to pay for bandwidth (very expensive) and EBS (not bad, unless you want it to be as fast as DO's local SSD).
Setting aside that the DO droplets are far more bang for the buck in terms of consistent CPU usage, and also (to be fair) that AWS offers far more scalability options, bandwidth alone is shockingly expensive at AWS.
At 9 cents per GB, you'd be looking at $90/mo in bandwidth alone for the transfer DO includes in its $5/mo plan.
The $80/mo DigitalOcean server with 5TB of transfer would cost $450 at EC2 in bandwidth alone!
And, at DO, if you exceed that bandwidth allocation that's included for free, it's $20/TB vs $90/TB at EC2 (both charge outgoing only).
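Sketching the arithmetic behind those numbers (a quick check, using the $0.09/GB first-tier EC2 egress rate and the DO quotas quoted above):

```python
AWS_EGRESS_PER_GB = 0.09  # first-tier EC2 outbound rate cited above

def aws_egress_cost(tb_out):
    """Monthly cost of pushing tb_out terabytes out of EC2 at the first tier."""
    return tb_out * 1000 * AWS_EGRESS_PER_GB

print(aws_egress_cost(1))  # about $90/mo  -- transfer included in DO's $5 plan
print(aws_egress_cost(5))  # about $450/mo -- transfer included in DO's $80 plan
```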
These are costs that most people don't consider but really amount to poorly explained fine print costs. (Others include Glacier restore pricing, intra-region bandwidth, etc.)
(disclaimer: I'm an AWS certified SA and my SSH key manager startup, Userify[1], is an AWS partner, but even so we are still forced to use DO for a large part of our infrastructure -- especially where bandwidth is concerned.)
Why not just get two dedicated servers and have one be used as a backup?
For $60/mo you can get 4TB of disk, basically unlimited bandwidth, and a permanent 4XL-equivalent instance, which makes scaling up to that point extremely easy with no need to configure autoscaling, S3, inter-service communication, CloudFormation, or complicated fail-over strategies. Unlike AWS instances, which die frequently, a dedicated server is much less likely to malfunction.
I used Hetzner a few years ago. In the space of 6 weeks there were 3-4 times I couldn't log in because the network was saturated under a "DDoS", and I had random, sporadic network drops even when they said they weren't having problems.
Latency was an absolute disaster; even typing characters over SSH was horribly slow.
The management platform was hilariously bad (they were still sending me notification emails about 9 months after I cancelled).
We went back to Linode for everything after that; for our needs it hits the sweet spot between cost and reliability (back then DO was new and had its own issues, largely due to a flaky network).
Now I'd possibly consider DO for compute or job servers, but I think I'd still use Linode for anything that wasn't running on my own hardware. I've been a customer of theirs for something like 6 years and they've never let me down.
I've used Hetzner for 2 years now for a lot of personal projects, and I do get slow SSH sometimes, but that's what Mosh is for. Also, it's much better for European customers than for US ones.
They are also the only service I can afford for personal projects that require a lot of RAM: basically, any machine learning with a ton of features.
Here's a test with latency:
URL tested: http://rerecommender.com
Test performed from: New York, NY
Test performed at: 2015-12-18 04:36:44 (GMT +00:00)
Resolved As: 188.40.128.87
Status: OK
Response Time: 0.306 sec
DNS: 0.075 sec
Connect: 0.115 sec
Redirect: 0.000 sec
First byte: 0.116 sec
Last byte: 0.001 sec
Size: 7688 bytes
OVH bans business from mainland China. It's kind of weird. Maybe you hate the harassment and want to shame mainland China (since almost all other countries, even the Hong Kong SAR and Taiwan, are accepted). But a) this is business, and you should not be that emotional; there are good people in China too, and doing business this way may only humiliate yourselves. b) You cannot just ignore the China market; it's a market no company should overlook.
No offence, but fighting spam that way reflects either an IQ problem or an EQ problem. Banning mainland China business completely for a couple of years to clear blacklists? Brilliant. Meanwhile, you can buy servers from EC2, Linode, or DO from China anytime; how do they manage the same issue?
Totally unrelated to this post, but THANK YOU for Userify. I don't know how you guys are making any money yet (or how you plan to), but you make my life so much effing easier!
We're racing against time and 4,000% growth... Userify[1] also offers an enterprise[2] version for customers that require it for compliance reasons, and we're working on some really helpful upcoming features.
I just checked out Userify (hadn't heard of it before); it looks like a useful tool. What I found a bit odd is that the pricing page is nowhere to be seen, and it doesn't seem like Userify is one of those enterprise 'request a demo' kinds of services.
That might be OK if I really need it right now, but if I'm just thinking 'oh, looks cool, might be useful' and have no clue how expensive it might get (and am not motivated enough to sign up and find out), then it's not great.
Not to speak for Userify, but assuming the prices are 'relatively similar' to other DevOps hosted service type tools, I'd be happy to pay.
The ability to arrange users by projects, and then roles within projects, has been awesome. You could have devs who have full access to development servers for project #1, limited access to staging, and no access to production, while having totally different levels of access on a different project, etc. You don't have to organize it that way, but you basically get an org structure of a top-level group that contains any number of child groups (so, two levels of organization).
The total number of options is limited; I would say Userify is focusing on doing their one thing (or few things) really well.
Paired with Puppet scripts, AWS, Deploy Bot, and Laravel Forge, I'm able to deploy PHP, Java, Python, and Node.js apps to all kinds of funky server configs and access management is not even thought about any more. Just put the right keys in the provisioning scripts and the server gets the correct SSH user list and user keys.
Being able to centrally revoke privileges is also very awesome as contractors come into and out of projects.
And to use 5TB in a month you have to sustain over 15 megabit/sec 24 hours a day. DO is betting on you not doing that (and in aggregate nobody does), because in any other context sustaining 15 megabit is about as expensive as Amazon. Quote a 100 megabit drop some time if you don't believe me.
If every VPS on Digital Ocean and Linode actually used their quota, they wouldn't have uplink capacity to support it and the network would fail. That's overselling. Numbers that high are extensive overselling. Linode has 40 Gbit links (at least they used to), and give 2 TB (6 Mbps) to each small Linode, meaning about 7,000 Linodes actually using the quota would saturate the link. They have a few more than that. Do the math.
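The math is easy to check. A quick sketch, assuming 30-day months and the quotas and link sizes quoted above:

```python
def quota_to_mbps(gb_per_month, days=30):
    """Average sustained rate in Mbit/s needed to burn a monthly transfer quota."""
    return gb_per_month * 1e9 * 8 / (days * 24 * 3600) / 1e6

# DO's $80 plan: 5 TB/month works out to ~15 Mbit/s around the clock
print(round(quota_to_mbps(5000), 1))

# A small Linode's 2 TB/month is ~6 Mbit/s; how many fully-used quotas
# would it take to saturate a 40 Gbit uplink?
print(round(40_000 / quota_to_mbps(2000)))  # roughly 6,500 Linodes
```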
The bandwidth quotas are sales stuff so that you will say exactly this in threads like these, and it's amazing how well it works.
Sure, 100Mbit/s circuits aren't very cost effective -- also, no one buys them. If your needs are a little larger, you can do a 1 gig commit on a 10 gig circuit in a well connected datacenter in a first tier region (e.g. Silicon Valley), and figure on paying between $1.50 - $2.00 per megabit at 95th percentile. Much better prices are available with some elbow grease applied. Add $350 per month for the cross connect and $500 for the installation.
That comes to $22,700 for a year of service if you don't exceed 1 gigabit/s at the 95th percentile.
In Amazon's bandwidth terms, that's 331 terabytes transferred, or $17,000 per month if all of that bandwidth is outbound. You would have to limit your average transfer to 110 Mbit/s to achieve cost parity.
So yes, bandwidth is incredibly expensive in AWS and you can easily do better in datacenter-land (or with cloud providers who bill differently), provided that you are operating at sufficient scale.
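Under the assumptions in this comment (a $1.50/Mbit 95th-percentile rate, a $350/mo cross connect, a $500 install, and roughly $0.05/GB as AWS's marginal volume-tier egress price), the comparison works out like this:

```python
def colo_annual(mbit=1000, per_mbit=1.50, cross_connect=350, install=500):
    """Yearly cost of a 1 Gbit 95th-percentile commit in a colo."""
    return mbit * per_mbit * 12 + cross_connect * 12 + install

def aws_egress_monthly(tb_out, per_gb=0.05):
    """Rough monthly AWS egress bill at an assumed ~$0.05/GB volume tier."""
    return tb_out * 1000 * per_gb

print(colo_annual())            # $22,700/yr, matching the figure above
print(aws_egress_monthly(331))  # ~$16,550/mo, i.e. roughly $17k/month
```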
"provided that you are operating at sufficient scale"
Where "sufficient scale" probably means you're spending $100k+ a year on infrastructure, let alone staff costs to support it. Also, in your example you have no datacenter redundancy, so it's $44k if you want to be in 2 DCs, which means "sufficient scale" is probably more like $200k+.
"if you don't exceed 1 gigabit/s at the 95th percentile"
That's a big if; the internet, startups, and business in general are uncertain, and it's often very difficult to know what your 95th percentile will look like in the future.
Also, it's unlikely that your network utilization is constant. A utilization rate of 50% would be ambitious.
So a better comparison is $44k for 1 gig in 2 datacenters vs. $102k for AWS ($17k / 2 * 12), assuming you can predict your network utilization pretty well.
> "provided that you are operating at sufficient scale"
Nothing says you have to do it all on your own. Let the DC operator do the heavy lifting; all you have to do is rent a dedicated server or VPS, or colocate. Total spend to reach sufficient scale: a few dollars to a few grand per month.
> That's a big if, the internet, startups and business in general are uncertain and it's often very difficult to know what your 95th percentile will look like in the future.
It's really not that hard. Look at historical usage and assume it continues as before. If you are worried have a spare dedicated server or two with 1G/10G interfaces.
> Also, it's unlikely that your network utilization is constant. A utilization rate of 50% would be ambitious.
This really is a moot point as you effectively only pay for peak usage with 95th percentile billing. Alternatively you can pay per terabyte of traffic, which is effectively average usage. Average usage also correlates strongly with peak usage, so in the end it's all the same, give or take a constant.
> So a better comparison is 44k for 1gig in 2 datacenters vs 102k for AWS (17/2*12), assuming you can predict your network utilization pretty well.
Nope. A proper comparison is about $1000 per month for two dedicated servers in two DCs versus $102k per year for AWS.
The oversubscription model actually works, just like insurance does. We consume massive amounts of bandwidth on a continuous basis due to thousands of servers checking in. (We're not spikey in our bandwidth or CPU -- we're surprisingly rigid at 87% to 90% continuous.) We're anomalous, which kinda proves the point.
What is ironic about your statement is that t2 CPU allocations are massively oversubscribed in the same way (but far more aggressively!).
A former Linode employee, actually (why did you edit out the ad hominem accusing me of working for Amazon?), but my employment is irrelevant to pointing out that oversubscription means you are not actually getting what you pay for. If everybody used it, you wouldn't have it any more. It's not yours. You are deciding entirely on perceived rather than actual value.
And yes, AWS's CPU oversubscription on these instances matters if your workload is CPU-intensive. They're very quickly outgrown because of it, but useful in a pinch.
Like the parent said, it's just like insurance. You can get what you pay for, because almost everybody else is paying for something they don't use, just in case they need it—and there's no situation where everybody's "just in case" will happen at the same time. Aggregated traffic is predictable in a way that individual flows aren't; you can have enough surety to stake your business on giving some of your customers a throughput/latency SLA, without reserving any circuits for them—because you know that even the peak of your traffic's froth is predictably below your uplink's capacity, and that there's no way for this fact to change on a timescale small-enough that your staff can't cope with it.
> oversubscription means you are not actually getting what you pay for. If everybody used it you wouldn't have it any more. It's not yours. You are deciding entirely on perceived instead of actual value.
I don't agree; as long as you can always use it when you need it, you are getting what you paid for -- that's actual. What you're talking about is the value in a hypothetical situation. Now, if you are actually impacted by the oversubscription, then the actual value is below what was promised, but that doesn't seem to be the case for jamiesonbecker's use of DO.
Explain why, despite wholesale internet prices collapsing, AWS continues to charge the same with virtually no year-over-year decline. Azure et al. do this too, FWIW. It's just gouging.
Isn't that the whole point of a VPS? Pay a fraction because you'll only use a fraction.
Even with internal clouds, people migrate their datacenters to VMware, Xen, KVM, etc. because most servers only use a fraction of the CPU and network allocated to them.
When you do need full CPU or bandwidth, you know it and you build for it.
No real reason to use Linode any more, either. The AWS skill set is valuable in the industry and using it for your personal stuff gives you a leg up careerwise.
My AWS skills (VPC architecture, Direct Connect, boto, etc.) have been a big hiring plus for me in the past. Since nobody relevant uses Linode for production any more due to security and other issues, probably time to move personal blogs and accrue transferable skills.
People also compare the big holy wow bandwidth/SSD/CPU offered by DO and Linode without accounting for the fact that you could almost never use it all at the price point without hitting (a) capacities of the instance and (b) annoyed employees. Jeff's data on how successful their CPU quota accounting is backs this up. If you're pegging a core in your workload you should probably own the core.
Seriously, think about it. DO offers you a terabyte on the low end plan. You have to sustain 3 megabit/sec every second of every day to hit that. Maybe in some scenarios you are, but almost nobody running personal gear is doing that. The higher levels are even more ridiculous. But the sales stuff works: people are concerned about it in this thread, merely the potential to use instead of paying for actual.
With the introduction of the t2.nano, not really. Even if you don't want to go Amazon, GCE is super useful for personal stuff too.
Old school VPS providers can't compete due to resources. Amazon and Google have far more people working on perf, security, and so on. Sadly, VPS providers are going the way of shared hosting.
You seem to be brushing off the fact that the vast majority of even the tech-savvy market simply doesn't require the flexibility of AWS and will happily go get better perf (CPU, disk), and more memory and transfer, at a fraction of the cost elsewhere.
The nano doesn't tempt me to move any of the half dozen or so VPS's I have running to AWS in the slightest. I'll stick to vendors who answer support tickets from the little guys, oversell fairly, and don't have a ridiculously complex pricing structure.
And shared hosting isn't dead. Plenty of small businesses still pay good money for managed shared hosting to run their webpages etc, instead of paying a wannabe-sysadmin who probably doesn't even shell-in to the VPS once a month. My uncle runs a small business and pays his webmaster around $30/month for a website running off an IP that hosts at least 3,300 other domains...most of them small businesses just like his. Someone is making a mint on that box.
Your comment has really tempted me to switch to AWS from DO just to try the waters of the cloud. However, the terminology and all the different services offered by AWS are mind-boggling.
Are there any good introductions to cloud computing you could recommend?
If you're coming from running standalone servers on DO you don't need to worry about most of AWS' services.
To get the equivalent you'll need to read up on the basics of:
EC2, which provides you with the actual server.
EBS, their network attached storage which your server will boot from.
Elastic IPs, to give the server a stable public IP address you can point DNS at.
And Security Groups, which don't have a DO equivalent, but control network access to your server. They're arguably worth moving from DO for alone.
If you just want to test the waters then EC2's startup wizard will handle all this for you, and you don't really have to think much more about it than you would on DO. However, you've then got the ability to grow into the rest of AWS as you need it.
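For a sense of what those pieces look like outside the wizard, here is a rough boto3 sketch of the DO-droplet equivalent. This is illustrative only: the AMI ID is a placeholder, the names are made up, and it assumes a default VPC and configured AWS credentials.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Security Group: the firewall layer DO doesn't have -- allow SSH only
sg = ec2.create_security_group(GroupName="web-sg", Description="SSH only")
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpProtocol="tcp", FromPort=22, ToPort=22, CidrIp="0.0.0.0/0",
)

# EC2 instance booting from an EBS-backed AMI (placeholder ID)
instances = ec2.run_instances(
    ImageId="ami-12345678", InstanceType="t2.nano",
    MinCount=1, MaxCount=1, SecurityGroupIds=[sg["GroupId"]],
)
instance_id = instances["Instances"][0]["InstanceId"]

# Elastic IP: a stable public address you can point DNS at
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId=instance_id, AllocationId=eip["AllocationId"])
```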
How is less than half the storage and 1/500th the bandwidth competitive?
The problem with dropping a care-free VPS (what DO is for, and what you're comparing against) on AWS has always been transfer. Even my least used personal VPS eats 25-30GB/month in outbound. How much does that cost at AWS? Another $5? and then you're constantly worried about your usage month on month. I'd rather give $10-20/month to DO/Vultr/Linode to begin with and get more memory as a bonus.
I'd rather pay slightly more for an OVH server and not have to worry about nickel-and-diming... and whether my system will crash overnight because it needs more than 1 GB of RAM.
I'd rather not have any infrastructure than run OVH servers. It is the least reliable company out there, and it fails its customers in every possible way every now and then. Friends don't let friends use OVH. Go with Hetzner or Online instead.
> The t2.nano offers the full performance of a high frequency Intel CPU core if your workload utilizes less than 5% of the core on average over 24 hours.
This means you get the full CPU if you have a bursty workload, and really is no different from what DO's policy is:
> We do not set a cap on CPU usage by default but we do monitor for droplets doing a consistent 100% CPU and may CPU limit droplets displaying this behavior.
In the EC2 case, the CPU throttling policy is just explicit.
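A simplified model of the credit accounting, assuming the published t2.nano numbers (3 credits earned per hour, a 72-credit cap, one credit = one minute of a full core):

```python
def t2_nano_balance(util_pct, hours, start=0):
    """Toy model of t2.nano CPU credits. Returns the balance after `hours`
    at a steady `util_pct` utilization, or None once the instance runs dry
    and gets throttled to the 5% baseline."""
    balance = start
    for _ in range(hours):
        earned = 3                    # 5% baseline = 3 credit-minutes/hour
        spent = util_pct / 100 * 60   # full-core minutes consumed
        balance = min(balance + earned - spent, 72)
        if balance < 0:
            return None               # credits exhausted: throttled
    return balance

print(t2_nano_balance(5, 48))             # breaks even at the 5% baseline
print(t2_nano_balance(15, 24, start=72))  # sustained 15% drains a full bank
```

In this model, a sustained 15% load exhausts even a full credit bank in about half a day, after which the instance is pinned to the baseline.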
Having been hit by AWS's CPU cap before (on the t2.micro instance type) and having used many other providers, I can tell you firsthand that AWS's CPU limits are MUCH lower than their competitors'. In our case, a sustained usage of about 15% CPU caused our VM to eventually be starved of CPU time, which in turn crashed the software running on it.
I've never dealt with stricter CPU limits than AWS's. Most providers will not be happy if you peg an entire core to 100% (after all, the physical cores are oversold), but they usually don't mind if the percentage is even as big as 50%.
You've outgrown the micro. The strict CPU accounting doesn't exist beyond the micro and the nano. I regularly pegged every core offered on 4xls and what not and never heard a peep from Amazon over several years.
My point is this is an instance type thing, not a provider thing as you've extrapolated. They're experimenting with vast overselling on those instance types and it's not across the board.
Moreover, and this is something that I would think should be bleedingly obvious: this is exactly what those instance types are for. Everything up-front discusses CPU in terms of low usage and brief bursts. You're getting a discount because you're not using it in the same way as you would an M or a C or whatever.
I'm actually contemplating moving out of AWS to DO. I signed up for a 1-year m3.medium reserved instance with AWS (Singapore region) sometime in May when the reserved instance cost was ~$50/month (it's now $35/month but I'm stuck with their old rate).
This doesn't include bandwidth and IO requests.
I might understand charging separately for bandwidth, but IO requests? I end up making between 75M to 95M IO requests, and it adds a good $8 or so to my bill. Plus, there's EBS which for 60GB (not SSD!) adds another $6.
So all in all, for ~$65 I'm getting a single core, 3.75GB RAM, and 60GB storage, and am stuck with their old pricing for a year, whereas with DO for $40 I can get 2 cores, 4GB RAM, 60GB SSD, and no "IO request" costs!
EBS w/ SSD doesn't have I/O request fees attached. DO is still a better deal overall.
For crazy dynamic workloads with moderate bandwidth usage, AWS makes good financial sense. Also, S3 and SQS are phenomenal. (As are DynamoDB, Redshift, Lambda, and Kinesis, but these can get expensive fast.)
AWS really has simply awesome technology, and they can afford to charge what they want for it, and I do believe they strike a pretty good balance most of the time. Also, the Free Tier is really very nice and fair. I'd just like to see the cost model get a bit more transparent and a bit lower in fees, especially on the low end.
Did you check out Google Cloud? I moved all my company's instances to them and ended up paying less than on Vultr (similar pricing to DO), while also benefiting from more features (storage, MySQL, etc.) and better uptime.
For $28/month you can get an instance similar to Amazon's (1 CPU and 3.75GB RAM), and 60GB SSD is another $10. 60GB standard storage is $2.40.
I'm impressed that Google Cloud is adding a couple of new features every month, and I can see them becoming a strong AWS competitor soon.
Google Cloud had a very impressive year. It has lots of killer features that AWS lacks.
* Cloud Shell: Sweetest thing that happened since I started using cloud, no need to install dev tools
* SSH right from web console
* No need to manage SSH keys: Google takes care of it. If a user is removed, the user's SSH keys are automatically revoked.
* Security by default: Encryption at rest and on wire for all the products (Storage, Disks, SQL .. )
* Datalab (in alpha): Do data science from the browser (Jupyter as a service).
* Pub/Sub + Dataflow: This is how I envision next generation Kafka + Samza to look like.
* Container Engine, consistent performance, blazing fast disks and more.
I recently went from DO to GCE when my app graduated to big-boy status, and have been happy. Maybe the grass is greener ;)
If you only need, like, one server though, DO is great. Security and some things you get in bigger systems you have to roll your own (i.e. no storage, the private IP ain't really private, etc.).
Are there any dedicated server providers that can compete with AWS/DO on pricing and a flexible web interface? (I've heard good things about Hetzner, but I would need something in the US.)
How does Amazon implement instances like t2.nano? If there are 40 t2.nano instances on a quad core physical machine, what happens when all the users want 100% CPU, even if it's only for 10 minutes? Are instances automatically migrated to a different physical machine if this happens?
I've tweeted at you twice and asked your customer support about this too, but I have never gotten a reply. So I'm asking you here: when are you going to allow reserved instances for Indian customers?
Right now I cannot purchase reserved instances, so my bills are much, much more than what others are paying.
P.S. Here is the screenshot when I try to purchase. There has been no update for 1 year now.
In all honesty I don't think it's in Jeff Barr's authority to release roadmap plans or definitive answers to questions like this. My experience in the past has been he has always been helpful in connecting you to individuals who may be able to answer these sorts of questions. So I suppose it doesn't hurt to ask.
Huh? How is it that I'm able to buy reserved instances? I'm in India and I've bought reserved instances. It's been a pretty bad decision to buy (https://news.ycombinator.com/item?id=10741966), but I can. I can't sell them though, so I'm stuck.
I'm not sure. See the screenshot above. The confusion about this is the most difficult part of it, and Amazon support just replies with a canned response.
That is why I was hoping someone like jeffbarr could shed some light on this. Looks like it was just another vain attempt to get an answer from them :(
This would be excellent if you could accrue more than the 72 minutes' worth of CPU credits. At least my use case is 'low traffic, with the occasional link from a high-traffic site'. These happen every few months, not every three days, but they also last 36 hours or so, not 72 minutes. Total CPU usage is similar, but it's distributed differently.
There's a trade-off between how much they allow you to spike and how long they allow you to spike for. They could have 20 t2.nano instances packed onto one CPU; the longer you can spike for, the more likely it is that another instance is going to end up spiking at the same time as you. I'm sure Amazon has looked at the CPU-usage behaviour of millions of EC2 instances -- quite likely across hundreds of billions of data points -- and picked this as a reasonable tradeoff.
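A toy version of that tradeoff: if each packed instance independently bursts to a full core for some fraction of the time (an optimistic independence assumption, and the numbers here are made up for illustration), the chance of two bursters colliding on one core grows quickly with the burst duration:

```python
def p_contention(n=20, p_burst=0.05):
    """P(two or more of n packed instances burst to a full core at the same
    instant), assuming independent bursts -- the knob Amazon is tuning when
    it trades burst height against burst length."""
    p_zero = (1 - p_burst) ** n
    p_one = n * p_burst * (1 - p_burst) ** (n - 1)
    return 1 - p_zero - p_one

# bursting 5% of the time (long spikes) vs 1% of the time (short spikes)
print(round(p_contention(p_burst=0.05), 3))
print(round(p_contention(p_burst=0.01), 3))
```

Shortening the allowed spike (smaller p_burst) is what keeps contention rare, which is one plausible reason the credit window is 72 minutes rather than 36 hours.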
For your use case, I think autoscaling is probably the answer. Keep one t2.nano running continuously but tell EC2 to spin up a new t2.large if you get a burst of traffic.
Yeah, they obviously put more thought into this than I ever will. Although they offer instances with 40 (v)CPUs, so I'd assume they could technically run 800 nanos on one of those and efficiently deal with such spikes? Either way, I suspect their reasoning is more economic than technical, in that they know full well that I can afford more than $5/mo.
(which I'm taking to linode for now, but oy vey, AWS, we'll always have Paris)
Right, these are presumably running on boxes with many CPUs each. Whether instances migrate, I don't know; due to NUMA concerns they might be pinning instances to cores.
The other issue to consider is correlated spikes: When you aggregate 800 instances, you're probably going to see a pretty clear diurnal cycle. (And quite possibly middle-of-the-night spikes as synchronized cron jobs all kick off, too.)
Check it out yourself: https://www.linode.com/pricing. As I said, "more than $5", in this case meaning $10, which gets me a CPU core and 24GB of SSD storage. It replaced a dedicated server that cost 50€/mo. That server probably had 8 to 10 times the power, but the VPS solution can scale within two minutes if needed. I was also having suspected hardware trouble with the dedicated server that the provider couldn't diagnose / wouldn't replace.
It's an incredibly boring setup that serves about 200 visitors on a normal day and 2,000 during a handful of spikes a year. I know I could probably run it from a Casio watch, but it generates six figures of revenue a year, so a bit of overkill is justified.
It still is excellent, because it reduces both the granularity and baseline cost of an autoscaling setup which is probably what you want for your use case.
Been using them for a while; it's great. I'm running a non-mining Bitcoin client (basically just syncing the blockchain), and it runs well. Honestly a better proposition than Amazon's, IMHO.
I used one server as a Tor relay and another to serve static files. The Tor node used almost 100% of one core to serve about 25 Mbps of traffic, and there wasn't enough load on the static file server to notice the CPU load, but decompressing an xz file took many times as long as on an x86 VPS I had.
Wow, I must be doing something wrong... I have a single c4.4xlarge running a single WordPress site with at most 200 active users and am still running into a CPU bottleneck... $1K+/mo, sheesh. I use a t2.medium for DCs with 5 users and 5 servers, lol... please let me know what a nano is good for?
Not to mention, I need multiple 10+TB volumes, and magnetic only goes to 1TB, so I need to span, and spanning breaks down after 4+ drives, so now I'm on SSD, and that's costing me $1K/mo for each copy, and I need many. Sigh.
I miss buying my own Supermicro systems at ~$10k each, hosting a full rack at a colo for $1k/mo, and then just setting things up correctly and checking in once a month.
Now Amazon is getting $15k/mo from me, but I must say, my back thanks them for 0 lbs of equipment to lift, so it's probably worth it to avoid a hernia surgery.
My use case is that I need to download lots of files, each partition at around 5MB (for now, but I'm planning to increase the size and measure a good chunk size). But I am constantly downloading files, so I need stable and consistent throughput.
Doesn't seem like just one or two problems. I would really be concerned if my 200-user website needed many more resources than a full-fledged IoT platform catering to millions of requests per day. If you are really doing something that compute-intensive, what is it?
Less flippantly, because I sound like a dick there: try putting CloudFront in front of the site. Assuming WordPress now properly sends cache-control headers, it should drastically cut down on the traffic your actual server is seeing.
Besides being a poor choice, it also costs more.
E.g., c4.4xlarge Linux: $7,726.32 annually.
c4.4xlarge Windows: $13,542.96 annually.
What are you wasting the extra $5.8k on?
? meaning DO/Linode support for a $5/month customer is good? I've not had positive experiences with either... but then again my experiences with AWS (non-business) support haven't been so positive either.
I've had pretty good support on DO's $10/month plan and I imagine their $5/month plan would be the same.
1-5 hour turnaround times on initial responses, and I've gotten them to enable things like the recovery partition when an instance ran out of disk space.
Sometimes it takes a few responses to get a resolution but at that price point I can't really complain because they've even helped resolve issues that were my fault.
Good or bad is very subjective, but with AWS, without a support plan there is no way to contact them except to create a thread in a public forum asking for help. If you are lucky and your post is not ignored, someone will reply to you, maybe within one day (and the back and forth might cost you another day or two to get the issue resolved, if you're lucky). With DO/Linode you can create a support ticket and they usually respond within a few minutes; this is a huge difference.