Rubygems.org AWS bill for Feb 2014 [pdf] (dropbox.com)
251 points by vbrendel on March 5, 2014 | 147 comments



I thought I'd answer some of your questions, as the person that pays the bill.

1. This can be cheaper on AWS. We've been meaning to move to reserved instances, paying a year at a time, for a while and simply haven't done it yet.

2. Fastly has already donated CDN usage to us, but we haven't fully utilized it yet as we're (slowly) sorting out some issues between primary gem serving and the bundler APIs.

3. RubyCentral pays the bill and can afford to do so via the proceeds generated from RubyConf and RailsConf.

4. The administration is an all-volunteer (myself included) effort. Because of that, paying a premium to use AWS has its advantages, because it allows more volunteers to help out given the well-traveled platform. In the past, RubyGems was hosted on dedicated hardware within Rackspace. While this was certainly cheaper, it created administrative issues. Granted, those can be solved without using AWS, but we get back again to wanting as little friction in the administration as possible.

Any other questions?


> In the past, RubyGems was hosted on dedicated hardware within Rackspace. While this was certainly cheaper, it created administrative issues. Granted, those can be solved without using AWS, but we get back again to wanting as little friction in the administration as possible.

If Rackspace can be of assistance in the future, feel free to reach out (brian.curtin@rackspace.com). We currently donate hosting to many open source projects, including ones in a similar space, like the Python Package Index.


Thanks! I'll bring it up with the team.


Note that if you can get Rackspace or whomever to donate the hardware/bandwidth, you can spend less than $7k/month to hire a very competent admin to solve the administrative issues, which would probably lead to better service for everybody.


On that note, you might check out the Open Source Lab at Oregon State University. They host many projects: http://osuosl.org/communities


Hey Evan, as with Rubyforge for the last 7-odd years, you'd be welcome to a free account on Bytemark's UK cloud platform bigv.io, or dedicated servers, or a mix on the same VLAN. We're a Ruby shop ourselves, and we host a fair chunk of Debian in our data centre too these days (https://www.debian.org/News/2013/20130404). So just drop me a line if that's of interest <matthew@bytemark.co.uk>.

I assume this was posted because it's an enormous bill :) but obviously if you're happy with it, carry on!


Did you consider using a mirror network, with servers run by external organizations, instead of going with AWS bandwidth for rubygems? Seems like that would be a good approach for the static/bulk part of your dataset, and there are lots of companies and universities who are set up to serve software. (The mirror I manage serves about 50 TB/month for several Linux distros, and many sites are larger.) Do the work and infrastructure required to manage these networks make them not worthwhile?

Edit: Found a post [0] calling for a rubygems mirror network. Otherwise there is lots of information about setting up local mirrors of the repository.

[0] http://binarymentalist.com/post/1314642927/proposal-we-have-...


It's been discussed many times before, yes. Our users' Rubygems usage patterns make any kind of mirror delay unacceptable. We currently run a number of mirrors, configured as caching proxies. I want to get us going on a CDN like Fastly soon because they provide effectively the same functionality, but distributed to many, many more POPs than I will ever set up.


I suspect mirror delay is less of an issue than you might perceive it to be. Many CPAN mirrors manage to stay within tens of seconds/no more than a minute from the main CPAN mirror that PAUSE publishes to.


If it's just the sync delay, you could track each mirror's last-updated time and only direct users to a mirror that had synchronized with the master since the package in question was released. Otherwise, serve the content from AWS. Though I'm sure this couldn't beat the service that Fastly's donating.
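
A minimal sketch of that routing idea, assuming each mirror exposes a last-synced timestamp; the mirror URLs and fields here are illustrative, not anything rubygems.org actually runs:

  # Only hand out a mirror that has synced since the requested gem version
  # was released; otherwise fall back to the origin. All names are made up.
  require 'time'

  Mirror = Struct.new(:url, :last_synced_at)

  ORIGIN = 'https://rubygems.org'

  def source_for(released_at, mirrors)
    fresh = mirrors.select { |m| m.last_synced_at >= released_at }
    fresh.empty? ? ORIGIN : fresh.sample.url  # any sufficiently fresh mirror will do
  end

  mirrors = [
    Mirror.new('https://mirror-eu.example.org',   Time.parse('2014-03-05 10:00 UTC')),
    Mirror.new('https://mirror-asia.example.org', Time.parse('2014-03-05 08:30 UTC')),
  ]

  puts source_for(Time.parse('2014-03-05 09:00 UTC'), mirrors)
  # => https://mirror-eu.example.org (the only mirror synced after the release)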


The caching mirror configuration achieves nearly the same thing. In the past, people have wanted to run their own mirrors that we directed people to, but that's got reliability and security issues.


Mirrors shouldn't be a security concern; the signatures of packages should come from "headquarters". The same goes for reliability: clients should be able to, and SHOULD, pull from multiple sites simultaneously.


Even if package signing works perfectly, when I connect to a mirror and request a patch for foo, the mirror learns my IP address and the fact I have an as-yet-unpatched version of foo.


Very true on the signatures. Using multiple sites isn't necessary though, imho.


I could be wrong, but it seems like a nice hack to pull from, say, 3 mirrors at the same time, each at some offset into the resource, using a range GET for, say, 16 KB each. The first one to complete does a pipelined request for another 16 KB slot, and this process continues until the entire asset is downloaded. The fast mirrors would dominate, a small percentage of the bandwidth would come from slow mirrors, and truly slow mirrors would be ignored.
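
For what it's worth, here is a rough, non-pipelined sketch of that idea with Ruby's Net::HTTP, assuming the mirrors honour Range headers: each worker claims the next free 16 KB slot and fetches it from its mirror, so fast mirrors naturally end up serving most of the slots. The mirror URLs and gem path are placeholders, and it doesn't re-assign slots from a mirror that stalls, which the "ignore truly slow mirrors" part would need.

  require 'net/http'
  require 'uri'

  CHUNK   = 16 * 1024
  MIRRORS = %w[https://mirror-a.example.org
               https://mirror-b.example.org
               https://mirror-c.example.org]
  PATH    = '/gems/some-gem-1.0.0.gem'   # placeholder

  # Ask one mirror for the total size first.
  head_uri = URI(MIRRORS.first + PATH)
  total = Net::HTTP.start(head_uri.host, head_uri.port, use_ssl: true) do |http|
    http.head(head_uri.path)['Content-Length'].to_i
  end

  offsets = (0...total).step(CHUNK).to_a
  queue   = Queue.new
  offsets.each { |o| queue << o }

  parts = {}
  lock  = Mutex.new

  workers = MIRRORS.map do |mirror|
    Thread.new do
      uri = URI(mirror + PATH)
      Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
        loop do
          offset = queue.pop(true) rescue break   # claim the next unclaimed slot
          last   = [offset + CHUNK, total].min - 1
          res    = http.get(uri.path, 'Range' => "bytes=#{offset}-#{last}")
          lock.synchronize { parts[offset] = res.body }
        end
      end
    end
  end
  workers.each(&:join)

  File.binwrite('some-gem-1.0.0.gem', offsets.map { |o| parts[o] }.join)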


It would be really interesting to see the bandwidth broken down by gem - I suspect rails would be at the top, but it'd be interesting to see.

If most of the installs are on servers, have you considered talking to server providers about setting up internal mirrors on their networks? That might save everyone a lot of bandwidth.

Of course, people shouldn't really be installing their gems from rubygems.org on servers anyway. Is there any way to prod bundler to make it default to packaging gems and doing a local install where possible, rather than downloading them every time there is a deploy (the current default)? At present you use double the bandwidth: people download once on their local machine, and once on their server to update.

Fetching the rubygems index with bundler/rubygems still takes a while every time I bundle update. Have you looked at optimising that part of the process further (at least it doesn't fetch a list of all gems now, but it still fetches a list of all versions of each gem, doesn't it?), say by caching older gem results? The list of gem versions available should not change for old ones, so you should really only need to fetch a very small list of latest versions. The memory and bandwidth usage are still quite high there.
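
As an illustration of the "cache older results" idea (not a description of how bundler actually behaves), a conditional GET lets a client keep its last copy of an index and revalidate it cheaply, assuming the server honours If-Modified-Since or an ETag. The URL and cache path below are placeholders:

  require 'net/http'
  require 'uri'
  require 'json'
  require 'time'

  CACHE = File.expand_path('~/.gem/index-cache.json')   # placeholder location

  def fetch_index(url)
    cached  = File.exist?(CACHE) ? JSON.parse(File.read(CACHE)) : nil
    headers = {}
    headers['If-Modified-Since'] = cached['fetched_at'] if cached

    uri = URI(url)
    res = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
      http.get(uri.request_uri, headers)
    end

    if res.is_a?(Net::HTTPNotModified)
      cached['body']                      # nothing changed; reuse the local copy
    else
      File.write(CACHE, JSON.dump('fetched_at' => res['Date'] || Time.now.httpdate,
                                  'body'       => res.body))
      res.body
    end
  end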


Hey, a chance to plug my thing!

I built S3stat (https://www.s3stat.com/) to fix the opaqueness that comes with using Cloudfront as a CDN and get you at least back to the level of analytics you'd get if you were hosting files from one of your own servers.

RubyGems guys, if you have logging set up already, I'd be happy to run reports for all your old logs (gratis, naturally) so you can get a better idea of which files (and as another commenter wondered about, which sources) are costing you the most.


Off topic: S3stat is our go-to service. We've been using it for years and really couldn't survive without it, as it's how we charge our clients!


I still don't understand how AWS was preferable to a dedicated server host. Could you elaborate on that?


Virtualization allows us to spin up new instances and migrate traffic to them. This means we can work entirely from chef and keep things clean. This is important for our volunteers to have a complete picture of an instance and to be able to make new ones.


You can easily do that on dedicated hardware too. We run all our stuff in VMs and containers, even on the office dev servers 3 meters behind my desk.

And pretty much "all" dedicated server providers these days also have cloud offerings if you need to spin up some instances quickly to handle traffic spikes etc., or for dev/testing purposes.


Do you know who are the biggest consumers of bandwidth? I would guess the CI servers (Travis, Circle)


I think that bandwidth consumed by Circle should be free, since we're also hosted in AWS. Maybe somebody who knows more about the details of Amazon's billing can confirm/deny.


Bandwidth is free in the same region, but not across regions.

Edit: and I believe it's not free if you end up using the public IP address instead of the internal IP address.


If you use the EC2 public DNS, it will resolve to an internal IP when the request comes from within EC2.


A very good question. I'll see about crunching some of the logs to break it down by subnet.


Great. Whoever the major commercial users are, they have a financial incentive to keep the service performant. They should all at least be sponsors at some level if they're not already.


Here is a partial log, every /24 that had more than 10k hits in the last 24 hours: https://gist.github.com/evanphx/9361755


Top 5 are hosting providers. Makes sense.


Isn't Bluebox where Travis is hosted?


Hey, as I mentioned in another part of this thread, my startup crunches those logs for a living (and they're sadly not really designed for crunching by anything that comes off the shelf). Ping me if you'd like a hand doing the crunching.


How have the costs changed in the last year or so? It would be cool to see a month-over-month graph.


I'll put that on my todo list.


No question. Though I'll use the occasion to thank you for all the dedication, financial commitment and awesome software you've provided us with in the Ruby community.


Need help with the VCL with Fastly? Drop me a line to my name minus ct @npmjs.com.


Thanks! I'll definitely keep you in mind as we're (finally) getting around to setting up correctly.


Same here: we host Maven Central with fastly and are willing to help out any way we can. @sonatype.com


Have you looked into the Rackspace Cloud offering?


While one could probably knock a couple thousand bucks off that if one cared to (which is probably penny wise and pound foolish but invariably comes up in HN discussions of hosting costs), the amazing thing is that hundreds of thousands of people worldwide are able to use core infrastructure which costs less than the fully-loaded cost of a single billing clerk in your local municipal water department.


> which costs less than the fully-loaded cost of a single billing clerk in your local municipal water department.

To be fair, a lot of maintenance value goes into the software that is never quantified. Broken software breaks hard, not partially, so maintenance is even more crucial.


When a levee breaks, people die. Software maintenance and damage is nothing compared to real engineering.


It was never definitively proven, but poorly designed software was considered to be at the heart of a helicopter crash that killed 25 people, including almost all of the UK's top Northern Ireland intelligence experts: http://en.wikipedia.org/wiki/1994_Scotland_RAF_Chinook_crash...

Software controls everything from nuclear power stations to missiles to dams to radiation therapy machines (where, again, software killed 3 people - http://en.wikipedia.org/wiki/Therac-25)

Proper software engineering is increasingly more important and, I'd posit, likely to become even more important than civil engineering for public safety as time goes on.


> Software maintenance and damage is nothing compared to real engineering.

http://en.wikipedia.org/wiki/Cluster_(spacecraft) cost $370 million when an overflow caused a rocket to explode.

I'd imagine there's some mission critical software running nuclear plants, aircraft, cars, etc.


I'll agree mission-critical software exists. I however imagine there are far more engineering projects across the planet whose failure results in mass casualties than software projects. There is a reason actual engineers are legally liable for their work.


Yes there is. It is an older industry. One that existed when Common Law was being formed hundreds of years ago.


What about the defense industry? People die if you screw up. I mean, people die if you don't, too, but you know what I mean.


That may be true of web development, but certainly isn't of software as a whole: http://en.m.wikipedia.org/wiki/Therac-25


I disagree with the web development comment. What if there was a web interface on top of Therac-25 that had an error in it?


Real engineering projects use software every day; I don't think you can realistically draw a line between the two, even if there are different auditing standards.


What is funny is that GitHub is footing the bill for most package systems, which were likely inspired by RubyGems, yet GitHub itself was built with Ruby gems. I am pretty sure the hosting costs for homebrew/npm round to nil (I could be wrong).


If you mean the npm homebrew package, then yes. If you mean npm packages, then you might be living under a rock.


Why do you say GitHub is footing the bill? I understand for Homebrew since it uses GitHub repositories, but npm is served via a CouchDB instance which I believe is sponsored by Joyent.


Serious question: muni water dept billing clerks make over $84,000 per year all in?


Once you factor in employer-paid benefits, pension, etc, the employer's cost is something like 140% of the employee's gross pay. This turns the $84,000 into a $60,000 gross salary.

If you include other costs, like the office space and equipment used by the employee, it starts to sound pretty reasonable.


So true! Also, in most organizational structures, every X employees will need one manager, which is also a cost that needs to be factored into the cost of hiring an employee.


An employee costs a lot more than just the cost of their salary.


BTW this is paid for by: http://rubycentral.org/


Thank you, Ruby Central!


Was wondering that, thanks.


At a glance, this looks like AWS being used like a dedicated host, which, as demonstrated, isn't exactly cheap.

There's no spot or even reserved pricing, just a bunch of on-demand instances that were up 24/7 for all 28 days in February.

Seems like a genuine dedicated host, reserved instances, or an architecture that leverages the "elastic" in Elastic Compute Cloud would be worth considering.


A lot of the price is bandwidth. They are effectively being reamed by using CloudFront instead of negotiating a better rate with a "real" CDN (which would also give them much better performance, as CloudFront doesn't have many edge locations).

(Although, actually, while I verified their total dollars spent is greater than what would be required to get a fundamentally better deal on bandwidth, I didn't take into consideration that once you slash their costs the amount they would be paying might no longer be ;P.)


> They are effectively being reamed by using CloudFront instead of negotiating a better rate with a "real" CDN (which would also give them much better performance, as CloudFront doesn't have many edge locations).

You can negotiate with AWS to get the same Cloudfront pricing as you would with Akamai. I know because I'm in the process right now.

More importantly, they could be running on 2-3 dedicated servers at OVH or Hetzner, and have Cloudflare in front of them instead of Cloudfront. Or, if they insist on Cloudfront, switch to Price Class 100 (US and EU only). It's cheaper, and latency isn't that much higher vs serving out of all Cloudfront locations.

As long as most of your content is static, and you have a solid CDN, your origin doesn't have to be highly reliable or scalable. It's just an object store to persist data for the CDN.


> CloudFront doesn't have many edge locations

This is nonsense. They have more edge locations than most. I didn't try all comparators in the list, but of the half I tried, none had more than Cloudfront: http://www.cdnplanet.com/compare/cloudfront/maxcdn/

So if Cloudfront has 'not many', who has 'many', and how many is that?


MaxCDN is a very low-end "CDN". If you can buy your account from the website without talking to an account manager, and the plans are as low as $9/month, you should not expect a lot of performance, features, locations, etc.: what you should, however, expect is "cheap"... MaxCDN is appropriately cheap.

To look at something more reasonable: CDNetworks is realistic competition; they are strong in Asia, and were the people I was comparing the pricing to (so they aren't going to be horribly expensive). According to the comparison website you are using, they have almost four times as many edge locations.

http://www.cdnplanet.com/compare/cloudfront/cdnetworks/

Honestly, though, the reality is that the really great CDNs don't even have data on this website (even for CDNetworks I think this data is not accurate: looks like an approximation): the leaders in this space are Akamai and Limelight, and both just show "Not Available" for the number of edge nodes they have.

Even going a little lower on the CDN pecking list, though: Level3, which according to this website you are using is mostly "competitive" with CloudFront (sometimes actually worse) in the regions CloudFront bothers to cover, is clearly covering entire subcontinents where CloudFront has nothing.

The reality is that CloudFront is still trying to grow out a network: they have poor coverage in Europe (which is pretty key), a few nodes in Japan/Singapore, and then next to no coverage anywhere else. Yet, they insist on pricing their product as if they were a big player (12c/GB is Akamai-level expensive).

(So, do I get to condescendingly say "this is nonsense" now? I mean, seriously: you clearly didn't spend much time using this website and you didn't look into who the leaders are to verify you weren't comparing low-end to low-end... also, I think you are not appreciating that 0->2 is infinitely better ;P.)


Well, like I said, I didn't check just one, but about half in the list - MaxCDN was just in the link because you can't link that site to just one service. Akamai had nothing listed, and Level3 and CDNetworks weren't among the ones I checked. From what I saw, they still have more than most.

I still think you're mischaracterising AWS as being a bit player - they have a decent presence with Cloudfront, it's just that there are a couple that are bigger. Like I originally said, 'more than most'. CDNetworks certainly does pound them in numbers, though.


Could probably fix some of this just by talking to Amazon about it. It's not like this is a 'for profit' setup.


Exactly, and if you include a little "Powered by AWS CloudFront" I am pretty sure they could drive down the price a lot. Or, they could start talking to Fastly; I am pretty sure they can work out a much better deal while being faster.


Amazon doing anything for free for the open source community would seriously shock me.


They do spend a lot of time courting ruby/rails devs, though, so being able to say rubygems is hosted on AWS might be worth throwing them a significant discount.


Right now we're a top 25 grossing iPhone game developer. The last AWS bill I saw was January's, a little under $200k.

I'm not on the server team, so I don't know exactly what contributes most to it. But part of me really thinks it could be reduced!


This bill is 2/3 bandwidth, and 1/3 compute.

Some games require massive amounts of compute, but the bandwidth to deliver the assets is generally paid by Apple.

I can guarantee you, your company is paying a metric fuck-ton more. It is called Apple's 30% cut.

Your company is paying AWS $200k to pass json messages around for analytics and social aspects of the game. You are paying Apple something like $1 million per week to distribute, market, and collect payments for the game.

I am not saying your company is dumb, or Apple is evil. I am saying your experience and anecdote isn't relevant to Ruby Gems, and offering a different way to think about the games industry vs. the open source software distribution world.


We aren't paying that much in cut just yet. We're a small team (6 engineers in total). You don't have to be pulling in millions per week to get high on the grossing charts. We're probably around 1/4 of what you estimated.

Though you mention delivering the assets. Actually (like a lot of games) we make a big effort to get under the 50MB over-the-air limit on the App Store. The total content for retina iPhone is ~300MB, delivered in parts as you progress in the game. That's kept on S3, downloaded through CloudFront.

But yes! You're right, it's mostly a hell of a lot of JSON flying around.


Have somebody spend a day or two looking for low-hanging performance fruit. Start with your JSON library, there are some slow ones out there. Also see if you might be unnecessarily de/serializing data structures multiple times in a single thread or process, I've seen that kind of thing creep up over time in reasonably modular codebases.


FYI, the OTA limit was increased to 100MB back in September.

We're managing to squeeze our apps into this at the moment, but will likely need a similar solution using S3/CloudFront in the near future.

[1] http://www.macrumors.com/2013/09/18/apple-increases-over-the...


We support non-retina devices, which are stuck on iOS 6. When this came out we weren't sure whether it applied to that too, so we stuck with 50. We'd already been keeping it under 50 for 6 months by then, so we had all the infrastructure set up; it's mostly automated.

Haven't looked at it since iOS 7 launch though, do you know if it was iOS 6 too?


Just checked with a couple folks here, the limit is for the iTunes store, and therefore was also raised for iOS 6 (we assume older versions as well, but don't support them either).


Interesting, thanks. Maybe next time we update, I'll try to convince everyone to panic-delete a little less :)


If you've got Business or Enterprise support from AWS, look at their Trusted Advisor product. It's included, and does sanity checks for cost and security against all of the AWS resources you're using.


I love that you qualify it with "right now". Applaud your anti-hubris, stickydink.


The most interesting thing that I found about dealing with stuff at 6 figure+ scale per month on AWS was the un-advertised limits (nodes, provisioned volumes / total size, snapshots, elbs, etc) that you have to either hit, or extract from your account manager.

If anyone ever ends up doing something like this; ask them upfront!


Amazon has a whole section in their documentation for the default limits. http://docs.aws.amazon.com/general/latest/gr/aws_service_lim...

When I've hit them, I've usually had a response to the "raise my limit" form within an hour or two.


Package Control is a far cry from the scale of RubyGems. PC uses a little over 2TB a month, whereas my calculations show RubyGems using around 50TB.

That said, early on I chose Linode because of their generous bandwidth that is included with the boxes. For the price of less than 1TB of AWS bandwidth, I get 8TB, plus a decent box. The bigger boxes have an even bigger proportion.

I'm not posting this to give any suggestions for RubyGems - I know nothing of the complexity of that setup. Mostly just figured I'd share the research I did for finding reasonably priced bandwidth.


The thing is, there are many providers who can do the same, and most of them will do it for less than half of this. Some for less than 1/5th. I think they should move this to Digital Ocean and save $5,000.

The bias towards AWS for this type of application is ridiculous and a big waste of money.


Whenever anybody makes this type of statement, I'm always interested in knowing if they've ever run a site with this type of traffic, and this many customers.

In particular, have you ever run a site that consistently serves over 25 Terabytes of traffic/month, or have you worked with someone who has?

I guarantee you that no company I have worked for in the last 15 years could have ever run this type of infrastructure for $7K/month. It's absolutely amazing.


My site serves 25 TB/mo, and it costs me $80/mo...

$60/mo for a dedicated server, $20/mo for CloudFlare. The dedicated server only serves 1 TB of it, the other 24 TB is static assets cached and served directly by CloudFlare.

Here's a screenshot of CloudFlare Analytics for the last 30 days: http://d.pr/i/6Z8S/5GU2Ni8t


Thanks - that's eye opening.

So, what this really comes down to (after a good night's sleep) is what type of traffic/transactions you are running on your back-end infrastructure.

If the data is static, then you can probably (these days) cut your costs for 25 Terabytes/month from $8K to $800 (or, in your extraordinary case, $80), simply by being a bit intelligent as to how you make use of VPS/CDN/CloudFlare Transfer allocations.

On the flip side, if much of the data you are transferring out is the result of dynamic back end transactions, queries, and generation, then it's unclear to me that you can (easily) recognize the savings that you might see when generating static content.

I'm interested in knowing if CloudFlare will start throttling/shutting down people who pay $20 and use 25 TB in the long term though - that alone, for some organizations, will cost them more than the extra $8K they would pay to AWS (who have zero problem with you using 25TB, 250TB, 2.5PB, etc...)


Yeah, I'll admit that other CloudFlare customers are likely subsidizing the amount of bandwidth I'm using.

Funny thing - back when I was using 10 TB/mo, my site was hosted entirely on DreamHost's $9/mo shared hosting. I moved mostly because I was starting to get several hours a month of downtime - presumably, they were gently nudging me off their service.

I've seen plenty of $60-$100 dedicated servers come with unlimited-use 100Mbit connections, which work out to 16ish TB/mo before you start getting to 50% saturation. Of course, those are still subsidized in that that pricing is possible only because most people who buy it don't max out a 100Mbit connection.
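
A quick back-of-the-envelope check of that figure (decimal TB, 30-day month):

  seconds_per_month = 30 * 24 * 3600          # ~2.59 million seconds
  bytes_per_second  = 100_000_000 / 8.0       # 100 Mbit/s = 12.5 MB/s
  tb_per_month      = bytes_per_second * seconds_per_month / 1e12
  puts tb_per_month        # => ~32.4 TB at full utilisation
  puts tb_per_month / 2    # => ~16.2 TB at 50%, matching the "16ish" figure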

Still, though, S3's 9-12¢/GB bandwidth pricing seems a bit high. Bandwidth at DigitalOcean (presumably unsubsidized) is 2¢/GB, which comes out to a much more manageable $500 for 25 TB.

With dynamic content, CloudFlare has Railgun, which takes advantage of the fact that dynamic content is usually mostly static. Still, though, if you have 25 TB of dynamic content, I presume bandwidth stops becoming the limiting factor in your cost of operation.


CloudFlare cannot be making money on you as a customer. $20 for 24TB is too cheap.


It's offset by the many customers paying $20 and using 1 GB, I'd imagine.


I'm not sure if RubyGems gets more traffic/has more intense computational needs/has more users than OkCupid, but that used to be hosted for about ~2-3K/mo from what I recall. However, that's not amortizing the cost of the hardware.


> I guarantee you that no company I have worked for in the last 15 years could have ever run this type of infrastructure for $7K/month. It's absolutely amazing.

True, but let's not compare offerings of the past to now. There is still room for practical efficiency gains.


Are you including person costs in that $7k? If so, I totally agree.


Only the person costs associated with the configuration/management of the servers - not the people time associated with the code, and high level system administration (which you still need with AWS).

I.E. The people who bought the servers, racked the servers, went down to the CoLo at night, set up the virtualization environment, hooked up the routers, configured the routers, the switches, the firewalls, the vlans -- those people I am including.

I'm not including the DBAs who manage the schema, people who push the code, do the design, etc...


Have you worked for Rackspace, Linode or Digital Ocean?


I've worked for one of their (direct) large competitors, but haven't worked for those three companies.

I've currently got active accounts with all three of those VPS providers - I love them, and use them every day - particularly Linode, but also Slicehost/Rackspace, and DigitalOcean. I even have a bare metal server at ServerBeach - which I realize I need to shut down...

At this exact instant I have six terminal windows open across DO/Linode. I host a moderately popular California Food Blog, and have about 15 years experience in various companies that have had hosting responsibilities.

I'm not saying you can't do great things with the VPS providers - I'm just suggesting that the savings of $2-$3k (at most) with Digital Ocean would be more than offset by the technology risk and the hassle of having to re-invent a lot of the services that you get automatically from AWS.

That could change sometime in the (near) future - but right now, AWS is an easy (and honestly, all things considered, relatively cheap) solution for this type of application.


But they are just using Amazon's Cloudfront. They aren't using SQS or anything.

What technology risk is there in setting up Varnish and nginx on Digital Ocean? Or better yet some kind of out-of-the-box open source CDN. You would save a lot more than $2-3k.


I'm referring to Digital Ocean's technology. AWS (and Media Temple) had its teething pains as well in its first 4-5 years. I remember some fairly broad outages with their back-end storage - but they've mostly dealt with those, and the risk has gone down.

Note - there is another option - deploy on multiple platforms and be smart with your DNS balancing (http://www.dnsmadeeasy.com/services/global-traffic-director/) when serving content. Particularly now that Digital Ocean is in Singapore/Amsterdam/New York, I can think of some useful things I could do with $10/month droplets (2 terabytes of transfer each). $300/month, in theory, gets me 20 terabytes in Asia, 20 terabytes in Europe, and 20 terabytes in North America. Now, whether DO would shut me down if I actually started using that transfer is another question altogether...


Last time I tested DNS balancing (from DNSMadeEasy), it sometimes didn't work as well as you'd imagine, at least compared to EdgeCast or MaxCDN. Although that was quite a long time ago.

Would love to see any recent input.


There is no such thing as an open source CDN; there is a free CDN called Coral Cache (http://www.coralcdn.org/). CDNs cost money because you're dragging content from an origin to edge locations all across the world, and keeping it hot for client requests.

You could simply serve the content out of nginx, but you wouldn't see the performance benefits of keeping your content closest to the end user.


> The bias towards AWS for this type of application is ridiculous and a big waste of money.

They could get an even better deal by just going through a dedicated server provider (or even better, colocating).

There's little advantage with choosing DO versus going with a dedicated server provider (and again, colocating). I guess the advantage would be the control panel that they wouldn't use, having a few one-click stacks that they won't use, stuff like that.

If someone can afford a $7,000 AWS bill they can afford to put some money towards hardware and an onApp license if they want "cloudy" stuff. To colocate their hardware it would probably run them anywhere from $400-$800 a month depending on where they go. Their total bill would be decreased by $5500 a month. The upfront investment of the hardware wouldn't be more than $12,000 either. LOE? Probably two weeks with a competent sysadmin.

Yes you can have issues with your hardware and stuff and then you have to take care of that, but if you're good with your DC, they're great to you.


At $600/month you've only saved them $1,500/month (the hosting portion is only $2.1k), and now they also have physical hardware to manage, requiring a broader skillset from the volunteers, plus someone having to be in physical proximity for 'on-call' issues.

I don't know what datacentres tend to charge for data transfer, but as that's the largest item on the bill, it's the more salient point.

Also, just because it's not on the bill doesn't mean they're not using other AWS services; there are several free ones.


> To colocate their hardware it would probably run them anywhere from $400-$800 a month depending on where they go.

For one datacenter, but CloudFront gets you 40+.


Bandwidth is by far their biggest cost, colocation/dedicated hosting would save a substantial amount but you are still going to be looking at something in the ballpark of $1,000/mo for 1Gbps. (Unless Cogent has slashed prices even further)


Digital Ocean doesn't provide a CDN. EC2 only accounts for $1.4k in the bill, so I don't see how you would save $5,000.


Bandwidth on DO is (worst case) $20.48/TB, so their 17TB usage would cost $348/month. A far cry from $5,000.


You miss the point; bandwidth from a single source != CDN. Hosting without a CDN would be extremely slow for people who are far (in hops) away from DO. CDNs solve this and other issues.


How much does that really matter? Even going to the other side of the world is only 200ms or so, and the time taken to run rubygems is hardly a factor in just about any workflow I can imagine.

Think of it another way - what would be more valuable: RubyGems hosted on a CDN, or RubyGems on DO and a couple of grants for talented hackers to work on their gems full-time for a few months (a la GSoC)?

Even if you ARE concerned about latency, have one download server in the US (e.g. DO), one in Europe (e.g. Hetzner) and one in SE Asia (not sure who's cheap and good-ish there), and you'd still be at 1/4 the cost of AWS bandwidth or less.


Person who pays the bill here. Latency does matter, and we're paying an additional $1.5k new in Feb to improve European latency. RubyCentral can afford to spend the money to improve latency issues, so we do!


Really, why would that be the case? I mean the difference in latency.

A self-set-up Linode CDN across all six locations (Tokyo, JP; London, UK; Newark, NJ; Atlanta, GA; Dallas, TX; Fremont, CA) would have provided 48TB of pooled bandwidth at a very decent speed, for around $480. Linode's network is great, much better than DO's. I am not sure if it matches CloudFront, which isn't exactly the fastest CDN anyway.


Great info. Did you use rsync and push data out in a tree? How did you handle DNS? Did you do 302 redirects to closer servers (by some metric)?


Having volunteers spend countless hours to setup and maintain their own CDN is a preposterous idea.


Here

http://psyphi.net/blog/2013/12/content-delivery-network-cdn-...

I have 10 fingers, so that is definitely not "countless" hours of work. And no, maintenance is minimal or non-existent. You could even put smaller VPSes behind each NodeBalancer for HA, since Linode VPSes (unlike DO's) are deployed on physically different hardware.

While I say it is fair enough to use AWS because money doesn't matter, I think there are definitely some better alternatives for the same price (if you really care about latency), or cheaper options.


Loved the blog posting :)


Fair enough!


That's been my impression so far with repositories where you configure a mirror explicitly, like Debian or CPAN. I used to be diligent about doing "the right thing" and switching out the default (usually something in the U.S.) for a Danish or nearby mirror. But I've stopped caring much because it doesn't really seem to make any perceptible difference. If I remember to, I still will switch it just so I don't unnecessarily waste intercontinental infrastructure, but it doesn't make much difference to my own experience.


It's often difficult to migrate providers when the application is complex or the owners see value in the provider. They're falling right into the hands of most cloud providers' evil plans: they make it cheap to get started, but as time moves on, it becomes more difficult to migrate away.


We moved the stack to Amazon in about 60 hrs last year (gems were already on S3). Given that time involved writing a lot of chef recipes, I'd say if pushed we could move out again in an even shorter period of time.

Everything needed to build the rubygems.org stack can be found at https://github.com/rubygems/rubygems-aws


I guess hosting it in AWS is a benefit for integration with other services hosted in Amazon, like TravisCI (the most popular CI for open-source Ruby projects) and Heroku (the most popular hosting for Ruby projects).


There are already 30+ comments on this thread and no one has pointed out the obvious: this is all for the peanut gallery to laugh at Npm, Inc.

If the bill remained relatively consistent they could host Rubygems.org for ~28 months with 200K.


We run into the same cost-related problems for our CDN. What we did to solve it was to rent dedicated servers that are near AWS regions. We used Route53 latency-based routing to route traffic to those dedicated servers + Nginx + Lua. We're serving 300+ TB of traffic per month and the total price is just a percentage of the RubyGems AWS bill. There is some maintenance involved with this solution, and the problem is finding the right dedicated server providers.


That's not as bad as I was expecting. I was once working with a startup's infrastructure (>100 servers) and it was near $20k/mo (mostly reserved instances).


Yes, this seems quite reasonable considering the scale it handles.


Since it can take a bit of time to read through the invoice, here's a summary of the bill:

  CloudFront     $1,071
  Data Transfer  $3,597
  EC2            $2,184
  S3               $228

While "bandwidth" costs equate to ~$4,668/month, only $1,071 is CDN (CloudFront), with the balance just raw Data Transfer.

Since lots of folks are commenting, and not everyone realizes the difference, it's also a good time to point out the CloudFront vs. Data Transfer distinction.

Using Amazon's terms... Data Transfer means anything directly served/coming from EC2 or S3 (or a few other services which aren't relevant here), but NOT anything for CloudFront (which is, obviously, a separate line item, as shown above).

The bulk of CDN (CloudFront) usage ($735 worth or 69%) is US.

The bulk of raw bandwidth (Data Transfer) usage ($2,931, ~80%) is US East.


Is any of this good/bad/right/wrong? I have no idea. That depends quite a bit on what THEY are doing with it and why. For example, it can be cheaper to distribute from CloudFront versus straight from S3 for some use cases. Though, generally, you are not only looking at using CloudFront to save money over S3 ...there's typically a performance reason.

And sometimes the hosting costs simply don't matter. It's easy for us engineers - sitting here on HN at our keyboards - to play around with hypothetical ways to save money. This isn't necessarily a bad thing, but there are numerous things in IT that it doesn't make sense to optimize. Why? Because the ROI on the engineering time, CapEx, and OpEx (and the time, energy, and focus of ANYONE involved or impacted at all) to do the optimization doesn't outweigh the opportunity cost.

Sometimes there are simply better uses of our limited capital and time.

Not everything needs to be optimized. And the argument gets stronger when there are other factors that are more difficult to factor in: adopting a platform that isn't as widely known or isn't backed by a similar level of maturity (even with its quirks, at least they are well known), etc.

The risks/concerns not only vary between organizations, but often from one period of an organization's growth to the next. The beauty is every organization gets to make their own decision ...and none of them have to give a damn if the HN community agrees or not. :-)


While by no means insignificant, this bill is nowhere near what I'd imagine would warrant an HN post. I wouldn't be surprised if most startups beat this regularly.

The startup whose backend I co-created racks up an AWS bill that hovers around half a million dollars a month. We make use of all of the ways to save with Amazon: pre-paid reserved instances, negotiated deals, etc. And we're not even that big; imagine what Netflix's AWS bill must be.

We've tried other providers, toyed with co-locating, but at the end of the day the flexibility and cost benefit of IaaS outweighed the lower base price of CPU cycles when you roll it yourself.


> this bill is no where near what I'd imagine would warrant a HN post.

Can only guess at why folks like any post, but it's not necessarily how large the bill is. Maybe it's how low it is for a service that's widely relied on, or maybe it's the level of transparency, which turned out to include evanphx above showing up to answer questions about the project.


Absolutely, this is a transparency thing.

Compared to npm asking for $300,000 in donations to keep the thing running. I'm glad RubyGems can run for relatively so little, and be transparent in doing so.


With most of this being bandwidth costs, it seems like switching to a host like Digital Ocean would make more sense here. The bandwidth costs are a fraction of Amazon's in comparison.

As for the CDN, switching to something like Cloudflare might make more sense rather than relying on Cloudfront. At the least, there's a "US and EU only" option for edge locations which is considerably cheaper than the default option of all edge locations.


Wow, as someone who uses rubygems all day and is not in "US and EU only", I'm glad you're not involved in this project.


I presume you mentioned Cloudflare because of their "unlimited bandwidth". That comes with some constraints as to the use/application: https://www.cloudflare.com/terms.html

It's possible RubyGems.org would be classified under one of the "not really allowed here" terms.


> With most of this being bandwidth costs, it seems like switching to a host like Digital Ocean would make more sense here. The bandwidth costs are a fraction of Amazon's in comparison.

That's just replacing bandwidth costs with build-and-run-your-own-CDN costs.


Why was this even posted? Looking for help reducing it? Complaining about the amount spent? Looking for a pat on the back?

I saw a talk at Ruby/RailsConf about the work spent building and maintaining rubygems.org. It smelled a bit martyrish. "Look at the thankless work we perform behind the scenes".

Well, if help is required building or operating rubygems.org, please just say so. As a seasoned Ruby developer I'd be more than happy to contribute development time, and as a daily user I'd be willing to commit financially in a small way towards operating costs. Not that that is required - given all the offers of free hosting this post received in response.

If we don't know about a problem, we can't help. Just ask if help is what you want. It's not like the Ruby community doesn't have great communication channels.


This seems reasonable to me? Why is this a newsworthy item?


Transparency is nice and the guy paying the bill answered questions for curious minds.


Exactly. I came here expecting to see a $200k/mo bill, tbh.


How can I donate to Ruby Central? I checked out their "Support" page, yet it didn't help much. Any easier ways, like donating via PayPal?


Could easily knock multiple thousands of bucks off of that by just reserving the EC2 servers you know you'll need, plus reserving the CloudFront bandwidth you know you'll need (for the amount of data served, I believe you should be able to cut CF costs by at least half).

3-year heavy EC2 reservations pay for themselves in ~7 months; CloudFront reserved bandwidth is just a 12-month agreement, so that costs nothing up front. You might want to experiment with some different instance types though, depending on your resource utilization. Personally I really like using the new c3.large instances for my web servers and anything else that needs more CPU than memory, proportionately. If the standard instances suit your needs better, you still might want to move to the m3 class.

Aside from those two items, it looks like you are sending out a considerable amount of stuff from EC2 to the internet (27 TB transfer out from US-East). I'd recommend looking at whether you could set up a CloudFront distribution with your EC2 servers as its origin.


I had no idea it cost this much to host rubygems.org.

The website says that hosting is provided by BlueBox?


They might provide it by paying for it rather than hosting it themselves.


This seems to be mostly their CDN bill. Not sure, but I don't really consider a CDN as part of hosting fees, more of a general infrastructure fee.


This includes all our compute fees too.


Bluebox hosts one of the gem mirrors...AWS is the primary source though.


Could it be possible to cache the version list locally and then just update it incrementally, e.g. via Git? Wouldn't this save both download time (for us), and bandwidth (for RubyGems)?


Interesting. Looks like most of it is bandwidth cost.


Assuming there is a direct correlation between requests and projects, we can make a guesstimate of how many Ruby developers and projects are active.


That's it?


WHY DOWNLOAD!!! WHY



