Linode Simplifies Plans, Reveals CPU Priority (linode.com)
115 points by john2373 on Nov 27, 2012 | 91 comments



It wasn't immediately clear to me what changed. Here's the old homepage:

http://web.archive.org/web/20110713211922/http://www.linode....

It appears Linode removed the 768 and 1536 plans, renamed the 1024/2048/4096 plans to 1GB/2GB/4GB, and added an 8GB plan. They also added a row in the table showing CPU priority. The 512 plan is unchanged, as are specs and prices for the other three remaining plans.


They didn't actually add an 8GB plan; they've always had larger plans, just too many to display on the landing page. If you went into pricing or tried to sign up, the larger plans were always available. Since they scrubbed some plans, I guess there's room for the 8GB on the landing page now.


I am consistently surprised how many VM hosts refuse to tell you what your CPU guarantees are. EC2 at least gives you a general equivalence to specific hardware. Rackspace refuses to go into detail. Others I've spoken to will only commit to saying "core", without specifying what the reference hardware is. And even then, actually making sure you've got a commit of that CPU is a whole other issue. I think EC2's compute units are a commit, though.

Linode's "priority" seems like ex-Slicehost's way of saying "hey bigger machines get a higher proportion"... nothing really useful for figuring out exactly what you're buying.

And you can't ever really figure out what you have: things could be severely over-committed, and you'll never know until you get starved. So you can't just benchmark your way out of it.


I cofounded slicehost and spent a lot of time at Rackspace thinking about this exact problem relative to Rackspace Cloud Servers.

The real issue is hardware skew. We like to buy/sell and build on cloud as if it is a pure utility where every unit is equivalent, but every year processors change, go EOL, etc. As a provider you have to make a call about how much of that complexity you expose to the end customer. Some customers want complete transparency, which I understand, but the downside of that is hundreds of variations of pricing options and complications around managing heterogeneity (e.g. how do you represent simply how much available capacity there is when you effectively have 300 variations of the same 'size' instance).

Of all the component parts of compute, CPU is the one that changes the quickest. Disk capacity is easy to model, disk throughput hasn't changed much at all (minus the introduction of SSD), and memory is pretty stable (minus some increases in databus rates). All in, for the typical instance in a multi-tenant virtual environment, the two most vaguely defined attributes are cpu and i/o, same today as in 2006. With the increasing use of 10gigE as well as SSD, hopefully we finally push through the i/o piece. Not sure what it will take to get us to a clean way to model and describe 'standard cpu' as a provider.

Also, if anyone has specific questions about Slicehost cpu priority handling circa 2006-2008 or Rackspace Cloud Servers cpu pre OpenStack, just ask and I'll be happy to answer.


Create some sort of composite performance profile, then give me CPU information in terms of that. EC2 sorta does this[1], and although it's far from precise, it at least gives us a rough guideline and an actual statement of what we're going to get.

The other issue, which I don't see Rackspace (or Slicehost or many others) addressing is the actual commit. It's fine to say "you get 2 cores", but then not tell me if those are reserved or if they might be sometimes overcommitted. This is a larger issue, because it means things might work fine... until they don't.

(I tried one provider out, and things worked swell in all our tests, but every so often in production the entire VM would get paused for a few hundred ms or more; something that wouldn't happen if there was a non-over-commitment guarantee. Right?)

1: "One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor."


We did talk about something basically like what you describe for both slicehost and RAX, but it really doesn't solve the problem. Take that EC2 definition itself (which is frozen in time in 2007, surprise surprise): how many 2007 Opterons are on the market now? None. They are using completely different CPUs that they've benchmarked to be roughly equivalent, but by what benchmark? Performance just isn't that simple to benchmark, and there are hidden tradeoffs.

Regarding the guaranteed capacity, this is another one we spent a lot of time thinking about. At the time, EC2 made the call of guaranteeing CPU share by hard-limiting the ceiling of performance for any VM. We (slicehost and RAX) took the opposite approach of weighting CPU by VM size when under contention, but allowing for burstability to full core power when the box was idle. That meant that in the case of low-ish utilization customers, Slice/RAX VMs had a higher average performance but EC2 had more predictable performance.

It's a super interesting tradeoff. People would benchmark us and we'd come out way better one day but not so hot the next, and that was the reason why. The flip side of that was that EC2 is stranding CPU capacity on physical hosts that are full of underutilized VMs. Even without us describing this well to the market, customers and workloads adjusted, and use cases migrated to whichever provider fit better (higher 'average' performance with tolerance for variability went to RAX, hardline predictable guarantees went to EC2). We had some video transcoding services that could not deal with the volatility and preferred lower performance that was predictable (and therefore kept a lot of workload on AWS).
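Roughly, in Xen credit-scheduler terms (illustrative numbers only, not anyone's actual production settings):

    # EC2-style hard ceiling: cap each guest at a fixed slice of CPU,
    # even when the rest of the host is idle. Cap is a percentage of
    # one core (100 = one full core).
    xm sched-credit -d guest-small -c 50

    # Slicehost/RAX-style weighting: no cap, but under contention CPU
    # time is divided in proportion to each guest's weight (default 256),
    # and any guest can burst to full speed when the box is idle.
    xm sched-credit -d guest-small -w 256 -c 0
    xm sched-credit -d guest-large -w 1024 -c 0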

A similar tradeoff where AWS did it one way and we did it the other is around the expectation of persistence on local disk versus ephemeral VMs. Persistence (what we did) was more like what people were used to; ephemeral is more 'cloudy'. EBS as well as hosted datastore services mitigated the difference, but the AWS approach required legacy customers and applications to re-architect for cloud, and ours was more friendly to transitional customers or people who just wanted classic-feeling hosting on demand. The AWS bet was right in the sense that more of the usage of cloud (and of everything, actually) is in the future and not the past, and the choices we were making were more of a gap solution (but with eyes wide open about that).


It'd be cool if someone did a random survey system that launches instances on various providers, does some benchmarking, and then reports the results. It'd have to be anonymous, so that the providers don't feed it "nice" hardware.

The problem is how to make money at it.


But that's my point: You can't benchmark your way out of an overcommit scenario. It won't fail until it does.

You could test (or run in production) for a month, and every day things work fine. Then for whatever reason (extra sales, internal policy changing, other customers' work profiles increase) you end up CPU starved.

Without a commit/guarantee, "Past performance is not an indication of future results."


I think that is what these guys are going for:

http://serverbear.com/


I've never heard of any of those companies. I'd be careful going with someone that you don't know a single other person who uses. Social proof matters for service businesses like hosting, because point in time performance is such a weak indicator of overall experience.


Since you're being quite candid, how did Slicehost make it in the beginning? How do any hosting companies achieve the social proof? Every hosting company has to be hosting some people otherwise they likely wouldn't exist. But maybe there's just that many customers in the world so that you haven't heard of a particular company because their customers are spread out?

Please don't take that as confrontational, I would just like to hear your opinion since you really are the authority here.


In general starting from zero you'll find some customers somehow, even just randomly and then you have to be awesome for them and build from there. The level of churn and flux at the low end of hosting is actually pretty high. You could put up little more than an unbounce page and probably get signups with credit cards at a trickle volume.

For us at Slicehost, we had an amazingly fortuitous start, due in equal parts to strategy and luck. We saw pretty clearly that rails was picking up steam really quickly and the hosting options were pretty shitty (shared hosting at the time didn't support the versions of ruby and rails people needed, not to mention the memory hungriness of the framework, and dedicated was still fairly pricey, with $100-200/month being dirt cheap).

So we picked a really ripe initial niche market to spend time making ourselves visible in, which we did in forums, chat rooms, etc. The luck came in that we got some pretty vocal early customers who all had a great experience and evangelized us. That was lucky because either of those factors could have easily gone the other way. They could have been quiet customers or we could have had early blips in service (we had plenty of later blips, we just had a nice patch of initial smooth sailing).


...every year processors change...300 variations of the same 'size' instance...

Shouldn't it be four variations? (Or maybe only two if you count a tick and a tock as the same.) Would it really kill providers to offer something like Nehalem-1C-4GB and SNB-1C-8GB?


It's 4 CPU variations if you have purchasing agreements in place. If you're buying off the shelf from the vendors then yes it can vary much more. This is even evident in the desktop market. Compare the Dell business class to the Dell consumer class. The consumer class will change frequently, but in the business class you can buy 2 year old processors to match what you've got.


Yes you can buy 2 year old processors but then you leave 2 years of improvement/price falls on the table. Is a definable 70% performance better than a vague handwavy 100%? Depends on your use case but if predictability of performance (of a single host) is a priority you should be using a dedicated box.


My point was mostly that they sell other stuff but you have to go looking for it. The business class desktops are mostly for IT standards, so they don't have to update the image so frequently.

Making major purchases once a year and always getting the new stuff, you get 4 processors over the life of the hardware warranty (4 years, which I also believe is the timeframe for capital depreciation).

Purchasing 2x a year, you'll end up with 8 CPUs in the environment unless you opt for the 2H purchase to be the same as the 1H purchase.

If you're buying new hardware every quarter, those agreements become more important if you want to keep your environment homogeneous.

If you're running thousands of servers, is it better to pay a premium on your hardware to be able to get the same machines for the entire year, or do you want the incremental improvements and the environment fragmentation that come with updating? Different models sometimes have different utilities to manage them, and now you're writing abstraction layers to handle mass updates.

I agree that predictability of performance requires dedicated hardware. I'm just going from the IT perspective.


For just cpu change, less than 300 but certainly more than 4. In just the 2 years from 2006 to 2008 before we exited to Rackspace, we had more than a dozen cpu variants in production. We loaded the machines in a way where 'roughly' (with some hand-waving) things were the same for a given VM machine to machine, which allowed us to keep charging 20 bucks a month for the same instance regardless of which physical host it was on.


@seats I've been a Slicehost customer for 3 years. Was very happy with the service until bought by Rackspace. Currently looking to migrate my servers to somewhere else, blah blah.

Any interest on your part in starting another VPS hosting company? I figure if you did it right once, you can do it again. You'd likely have a customer in me.


Thanks that's nice to hear.

Not in the cards for me. Technically still under non-compete, but even if that weren't the case at this point I am connected to so many other hosting providers that I'd rather just support them in doing the right things and work on other stuff. Right now I run a startup accelerator and am spending my time as a full time seed stage investor.

What's your beef with the Rackspace service right now? As far as 'big' providers, besides the obvious AWS and Rackspace, you should also consider SoftLayer and Joyent, and depending on what you are doing, the PaaS providers are all getting pretty solid now - Heroku, AppFog, DotCloud, GAE, even Azure (node is a first class citizen on Azure now).

If you want something that feels really VPS-y and is run by a Slicehost-style team, Linode is definitely a solid option (mentioning them first since this is their thread). I'm also a fan of DigitalOcean (they went through our accelerator and I've spent a lot of time with them), and 6sync is run by Mario Danic, who was a super active early Slicehost community member.


Thanks @seats. I was a Rackspace customer via a previous company a few years ago and wasn't happy with their reliability and pricing. I'm leaning towards Linode because they offer the most similar experience to Slicehost and have excellent pricing. I don't have personal experience with their reliability yet, but from my research it's top notch.


I used Slicehost and Linode in parallel until it was obvious that Linode's CPU performance was quite consistent and Slicehost's would waver from marginally better to significantly worse. This is probably because Linode is configured to use a fixed CPU time allocation method.

Amazon's CPU commitments are just as reliable as Linode's, but they seem to be less for the same dollar.

As for actual hardware, to go based on `/proc/cpuinfo`, some of the older instances I manage show up as Xeon L5420 CPUs at 2.5GHz, and the newer ones tend to be L5520 at 2.27GHz for what that's worth. They seem fairly consistent, and if you don't like what you get, just like AWS you can delete the instance and get a new one. You get refunded for your unused time.


You get refunded for your unused time.

Please notice that the minimum time block is a day. If you spin up an instance, you pay for the whole day.


> Others I've spoken to will only commit to saying "core", without specifying what the reference hardware is.

There's probably no reference hardware beyond `cat /proc/cpuinfo`.


Even that is not accurate... on an OpenVZ instance set for 2 cores, this command gives me the specs on the CPU, an E5405, twice. Of course, this chip actually has 4 cores, so to show only 2 is not really accurate.

On a nearly-identical XenServer instance, it shows the same info, showing 2 CPUs but identifying the CPU as a 6-core Opteron.

From this, however, you cannot infer what percentage of those 2 virtual CPUs you are going to get.
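For what it's worth, here's roughly what you can see from inside a guest (none of which tells you what share of the host you'll actually get):

    # model and clock speed of whatever the host happens to be running
    grep "model name" /proc/cpuinfo | sort | uniq -c

    # number of virtual CPUs presented to the guest
    grep -c ^processor /proc/cpuinfo

    # "siblings" vs "cpu cores" hints at threads vs full cores,
    # though inside a VM these numbers are often not meaningful
    egrep "siblings|cpu cores" /proc/cpuinfo | sort | uniq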


/proc/cpuinfo for me shows 4 Intel Xeon L5520 @ 2.27GHz


My Linode also shows 4 L5520, but that's probably 2 cores with 4 threads as the L5520 [1] has 4 cores and 8 threads in total.

http://ark.intel.com/products/40201/Intel-Xeon-Processor-L55...


Linode keeps instances homogeneous, so only the same type/size of instance exists on a given bare-metal machine.

Unless that's changed, that would mean that all instances running on a given machine share the same CPU Priority; there will just be fewer instances demanding service from the CPU(s) the larger the plan you have.

...so wondering if that's what CPU Priority means, or if Linode is about to mix instance sizes on same hardware?


Great question. If only the same type/size exists on a machine, then these priority numbers are just a rough measure of relative CPU power (between plans).


Linode forum discussion on this topic (that I could find): http://forum.linode.com/viewtopic.php?f=17&t=9544

And maybe I'm just ignorant on the topic, but what exactly does CPU priority do here? I understand basic Linux process priority (like the 'nice' command), but how exactly does CPU priority behave on Linode? Searching through their docs, I couldn't find anything.

EDIT: to maybe answer my own question, maybe this is the Xen credit scheduler? http://wiki.xen.org/wiki/Credit_Scheduler
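If it is the credit scheduler, then each guest gets a weight and CPU time under contention is split in proportion to those weights. Guessing at what the host side might look like (we obviously can't see Linode's dom0, so this output is purely illustrative):

    # list per-domain weight/cap under Xen's credit scheduler (run in dom0)
    $ xm sched-credit
    Name         ID  Weight  Cap
    Domain-0      0     256    0
    guest-1x      3     256    0
    guest-16x     7    4096    0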


2x CPU priority may simply mean that you're sharing the host with 1/2 as many other guests (and each guest is guaranteed at least its equal share of the CPU). They've stated publicly in the past that larger Linodes share the host with proportionally fewer other guests.


What do they mean by CPU Priority?

I'm assuming that meant access to part of a processor, but how does that work with 4 CPU and 16x priority? (I'm working on the assumption that 1x priority ~= 1 core.) Of course, my assumption is probably wrong - just curious how this affects the load on a given server and how the VPS interacts with other VPS's on that node.


It's probably arbitrary units, so 16x just means 16 times more than 1x. I suspect that 1x is something like 1/8 or 1/4 of a core.


But the server configuration of 16x is different than 1x; they wouldn't be hosted on the same server. I still don't understand what the priority is relative to.


For example, if 1x means 32 VMs per server and 16x means 2 VMs per server (assuming fair sharing), then the labeling is proportional to the minimum performance you can expect.
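Back-of-the-envelope, with purely hypothetical host sizes (8 cores per box and those guest counts are just for illustration):

    # minimum cores per guest under full contention, fair sharing
    $ echo "scale=2; 8 / 32" | bc    # 1x plan, 32 guests on an 8-core host
    .25
    $ echo "scale=2; 8 / 2" | bc     # 16x plan, 2 guests on the same host
    4.00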


Am I the only one here who thinks that for personal use, owning a server in house is a better choice than using a hosted VPS or server?

It's quite easy to get a decent micro HP server (even with SSD storage) for under $1000, which would cost $150.00 - $300.00 a month for an equivalent plan on Linode. Suppose you upgrade your server every two years; the monthly cost of the server is less than $50. You get dedicated CPU time and I/O, and permission to manage everything.

Internet bandwidth might be a problem. But let's put ourselves 2 or 3 years in the future. What if you already have Gigabit Internet like Google Fiber for $70/mo?

And you get other benefits for owning a server in your house. Since it's connected to your home LAN, it can be used to help build a smart home, control smart sensors/cameras, or serve as a media server.

Am I missing something here?


> Am I missing something here?

You are missing a lot of things.

1. Linode et al buy top-end hardware. It is, generally, going to be more reliable.

2. Linode et al have redundancy in multiple parts of their system. Redundant power, redundant networking, redundant disks, redundancy all over the damn place. A server sitting in a hallway closet does not have these advantages.

3. Finally, you assume that your time is worthless; as in having a $0/hr value.

I charge a lot more than $0/hr for my time. If, in actual fact, I could successfully farm out my little Wordpress blog network to a reliable host who charged a lot more than Linode, I would do so in a heartbeat because it makes financial and hair-pulling sense.

I farm out the management of physical servers to Linode for the same reason. I am nearly 32, my time is expensive, my patience is short and my interest in hardware has long since abated because I have other shit to do. Linode is a bargain from my POV.


If you have fibre to your home and a UPS and are okay dealing with the hardware? sure. You are better off. (cooling, in most places, is unlikely to be a huge deal if you only have one server. You'll cook before the computers will.)

The last mile is a huge problem. If we all get gigabit fibre to the home in a few years? everything will change, and of course, you will be right.

But, here in reality, if you want a network connection with a decent upload speed and decent reliability, you are paying a kilobuck or more a month. 'round here, it's usually $3-$5K/month for 100 to 1000Mbps (Up; you can get 100M down from comcast for like $400, but that's only 10M up.) and this is silicon valley. the place is lousy with dark fiber.

(It's better if you live in Santa Clara or Palo Alto; both places have municipal fiber. But you are still talking tens of kilobucks to get the fiber from the street to your house, and that's if you are very close to the city fiber, and then you've gotta buy bandwidth at a datacenter.)

But yeah, all that said, there are some places with decent last-mile internet; sacramento has had surewest FTTH for far longer than google has. Some areas, Verizon does it. Maybe we will all have it in a few years? It sure would be nice. But I ain't holdin' my breath.


There's quite a bit that you're missing.

Let's start with point #1. You're paying $1000 upfront. With a VPS, you can pay a few bucks per month to get very decent performance (assuming you go with a LEB instead of an overpriced Linode). You assume that you could use your home internet, but the reality is that almost every consumer ISP on the face of the earth won't allow customers to run servers. Can you get away with it? Usually, yes. Is it a good idea? Not at all.

Why spend the equivalent of $50/mo? You can get budget dedicated servers for that price range, with a heck of a lot better network resources, and no need to maintain your own hardware.

Really, there's a ridiculously long list of reasons that running any public-facing server from your home is a horrible idea. Take the game server I used to run as an example — I'd be completely and utterly screwed if my home connection was getting 4Gbps DDoS attacks, yet with it being on a remote server, I have options to mitigate it or even ignore it (nullroute, yay).

Edit: There's a ridiculously long list of reasons why home-hosting is bad.


For the things I actually use a personal server for, $150/month is overkill. A server that used to handle my family’s email* and runs my personal Web site runs just fine on my Linode 512. Heck, if they offered a “256” plan with half the capacity for $9.95/month, I’d be tempted to switch to that.

* I moved to gmail, not because of cost, but because I got tired of managing the spam filter.


Yes. Your home internet connection is not nearly as reliable as a datacenter's. (Also, you have guests over and the 5 year old unplugs the machine. Your home catches on fire or, worse, you decide to move to another city.)


And then a hurricane hits and you lose power for a week... (this was the event that deterred my super smart plan of hosting in my house years ago)


I'm actually disappointed. I liked the 768 package, it was big enough that you could run a fair amount of stuff [0], and cost only $30/month. I was planning on buying a new one over my Christmas holidays and moving my stuff over so I could get onto a newer CentOS. CPU has never been a problem, so this new priority is meaningless to me.

For my needs, $30/mo was about as much as I'd spend on a server to host mine and a few friends' blogs, some photos, and some remote services. $40 is too much for me and the lower plan just doesn't have enough RAM to be interesting.

So now my options are 1) find somewhere else, or 2) backup my data and rebuild the box in place.

0 - I manage a few Linode 768s including my own. 768 was a great size for a few small blogs and a low traffic Rails site, or a larger traffic blog.


Another potential option is using a Linode 512 with "extra" RAM (see the Extras tab in the Linode Manager).

Additionally, I've always found the Linode support folks to be fairly accommodating, so maybe it's worth asking if they can still provision 768s.


Yea, some people suggested that in the forums. It doesn't end up being the full 768 (which is not the end of the world) but you also lose the extra HDD and bandwidth so you're actually paying more at that point.


768 is still available as a resize option - not sure if that's a mistake though...


Have you thought about switching over to a kimsufi.ie server? Your $30 will go a lot further and you're not sharing a disk or CPU. I'd only stick with Linode if you need the scalability.


I recently evaluated some cloud providers. There were differences of 10x latency for a bunch of basic (unix filesystem plus some bash script) level operations between EC2 and Rackspace. The Rackspace people failed to take complaints seriously, so we took our business elsewhere.

EC2 is good but their spin-up time is crap.

Though same-kernel is obviously a security reduction, the speed is far better: I for one can't wait to see more LXC and other lightweight virt stuff being made available with real cgroup-level guarantees.


...differences of 10x latency ... between EC2 and Rackspace...

Was that PV vs. HVM by any chance?


Anyone care to share their experience with linode.com vs. http://prgmr.com/xen/ ?

(We setup a few vps's with rackspace and have been happy so far.)


I've been with Linode for just over 2 years now (I think my monthly bill is just north of $300, about 10 instances with them) and I'd highly recommend them. I've had maybe 4 or 5 servers go down over my life as a customer, normally due to ~10 minute network issues. I once had a server go down due to the host machine, and I had it back within 30 minutes.

The best thing about Linode is the support: if an issue happens they open a ticket with me and if I reply with questions / clarifications I'm replied to within a couple of minutes. Their pricing is a bit higher than elsewhere but after having shitty experiences with another company (vps.net) I decided to bite the bullet and switch and haven't regretted it since.

The billing isn't as flexible as EC2 (but then I guess they're different markets) however you get pro-rated payments to the day. So a server up for 10 days will cost 33% of the monthly cost and they refund the amount to you in account credit when you remove the server. Flexible enough that it can be helpful when you just need a server for a couple of days. Oh and their nodebalancer product is great.
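(Rough arithmetic, assuming a 30-day month:)

    $ echo "scale=2; 10 / 30 * 100" | bc    # 10 days out of 30
    33.00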

I have seen complaints about one of their datacentres having some issues (Newark) but I can't comment on that as I use their London datacentre almost exclusively.


They are kind of different beasts. Linode can spin servers up and down rather quickly and you can deploy different OS's all through the online menu.

With Prgmr, you buy a machine and send an SSH key; they set it up and send you the bill. Then you get a login console at xxxxxx.prgmr.com, where you log in and set up a user or deploy your own OS using the centos recovery image. But you can't create and destroy servers like on Linode or AWS or Rackspace.


I've had a couple of prgmr VPSes for about a year and I'll say this: I've never had any interruptions, they are always fast when sshed into, and they make good web servers. The only problem, and it's kind of a big one, is that support sucks. Resetting your ssh key (which I stupidly have needed to do a couple of times) takes about 3 days. You send them an email and they respond... eventually. But you get what you pay for, and it's hard to ask for much service when you're paying those low prices. For the price, the servers themselves are awesome.

I'm switching to Azure because their prices are reasonable and you get the full management experience.


Fairly similar performance, but almost $20 difference in price.

Here's some data comparing the 1GB plans:

Linode 1GB: http://serverbear.com/13-linode-1024-linode

Prgmr 1GB: http://serverbear.com/1709-1024mib-prgmr-com


I've only been with Linode for a year or so, after coming from a Mediatemple Gs plan which costs the same per month but lacks in performance. I've found Linode's support to be better than expected: I am somewhat new to administering my own server via a command line and accidentally destroyed my site (a configuration issue somewhere deep while trying to set up email), and they were able to bring it back for me. I've had nothing but a great experience with Linode, and their helpful guides for installing Nginx, Memcached, and Wordpress (via the command line), plus the third-party apps they offer, definitely help considering I am no system administrator by any means.

I am currently hosting one major site running Wordpress on my Linode box, which gets roughly 17,000 uniques per month, coupled with a plethora of other domain names and blogs (about 10 other sites that don't get nearly as much traffic), all running on their 512MB plan, and I haven't hit any kind of resource limit in terms of CPU, space, memory or bandwidth just yet. I am pretty amazed that a small 512MB box configured correctly can handle what I've thrown at it.

If you're new to managing your own server, get their $5 per month backup service (trust me, you'll need it). Because as you're learning, you're going to potentially destroy and break your site a lot, and it's easier to revert to a backup than it is to decipher and fix Linux configuration issues when you have no idea where to start or even what to search for on Google. Restoring from a backup is pretty quick as well.

My experience with VPS hosting (I've dabbled with Rackspace before, and a Mediatemple Dedicated Virtual server as well) is pretty limited, but I have yet to see an affordable host that lets you destroy, create and rebuild instances as quickly as Linode does.


We've had a great experience with Linode, horrible with Rackspace. Rackspace London is a disaster - I'd stay far away from them if possible.


Would you mind sharing more? I've never heard a bad thing about Rackspace.


Linode has consistently amazed me with their stable service and excellent customer support. I've only needed support a handful of times, but when I submitted tickets a response usually came within 10 minutes, and never took more than 30 minutes (during normal pacific-time business hours).

I've had fewer issues with my Linode hosts than with EC2. Granted, the services offered by EC2 are much more advanced -- VPC, routing, firewalls, NAT, etc.. so perhaps this is to be expected.


I use Linode for the name servers for my DNS hosting service SlickDNS (https://www.slickdns.com) and am very happy overall with their stability and performance (much better than EC2 at the low end).


Out of curiosity; wouldn't a Raspberry Pi be sufficient for the lower levels if you don't account for reliability and high-performance?

I am very happy at the moment managing two Linode accounts, one for personal use and another for an organization I am the IT guy for.


From my testing the Raspberry Pi benchmarks similar to the Amazon Micro instance. Happy to post some results if you want to see them.


That would be interesting. This offer to colo a RasPi for free (which was on HN awhile back) is apparently still valid:

https://www.edis.at/en/server/colocation/austria/raspberrypi...

I've got two Raspberry Pis coming to me in the mail, so I'm thinking I might do it.


Here are two Raspberry Pi benchmarks (these are also from the pre-512MB model); bear in mind these are both just run from home networks:

http://serverbear.com/benchmark/2012/09/11/ENc5kl1X2ZciF0LZ

http://serverbear.com/benchmark/2012/09/08/gcMHO1PDOY6Crq2W

Compare with the Micro performance:

http://serverbear.com/166-micro-amazon-web-services


Please do. I would have thought the micro would be faster than the Pi (I have a Pi)!!!


In my limited experience with Amazon micro instances, the I/O throughput on them is surprisingly abysmal.

I'm not surprised that a Pi is similar in overall performance (I have a Pi as well).


The change (removal of some plans) is a couple weeks old.

About CPU priority, Linode never kept it a secret. For the small VPS (512MB RAM), you get a guaranteed 1/20 of a 4-core Xeon processor, and it scales linearly with each plan's RAM.

As explained on their FAQ, their machines have 8 cores each and house 40 512MB VPS.
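Those FAQ numbers line up either way you compute the floor (assuming equal weights across the guests):

    # 8 cores shared fairly by 40 512MB guests
    $ echo "scale=2; 8 / 40" | bc
    .20
    # same as "1/20 of a 4 core" processor
    $ echo "scale=2; 4 / 20" | bc
    .20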


While I love Linode (it's my VPS host of choice), it's always bothered me that they don't do any kind of volume discount - i.e. the cost of the 8 GB plan is just 16 times that of the 512 MB.

I was hoping this change would rectify that, wishful thinking I suppose.


The CPU priority may be considered a volume discount.


Why would someone get a linode when they can get a dedicated server for $15? http://news.ycombinator.com/item?id=4838729


And why would they get a Linode when they can get a VPS from another provider for much, much cheaper? They don't bring much to the table for the price that a cheaper provider can't offer.


Features? Performance? A lot of people have benchmarked various cloud providers and Linode consistently gets high marks on CPU and disk IO. That particular $15/month dedicated gets you a single-core (!!!) Atom/Celeron processor. Who knows what hard drive they put on there. At $15 a month for a slot on the rack, I doubt it's any good.

For the longest time I lived on a single 512 Linode and ran 6 mid-sized Django sites, all with postgres and redis on the same machine. I knew how to keep things lean (taking advantage of varnish, nginx, uwsgi) and even used that Linode for an ongoing mumble server and irc. I've since got more Linodes (grew with more sites) and split up the responsibilities, but I still remember getting a lot of performance out of that one little instance.

Also, Linode brings other features to the table, like a great API, easy to use dashboard, free nameserver, and availability in various geographic locations. Granted, my opinion is biased - I'm a really happy customer there. But I have had experience managing other machines on Slicehost (horrible experience) and EC2 (decent enough, but doesn't make me smile).


I have a BuyVM VPS, even with the filtered IP addon it's still under 1/2 the cost of a Linode with inferior allocations of disk and bandwidth. According to ServerBear, it has a (much) higher overall score, slightly worse average network/unixbench performance, and MUCH better disk performance. It's a great little VPS, and the only issues I've had were sub-par routing to some places for a few months (fixed completely when they bought a new router for SJC — now it maxes out my home line and does great in overall network performance). I've had my VPS for almost 12 months now, and I would never go to Linode...

An API sounds cool, but I have no use for that. The control panel is excellent, I have no reason to use Linode instead. BuyVM has free DNS (and 5GB of free backup space), so no reason to use Linode over them. BuyVM has SJC and Buffalo locations, sure, they could use more, but I don't need multiple servers. I've stuffed quite a bit of stuff onto my VPS, and performance has been amazing. They've even absorbed large DRDoS attacks for me with the filtered IPs, and were willing to accommodate my unusual situation (the DRDoS source slaves didn't realize the source IP was spoofed, and sent abuse reports to my provider, which were useless as they were the ones attacking me) — after explaining the situation to them, they agreed to ignore any further reports from said network. I currently have three low end virtual servers for (a lot) of things, and my favorite out of all of them is the BuyVM VPS. So, at least in my case, I can see no possible reason I'd want to go with a provider that is 2x the cost, has historic security issues, has been reported to be unwilling to deal with any sort of DDoS attack whatsoever beyond a nullroute, and offers inferior performance and resources.

I actually don't like the idea of getting an OVH budget dedi over a VPS for any type of production site; it's just not as stable.


Since there are all VPS users here, what do you think is the best way to market a VPS product.. or rather, how did you end up becoming a Linode customer?


The best way to market a VPS product is to make a better VPS product. Linode offers a very good product, but it could be better. Their dashboard and management utilities are better than most, but aren't especially manager friendly and don't scale well to teams.

My ideal VPS provider would be somewhere between Heroku and Linode, offering self-managed hosting when you want it, and fully-managed hosting where you need it.


I've heard good things about Linode, but ultimately, why not get a dedicated server for just a little more money? I pay 30€/month for mine with Hetzner, and yesterday there was another host with prices starting from 10€.

So what is the appeal of Linode? That you can upgrade to a faster server quickly?


Some things that may be a factor:

- If your hardware says goodbye, your server says goodbye as well. It needs to be physically rebuilt. With Linode, maintenance time means your server shuts down here and reboots somewhere else.

- Can you reinstall your server, pick a new distro automatically?

- Can you add more memory or more storage to your server with a click?


I'm not much of an admin, but is it so much easier to transfer a virtual image to another server than to transfer an installation from one server to another?

Presumably as long as the server your VM sits on has more memory than your VM, you can increase memory easily. But the maximum might only be what a dedicated server would have given you from the start?

Edit: I just checked, seems Hetzner has a server with 16GB RAM for 49€/month (64$). The maximum Linode VM with 8GB sets you back 320€/month.

It seems the 49€ is the cheapest standard Hetzner server atm, but you can get cheaper ones via their auctions. Of course then if you need more memory you have to move server, not sure how complicated that really is...


Depending upon your provider, it can be trivial to transfer VMs from one host to another. So if you want more resources than the physical host has available, or there is a hardware failure you can be up and running again in minutes (and at the company I used to work for, we automatically did the transfer, so the customer didn't even need to get out of bed). Of course, you pay for that, both in performance and cost, but that's the decision you have to make.

Edit: overuse of the word "host"


I wonder how these new plans will affect existing users. My plan falls directly between two of these simplified plans. The prices seem the same still, so it would cost me $15 more a month to move up to the closest new package.


caker (the CEO) mentioned in IRC that current users will be able to resize into those plans for the time being, they just won't be able to add new Linodes with those plans.


This is disappointing. My Rails app gets just enough traffic that it uses 1.2-1.4 gigs of ram on average. The 1.5 gig plan was perfect for me and I've used it without issues for years now.


Weird they haven't updated their blog with a post about this.


Have they updated their security and disclosure policies? If not, they can remain in my "dodgy vendor who you can't trust" list.

For those that don't remember, hackers managed to get root access to several VPSes via some Linode vulnerability. They didn't bother to let customers know. Didn't bother to update their status/website. Didn't bother to tell anyone what they've done to fix it. Compare that with CloudFlare: http://blog.cloudflare.com/post-mortem-todays-attack-apparen...

Linode continues to be a recurring example of how not to behave as a vendor.


Continues? I've been a happy Linode customer for a long time. Their boxes are snappy. I rarely have issues, and when I do, they are pretty quick to respond and help out. I've also had my fair share of free hosting thanks to their referral platform.

Aside from the issue you mention, what else have they been doing wrong?


Congratulations. I was a happy Linode customer too, until I had to find out from Reddit that a major security hack had occurred and my VPS had potentially been rooted. I don't recall ever receiving an email from Linode about it.

And the fact is that every single day that passes without them updating their security/disclosure policies and showing some commitment to transparency is another day they will be classed as "untrustworthy".


If you are referring to the Bitcoin incident, the only accurate statement in your comment is that hackers managed to gain root access on several VPS. The rest of it is nowhere remotely close to the truth:

http://status.linode.com/2012/03/manager-security-incident.h...

You were very active in the very forum thread wherein the announcement was posted by another customer, not half a dozen posts above you, so I find it hard to believe this falsehood is not intentional:

http://forum.linode.com/viewtopic.php?f=20&t=8509

Considering the grandstanding you did in that forum thread and are continuing to do here with your overly aggressive (and false) commentary, I question whether you have some kind of overt agenda against Linode that is clouding any message you might have. Every company makes mistakes, and Linode, in my opinion, handled this one as appropriately as they could have; were it Amazon, who are far more secretive (particularly with outages), we might have never known.


taligent's primary complaint seems to boil down to Linode not making a public statement about the problem until after one of the affected individuals had taken his case to Reddit.

I don't think that's an unreasonable complaint. I'm still a pretty enthusiastic Linode customer, but that incident bothers me a little bit too. I have to wonder if they would have addressed the problem publicly at all if the story hadn't made the rounds on the social news sites.

You shouldn't question his motives unless you have something more solid to go on than, "unhappy former customer".


Things take time to investigate and fix. The investigation was probably underway when the story went around. Rushing something out, be it a fix, release, whatever, is risky and a good way to be wrong (which is worse than deliberate). Imagine sending out a press release saying that you fixed it and the incident repeating itself an hour later.

Usually, I side with "better eventually than never".

I agree on the root complaint, and it is valid, but OP did pretty directly say that Linode did not notify customers about the issue, implying to this day. That's demonstrably false, and I don't like to see Hacker News threads turn into a whirlwind of fairy tales.

My conclusion regarding OP is based largely upon his behavior in the forum thread I linked. I actually remembered him by name when I saw his comment, which should say something.


Alright, I scrolled through all 16 pages of the singularity of stupidity that was that thread. I don't see anything in there by taligent that stands out. About the worst he did was let himself get dragged into a personal fight in the first few pages. (I wonder now which one of the users you were in that forum thread. sednet?)

You linked to the email exchange between Linode support and one of the affected customers. You know that Linode already had an idea that they had a problem before the rest of their customers found it. Do you think it would have been so unreasonable for Linode to at least put up a message on status.linode.com, "We are investigating an incident of unauthorized access to one of our customer Linodes, we will update this as we investigate it"?

And I don't read that implication from taligent's comment here. I think it's obvious that he's saying that they didn't bother to let their customers know when the incident occurred.

Basically: he thinks they didn't handle the disclosure on that matter in a way befitting its seriousness, and he thinks that they've done nothing to show that they'll handle it differently in the future. I agree on both counts. As he said in the forum thread, what makes this so frustrating is that Linode has been so spectacular in every other regard.

He's right also to point to the CloudFlare post-mortem as an example of Doing It Right. Surely you see the stark difference between CloudFlare's handling of their incident and Linode's? We still don't know the exact nature of the compromise (former employee? Did Linode have an externally-accessible customer service interface? What happened), nor do we have any idea what they did about it, other than that they say they "will be reviewing our policies and procedures to prevent this from ever recurring" -- an extremely wormy statement that will still be true even if they choose to change nothing at all.

I don't like to see HN threads turn into a whirlwind of pointless personal attacks. Let's just discuss the facts, OK?


I find it interesting that you're imploring me to discuss the facts when I started this thread by calling out incorrect "facts". I'm the only one that seems to be interested in the black and white facts, whereas you'd prefer to alter the OP's words so that they become facts.

What you're interpreting from his statements certainly isn't obvious, as it's just the way that you interpreted it. I interpreted it differently, using only the words that he typed and not filling in any of my own as you have -- I think you realize that, too, since you italicized your additions.

> (I wonder now which one of the users you were in that forum thread. sednet?)

I do not post on the Linode forums.

Fine, you're right; I might have been a little harsh on taligent, but I'm perpetually annoyed by crusaders who latch on to one mistake so strongly that the surrounding facts of the mistake begin to distort in their memory. If you're going to have a problem with Linode, back it up with the truth -- we get enough of alternate reality with politics.


I get that you're annoyed. I'm trying to convince you to be less annoyed. You and I have had perfectly reasonable discussions in the past; I'm surprised that you're responding this way to someone else.

I think it should go without saying that we should read other users' comments as charitably as possible. You say that my reading of his comment is "just the way that [I] interpreted it", but then you bless your interpretation of his comment as being "the black and white facts".

But English is messy. It carries nuances and context and hidden clues. Worse still, everyone has the attention span of a coked-out gnat now. Brevity is supposed to be the most important property of a statement, so we don't go around explicitly writing in all of the nuances and blanks and context. Thus it's natural to omit something like, "when the incident occurred" from the end of every statement. (Which, by the way, I italicized as emphasis; even a cursory glance at my comments page would have clued you in that I do that habitually.)

Your interpretation assumes (emphasis again) that he was deliberately lying.

You called someone a liar.

Publicly.

Based on your interpretation of what they said.

Whereas I assume that it's more likely that he was simply being brief.

Maybe you're right and I'm wrong. But, I'm unwilling to assume that someone else is a liar when there is clearly room for misinterpretation of what they said, just as I'm unwilling to assume that anyone that I'm talking with here is an idiot. (Although, I'm becoming more willing to assume deliberate obtuseness and argumentativeness ... not apropos of anything in this thread.)

I don't want to brow-beat you for your reply to him, but you're still thinking of him as a "crusader", and you're still assuming that the facts are "distorted" in his memory. When I asked to stick to the facts, I meant that it would have been sufficient to say simply that Linode notified the 8 affected customers and posted a statement to their site about the incident.

That would have left room for both you and him to be right, instead of accusing him of grandstanding and being a liar and a crusader and so on and so forth.

And most importantly: whether or not we agree on his characterization of what happened, he does still have a legitimate point. Linode did not handle that incident admirably, it can be contrasted starkly with the way that CloudFlare handled their incident, and Linode is still compounding their initial error by not taking steps to correct their handling of future incidents -- all points from my previous comment which you completely ignored, in favor of continuing to attack another user here.

HN needs to calm down just a tiny little bit.

Sorry for picking on you today.

