I've been messing around with Scaleway lately and found out a few interesting things.
Scaleway is part of the Iliad family of companies, which you probably haven't heard about if you're outside of Europe. This includes Online, a networking provider and Scaleway's parent, and Free, which is a huge French ISP with a fairly hackery internal culture.
Online makes their network map public: http://map.online.net/
Based on the names, it looks like it's 100% Cisco with a mix of Nexus 9000s, Catalyst 4500e edge, and 4900 ToR switches.
Scaleway is one of the only cloud providers I've seen that is using NBASE-T technology. The Intel Avoton SoCs used by their C2 instances support 2.5Gbps link speeds using an appropriate external PHY.
Their C1 instances are Marvell Armada 370 XP parts, datasheet here: http://www.marvell.com/embedded-processors/armada-xp/assets/...
These are using 1 gig connections. The instances I've spun up connect to a gateway with a Cisco MAC address.
The C2 instances connect to gateways with MAC addresses registered to Freebox SAS. Figuring this out took some digging. It turns out that Scaleway/Online is manufacturing their own 2.5G capable switches. Because Free, the ISP, already has an OUI registered and was the only company in the conglomerate which made hardware, it looks like Online had Free make some custom switches to deploy in their datacenters.
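For anyone wanting to repeat the gateway-vendor trick: the attribution comes from the OUI, the first three octets of the MAC address. A minimal sketch in Python (the vendor table is a tiny hand-picked subset for illustration, not the real IEEE registry):

```python
# Map a MAC address to its registered vendor via the OUI (first 3 octets).
# Tiny hand-picked table for illustration -- not the authoritative registry.
OUI_VENDORS = {
    "00:00:0C": "Cisco Systems",
    "F4:CA:E5": "Freebox SAS",
}

def oui_vendor(mac):
    """Return the registered vendor for a MAC address, if known."""
    oui = mac.upper().replace("-", ":")[:8]  # normalize separators, keep 3 octets
    return OUI_VENDORS.get(oui, "unknown")
```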
They're also making their own BIOS for the C2 instances. dmidecode reports the following cutesy BIOS data:
Manufacturer: Online Labs
Product Name: SR
Version: (^_^)
Serial Number: 42
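For reference, that output comes from something like `sudo dmidecode -t system`. A small sketch of pulling the same fields out in Python, using the text above as sample input:

```python
def parse_dmidecode(text):
    """Parse 'Key: Value' lines (dmidecode-style) into a dict."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split on the first colon only
            info[key.strip()] = value.strip()
    return info

# Sample mirroring the output quoted above.
sample = """\
Manufacturer: Online Labs
Product Name: SR
Version: (^_^)
Serial Number: 42
"""
```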
The root devices for C1 and C2 instances are both NBD. Performance was better than I expected; I get wire rate I/O throughput on both (>90MB/sec on C1 and >250MB/sec on C2). Of course, the disk bandwidth is shared with the Internet connection on both instances, so there might be contention for some applications.
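For anyone wanting to reproduce rough numbers like these, a timed sequential write is the crudest possible sketch (it goes through the page cache, so real benchmarks should use direct I/O and longer runs; this is illustrative only):

```python
import os
import tempfile
import time

def write_throughput_mb_s(total_mb=64, block_kb=1024):
    """Time a sequential write and return MB/sec. Writes go through the
    page cache (fsync only at the end), so treat the result as an upper bound."""
    block = b"\0" * (block_kb * 1024)
    blocks = total_mb * 1024 // block_kb
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to the device before stopping the clock
        elapsed = time.perf_counter() - start
    os.unlink(f.name)
    return total_mb / elapsed
```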
I didn't know Online was related to Free! I used their insanely cheap €2-per-month mobile plan when I lived in France for 6 months last year. Amazing value, they seem very popular for DSL too (though I didn't use that). Back in Germany now everything seems so damn expensive. The internet, the mobile plans, the electricity...
Anyway, thanks for doing this digging, it's interesting. Scaleway has its shortcomings but I find it to be one big fascinating experiment.
After seeing positive Scaleway mentions here on HN several times I opened an account to test performance. Damn it was slow, and I had set my expectations very low.
I could live with that for a site that didn't need CPU-heavy performance (I intended to use it for a personal Nextcloud instance), but my main issue was with reboot time. Change any settings in the control panel? Reboot needed. Applied kernel updates and need to restart? Wait for that reboot. It was agonizing: I had to abandon the process and check back an hour later to see if starting or stopping the machine had completed.
Even when I decided I was done and ready to cancel, one more wait of 40+ minutes for the machine to shut down so I could delete it and release the reserved IP. I admire the fact that they decided to try something very different, but forcing every customer to copy the entire contents of their drive to a second device and then copy it all back again to restart is nuts.
Currently on OVH for most things (by far the best performance/price ratio for my usage) but still keeping a couple DigitalOcean VPSes due to concerns about OVH's 'special' version of Ubuntu[0]. Hope that's resolved soon, or I'm back to DO + AWS as Plan B.
I don't have to restart my machine often. And it might depend on which server you used exactly, since they have different machines (I'm using the bare metal machines). But that said: I don't remember at all that the server shutting down took a long time. Might it be possible that your server was broken?
All these benchmarks seem to fail in that whenever they spin up a VM, the performance will vary hugely depending on the host machine's hardware and capacity. If they want to do it right, they need to spin up at least hundreds of them per region, at every single hour, and then maybe, just maybe, you can get some kind of accurate measurement.
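To make that concrete, the aggregation would have to look something like this sketch (region names and numbers are made up; the point is to report mean and spread per region rather than a single lucky sample):

```python
import statistics

def summarize(samples_by_region):
    """Reduce repeated benchmark runs to (mean, stdev) per region, so
    noisy-neighbor variance shows up as spread instead of a lucky number."""
    return {
        region: (statistics.mean(runs), statistics.stdev(runs))
        for region, runs in samples_by_region.items()
        if len(runs) >= 2  # stdev needs at least two samples
    }
```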
I sent them a message saying exactly that: I've moved all my machines to Linode because the price for the speed (both memory and CPU) is much better. They responded saying that they are competing on platform, not price.
I believe it'd be more appropriate for Linode to say that, which has long had good reputation in terms of the variety of regions, the quality of network, etc., and which used to be on the more expensive end. DO, on the other hand, actually debuted with the low prices, if anyone remembers.
I would jump away from Linode in a heartbeat if I found something comparable.
The article mentioned data loss with Linode... I experienced the same problem. Well, technically the data was there, but there was a corruption problem and the disk was being mounted read-only. The default ext4 setting in their Arch Linux offerings was not 'data=journal', and it should have been.
Well... I say it should have been, but I found out why it wasn't their default when I went to rebuild that VPS. Apparently something in their infrastructure doesn't work well with ext4 journaling, because every time I tried to set it to journaled (so I could avoid a repeat of the corrupted disk issue...) it would reboot mounted as read-only, and there was no way to fix the issue except to rebuild.
When I contacted them about it, they told me it was a "known issue" that would be fixed in the next 3-6 months.
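For context, full data journaling is the ext4 `data=journal` mount option, set in fstab rather than at mkfs time. A hypothetical /etc/fstab entry (the UUID is a placeholder; note the root filesystem usually also needs `rootflags=data=journal` on the kernel command line, since the journaling mode can't be changed on remount, which may be exactly what their infrastructure tripped over — that last part is my speculation):

```
# hypothetical /etc/fstab entry with full ext4 data journaling
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,data=journal  0  1
```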
That was the SECOND issue I found with them; the first was that doing a 'pacman -Syu' on a completely fresh install would hose the install... again, something to do with their infrastructure not working well with Arch Linux.
It took me days of fiddling around and learning what I could, and could not, do before I was able to successfully rebuild that VPS.
I would absolutely love to find another VPS provider that offered arch linux with comparable performance.
Which doesn't mean that they should chase those customers. If they're not generating significant profit, it doesn't matter much whether they stay or leave.
Hey, I did the same! Somehow, I don't believe them, though I have to say that their e-mail sounded quite assertive. I guess they'll change their mind when they lose enough profit.
I love DO, but they deleted all my backups and servers because of a billing problem with just 15 days' notice. Ever since, I make sure to do backups outside DO.
I was a paying customer for 2 years. I was on vacation when my credit card was stolen, and I canceled the card. When I came back, my server and backups were gone. I'd understand if I were a new customer, but I had been using the service for 2 years. Fine, kill the server, but why delete the backups, for which I pay an extra 5 USD a month?
"They have had huge problems with their regular cloud instances though (not the local storage ones); they are based on Ceph and were unusable from time to time. Also it's sometimes a gamble whether the system creates your machine or gets stuck; we also had the problem that we couldn't delete instances, they somehow were stuck. Their whole system is based on OpenStack and it seems they have problems managing it (who doesn't)."
I'm curious to learn more about this. I'm currently using OVH Public Cloud, and before doing so I tried to find some reviews of it. I must say that so far I have had no issues, but I have only been using it for 2 weeks.
When you say unusable, do you mean it was down or simply too slow? I read a lot of complaints about the speed of the storage, but nothing about the stability of their cloud.
I live right near OVH in Quebec; most of the data center was retrofitted inside an abandoned factory. It's also apparently the biggest data center in the world and features its own hydroelectric station (although I'm not sure if that still holds true). Amazon AWS is moving in there too. They have a little YouTube clip of the project!
"What we really miss for all of those providers is some kind of transit map per location, where we can see what transit providers are connected in each location."
Location of DE-CIX, the primary European internet exchange (according to themselves the one with the largest throughput worldwide, but AFAIK that's disputed)
And why is the primary European internet exchange point here? Because this city is an international financial center https://news.ycombinator.com/item?id=13798598 I've been there. Men and women in suits leaving work at 8 pm. Headquarters of some big German banks, etc. International business center.
The underlying explanation of all the other answers is: "it's on the Rhine and quite north". The Rhine is historically the most important river of the area, so big cities and populations sprang up along it. And, strangely, the Rhine is/was an important connection to the UK, hence the importance of Frankfurt even on a quite large scale.
Today it still makes sense to go there because Germany is the wealthiest country around, and from there you can serve the entire Paris-Brussels-Amsterdam-Frankfurt area, which includes a lot of people: http://kids.britannica.com/comptons/art-143547/Population-de...
edit: before the pedants object: Frankfurt is not ON the Rhine, but it has incredibly quick access to the Rhine without being subject to its floods.
I'm glad someone included the two leading French hosting providers in their benchmark. I have ~10 servers hosted by both OVH and Online (not Scaleway yet) and I believe they do a very decent job for a terrific price.
BTW: both have more expensive offerings if you need stronger SLAs. What I don't like about OVH's offering is that they have 3 different brands (OVH, SYS and KS) and you need 3 different accounts and control panels; even invoicing is not consistent across the three brands (which means more work when doing my accounting every month...).
I will second this. I have two of these very cheap VPSes from OVH mentioned in the article, one in France and one in Québec, and in my experience they have been very reliable and have good performance for their price. In my experience, for "hobbyist" use cases having more RAM is more important than having a 1 Gbps connection - most of the time your stuff will not be that popular to the outside world, but you might want to experiment with different tools which will explode if you only have 512 MB of RAM (as Vultr offers on their new $2.50 plan).
I will also second the comment on the confusing offerings of OVH, though. Their three brands are one thing, the other thing is that even their main OVH website looks totally different (with different products being offered) depending on which country you select on the top right. I have the feeling they are working on this right now (their different country websites are beginning to look more and more similar), but it can be really confusing. Also, chances are high that you will end up in some French configuration menu eventually. (I can understand that they will find this a bit unfair, as all the other players of course can just develop their websites in their native language and don't have to worry about this. But still it's not that professional.)
On the other hand, for a German test I find it interesting that they didn't include netcup, which also offers very cheap and high-quality VPSes from Germany (although they suffer from the same confusion regarding their websites - when you switch to English the products are different and for some reason much more expensive than on the German version).
And then, of course, there is also time4vps.eu if you want to go really cheap.
For us the requirement is hourly billing and being able to get a working machine within a minute, so no Netcup :). Also, only the network test was performed mainly from Frankfurt. All other tests ran between Frankfurt, Strasbourg and Haarlem (Location 1) and New York (NJ), Canada and Paris (Location 2)
Ah, okay, I overlooked the "spin up a new virtual machine within seconds" in the article, sorry for that, and the requirement of multiple locations makes sense too! That way the choice of providers makes much more sense to me (as it also excludes Hetzner, for example).
The article is very detailed and does a great job, but they don't mention DigitalOcean load balancers.
Load balancers are a must-have infrastructure puzzle piece these days. DO did a fantastic job on their LB offering, and for $20 a month they are quite powerful (HTTP and TCP), with configurable health checks and failover.
I've been using a Linode load balancer for three years now (it's also $20/mo), in front of a bunch of VPSes.
Allows me to have zero-downtime upgrades by switching servers out of the rotation, and also not be concerned if individual servers die or are rebooted (which is very rare anyway).
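The rotation trick is simple to sketch: give each backend a health endpoint the balancer polls, and flip it to failing when you want to drain a node. A hypothetical Python fragment (the flag path and status codes are my choices, not Linode's):

```python
import os

DRAIN_FLAG = "/tmp/drain"  # hypothetical path; touch it to pull this node from rotation

def health_status(drain_flag=DRAIN_FLAG):
    """Return the HTTP status the load balancer's health check should see.
    The LB marks the node down on 503, so in-flight requests finish and
    new ones go to the remaining backends."""
    return 503 if os.path.exists(drain_flag) else 200
```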
My browser tells me it won't connect to this server: NS_ERROR_NET_INADEQUATE_SECURITY
Qualys gives the site an A, but says of some browsers: "Server negotiated HTTP/2 with blacklisted suite" so I imagine this is the cause of the problems?
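For context, NS_ERROR_NET_INADEQUATE_SECURITY is what Firefox reports when a server negotiates HTTP/2 with a cipher suite on the RFC 7540 blacklist (mostly the CBC suites). Assuming the site runs nginx (an assumption on my part), the usual fix is to put the AEAD suites first, something like:

```
# nginx - ensure an HTTP/2-acceptable AEAD suite can be negotiated
ssl_protocols TLSv1.2;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers on;
```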
A while ago, when I was trying to sign up for an OVH account to test their VPS offering, they asked for a government ID/passport and proof of address in order to activate the account.
I'm OK with uploading those documents when signing up for financial-related online services (BTC exchanges, online debit cards, etc.), but I would never comply with that just to test some cheap VPS. Why they would need that info when other vendors don't is beyond me.
For me IPv6 is irrelevant, because in its current state it's barely working. Most big providers don't care about their routes, so I would disable IPv6 anyway to prevent problems.
I agree about additional IPs and API/CLI tools though.
I have used all of them to cut costs. I run several personal side projects on all of them, and I can say that vultr.com is the best among these providers considering price per computation power.
1. OVH: OVH is great, cheap and very reliable. I have used a $3.49 instance there. The only thing with them is a very complicated control panel. Also, getting extra disk space is kind of expensive.
2. DO: DO is the worst I have used. They locked my account forever and I'm unable to get any data out. The issue is that I tried to create two accounts using the same credit card, to take advantage of their free $5 credit. It's my fault, I know. But at least they should have merged the accounts, not suspended me.
3. Scaleway: Scaleway is great but not very consistent; sometimes it seems very fast, but then suddenly the instance is unreachable for a short time (30-60 seconds). It happens several times a week. Doing anything there requires shutting down servers, which may take more than 10 minutes, e.g. shutting down a server to create a snapshot.
4. Vultr is the best among these. It offers great, consistent performance (CPU plus network). I used them to run a monitoring agent which fetches thousands of websites without any issue. Their control panel is the best too.
Re #2, anyone who tries to abuse a company for a lousy $5 deserves what they get. It looks to them like a stolen credit card being used fraudulently. For $5, you're not worth 10 minutes of a customer service rep's time trying to determine whether you are a fraud risk.
"OVH" and "great" in one sentence. Good that the roulette ended up in the right spots for you, like it did for me for some time. They can be rather cost-effective for private toy projects.
But. I can confidently say I would seriously consider quitting my job if forced to use OVH in a work context.
Linode on the other hand actually do things like respond to support requests, often in a helpful way even.
I wonder if anyone even tried to get an OVH box. I did, and they asked for "proof of address" and my ID. Scaleway, Linode and DO didn't ask for it. OVH would have been interesting if they didn't have this requirement. In my opinion it's a NO GO because of this.
I've been using Vultr for a couple of years and have been satisfied with the service. I initially went with Vultr because at the time it was one of few providers that offered FreeBSD and the lowest cost. In contrast to comments in the article I've found Vultr tech support to be responsive. Haven't needed it much since they maintain a good database of tech info which has answered almost all questions.
I'd be curious to see how the numbers from wholesaleinternet.net stack up - auto-provisioned bare metal hosting for the same (monthly) price as cloud VMs.
Scroll down and you will see they peer with Google Fiber, Google Cache, CloudFlare, Netflix, Yahoo, and others.
I used them for a client about 18 months ago, I think; no problems, and I could get a continuous 80Mbps over many hours to/from the servers the client rented.
after seeing the announcement about vultr in another thread, I took a closer look at ipv6 support and it looks like DO and scaleway do not provide a proper /64 for IPv6 -- or even a /112, which means you can't use openvpn.
unless it has changed or varies by datacenter, DO issues a /124 by default -- I've confirmed this on my own droplets just now. Issuing a /64 is "proper" because that's what the RFC recommends[1].
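For anyone wanting to see how big that difference is, Python's `ipaddress` module makes it concrete:

```python
import ipaddress

# Host-address counts behind the prefixes discussed above.
slash64 = ipaddress.ip_network("2001:db8::/64")    # the RFC-style allocation
slash124 = ipaddress.ip_network("2001:db8::/124")  # what DO reportedly hands out

print(slash64.num_addresses)   # 18446744073709551616 (2**64)
print(slash124.num_addresses)  # 16 (2**4)
```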
Super late reply, but I just wanted to correct myself. I don't have a /64 on DO. I was confusing my DO config with something else. I'm sorry about that. You are correct.
What's the target market for this "showdown" ? Hopefully not an online business, as all of these providers are the cloud hosting equivalent of shared servers from the early 00's. I've used DO and Linode and both have serious reliability problems that you have to engineer around yourself, and I can't imagine the other providers are any more robust.
Linode was too pricey for what they provided back when I tried them, too. At this point I would only use these providers for a personal website, but I would choose the cheapest, and get an identical second provider and dual-host for redundancy.
Even if they were more reliable systems than they are now, the bigger they are the more they attract attacks on their core. Only the giants like Amazon and Google, or more professional (read: expensive) providers have the resources to deal with it.
I thought several minutes about answering you, but here we go...
I have several DO droplets running; one has been running for 4 years now, zero downtime. I heard about some problems, but those were no massive outages, rather single host systems failing, which is no problem, since you should build your application so that one VM failing is no problem. It happens everywhere: on bare metal, on AWS, on Google Cloud, etc.
Linode cut their pricing (or rather improved their plans) and started offering a $5 plan, I don't see how they are too pricey compared to the competition.
You pay a massive amount of money on AWS to be protected against DDoS; otherwise they will just bill you the bandwidth costs (which are very, very high compared to the companies compared in the blog post).
> What's the target market for this "showdown" ? Hopefully not an online business, as all of these providers are the cloud hosting equivalent of shared servers from the early 00's. I've used DO and Linode and both have serious reliability problems that you have to engineer around yourself, and I can't imagine the other providers are any more robust.
OVH has OpenStack support in some regions, and that is the current de facto non-AWS/Google/Azure cloud competitor.
And, all of the big providers have their own issues - cloud computing is hard, and no one has it 100% yet.
> Even if they were more reliable systems than they are now, the bigger they are the more they attract attacks on their core. Only the giants like Amazon and Google, or more professional (read: expensive) providers have the resources to deal with it.
While, yes, they may not have the capacity themselves to swallow a large DDoS, there are service providers who do, and any IaaS worth their salt will have a contract with one or more of them.
I've never had problems with linode... yes there are the occasional hardware problems that require restarts, but certainly better than AWS in uptime. For a while my VMs were going on a year+ of uptime. What issues did you see?
To start? Customer data exfiltration, credential compromise, mandatory kernels (security problems), system downtime, network issues, ddos, and unusual performance problems. They may have a couple lower cost options now, but you'll probably still pay out the nose to upgrade them as before.
> Even if they were more reliable systems than they are now, the bigger they are the more they attract attacks on their core. Only the giants like Amazon and Google, or more professional (read: expensive) providers have the resources to deal with it.
You do realize that OVH weathered DDoSes larger than 1100Gbps in the past, without issues?
And that, like Scaleway, it belongs to a large French ISP with a nationwide network and its own backbones, and they are even developing and using their own hardware?
I'm not familiar with those at all, I can only speak to the two I spoke of.
But it's a bit strange for an ISP to develop its own hardware, and ddos prevention has nothing to do with system management or cloud hosting feature sets. Furthermore, those two are the least mature, slowest, tiniest, and most limited providers in the shootout on every metric except perhaps network performance in France.
> But it's a bit strange for an ISP to develop its own hardware
For an ISP that, 5 years ago, was the largest hosting provider on the planet, and is still an ISP in several countries with their own backbones, custom hardware is not unusual.
You don't complain when Google or AWS develop custom hardware, or when Level3 develops custom hardware.
Why would OVH be different?
> and ddos prevention has nothing to do with system management or cloud hosting feature sets.
No, but it's something you complained about, although OVH managed to successfully handle the second-largest DDoS known to date without downtime.
It's very obvious you're not familiar with them at all.
Scaleway is shit. I've used it since the closed beta (I live in France), and even if they are good for hosting simple websites with no big traffic, their service is very disappointing.
First they surfed on the ARM bare-metal hype for a while and then changed everything to sell x64 VPSes. They lack a lot of features, as they provide no backup system or even snapshots.
Their staff think of themselves as fucking gods and do not give a shit about customer requests. They have been out of order many times and don't communicate at all about it.
You are definitely just a walking billfold in their eyes; never spend money with them.
Their business plan was using the revolutionary ARM CPUs to make cheap offers with real hardware. That was clear; that's what they sold us in the beginning.
Now they are stuck with it because of their lack of knowledge: the C1 server doesn't even support IPv6!
Stop kidding about "snapshots". They don't offer snapshots at all. Period. You have to shut down your server through the fucking dashboard before you can click the snapshot button. A snapshot is an instant backup made on a running server; otherwise it's useless.
Tell me what you want, but I know what I've seen over 3 years with them. Now I and my company are running DO servers, which are way better for the price.
I've not used Scaleway but for what it's worth Digital Ocean's snapshot feature only works offline too (and takes forever to run). I'd also made the same complaints as yourself that an offline "snapshot" isn't really what I would class as a "snapshot". A "clone" would be a more apt description.
Anyhow, semantics aside, I seriously wouldn't recommend DO for any serious work. It's fine if you freelance in Wordpress or other off-the-shelf products, but if you have any serious work to do then just don't waste your time with DO, as its solutions are slow, inflexible and, in my professional opinion, immature compared to other leading cloud providers.
They do, it's just not very good. Each VPS gets one IPv6 address (instead of a /64 block as you might expect), and the address is also not tied to the server if you ever power it down and then up again.
I'm the founder of SSD Nodes, Inc., which is a bootstrapped SSD-based hosting provider for startups that I've been working on since 2011. Some of our clients have posted benchmarks showing great performance, such as 800MB/s+ throughput and 292K IOPS: https://lowendbox.com/blog/ssdnodes-high-ram-ssd-vps-startin...
If I were able to downvote you, I would. My benchmark/review is about hourly-billing providers where I am able to spin up a server within a minute, which is not the case for your company. You are simply one of the 10,000 companies that offer regular Virtual Private Servers.
Thanks for taking the time to respond, really appreciate it. You mentioned hourly billing, but the OVH plans you reviewed are monthly[0], and all the pricing you listed on the site is monthly.
We're trying to go in a different direction from hourly, and instead offer very deep discounts for annual billing. So customers can get 8GB RAM for $6.49/month ($77.99/year) with stellar performance and a provisioning time of about 10 seconds.
OVH offers hourly and monthly billing. The hourly billing is a bit more expensive, but it is there. The pricing is listed monthly to give a better overview and make comparison easier.
I understand your direction and it makes sense, but please don't advertise in a post that is reviewing providers with a specific use case (hourly billing) that you don't offer. Thanks.
Hi, I really liked your post and your comments about your experience with each provider, but I totally disliked your answer and the one below. I didn't see anywhere that your benchmark was only about hourly-billing providers, so I find your reply disrespectful. Maybe you should update the title or the text if you want to take such a stance?
Moreover, to me, regular VPS == OpenVZ, not hourly billing (OpenVZ would also disqualify ssdnodes).
If your review is supposed to be about hourly-billing providers, shouldn't you actually mention that in the review? Because you didn't say a single thing about it there, nor did you say the review was about being able to spin up a server within a minute.
As it is, your original response comes off as more than slightly rude, and your subsequent response was very rude.
Interesting. Why did you choose OpenVZ? OpenVZ makes me think of fly-by-night, oversold VPS providers that are barely even adequate for a personal site. Another concern is that with OpenVZ, any Linux kernel privilege escalation vulnerability can be used to escape the virtual server. So I hope you stay on top of kernel security updates. But of course, that requires the host to be rebooted.
Hey there, those are great questions! First, to answer your security question: we're using KernelCare, which is like Ksplice. It keeps all our host kernels updated with no reboots needed.
OpenVZ, when used properly, provides us with massive performance gains along with the flexibility of "live scaling." Since we're providing containers, our customers can scale up to a larger plan with zero downtime. Their RAM and disk resources are available immediately after choosing the next package (additional cores require a reboot).