DigitalOcean Introduces Kubernetes Product (digitalocean.com)
278 points by fatterypt on May 2, 2018 | 96 comments



This is a great development, but I'll have to wait and see how reliable it actually is. I've had a few droplets running with them over the years, and that has been rock solid (years of uptime on one droplet, no problems whatsoever), but we recently started using Spaces for a commercial product and it has been a catastrophe. There are connectivity issues leaving the service mostly unavailable on a regular basis, and the status updates about it aren't particularly timely.

While trying to migrate away to GCS, synchronizing data (using gsutil) has proven practically impossible. The API is incredibly slow to list objects and occasionally responds with nonsensical errors.

(Every once in a while a random "403 None" appears, causing gsutil to abort. We could probably work around that by modifying gsutil to treat 403 as retry-able, but since overall performance is so awful and we can regenerate most data, we decided to give up.)
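
For anyone stuck in the same migration: since gsutil rsync is idempotent, a blunt workaround is to re-run it until it converges rather than patching gsutil. A rough sketch, assuming your .boto already points at the Spaces endpoint, with placeholder bucket names:

  # Re-run the sync until it exits cleanly; each pass only transfers
  # what the previous aborted run didn't finish.
  until gsutil -m rsync -r s3://my-space gs://my-bucket; do
    echo 'rsync aborted (e.g. on a 403 None), retrying in 30s...' >&2
    sleep 30
  done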


Yeah, DO Spaces is all-around awful. Deleting is extremely slow as well. We had to write special code because DO cannot delete 1000 objects at a time (it takes around 2 minutes for the API call to succeed, if it succeeds at all), to the extent that we had to resort to deleting entire buckets. The UI also keeps crashing when there are many objects :(
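
The special code ended up looking roughly like this; a sketch rather than our exact code, with the endpoint and bucket name as placeholders (and assuming keys contain no whitespace):

  # Page through the listing and delete keys one page at a time,
  # instead of the S3-standard 1000-object batch deletes.
  ENDPOINT=https://nyc3.digitaloceanspaces.com
  while keys=$(aws s3api list-objects --bucket my-space \
        --endpoint-url "$ENDPOINT" --max-items 100 \
        --query 'Contents[].Key' --output text) && [ "$keys" != "None" ]; do
    for key in $keys; do
      aws s3api delete-object --bucket my-space --key "$key" \
          --endpoint-url "$ENDPOINT"
    done
  done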


I’ll assume Spaces works like S3, where updates and deletes are eventually consistent, not immediately consistent:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction...


I recently had to delete a multi-TB S3 bucket and learned that S3 isn't great at deleting tons of files either. The AWS Console just hangs forever. I let it go for hours before finding another solution.


It sounds like you’ve already resolved this, but for the benefit of any others who stumble upon this, my solution for deleting a large bucket is to set a lifecycle rule with a short TTL, after which the objects are deleted.

Set that rule, and come back to a beautifully empty bucket 24 hours later, after Amazon’s gnomes have taken care of the issue for you.
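
For the CLI-inclined, the same trick is a single call; a sketch, with the bucket name as a placeholder:

  # Expire every object after a day; S3 then empties the bucket in the
  # background, with no listing or batch-delete calls on your side.
  aws s3api put-bucket-lifecycle-configuration \
    --bucket my-huge-bucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "expire-everything",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Expiration": {"Days": 1}
      }]
    }'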


That's what I ended up with as well.


Yeah, S3 is not flawless but DO spaces had problems with just 5000 objects.


I use rclone instead of gsutil. Works much better. It’s rsync for the cloud.
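
A sketch, assuming remotes named spaces and gcs were already set up via rclone config:

  # rclone retries transient errors itself and talks to both the
  # S3-compatible Spaces API and GCS natively.
  rclone sync spaces:my-space gcs:my-bucket \
    --transfers 16 --retries 10 --low-level-retries 20 -v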


Ditto, droplets are great and stable (and, somewhat surprisingly, CPU-optimized ones seem to perform better on average than the equivalent from AWS). I've tried to make it work with Spaces for the last 6 months (NYC3), but it's just a disaster; I could barely sync the data back to AWS S3 last week (you have to do it from a droplet in the same region, otherwise your chances are close to 0%). Downloading/uploading large objects is perfectly fine, but something's inherently broken at the metadata layer, so listing objects is either ridiculously slow or you get timeouts and weird errors like 'limit rate exceeded'.

TL;DR: droplets are good; avoid Spaces like the plague.


Pricing is key.

Right now I spin up GKE clusters on GCE, where the head and version migration are free, and I only pay for 2 adequately sized non-preemptible nodes. The rest scale and preempt fairly cheaply as needed.


Eddie from DigitalOcean here.

You'll only need to pay for your worker nodes and we'll handle upgrades for you.


Any implementation details? I would absolutely like my nodes distributed not just across different machines but also across different racks. This will be key to k8s reliability, especially for the master nodes and wherever I have etcd running.

If my k8s goes down then it needs to be because the entire datacenter is down not a machine or a rack.


Then why does your marketing for early access say "get a free cluster." Is that implying that Digital Ocean will pay for the worker nodes for early adopters?

EDIT: https://www.digitalocean.com/products/kubernetes/ "Sign up for early access and receive a free Kubernetes cluster through September 2018."


Jamie from DigitalOcean here. Yes, users won’t pay for their workers, block storage volumes, or load balancers in early access, through the end of September 2018.


I've found DO load balancers cannot reliably handle TLS termination above 100 connections per second, and they fail completely above around 300/s. Are there plans to make the load balancers more robust as part of this change? We had to switch to DNS load balancing because DO's solution simply could not scale.


Hey Eric, load balancers are getting an upgrade in the near future. Keep your eyes peeled this week!


That is great to hear! A few things I'd really like to see:

* Ability to retrieve host health status in API

* Better throughput guarantees, especially with TLS

* Ability to serve from unhealthy nodes if all nodes are unhealthy

* Load Balancer Health Monitoring


I hope that there's no automatic upgrades. Kubernetes 1.10 was not fully backwards compatible with 1.9 for example.


Jamie from DigitalOcean here. It will be an option as to whether you want automatic upgrades (including what times of day you’d like us to do upgrades), and we’ll go through our own internal process and testing to minimize any backwards compatibility issues.


Could we entice someone to come do a tech talk about it at Pivotal NYC? We're on the same avenue, after all. (And we think a lot about this too.)


Jamie from DigitalOcean here. Yes, definitely! Most of this team is remote but myself and a couple of others are NYC based. My email is jwilson@digitalocean.com if you'd like to get in touch.


Could you please share the details of your setup? I am not familiar with Google Cloud, but being able to scale up cheaply with preemptible instances sounds great.


It's just about six commands to create all the pools:

  # Put your desired GKE version in --cluster-version below.
  gcloud container clusters create flk1 \
    --cluster-version 1.XX \
    --machine-type g1-small \
    --disk-size 20 \
    --preemptible \
    --enable-autoupgrade \
    --num-nodes 1 \
    --network flk1 \
    --scopes storage-rw,compute-rw,monitoring,logging-write

  gcloud config set container/cluster flk1
  gcloud container clusters get-credentials flk1

  gcloud container node-pools create small-pool-p \
    --cluster=flk1 \
    --machine-type n1-standard-1 \
    --disk-size 20 \
    --preemptible \
    --enable-autoupgrade \
    --num-nodes 1

  gcloud container node-pools create small-pool \
    --cluster=flk1 \
    --machine-type n1-standard-1 \
    --disk-size 20 \
    --enable-autoupgrade \
    --num-nodes 2

I remove the original small node once the pools are created.

And then other pools for higher-usage nodes. The smallest cluster I have is about $75/month.

Preemptible nodes seem to recover quite well, and I always keep at least one replica of each workload on the non-preemptible pool just in case.


It works very well and has controls so you don't schedule important workloads on preemptible instances: https://medium.com/google-cloud/using-preemptible-vms-to-cut...
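
For example, you can taint the preemptible pool so nothing lands on it without an explicit toleration; a sketch (GKE applies the preemptible label automatically; the taint key here is arbitrary):

  # Only pods that tolerate this taint get scheduled onto the
  # preemptible nodes; everything else stays on stable nodes.
  kubectl taint nodes -l cloud.google.com/gke-preemptible=true \
    preemptible=true:NoSchedule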


Super nice. Been running k8s on DO using Rancher, but a native solution will be really awesome.

I'd like to see it as tightly integrated into e.g. GitLab like GKE is.


For the curious: k8s is a specific kind of contraction, a numeronym (https://en.wikipedia.org/wiki/Numeronym): k, then the 8 letters of "ubernete", then s. Another example is i18n = internationalization.


Rancher 2.0 was just announced today, 100% Kubernetes-based https://news.ycombinator.com/item?id=16976934


This will come at the right time for us, we've recently been spinning up our own k8s cluster on DO using kubeadm and flexvolume plugins, and are progressively migrating micro- and monolithic services in ascending order of criticality.

> tightly integrated into e.g. GitLab

Seconded. Currently we set project integration up manually, but since we're still ramping up that's not a cause for concern yet.


Sweet! Skycap also released the container delivery pipeline today. Can’t wait to use them together with DO Kubernetes. https://blog.cloud66.com/deploy-your-applications-to-any-kub...


Good for them! They beat AWS to it. Amazon's managed Kubernetes service is in the preview stage but should also be launched soon.

How do people like Kubernetes as a production-ready solution for deploying containers? I've been using Docker for a while now, and am just starting to mess with k8s.


> Amazon's managed Kubernetes service is in the preview stage but should also be launched soon.

Same as DO?

"DigitalOcean Kubernetes will be available through an early access program starting in June with general availability planned for later this year."

I'm not sure which will be first.


I had originally heard Q2 for the public launch of Fargate for EKS, but officially Amazon is just saying:

> AWS Fargate support for Amazon EKS will be available in 2018.

https://aws.amazon.com/fargate/


I resisted Kubernetes for a long time, but now that I've started using it, I think it's definitely the way to go.


I have some questions, mainly around networking and the like. Is there somebody from DO that I can get in touch with directly? Thanks.


Seconding the network question. My concern is around inter-node traffic and whether it's segmented or like their current private networking.


Hi, Jamie from DigitalOcean here. We will have VPC support on DigitalOcean by the time we go live. But if you want to talk in more detail, my email is jwilson@digitalocean.com


We launched https://www.KubeBox.com beta today. You get a fully managed cluster: control plane and nodes. You can start with a single-node cluster with 8GB RAM and 2 vCPUs for $36/month. Additionally, you can get Rancher auto-installed for managing projects, users, groups, permissions, and workloads.

We are in early beta but if you are interested, please sign up and we will activate your account asap.

If you want to talk, we are at KubeCon Europe, contact @geku on Twitter.


@Linode: you guys seeing this? Don't make me migrate away after 2 years.


If you enjoy having hypervisors disappear for 12 hours without notice, go ahead.

Otherwise, I'd say Linode is your better bet :).

EDIT: A little more information: I had two VMs go offline abruptly around 1am one night. It took 3 hours for DigitalOcean to even acknowledge a problem existed (I had opened a ticket), and that was only after I started poking their Twitter account. It was at least 12 hours before they brought things back online, and the outage was never acknowledged in any mass ticket. If you're unlucky enough, the same thing can happen to you. This is my second experience with such an outage at DigitalOcean, and it's the reason I still only use DO as a testbed and nothing more.

EDIT2: Another pretty bad example of Digital Ocean: https://status.digitalocean.com/incidents/8sk3mbgp6jgl.


I'm actually looking at migrating away from DigitalOcean. I had a recent incident that took over a week to solve, with 48+ hours between replies where support wasn't even reading the previous message. They claimed that someone looked at the hypervisor the system was on and found nothing, but as soon as that occurred my issue was resolved. It was one of the worst support experiences I've ever had: I was asked for the same information multiple times after waiting days to hear back, and was even asked if my issue started during an outage that was days after my initial report. Completely unacceptable.


Hey gravyboat - I'm Zach, Director of Support here at DigitalOcean. I'm very sorry that we didn't provide helpful or timely support. This certainly isn't the type of experience that's typical, and definitely not what we design for.

Can you do me a favor and shoot me an email so that I can investigate further? zach@digitalocean.com

Please see this as my personal commitment to our entire userbase: I'm happy to hear from you as well if your experience was not perfect.


Sure, I've sent you a follow-up email. Thanks for the response.


And the new support system is an utterly useless JavaScript SPA that breaks if you refresh or use the back button, and is just a terrible experience. I get mostly no-op responses to my tickets these days. Given how complicated Kubernetes can be, I'm terrified to see how they'll handle support when they hit some of the more difficult problems.


Hi Operyl - I appreciate your candid feedback on this.

As with anyone else, please feel free to email me directly if you want to follow up: zach@digitalocean.com

Regarding the actual system, its functionality, etc, we are planning an update. If there's anything that you want us to include in the system, or support systems that you enjoy using, please let me know.


Honestly, I miss the simple support system DigitalOcean had when it first launched, built straight into the panel, instead of another buggy service with tons of JavaScript. Less is more. The tier-1 support needs much more training and information to handle support effectively as well.


I was also a big fan of the old support system that was accessible straight from the interface and not a third-party tool. Granted, the few times I had to contact support at that point I received very fast and thorough responses, so this experience may have made me biased.


I hear you on the simplicity. There are a few things that we want to hit upon with an update, including: single login, better navigation/search, integrated look/feel, etc.


Architect your app around failure and loss of compute nodes; if you aren't, you aren't doing cloud right.


It's not that I didn't architect around the problem; that's beside the point. It's the fact that they continuously have poor communication. My projects are relatively small, and were interrupted as a result of the downtime. With some effort (about 15 minutes) I brought them back up. Like I mentioned before, I use this as a testbed (the "beta" for my application) so it wasn't _terrible_. But it was still annoying.


I had been a Linode customer since 2008 and migrated to DigitalOcean about 6 months ago. So far my DO experience has been flawless. I needed basic, reliable block storage, and at the time Linode had promised "it's coming" for years. I beta-tested the Linode block storage and it was not stable for me. At the end of the day, Linode has rock-solid stability, but they are unfortunately too slow to develop new features, so I simply had to move on.


Linode user since 2009 here. Their support and network are rock solid [1].

I really hope they step up their game, but I won't be moving away anytime soon.

[1] The only major issue I had was a massive DDoS that took down their Atlanta data center around Christmas, a couple of years ago.


Linode user for the last 10+ years here. I'm migrating some sites to DO after checking it out again recently; the UI is better, it seems DO innovates faster these days, and the stability might have improved too (I need more time to verify this).

Linode has been solid for me, with only one bad hardware crash in the last 10+ years, but DO has more features, a good UI, larger disks, etc., so I am trying it again.


DO support has gone majorly downhill (I have a comment about this that goes into it) in the last year or two with more issues cropping up here and there on hypervisors that go unattended for absolutely no reason. I still haven't found a good alternative but I'm looking.


Linode's support has been excellent, though I've rarely needed it; for me it's all resolved on their IRC channel really quickly.

DO support is a concern, but I haven't used it yet. If Linode can match DO's offering (pricing and features), I will likely stay.


Seconded. I was emailing back and forth all last week due to some tricky BIND-related rDNS issues on an issued IPv6 /64 block.

While I do wish they built out features faster (their new beta console seems to have actually dropped features, and things like Certbot plugins for Let's Encrypt wildcard DNS updates are missing), the support is fantastic.

They are always polite and ready to help, even when the issue turned out to be a misconfiguration on my end.

Long live Linode.


So, Kubernetes basically won? Nomad and Swarm don't stand a chance, but what about Mesos or DC/OS?


Everything I'm seeing in the market right now is saying that, yes, Kubernetes has won, or IS winning.


Mesosphere is still a player, but they have also recently added Kubernetes support in DC/OS.

https://kubernetes.io/docs/getting-started-guides/dcos/


Kubernetes is the Git of container orchestration. It doesn't matter if any other products are better when k8s has, like, 99% of the mindshare!


Sigh.

You like BSD? Let the world have Linux.

You like Hg? Let the world have Git.

You like DC/OS? Let the world have Kubernetes.

But they all lost.


I don't know if Nomad is going after the same problem, is it? It seems a great deal simpler to use.


Does DigitalOcean have any equivalent of AWS's RDS? Or do I have to manage my own database server?


Jamie from DigitalOcean here. A database as a service offering is on our roadmap and in discovery, and we hope to have more information about this in the near future.


Fantastic. Any plans for a managed DB offering (Postgres) so the whole stack can be managed?


Good. I have been debating between running my own k8s on DO vs GKE, and I'm glad I don't have to build my own cluster. I think I'm going to do both for now, though. If DO is mature and stable I'll kill the GKE cluster.


My only wish is for DO to allow customers to bring their own IP blocks. Vultr has this, but seems less robust to run a business on.


Check out packet.net if you haven’t already; they offer that and some other networking features you don’t typically see from VPS providers.


Packet is nice, but way more expensive than DO.


Why?


How are you guys sorting out networking?


Jamie from DigitalOcean here. VPC is coming to DigitalOcean and the cluster will live within a VPC. Past that there are a lot of details! Is there something specific you're interested in?


I'm currently running multiple Kubernetes clusters using StackPoint in combination with DigitalOcean. This has been working very well. Could someone tell me how the new DO Kubernetes service compares to StackPoint?


You're paying for master nodes with StackPoint, right? Each droplet you start has the same cost structure, whether it's running your "user-land" slave workloads or whether it's there just for running your cluster.

The big payoff (for small clusters like mine, anyway) is that masters won't be charged for, just like the other managed Kubernetes offerings from Azure and Google. I don't know enough about StackPoint to compare it to a service I haven't even seen in beta yet, but I can tell you that much.

I know that StackPoint is supposed to be a "managed-like" experience. Maybe one of the DigitalOcean guys who have been responding in this thread can speak to the technical details of the new offering.


> You're paying for master nodes with StackPoint, right?

No, you pay a monthly subscription (starting at $50/month). The service allows you to create/update clusters easily. I'm not sure, but I think you can create as many clusters as you want with a $50 subscription (at least I never hit a limit). The procedure to create a new cluster looks something like this, if you use the web interface:

* click "add cluster"

* select cloud provider (DO, AWS, GKE, etc.)

* configure master nodes, e.g. 2 master nodes @ 2GB RAM, running in region NYC1

* configure worker nodes. (same procedure as with master nodes)

* submit

If you choose DO, you get a cluster that works with DO load balancers, DO block storage, etc. out of the box.
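
For example, exposing a deployment as a Service of type LoadBalancer is enough to get a DO load balancer provisioned; a sketch, with the deployment name as a placeholder:

  kubectl expose deployment web --type=LoadBalancer --port=80 --target-port=8080
  # EXTERNAL-IP flips from <pending> to the DO LB's address once provisioned:
  kubectl get svc web -w

A PersistentVolumeClaim against the installed storage class gets you a DO block-storage volume the same way.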

If a new version of Kubernetes is released, you can hit the "update cluster" button.

They have an API for all the stuff too.

I chose StackPoint in combination with DO because it felt the least bloated and had the least lock-in.

Now that DO is introducing its Kubernetes service, I can imagine that I won't need the StackPoint subscription anymore.


That's very interesting, thanks for sharing. $50/mo is a bit much for a hobbyist, but not much if you're operating at any serious kind of scale!

But I mean, in addition to the StackPoint subscription, you do also pay for the master node droplets when you use it, as well as paying for the worker droplets, right? You won't be paying for those masters anymore with the managed offering, from any of the cloud vendors I've heard of announcing a managed offering. I have to imagine this is because they can do (or plan to do) multi-tenant APIs under the hood.

(Even if you get a pool of worker nodes and the pool is on machines that are exclusively yours, it seems unlikely that your constellation of masters is ever going to be exclusively yours unless your bill says "dedicated masters" and you've paid something for it... and that's fine, as long as it's done right! I obviously can't afford to run as many masters myself as a multi-tenant system can allocate me a share of. We will all wind up getting more resilient systems out of the deal, and for much cheaper, in this arrangement I think.)

I'm definitely signing up for this preview; I hope it will include an API for creating/upgrading/tearing down clusters! I can't imagine it will do anything but obsolete StackPoint for DigitalOcean customers.

Then again, maybe the bigger value provided by StackPoint is actually that you can take this K8S cluster orchestrator with you to a different cloud if you need to move. It is obviously going to be a harder sell though, when all of the major vendors are coming out with their own managed k8s offerings that enable cost savings. Next to $50/mo, enough masters to make your cluster resilient against localized failures on a 24/7 basis are... pretty costly, right?

It's really going to come down to: are the managed offerings as good as, or better than, the ones you can install yourself with a tool like kops (or the ones that a service such as StackPoint can help you install for yourself)?

I wonder, did you try installing Kubernetes for yourself before you tried StackPoint? If so, what distro(s) did you try and which ones did or didn't make the cut?


> But I mean, in addition to the StackPoint subscription, you do also pay for the master node droplets when you use it, as well as paying for the worker droplets, right?

Yes, you pay for all of them, but in return you get full control of the entire cluster.

> You won't be paying for those masters anymore with the managed offering, from any of the cloud vendors I've heard of announcing a managed offering.

Interesting, I didn't know about that. Not sure if I prefer this though. Might be another "surface" for the cloud providers to lock you in.

> It's really going to come down to, are the managed offerings as good, better, etc than the ones you can install yourself with a tool like kops (or are they as good as the ones that a service such as StackPoint can help you install for yourself?)

I guess things around Kubernetes will slow down soon (hopefully) and I'll probably switch to something like kops/playbooks/etc. But right now things are still moving too fast for my taste, so I'm happy to abstract away as much as possible.

> I wonder, did you try installing Kubernetes for yourself before you tried StackPoint? If so, what distro(s) did you try and which ones did or didn't make the cut?

Yes, I experimented with different approaches for Kubernetes, OpenShift, and Rancher, and I tested several cloud providers. In the end I found it wasn't worth the effort to learn and configure the whole thing from the ground up, since everything was constantly changing, like I said. Even once you have your cluster ready, there is still a lot of work to be done for the deployment pipelines, cluster backups, etc. For now I'm happy that creating/destroying a cluster is a matter of hitting a button, but I'm also excited to see what the future brings. Kubernetes is definitely one of the most amazing projects I've come across so far.


> > You won't be paying for those masters anymore with the managed offering, from any of the cloud vendors I've heard of announcing a managed offering.

> Interesting, I didn't know about that. Not sure if I prefer this though. Might be another "surface" for the cloud providers to lock you in.

If you want a serious HA/FT Kubernetes cluster that is spread across availability zones and resilient against failures in any single one, and you don't have something like StackPoint or a managed K8S offering to configure it for you, there is a pretty serious amount of work (and a decent number of computers) required to get your cluster there.

That being said, I don't know how many "hosted, managed" K8S offerings there really are in GA right now to compare.

I'm counting GKE on GCP, AKS on Azure, IBM's new managed k8s offering, AWS EKS (which is still in preview), and DigitalOcean's offering announced yesterday (which is still pre-beta). As far as I know, all of those offerings will give you as many masters as you need to make a resilient cluster for free, and you only pay for the workers.

(Except for the offerings that are in preview mode, then I guess you just don't pay for any of it for now...)

> Platform - Certified Kubernetes - Hosted (21)

I guess there are also quite a few I haven't looked at yet. Those are just the platforms with hosted offerings.

https://www.cncf.io/certification/software-conformance/

I personally used kubeadm for my toy-sized single node cluster, and it's great, but I'm also still on 1.5!
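
The single-node recipe is pleasantly short, roughly (flags vary a bit by version, and the pod CIDR here assumes Flannel as the network add-on):

  sudo kubeadm init --pod-network-cidr=10.244.0.0/16
  mkdir -p $HOME/.kube
  sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  # Untaint the master so regular pods can schedule on the lone node:
  kubectl taint nodes --all node-role.kubernetes.io/master-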


I have no experience setting up Kubernetes, so at the start of the year I looked at standing up a cluster on AWS. There were lots of extra expenses for a small ‘learning cluster’.

I just signed up for early DO access - can’t wait!


Too bad they didn't list the pricing; it would be nice to know how much it will cost once released.


Hey KenCochrane, I’m the Product Manager on this product at DigitalOcean. VonGuard is right, you only pay for the worker nodes (based on our Droplet pricing, there’s no premium) and we take care of the master. Our standard pricing lives here: https://www.digitalocean.com/pricing


Right now, GKE charges $18/month for a load balancer on top of node costs, which is costly for small scale/personal projects. Will DigitalOcean have anything similar?


Jamie from DigitalOcean here. Currently we'll deploy our DigitalOcean Load Balancer on your behalf, which is $20 a month, but we are also investigating other options. If you have any thoughts on how this should work, or what specifically you'd be looking for, I'd love to hear them.


Speaking personally, I'd rather opt out of the Load Balancer altogether and instead have a floating IP automatically set up across the workers. Ingresses are easy enough to set up so that would complete the picture.

I think having the Load Balancer option is important for simplicity, but I feel a lot of DO customers (such as myself) opt to use DO for optimizing cost as well. It's a balance.
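
Something along these lines with doctl would cover my case; a sketch, with the IP and droplet ID as placeholders:

  # Reserve a floating IP in the cluster's region and point it at a
  # worker droplet; an Ingress controller on the workers does the rest.
  doctl compute floating-ip create --region nyc1
  doctl compute floating-ip-action assign 203.0.113.10 123456789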


Will it be possible to run it in a single-node scenario?


Yes, you’ll be able to spin up a single node cluster.


Will there be a minimum size for a worker node?


Currently it’s our $5 droplet (1GB RAM, 1 vCPU), but if you have a use case for smaller nodes I’d love to hear about it!


Awesome! The TechCrunch article [0] shows "16GB -> 192GB" in the screenshot, so I just wanted to confirm.

[0] https://techcrunch.com/2018/05/02/digital-ocean-launches-its...


What about a competing product to AWS's Lambda?


Jamie from DigitalOcean here. Kubernetes on DigitalOcean is the first step for us to enable more managed services like Lambda. You will be able to deploy projects like OpenFaaS or Fn very simply, but currently you would still need to determine your node pool for capacity.
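
For example, OpenFaaS is only a few commands on an existing cluster; a rough sketch, assuming Helm is already initialized on the cluster:

  helm repo add openfaas https://openfaas.github.io/faas-netes/
  kubectl create namespace openfaas
  kubectl create namespace openfaas-fn
  helm upgrade openfaas openfaas/openfaas --install \
      --namespace openfaas --set functionNamespace=openfaas-fn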


DING DING DING

If you guys offered a Lambda competitor with similarly competitive compute/bandwidth pricing, I'd be all over it in a second.


You could run Kubeless on k8s on DO.


Excellent, thanks for sharing!


It's free. Just pay for nodes.


Sounds like DO is going to have the cheapest k8s service out there!


For some free lab time until the new commercial offerings arrive, try Play with K8s: https://labs.play-with-k8s.com/



