
Pricing is key.

Right now I spin up GCE-based clusters where the head (master) and version migration are free, and I only pay for 2 non-preemptible, adequately sized nodes. The rest scale and preempt somewhat cheaply as needed.



Eddie from DigitalOcean here.

You'll only need to pay for your worker nodes and we'll handle upgrades for you.


Any implementation details? I would absolutely like my nodes distributed not just across different machines but also across different racks. That will be key to k8s reliability, especially for the master nodes and wherever I have etcd running.

If my k8s goes down, it needs to be because the entire datacenter is down, not a single machine or rack.
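
For what it's worth, here's a quick way to see whatever failure-domain spread the scheduler is actually aware of; it assumes the provider populates the standard zone node labels, and it says nothing about rack placement:

  # List nodes along with the zone label the scheduler uses for spreading
  # (rack-level placement isn't exposed through standard labels).
  kubectl get nodes -L failure-domain.beta.kubernetes.io/zone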


Then why does your marketing for early access say "get a free cluster"? Is that implying that DigitalOcean will pay for the worker nodes for early adopters?

EDIT: https://www.digitalocean.com/products/kubernetes/ "Sign up for early access and receive a free Kubernetes cluster through September 2018."


Jamie from DigitalOcean here. Yes, users won't pay for their workers, Block Storage volumes, or Load Balancers during early access, through the end of September 2018.


I've found DO load balancers cannot reliably handle TLS termination above 100 connections per second, and they fail completely above around 300/s. Are there plans to make the load balancers more robust as part of this change? We had to switch to DNS load balancing because DO's solution simply could not scale.


Hey Eric, load balancers are getting an upgrade in the near future. Keep your eyes peeled this week!


That is great to hear! A few things I'd really like to see:

* Ability to retrieve host health status in API

* Better throughput guarantees, especially with TLS

* Ability to serve from unhealthy nodes if all nodes are unhealthy

* Load balancer health monitoring


I hope there are no automatic upgrades. Kubernetes 1.10 was not fully backwards compatible with 1.9, for example.


Jamie from DigitalOcean here. Automatic upgrades will be optional (you'll also be able to choose what times of day we do them), and we'll go through our own internal process and testing to minimize any backwards-compatibility issues.


Could we entice someone to come do a tech talk about it at Pivotal NYC? We're on the same avenue, after all. (And we think a lot about this too.)


Jamie from DigitalOcean here. Yes, definitely! Most of this team is remote but myself and a couple of others are NYC based. My email is jwilson@digitalocean.com if you'd like to get in touch.


Could you please share the details of your setup? I'm not familiar with Google Cloud, but being able to scale up cheaply with preemptible instances sounds great.


It's just about six commands to create all the pools:

  # Create the cluster with a single small bootstrap node.
  # Put your desired Kubernetes version in place of 1.XX.
  gcloud container clusters create flk1 \
    --cluster-version 1.XX \
    --machine-type g1-small \
    --disk-size 20 \
    --preemptible \
    --enable-autoupgrade \
    --num-nodes 1 \
    --network flk1 \
    --scopes storage-rw,compute-rw,monitoring,logging-write

  # Make flk1 the default cluster and fetch kubectl credentials.
  gcloud config set container/cluster flk1
  gcloud container clusters get-credentials flk1

  # Preemptible pool for cheap, interruptible capacity.
  gcloud container node-pools create small-pool-p \
    --cluster=flk1 \
    --machine-type n1-standard-1 \
    --disk-size 20 \
    --preemptible \
    --enable-autoupgrade \
    --num-nodes 1

  # Non-preemptible pool for workloads that must stay up.
  gcloud container node-pools create small-pool \
    --cluster=flk1 \
    --machine-type n1-standard-1 \
    --disk-size 20 \
    --enable-autoupgrade \
    --num-nodes 2

I remove the original small node once the pools are created.
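
Concretely, that cleanup is one more command, assuming the bootstrap node landed in the auto-created default-pool:

  # The original g1-small node lives in GKE's auto-created "default-pool";
  # deleting that pool removes it once the other pools are ready.
  gcloud container node-pools delete default-pool --cluster=flk1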

And then other pools for higher-usage nodes. The smallest cluster I have is about $75/month.
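
For the higher-usage pools, something like this works (pool name, machine type, and node counts are just placeholders for whatever you need): a preemptible pool with autoscaling enabled so it grows and shrinks with load.

  # Hypothetical higher-usage pool: preemptible plus cluster autoscaling,
  # so extra capacity only exists (and costs money) while it's needed.
  gcloud container node-pools create big-pool-p \
    --cluster=flk1 \
    --machine-type n1-standard-2 \
    --disk-size 20 \
    --preemptible \
    --enable-autoupgrade \
    --enable-autoscaling \
    --min-nodes 0 \
    --max-nodes 3 \
    --num-nodes 1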

Preemptible nodes seem to recover quite well, and I always keep at least one replica of each pod/service on the non-preemptible pool just in case.


It works very well and has controls so you don't schedule important workloads on preemptible instances: https://medium.com/google-cloud/using-preemptible-vms-to-cut...
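
One of those controls, sketched with gcloud (pool name taken from the commands above; older gcloud releases may need the beta command for --node-taints): taint the preemptible pool so only pods that explicitly tolerate the taint can land there, which keeps everything else on the non-preemptible pool.

  # Create the preemptible pool with a NoSchedule taint; pods without a
  # matching toleration will only be scheduled onto untainted pools.
  gcloud container node-pools create small-pool-p \
    --cluster=flk1 \
    --machine-type n1-standard-1 \
    --disk-size 20 \
    --preemptible \
    --enable-autoupgrade \
    --node-taints=cloud.google.com/gke-preemptible=true:NoSchedule \
    --num-nodes 1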



