Google Cloud is not far from this. Basically instead of "hyper" you are typing "gcloud".
Google Cloud is far more complicated but its tools so far are pretty good.
I couldn't find how you do custom networks with Hyper. Also, as a Java + Postgres shop, 16 GB of memory (their L3 size) is just not enough.
Per-second billing also seems like overkill; Google Cloud bills per minute. It doesn't seem to fit the "effortless" pitch: if you are that interested in saving money at that granularity (i.e. margins), would you really be using a Heroku-like PaaS?
For me, easy deployment is a small part of the story for a compelling PaaS. What I want is really easy metrics, monitoring, notifications, aggregated log searching, load balancing, status pages, elastic scaling, etc. Many cloud providers offer this stuff, but often as disparate, costly add-ons/partners/integrations that are still not terribly easy to work with.
IMO it is actually harder to get all the diagnostic cloud stuff than the build + deployment pipeline.
EDIT:
As mentioned in another comment, my company tried to use Docker but it would take too long to build Docker images, so we just prefer VMs. That is, it seems with something like Hyper you save on deployment time but your build times get worse (unless I'm missing some recent magic you can do with Docker now).
EDIT again:
We didn't have the Docker cache enabled (because of some issues), so please ignore my slow Docker build time comments. Apologies.
Having a CLI doesn't mean they are close. In Google Cloud, you still work with VMs, clusters, and schedulers. In Hyper, you work only with Docker; everything is container-native!
Per-second billing is perfect for serverless, data mining, CI/CD, etc. It is simply not cost-effective to go with a per-hour or per-minute rate.
I admit I'm a little behind on Docker, but I thought Google provided that with Kubernetes [1]?
I work with the JVM, and serverless is just not worth it for the JVM (not yet, but maybe someday with better AOT). Thus I know very little about instant serverless deployment. I'm sure it is useful though.
Kubernetes lets you do stuff similar to this, but you still have to manage the infrastructure and the platform.
It seems like with Hyper, you literally are just deploying an image to be run in a container. You don't have to worry about configuring and managing a Kubernetes or Swarm cluster. Probably not worth it for very large companies, but for startups and hobby projects, this greatly lowers the barrier to entry.
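For what it's worth, since their CLI is advertised as Docker-compatible, I'd assume the whole "deployment" is roughly a single run command (the image name here is made up, and I haven't verified the exact flags):

    # assuming Hyper's Docker-compatible CLI; image name is hypothetical
    hyper run -d -p 80:80 myorg/web:latest

No cluster to create or nodes to size first, which is the appeal.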
Yeah, but it is still not as managed as I want. You still need to stand up a Kubernetes cluster, which is a three-node minimum. Instances show up in your instances list, and you still have to be careful to spread your instances across different zones/regions for availability.
In an ideal world I just want to run containers in a region with an LB in front; I don't care which Kubernetes cluster they are on. That is the use case hyper.sh seems to address (but I didn't test it, to be honest).
Once you've got the GKE cluster stood up (two clicks or so), you don't need to care which cluster you are on. The gcloud CLI remembers whatever you set.
It's very hands-off. And if you ever do want to take more direct control, you've still got the option of doing more or all of it on your own.
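Concretely, the one-time setup I'm describing is roughly this (cluster name and zone are made up); afterwards, kubectl just targets that cluster without any extra flags:

    # one-time: create the cluster and fetch credentials (name/zone illustrative)
    gcloud container clusters create demo --zone us-central1-a --num-nodes 3
    gcloud container clusters get-credentials demo --zone us-central1-a
    # from here on, kubectl talks to that cluster by default
    kubectl get nodes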
Let's say you have two images: web and db. Web containers ask for high CPU but small disk; DB requires big memory and disk.
With GKE, you either maintain different instance types for the different container sizes, or you launch big-and-tall VMs for everything.
The same story applies to public/private networks as well. The point is that in GKE there are two layers to manage: VMs and containers. In Hyper, the container is the infrastructure.
That's a terrible argument considering Hyper gives you no fine-grained control over instance types. You just get a linearly increasing allocation of CPU cores and RAM.
Right, but the point is that __someone__ has to set that up. Internally, you could use Kubernetes to build something very similar to this, but then you have to support it yourself and hire people to manage it. As Hyper advertises, this takes that concern off your plate and lets you focus on what you're there for as a software developer: software development.
Hyper allows for the creation of a minimum viable product that you can move around. I can start on Hyper and then move to pure AWS/Kubernetes/mesos/swarm when and if I determine it makes sense to have people spending time managing the AWS infrastructure and handling deployments.
I haven't used Hyper, but this idea is really cool in principle. I'm excited to see how well they actually do it. It really seems like the Heroku of containers.
To be honest I haven't tried the docker run stuff yet and still use VMs. It does take time to provision (for me, around 30 seconds). I also don't do much serverless stuff (as mentioned in other comments), so my opinion is pretty crappy at best :) .
I wasn't sure how fast Hyper was when I commented, but I suppose it is fast (I missed the "5 seconds" tagline twice).
One of the big reasons we don't use Docker is that building Docker images is really slow for us! So while we would get fast deployment/provisioning, we would pay for it with longer build times. I'm curious how others speed up Docker image builds?
I ask because, on Triton, cloning a ZFS dataset should be very fast, because it is a zero-copy operation: it basically consists of copying the metadata for the dataset attributes and root directory. So in principle, Triton could perform competitively.
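For reference, the zero-copy operation I mean is just a snapshot plus a clone of it (dataset names are made up); only metadata is written, no blocks are copied:

    # snapshot the image dataset, then clone it for a new container (names illustrative)
    zfs snapshot zones/base-image@deploy
    zfs clone zones/base-image@deploy zones/container-1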
For long-running containers, true. But I want to manage some of my data processing as a bunch of individual components that may have very short runtimes. Paying a 10-minute minimum doesn't make sense if I only need a machine for 90 seconds.
Their recent examples of using it as a highly parallel build server make more sense there. Do you want to pay for 10 minutes every time you trigger a 1 minute build job?
Well, I can't speak for all IaaS providers, but gcloud, DigitalOcean, and even Rackspace can create a VM in less than 5 minutes, which is how long it was taking us to build Docker images on a good day.
Besides, we don't always blow away a VM for every service (i.e. the ones that don't need a cluster of nodes). We reuse them (yes, this is frowned upon, but we get super fast deploys). I suppose the same could be said for Docker, though.
Also, with Docker our build artifacts would be much bigger, since instead of an executable jar we would have images. The I/O of transferring images from the CI server can, surprisingly, take some time.
Sure, you can create a blank, empty VM in less than five minutes... but the point of creating a Docker image is that it has everything pre-installed, ready to go... you're not even remotely comparing apples to apples.
Large images aren't an issue anyway, since the base layers will just be cached...
I said I'm comparing with what I know and have experienced (and this is for a JVM shop). I'm sure we probably could have gotten Docker to be faster, especially given your passionate comments (and after googling, it appears there have been improvements in Docker build times).
But I just ran gcloud to create a VM with Java and copied a jar over in under a minute. I just can't figure out a way to get Docker to that speed. Are you building and copying images that fast? We must be doing something massively wrong with Docker.
EDIT:
I found out the reason... It appears we had some issues with the Docker cache and had to disable it (I don't know the exact details why yet). Please disregard my comments about slow Docker builds. Apologies. I wish I could delete my comments; I feel a little bad about potentially spreading incorrect information...
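For anyone curious what "fixed" looks like: with the layer cache working, a fat-jar image is basically one cached base layer plus a COPY of the jar, so rebuilds should take seconds, not minutes. A minimal sketch (base image and paths are illustrative, not what we actually use):

    # minimal sketch for packaging a pre-built fat jar; base image/paths illustrative
    FROM openjdk:8-jre
    WORKDIR /app
    COPY target/app.jar app.jar
    ENTRYPOINT ["java", "-jar", "app.jar"]

With the FROM layer cached locally, building this is essentially just copying the jar.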