This is positioned as a competing product to Netlify and Vercel. However, it doesn't make sense to host it yourself, as the core benefit of hosting static pages and Node.js apps on Netlify is using their CDN infrastructure. It would actually cost you more to self-host it than to use the alternatives. Also, this is definitely not a PaaS substitute. It's a great attempt at a Netlify alternative, but it falls very short of being a production-grade platform. Just an analysis, don't burn me please.
Atm, it's not aiming to be a production-grade system. Indie hackers and hobbyists probably do not need that to start working on their side projects. Also, I'm thinking of providing a hosted version of it, but not now. I want to concentrate on the features first.
When I saw it described as a "Heroku and Netlify alternative", I assumed it was meant to be a production-grade system; that's what those words mean to me.
If those are the wrong expectations, you might consider changing your marketing.
I don't know what's up with people these days. There seems to be a trend of people building Heroku clones. Heroku is more than buildpacks and dynos. Just because you have a GitHub integration does not mean you have the right to call yourself a Heroku alternative. The whole point of Heroku is getting started without any setup. Now, if I have to acquire a VM, set up the infrastructure, and then set up one of these "alternatives", the onus is on me: I am responsible for 80% of the task Heroku does. At that point, I would rather own it 100%. There is a lot of stuff that goes on behind the scenes at Heroku or Netlify. Managed Kubernetes with a PaaS experience comes very close to a Heroku alternative, but why would you switch to something like that when you are content with Heroku? The Kubernetes setup becomes a beast in itself.
I think Heroku's alternative business plan should be suing all the companies/people publicly calling themselves a Heroku alternative, or a better Heroku, while delivering subpar projects.
If lots of people have expectations that you didn't mean to give them and don't want to meet, that could get inconvenient for everyone, including you, regardless of whether you want to blame it on a "them problem". So it might make sense to adjust the marketing to better set expectations. But that's your call; you can also just deal with it, knowing that it's their fault for misunderstanding you.
Worth noting that Netlify's CDN recently had issues. I do think Netlify is nice, but "production grade" sounds like something that might require two providers.
Netlify is on multiple providers, but the DNS spec is old (no CNAMEs at the zone apex) and many providers respect it without offering CNAME flattening, so in some cases you have to use a regular old IP as an A record; hence the impact when the GCP load balancer went down.
If one was using CNAME flattening, there was zero impact.
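For the curious, the difference is visible from the outside. Here's a minimal sketch using dnspython (`pip install dnspython`); the domain is a placeholder, not any particular provider:

```python
# Per the DNS spec there can be no CNAME at the zone apex, so a
# non-flattened setup has to publish a literal A record (a fixed IP).
import dns.resolver

domain = "example.com"  # hypothetical apex domain

try:
    for rr in dns.resolver.resolve(domain, "CNAME"):
        print("CNAME ->", rr.target)  # won't happen at a spec-compliant apex
except dns.resolver.NoAnswer:
    # A flattened apex also lands here: the provider chases the CNAME
    # target itself and synthesizes fresh A records, which is why
    # flattened zones shrugged off the load-balancer IP going away.
    for rr in dns.resolver.resolve(domain, "A"):
        print("A ->", rr.address)
```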
Why not 3? It's about how paranoid you want to be about uptime. Uptime and production grade are two different things: there are certain uptime guarantees that a cloud provider will give, whereas production grade is about the durability of the product itself.
Algorithms that tolerate Byzantine failures generally require t < n/3, i.e. at least 4 nodes to tolerate 1 Byzantine fault. This is in fact required for primitives as simple as reliable broadcast. Systems that use leader election, such as Raft, are not Byzantine tolerant. They tolerate t < n/2, meaning that 3 nodes does indeed buy tolerance to 1 crash, but it doesn't tolerate anything other than crashes.
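To make the arithmetic concrete (these are the standard bounds, just restated):

```latex
% Minimum cluster sizes to tolerate t faulty nodes
\[
  \text{Byzantine faults: } n \ge 3t + 1
    \;\Rightarrow\; t = 1 \text{ needs } n \ge 4,
  \qquad
  \text{crash faults (Raft): } n \ge 2t + 1
    \;\Rightarrow\; t = 1 \text{ needs } n \ge 3.
\]
```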
Couldn't you pair it with, say, Cloudflare? Perhaps something like that is coming as functionality within the product, but it's easy enough to do yourself for now.
Why not AWS, for that matter? There is a difference between IaaS and PaaS. This is more in the realm of PaaS; you cannot compare it with infrastructure providers per se, in my opinion.
Not true. For hobby projects, using Netlify and Heroku is often cheaper, agreed. But for a serious organization, self-hosting might be cheaper. Heroku's Standard-2X dyno pricing is $50 per month [1], i.e. 1 GB memory, whereas a droplet on DigitalOcean is $5 per month for the same resources [2], with 1000 GB transfer. Let's add a load balancer to the mix for an additional $10 per month; domains and SSL certs are negligible, at most $20 per month. It comes at the price of engineering time.
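Putting those numbers side by side (a back-of-the-envelope sketch; all prices are the figures cited above, and the four-instance footprint is a made-up example):

```python
# All $/month figures are the comment's assumptions, not current quotes.
heroku_dyno = 50       # Heroku Standard-2X dyno, 1 GB RAM [1]
do_droplet = 5         # DigitalOcean droplet, 1 GB RAM [2]
do_load_balancer = 10  # managed load balancer
misc = 20              # domain, SSL certs, etc. (upper bound)

instances = 4  # hypothetical app footprint

heroku_total = heroku_dyno * instances
diy_total = do_droplet * instances + do_load_balancer + misc

print(f"Heroku: ${heroku_total}/mo vs self-hosted: ${diy_total}/mo")
# Heroku: $200/mo vs self-hosted: $50/mo -- the gap is what you pay
# for (or save in) engineering time.
```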
What does it take to host this on CloudFront or Cloudflare? Are they not the same thing as Vercel? What's the difference between hosting a website on those two providers vs. Vercel?
I'm not sure I understand the need for this. Isn't the point of Heroku, as a platform-as-a-service, to abstract away having to host things yourself? Wouldn't someone just host the app themselves at that point?
One possibility is an organization that wants ordinary application developers to have a Heroku-like brainless git-push-to-deploy experience, and also has a dedicated infra team that has the bandwidth to manage a self-hosted PaaS.
A fully internal "click to deploy" solution would actually be very useful in a lot of situations I've dealt with in the past. Abstract away more of the devops/sysadmin stuff for me, please!
Also a Dokku fan. I don't think this project is related to it or uses it under the hood (https://github.com/coollabsio/coolify/search?q=dokku), although it's clearly similar. The marketing page only mentions Node.js support, but the Readme mentions using Buildpacks (https://github.com/coollabsio/coolify), although it still isn't clear if it supports languages other than JavaScript on Node. I would assume, though, that if the goal is to support buildpacks and be a true Heroku replacement, it will soon support everything, just like Dokku.
Definitely very cool and something to keep an eye on as it develops!
I doubt that someone wanting to run a single app is the target audience. A lot of companies would love to have something like Heroku that could be hosted internally: a platform team hosts it and development teams consume it. As it stands, they are stuck hand-rolling their own poor implementations. Lots of person-hours are being wasted in this space due to a lack of good, stable solutions that won't disappear (I don't know if this one qualifies).
I have not had hands-on experience with Nomad, but as I understand it, it still requires a lot of plumbing. Something as simple as service discovery isn't built in and requires a completely separate Consul cluster. Similarly, there is no secrets management, which requires a Vault cluster, which in turn requires its own Consul cluster. I'm not sure what config management options there are.
So now you're stuck with 3 Consul clusters, a Vault cluster, whatever you choose for config management, and a Nomad cluster. Feels like you didn't gain much from the simplicity of Nomad.
In addition to that, knowing HashiCorp's pricing, I bet that setup would run you north of a million a year for an enterprise deployment.
Nomad relies on Consul for Service Discovery and K/V storage, and Vault for secrets, indeed (Vault can use a variety of backends, including an integrated Raft-based one, Consul, object storage, etc.). One tool that does one thing well, which integrates with other tools that do their thing well.
I vastly prefer having three simple Raft-based clusters to manage over the "everything and the kitchen sink" approach Kubernetes takes, with results like base64 encoding for "secrets".
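The "secrets" jab is easy to demonstrate: a Kubernetes Secret stores its values base64-encoded, not encrypted, so anyone who can read the manifest can read the secret (the value below is made up):

```python
import base64

# A value as it would appear under `data:` in a Secret manifest.
encoded = "c3VwZXItc2VjcmV0LXBhc3N3b3Jk"
print(base64.b64decode(encoded).decode())  # super-secret-password
```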
And as someone doing both, Nomad+Consul+Vault is drastically easier on day one and day two. Consul and Vault are also usable outside of Nomad (you can have bare-metal machines outside of Nomad using Vault secrets and Consul for SD and K/V), and you can link multiple regions together.
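A minimal sketch of that composability, using the usual Python clients, hvac for Vault and python-consul for Consul (addresses, token, secret path, and service name are all hypothetical):

```python
import consul  # pip install python-consul
import hvac    # pip install hvac

# Fetch a secret from Vault's KV v2 engine (placeholder address/token).
vault = hvac.Client(url="https://vault.internal:8200", token="s.example")
secret = vault.secrets.kv.v2.read_secret_version(path="myapp/db")
db_password = secret["data"]["data"]["password"]

# Discover a healthy service instance via Consul (placeholder names).
c = consul.Consul(host="consul.internal")
_, nodes = c.health.service("postgres", passing=True)
db_host = nodes[0]["Service"]["Address"]
db_port = nodes[0]["Service"]["Port"]
```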
You do indeed need some basic config management to configure the clusters. Ansible seems to have won that race, sadly, and there are playbooks available.
> In addition to that, knowing HashiCorp's pricing, I bet that setup would run you north of a million a year for an enterprise deployment
They've changed up their pricing structure and there are more tiers and add-ons, so I doubt it (a basic Vault cluster was in the five-figure range per year, if you need support). It's not like enterprise support for Kubernetes would come cheap either, especially if you do it the recommended way, with multiple clusters and all that.
> Nomad relies on Consul for Service Discovery and K/V storage, and Vault for secrets, indeed (Vault can use a variety of backends, including an integrated Raft-based one, Consul, object storage, etc.). One tool that does one thing well, which integrates with other tools that do their thing well.
You say nope, but then you confirm that Nomad relies on Consul, as I had mentioned, meaning you need a Consul instance behind the scenes. If the Nomad setup recommendation is anything like the Vault recommendation, it will be to run two clusters: one for service discovery and one for K/V storage for Nomad. I've set up an enterprise Vault instance, and their enterprise architect recommended separate instances. Which is totally fine, but it does mean two Consul clusters + a Nomad cluster.
From my experience with Consul and Vault, it is not as simple as you say it is. A team of 3 engineers took 3 months to set up an enterprise-grade cluster. There was only a little bureaucracy at the time, so I can't really blame it on that. If I recall correctly, the integrated Raft-based clustering was being worked on, and we were made aware of it because there was some pushback from management on 2 separate Consul clusters for K/V and SD, but I never got to see it come to fruition and utilise it, so for us it was Consul. Other backends were discouraged at the enterprise level; they never really made it clear if they'd fully support us if we went with a different backend, leading me to believe that, at best, they'd prefer you use Consul over something else. I mean, why wouldn't they? They'd rather you pay them extra for a Consul cluster.
If Nomad is anything like my experience with Vault/Consul, then unfortunately you are still stuck with the setup I mentioned earlier, that is, 1 Vault cluster, 1 Nomad cluster, and 3 Consul clusters (1 K/V for Vault, 1 K/V for Nomad, and 1 for service discovery). For sure, having separate individual tools that each do 1 thing may have its advantages, but I fail to see how this is "much simpler" than Kubernetes. At best it is marginally simpler.
Vault has had integrated storage for multiple versions now, and Nomad can very well use a single Consul cluster for both SD and K/V. (And honestly, I can't recall two Consul clusters ever being recommended; the proposal we had from HashiCorp included a Consul Enterprise cluster for Vault as part of Vault's pricing.)
So you need three clusters: Vault, Nomad, and Consul (for SD and K/V for Nomad). Two of them, Consul and Nomad, can run on the same machines (it'd be suboptimal security-wise to have Vault there too).
You don't have to deal with the operational aspects of Consul, Nomad, and Vault, for that matter, if you choose a managed Kubernetes cluster from a cloud provider. If you are talking about container orchestration on-premise, the experience I had with Kubernetes was terrible: there are 1000 things that can go wrong. On-premise, I would recommend using k3s, which is super simple to set up, or microk8s, which comes with Ubuntu 20.04. I am not sure what benefits you would get with Nomad as compared to the lightweight Kubernetes distros. One of the benefits of these lightweight solutions is that you will basically find a Helm chart for any serious application out there, whereas with Nomad you might have to figure out how to deploy it yourself. For instance, to deploy Cassandra you will find a Helm chart, but with Nomad you might find a blog post that does it, or have to figure it out yourself. To the best of my knowledge that's how it is; maybe I am wrong?
microk8s and k3s aren't fit for or designed for production; they're mostly for testing/experimenting.
And even with managed Kubernetes, there's still a lot of complexity remaining (GCP had to come out with a more managed service, GKE Autopilot, to address some of that): you still have the evolution of the APIs to keep track of with every upgrade you make, and there are still dozens of components that change with each update, each of which can go wrong (even if it rarely does).
> One of the benefits of these lightweight solutions is that you will basically find a Helm chart for any serious application out there, whereas with Nomad you might have to figure out how to deploy it yourself. For instance, to deploy Cassandra you will find a Helm chart, but with Nomad you might find a blog post that does it, or have to figure it out yourself. To the best of my knowledge that's how it is; maybe I am wrong?
Indeed, and that's the main disadvantage of Nomad IMHO: the ecosystem is much smaller, so there aren't many ready-made equivalents to Helm charts and operators. Depending on how many of those you need, k8s can save you a lot of time.
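For example, the Cassandra case mentioned above really is a couple of Helm commands on k8s. A sketch (subprocess-wrapped only to keep these examples in one language; assumes helm is installed and uses the Bitnami chart):

```python
import subprocess

# Register the Bitnami chart repository and install Cassandra from it.
subprocess.run(["helm", "repo", "add", "bitnami",
                "https://charts.bitnami.com/bitnami"], check=True)
subprocess.run(["helm", "install", "my-cassandra", "bitnami/cassandra"],
               check=True)
# On Nomad, the equivalent is writing the job spec (ports, volumes,
# seed discovery) yourself or adapting one from a blog post.
```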
I don't agree with you. What aspects of microk8s or k3s make them experimental? That used to be the case, but k3s is now one of the core offerings from Rancher, and the same goes for microk8s. The purpose of Autopilot is something else; if you are talking about pure orchestration, bare-minimum Kubernetes is actually not a bad option. The API will keep evolving, but the basic objects like Deployments and StatefulSets that you need for a PaaS-like experience are quite stable.
> K3s is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.
> [microk8s] Low-ops, minimal production Kubernetes, for devs, cloud, clusters, workstations, Edge and IoT.
Microk8s started as an easy alternative to minikube for local dev.
k3s started as a simplified version of k8s for testing/experimenting on RPis, etc.
Today both seem to focus on IoT/"edge". Both do clustering and HA of the control plane though, so are in theory usable in production.
However, why would you use either of them in production? Yes, it's easier than vanilla k8s, but there are still a lot of moving parts, and to top it off, they're Rancher's or Canonical's specific flavour of those moving parts (e.g. microk8s uses dqlite for storage instead of etcd). So you might stumble on platform-specific edge cases, and you still have a big part of the k8s complexity to deal with (in the case of microk8s, it tries to abstract some of the complexity with wrappers, but when they fail, you're screwed).
> The API will keep evolving, but the basic objects like Deployments and StatefulSets that you need for a PaaS-like experience are quite stable
Stable now, but Ingress was in beta for quite some time, and when a beta API gets deprecated, you have to adapt.
> Kubernetes aims to provide all the features needed to run Docker-based applications including cluster management, scheduling, service discovery, monitoring, secrets management and more.
> Nomad only aims to focus on cluster management and scheduling and is designed with the Unix philosophy of having a small scope while composing with tools like Consul for service discovery/service mesh and Vault for secret management.
I'm still relatively green, so there are likely a bunch of Kubernetes nightmare scenarios I haven't encountered, but I recently stood up microk8s to provide workers for Jenkins and GitLab CI, and I thought the ergonomics of it were great: easy to get going, easy to deploy stuff with the integrated helm3, easy to access the dashboard and get metrics out of it.
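For reference, the "easy to get going" part amounted to enabling a handful of add-ons on a fresh install, something like this (the add-on selection reflects my setup, not a recommendation):

```python
import subprocess

# Each add-on ships with microk8s; `microk8s enable` turns it on in place.
for addon in ["dns", "helm3", "dashboard", "metrics-server"]:
    subprocess.run(["microk8s", "enable", addon], check=True)
```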
I'm sure there's still a gap to be bridged there between that and a PaaS which you literally just add as a git remote. But I don't think it's huge.
Microk8s is great for local development, but its operational complexity doesn't come close to that of running k8s in a production environment.
Having been on both sides of the aisle: in my opinion, K8s has a great UX for consumers, but it is a nightmare for the ops teams who maintain it. For a self-hosted version, anyway.
That's good to know; I've heard others say that it's fine if you just have a cluster of a few nodes, a private cloud, that kind of thing, particularly if it's "throwaway" compute like CI workers, as opposed to something genuinely high-availability.
Now, all that said, Canonical certainly advertises microk8s as being production-ready, production-grade, and suitable for use in production environments, for example in [1]. It definitely seems like it's meant to be far more serious than, say, minikube, which explicitly is just for local development.
Can you speak to specific limitations with microk8s, or point to resources which go into more depth on this?
Great question. I have been researching the same topic for the past 6-8 months. The problem is that they have only started advertising it as production-grade very recently, so I am not sure what limitations you will hit. Having said that, k3s running in production has the same issues that you will see in a managed Kubernetes cluster from a cloud provider.
MicroK8s and K3s are actually touted as orchestrators for production too. I know at least one organization that uses k3s in production. Of course it is a nightmare at scale, but running services without Kubernetes at that scale is more nightmarish.
I've learned this too, and that's why at my company (https://primcloud.com) we're building a PaaS for those who want the Heroku/Netlify experience, but building it in such a way that we can package it up and offer an enterprise solution where you can install it on your own infrastructure, like GitHub Enterprise. This allows you to have the same experience but be in full control.
Haha not trying to shamelessly plug, but it's hard to talk about the subject including features we're building without actually mentioning it.
Yes, it's container orchestration. We're built on top of Kubernetes.
Our idea for the enterprise version is: you deploy your own Kubernetes cluster, then install our Helm chart (or whatever), and it bootstraps and sets up the platform on your cluster.
Turns out there's an awful lot more to 'running your app' than finishing writing it and pushing to GitHub, which was the workflow Heroku promised.
I absolutely want to be able to write small personal projects and have them deploy on my cheap server in a sensible way by simply pushing to my git repository.
At the moment I'm using CapRover to do this, and it's so much better than doing it myself, but I think there's plenty of space to make this experience better.
I applaud the effort here, it looks like a worthy project. I like the idea of effortless git deployments & databases. Easy upgrades, backups, analytics etc.
But as others have pointed out, to call it a self-hostable Heroku & Netlify is indeed missing the point. It's a bit of an oxymoron, right?
The benefit of those services is their CDN network and the fact that they provide the platform so you don't have to maintain your own server: things I do not get when maintaining my own VPS.
Doesn’t mean I’m still not interested in this project, it looks pretty nice! But I would approach the marketing differently.
IMO you'd pay for Heroku because of the ability to have one vendor deal with your full stack, including Postgres, and having the ability to roll back / scale up with relative ease. You pay for that ease and support. (Everyone on the support team was a dev too)
For some teams it makes less sense than others. You can also probably find combinations of vendors to deal with most of the above.
A bonus feature of Heroku for a lot of teams is that their infrastructure bill comes from exactly ONE provider that covers your app servers, database, caches, etc. This eases billing/budget matters in some companies by an order of magnitude or two.
Looks promising. Judging by the screenshots, this solves exactly what I didn't like about Dokku: the need to configure the application from the terminal instead of via a convenient web panel.
Will it be possible to use Golang? Or to use a Dockerfile from the repository to build the container and run it? That way it could even compete with Portainer.
It will support other languages, not just Node.js. As for Dockerfiles, I don't see any problem with supporting that case as well, but it's not in my top priorities right now. :)
Interesting; that CLI-focused deploy/management is what I like about Dokku. It fits well into the Rails dev cycle, which rarely leaves the terminal / text editor.
Looks nice from the screenshots on the GitHub page. It seems very similar to CapRover (https://caprover.com/), with fewer features but a slightly nicer design.
I've used Heroku for over 7 years. Heroku's selling point is essentially devops-as-a-service. That's why they can get away with charging so much compared to the hosting competition. A "self-hostable Heroku" doesn't make sense, at least to me. With this, I'd have to do devops, like every other hosting platform. Granted, I did only scan the home page, so perhaps I'm missing something.
Something like this would be pretty handy in an org where I need to run my own servers for various reasons (e.g. I need to be on-prem, or pretending to be on-prem via some VPN connection to AWS or something) and still want devs to be able to have a Heroku-like deployment experience.
This seems like the sweet spot to me. Oftentimes there are smaller internal services which need to be hosted, and at a small-to-medium-sized org there's a good chance there won't be a proper budget allocated for devops, k8s, and the like. It would be much easier for said org to pay a few hundred (a few thousand?) a year and not worry about it. Or make it free and bill them for support when they want to scale up... It's a bit easier to charge money when the customer is making money in the process.
Think about it this way. Let's say you're a company with 500 engineers. Heroku is still a fantastic abstraction to provide to product teams, but it may make economic sense to have a 5-person team own and maintain that infrastructure rather than paying Heroku to do it. This makes even more sense if you've got your own data centers and need data locality.
Building such a thing is not that high-effort; reliability, extensibility, and scalability are the things that are not easy to implement. At a small scale, like hobby projects, it definitely makes sense to use this kind of tool, but not for cases where you want to press a button to provision new nodes in different regions with, e.g., GKE Autopilot or Render.com, or where you rely on a bulletproof CDN, e.g. from Vercel or Netlify.
Domain through FreeDNS (or GoDaddy, or Route 53, or Namecheap, or whatever), and cert through Let's Encrypt? Both of those steps are pretty dead simple individually, though it's admittedly nice to knock out both at once.
That's essentially what I'm doing for my own website, as one example. Granted, that's on a VPS instead of a home NAS, but the idea's the same; neither a domain registrar nor an ACME provider necessarily cares.
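For what it's worth, the cert half really is one command. A sketch (again subprocess-wrapped for consistency; certbot must be installed, port 80 reachable, and the domain and email are placeholders):

```python
import subprocess

# Obtain a certificate using certbot's standalone authenticator,
# which binds port 80 to answer the ACME HTTP-01 challenge.
subprocess.run([
    "certbot", "certonly", "--standalone",
    "-d", "example.com",        # placeholder domain
    "-m", "admin@example.com",  # placeholder contact email
    "--agree-tos", "--non-interactive",
], check=True)
```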
Surprised by the number of snarky comments on here pointing out the obvious. "Self-hostable X" always involves some kind of compromise and people interested in self-hosting are aware of this.
The massive popularity of things like cPanel shows that there is most definitely a market for people who want some assistance setting things up, but don't want to go down a fully managed route.
Very nice. Especially stoked you've used Svelte/Routify for the management app. I've been looking for a reference app for this setup, so thanks for that :)
For comparison, the one I use is called CapRover, and it works great for my small use cases. It's like Dokku with a web interface, and can do rebuilds from git webhooks.
Another vote for CapRover: it has many add-ons, and it's easy to add your own Docker-based apps as well.
Not a full production system, but it has a nice out-of-the-box interface and monitoring.
If it removes its single points of failure, it might even turn out to compete with the big guys.
It should only require you to provide source to any changes you make to Coolify itself (or code that is integrated with Coolify) if you make a public deployment of Coolify (e.g. you launch a Heroku competitor).
Private (i.e. accessible to employees or contractors only) deployments of Coolify shouldn't trigger the requirement. I've seen some difference of opinion as to whether giving access to a supplier triggers the requirement; giving access to a client generally does require you to provide source to them.
Code merely deployed by Coolify should not be affected in any case.
There's no difference between a "private" deployment and a "public" deployment. The AGPL requires that if a client of the service asks for the source, you are obliged to give it to them.
So if you have a private network but give a client/customer access to it, and they ask for the source, you will be obliged to give it to them.
It will be more than a PaaS. It's planned to have integrated services that your application needs, like analytics, error reporting, feedback management, etc.
Currently scaling is not possible, but most parts are built with scaling in mind. I would like to support Kubernetes later on, but first Docker Swarm, which is way simpler.
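For context on the Swarm-first choice: with the Docker SDK for Python, a single host becomes a one-node Swarm running a replicated service in a few calls (image, service name, and port mapping are arbitrary examples):

```python
import docker  # pip install docker

client = docker.from_env()
client.swarm.init()  # promote this Docker host to a one-node manager

# Run a service; Swarm handles restarts, scaling, and port publishing.
client.services.create(
    "nginx:alpine",
    name="web",  # hypothetical service name
    endpoint_spec=docker.types.EndpointSpec(ports={80: 80}),
)
```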
It looks nice, but I couldn't envisage using it personally. I love Netlify and its whole CI/CD integration, but more importantly, it's free for 100 GB/month. After that I'd just pay (as 100 GB/month for a static site seems a fair amount of virtual footfall).