He and several other maintainers, community helpers, distribution team members, and sponsors go a long way to making this project tick. Thanks for all you do!
Caddy is my server/proxy of choice. It replaced Traefik, but I have one complaint: the documentation seems very outdated. The examples in the official docs or elsewhere on the internet never work, and I always find myself going to Docker Hub and then browsing the source for hints of the correct configuration. This happens especially when configuring Let's Encrypt or ZeroSSL, where the names of the properties keep changing. There are other examples, but this is my main gripe.
Is there a place where we can contribute to more up to date docs?
Could you be more specific about these complaints? What examples don't work? We can't work on improving the docs if we don't get specific and actionable feedback. The docs are found at https://github.com/caddyserver/website if you want to propose any changes.
It's been in this state since v2 was released. You will also find either trivial examples in the simple Caddyfile format or the very complex JSON format. Very often I have to spend a long time figuring out whether a given JSON option is even available in the Caddyfile - basically googling for 30+ minutes.
Could you be more specific about these complaints? What examples don't work? We can't work on improving the docs if we don't get specific and actionable feedback. The docs are found at https://github.com/caddyserver/website if you want to propose any changes. You can also come ask for help on the forums https://caddy.community if you can't find what you're looking for.
I'm running it on an RPi that's hosting various home automation and hobby projects, all serving HTTP to the loopback interface only, with Caddy acting as a reverse proxy, and I've never even had to think about HTTPS. Adding a new service is adding a single line to my config file, and the rest happens automatically.
Finally, no more "trust this self-signed certificate" even for toy projects, no more port forwarding, no more copy-pasting dubious PEM and CRT files across services and Docker containers, etc.
Caddy's defaults usually mean your config ultimately ends up at a quarter the size of the equivalent Nginx config, or less. Often the 'reverse_proxy' directive's defaults are good enough, if the app you're proxying to is reasonably modern.
I unfortunately can't remember if there is some header/footer I'm omitting here – this file is managed by Ansible for my setup, but I believe it should effectively be the full Caddyfile.
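Something along these lines, with hypothetical hostnames and ports standing in for my actual services:

```
ha.example.home {
	reverse_proxy localhost:8123
}

grafana.example.home {
	reverse_proxy localhost:3000
}
```

Each new service really is just another block like these, and Caddy handles the rest.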
Caddy is such a delight! I can't recommend it fast enough when somebody mentions they are on nginx.
Small random feedback: I like the idea of JSON configuration, but most of the common docs/recipes are documented only in the Caddyfile format. I couldn't figure out the full JSON schema that was expected, so I wound up generating a Caddyfile string instead for my programmatic control. Maybe there is some translation tooling that could help?
I hear ya. The JSON config is definitely not trivial. I wrote our JSON docs and strove to make them easy to follow. You can traverse into the module structure piece by piece here: https://caddyserver.com/docs/json/
There is also a Caddy plugin by @abiosoft that can generate a JSON schema for your custom Caddy builds, which can then be used by IDEs to give you autocomplete and validation: https://github.com/abiosoft/caddy-json-schema
I also sometimes recommend writing your config by hand in the Caddyfile, then using `caddy adapt` to get the JSON equivalent. (It might not always be the prettiest JSON, since the adapter is only so smart.) But then you can fine-tune the JSON a little easier, possibly. Hope that helps!
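For example, to see the JSON for a Caddyfile (the path is just wherever yours lives):

```sh
caddy adapt --config /etc/caddy/Caddyfile --pretty
```

The --pretty flag formats the output for human reading; redirect it to a file if you want to start maintaining the JSON directly.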
I'm a heavy nginx user - there's not much that I haven't already done with it and I know the documentation quite well. Why would someone like me change to Caddy? I can see there are some NGINX Plus features that are free in Caddy.
I wouldn't put effort into changing if you have no troubles with nginx. Caddy has some neat features like live configuration updates and built-in Let's Encrypt support. Plus the JSON configuration is a great idea, once I get it working.
The real reason is the thing you highlighted: "I can see there are some NGINX Plus features that are free in Caddy." Sadly it seems that NGINX is now crippleware. Personally I find it risky to depend on open source organizations who refuse to accept important features into the project so they can sell those features as proprietary.
You could also see it another way - nginx has to make money to fund further development. By taking features that are not critical to 90% of users, and making them paid features, you help ensure the longevity of the project. This could be a good thing.
Back in the day, there was an issue with Caddy regarding having to pay for it if you bundled certain plugins with it. I can't remember the details now - it must have been years ago. This is why I ultimately landed on traefik and nginx as my go-to setup.
And to be clear, only builds produced by the official Caddy website used to be commercially licensed (not anymore). But the code has always been open source, and you could always build from source for free for commercial use.
Pretty nice, handles routing to your containers and manages LetsEncrypt for you. Response times are super fast too (<= 100ms for an ASP Web API application which uses Postgres via Entity Framework).
I used to use this https://github.com/nginx-proxy/nginx-proxy (used to be under jwilder's GitHub account) which was also a good tool, and then used behind CloudFlare for free SSL certificates, but the Caddy container with automatic LetsEncrypt is fantastic also.
I just used Caddy for the first time for https://lists.sh and was super impressed by the automatic TLS. The config was also 2 lines of code. Great work!
Thank you! I've been having a blast with the workflow. I think I'm going to make a sibling service that does the same thing but a more traditional blog platform with markdown support.
I wish Caddy went all in on the Caddyfile and dumped the JSON. The meagre examples are split between the two formats, and it's tough trying to figure out how to do anything that isn't straightforward. It's even worse when you find out some config isn't supported by the Caddyfile and you have to export to a giant JSON mess; I look at the result and think: this is not what I signed up for.
JSON will always be the underlying config. The reason is that it's a 1:1 map to the Go structs. Also, plenty of users appreciate being able to change their config programmatically via the admin API.
We're doing our best with the Caddyfile to cover every use case. If there's something you're struggling with, open an issue if it's clearly missing, or ask on the forums, and we'll get you sorted.
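To illustrate the 1:1 mapping I mentioned, here's a minimal JSON config (hostname and upstream are placeholders) that serves one reverse-proxied site; every object here corresponds directly to a Go struct:

```json
{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":443"],
          "routes": [
            {
              "match": [{ "host": ["example.com"] }],
              "handle": [
                {
                  "handler": "reverse_proxy",
                  "upstreams": [{ "dial": "localhost:8080" }]
                }
              ]
            }
          ]
        }
      }
    }
  }
}
```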
I use Caddy in my production systems with one exception: websockets. Yes, it works, and yes, it is easier to configure than nginx. But there is this one bug that I can't really put my finger on, which makes it hard to debug and create an issue for.
Under high load, it seems there are messages that are dropped and then the connection gets lost. It even affects the origin host because it keeps the connections alive and they keep piling up. Killing Caddy brings the upstream service back to life.
With nginx I don't have these issues and the websockets are more stable. It is just harder to set up and needs multiple moving pieces for auto-SSL instead of it being taken care of by Caddy.
Can you open an issue with more details? I don't think this has been reported before, which is why it hasn't been fixed yet. We'd like to look into it.
Love caddy. We use it 100% on a whole bunch of servers and products. It’s the only tool that can do what it does.
But that JSON config file? It's a mistake. And I'm being charitable and nice here. Especially the part where there is an API to update the JSON file.

Jesus Christ, Matt. Just use the Caddyfile already. It's easy, and easy for everyone to read. Why make a config file hard to grok or follow for multiple team members?
JSON is way easier to work with for "machines". The Caddyfile is a user-facing layer on top of that. The Caddyfile doesn't map as nicely to something usable at runtime. That was the entire point of the Caddy v2 rewrite; the v1 Caddyfile was inflexible and was turning into spaghetti.
Like Matt said, I don't understand the problem here. Just use the Caddyfile. What problem do you have with it specifically? If there's a feature you're missing or aren't sure how to use, open an issue or ask for help on the forums.
Loved the JSON file. I was a newcomer to Caddy and started using it when v2 just came out. The JSON config is really simple to write--as long as you have a huge screen :) Never really bothered with the Caddyfile syntax.
Also thanks for fixing the slow JSON schema docs pages, really helped a lot.
Glad you like it! I spent months designing the Caddy config and how the JSON structure would look/work.
The docs pages took longer to make than the core of Caddy 2 did, so I'm glad you like those too and find them helpful. That performance fix was a breath of fresh air for me too.
Anyway, nice to hear from another happy JSON consumer. I know several large companies using the JSON exclusively as well.
... I don't get it, why not just use the Caddyfile yourself? Lots of people need the JSON config. But if it doesn't help you, you don't even have to know or think about JSON when you use the Caddyfile.
Caddy doesn't support using the complete domain name for a site without messing with the configuration. By not supporting this out of the box, the maintainer is causing a ton of sites to be faulty.
Hey, as an nginx user for many, many years, I don't understand why I would want to switch. Can you please give a few examples of why it made sense for you? Especially in a Kubernetes environment, I have a hard time justifying this.
In my experience Caddy is easier to work with on small projects. It's easy to set up TLS and routing, and it lets you focus on the project itself. For more advanced and performance-sensitive use cases I still prefer HAProxy.
Unfortunately it still does not come with good rate limiting out of the box. I found https://github.com/mholt/caddy-ratelimit which is "work in progress" only - it's not clear if that is OK for production.
I really like the slickness of Caddy, but I do not understand how a web server without rate limiting can be used for anything serious; you will need that for any website with some exposure.
Not sure if anything else is missing in comparison to nginx; I stopped my research at this first, very important point. I think streaming would be the next item on my list of things you might realize nginx is not that bad at...
Wow the new features sound great. Does tailscale run a local DNS server or provide AAAA or SRV records that the new dynamic upstream could connect to directly? The combo of dynamic upstream and dynamic HTTPS certs automatically from tailscale would be really nice. I'd love to just run tailscale and caddy locally on all machines and instantly have 100% proper browser HTTPS and DNS name access to all my tailscale services.
If tailscale doesn't provide it, anyone know of a DNS server that's caddy-like in simplicity and dead simple ease of running on localhost with minimal config, secure by default, etc?
Basically this is more about grabbing the TLS cert from Tailscale's daemon. How you handle the request is up to you. It's not related to the dynamic upstreams feature.
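Roughly, a sketch of how that looks in a Caddyfile (the .ts.net hostname is hypothetical - it's whatever your tailnet assigns - and tailscaled must be running locally):

```
mymachine.my-tailnet.ts.net {
	reverse_proxy localhost:8080
}
```

When the site address is a Tailscale hostname like this, Caddy fetches the certificate from the local Tailscale daemon instead of going to an ACME CA.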
Is Caddy scriptable? I don't know much about it. But ever since I discovered nginx-lua-module and OpenResty it's been pretty much a game-changer for me. Curious if other popular webservers have done similar? Not really talking about things like mod_perl/php. Although I guess those are somewhat the same thing.
Kind of (indirectly); we don't have an interpreter built-in, but the JSON config is managed natively through an API: https://caddyserver.com/docs/api
So yeah, you can write scripts that control Caddy while it's online.
The nice thing about the API is you can code in whatever environment or language you prefer: no need to learn Caddy's scripting language or anything like that. (The config is JSON, so nearly every language can output it.)
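A trivial example with curl, assuming the default admin endpoint at localhost:2019 and a full config saved in caddy.json:

```sh
# Read the currently running config
curl localhost:2019/config/

# Replace the entire running config
curl localhost:2019/load \
  -H "Content-Type: application/json" \
  -d @caddy.json
```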
True, that's one reason we haven't prioritized scripting too. Writing a plugin that is natively compiled in is just as easy and can perform even better.
It's a reputation you have built, and your future actions will be interpreted with that in mind. Just like some of us will continue to remember what Microsoft has done to Linux.
I'm sorry, but that's ridiculous. Reputation is built from patterns. There was no pattern there. It was a single isolated event. And people choose to keep bringing it up.
Sounds like it would allow automatic handling of either failed backend nodes or adding new capacity over time without restarting the proxy, but I could be reading it wrong.
Basically this, yeah. You can add or take down backends without having to keep web server config in sync and reload it. (Though config reloads are pretty light anyways.)
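For instance, with the SRV source it looks something like this in a Caddyfile (the SRV name here is a hypothetical Consul-style service):

```
example.com {
	reverse_proxy {
		dynamic srv _api._tcp.service.consul
	}
}
```

The upstream list is resolved at request time rather than fixed when the config loads, so backends can come and go without a reload.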
Does it have implications for long-lived connections like SSE and websockets? It seems like restarts would have been disruptive to those but now might not be?
Possibly; the long-lived connections eventually have to end when reloads (note: restarts and reloads are different) happen, to free up resources -- otherwise they leak. (The grace period is configurable and can even be disabled.)
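For example, as a Caddyfile global option (the duration here is arbitrary):

```
{
	grace_period 10s
}
```

During a reload, old connections get that long to finish before being forcibly closed.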
This bit us in a recent project where we have very long ongoing downloads, where even a very high grace period wouldn't really help.
I wonder whether it's technically generally feasible (possibly depends on net/http2's capabilities) to handover the connection from the old to the new server instance, comparable to what systemd does in terms of FD passing for socket-activated services.
The problem is that existing connections are inside of async functions (goroutines) that have references to the old config. AFAIK, there's no way in Go's stdlib to kill those goroutines while keeping the connection and handing it to another fresh goroutine.
To be clear, the new config is running at the same time as the old is still tearing down; all new connections get handled by the new config. Listeners are shared and transferred between servers.
We're all ears, though, if someone with deep knowledge of Go's internals can point us in the right direction.
Nginx is written in C and optimized to avoid dynamic memory allocation, whereas Go's net/http stack performs a lot of allocations, which I think are the main source of the observed performance difference (together with different approaches to socket handling and polling, I think). There are libraries like fasthttp that can significantly speed up Go HTTP performance, but they have their own set of tradeoffs.
So yeah, if you worry about peak throughput on a CPU-bound system, Caddy probably isn't the right choice, though I'm not sure nginx is, either. A hardware load balancer, at least as a first stage, is probably saner in those cases. For everything else I would always trade a bit of performance for ease of use. I would, e.g., never dare to touch the nginx codebase, but writing a plugin for Caddy doesn't intimidate me at all.
After testing my app with it in local dev, I noticed what the benchmark above says: requests per second is about half of Nginx's. So I switched back to Nginx.
"Project Conncept is an experimental layer 4 app for Caddy. It facilitates composable handling of raw TCP/UDP connections based on properties of the connection or the beginning of the stream."
Yeah, if you're looking for a package, you'll want to stick to the Cloudsmith .deb, since it's bound to Caddy's GitHub release workflow and so guaranteed to stay up to date: https://cloudsmith.io/~caddy/repos/stable/packages/
P.S., from the guidelines:
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
That's correct. See https://caddyserver.com/docs/install for our recommended installation methods. Only the .deb via Cloudsmith is fully automated; every other method requires some humans to publish the release, for a variety of reasons.
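For reference, the Cloudsmith setup at the time of writing is below; check the install docs linked above for the current commands:

```sh
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' \
  | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' \
  | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
```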
Where does Caddy live in a Kubernetes world -- or does it? (I'm not the most knowledgeable with either, so genuinely curious how these pieces all fit together).
I think just like any other server, but with fewer things to set up (HTTPS, certificates, etc). We also have an ingress controller in the works: https://github.com/caddyserver/ingress
(Not a Kubernetes user, so my answer is probably not the best.)
Caddy, at its simplest, is an HTTP server. So you could use it to front an application that otherwise isn't well suited to taking direct HTTP requests, and Caddy would bring you other features like TLS support.

However, I think in the K8s world Caddy makes the most sense as an Ingress Controller. There is even a project for exactly that: https://github.com/caddyserver/ingress

All traffic would terminate first at Caddy (handling TLS, HTTP/1/2/3, etc.), then be passed back to your application's service/pod.
In addition to what others have said, I think it’s a handy option for a lightweight http backend or proxy when you need it to glue something together. For example, quickly fronting an s3 bucket, or serving static content generated by another container in the same pod, etc.
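As a sketch, fronting an S3 bucket might look roughly like this (bucket name and hostnames are hypothetical, and this assumes path-style S3 URLs):

```
assets.example.com {
	rewrite * /my-bucket{uri}
	reverse_proxy https://s3.amazonaws.com {
		header_up Host s3.amazonaws.com
	}
}
```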
"Note that DNS is limited in size per request, even when performing DNS TCP queries.
For services having many instances (more than 500), it might not be possible to retrieve the complete list of instances for the service.
When DNS SRV response are sent, order is randomized, but weights are not taken into account. In the case of truncation different clients using weighted SRV responses will have partial and inconsistent views of instances weights so the request distribution could be skewed from the intended weights. In that case, it is recommended to use the HTTP API to retrieve the list of nodes."
I'll say that it's unlikely we'll work on this unless a sponsor funds the development. The dynamic upstreams feature was funded by a sponsor requiring improved SRV support (we did have SRV support before v2.5.0, but it was rudimentary and insufficient for many use cases).
Yeah, lego's API and unreliability were the ultimate driving factors behind writing my own ACME stack. Actually, lego was originally commissioned for the Caddy project -- but then Caddy's needs evolved to supporting enterprises with tens of thousands of domains, and lego buckled under that responsibility due to diverging priorities. Lego is primarily used by Traefik now, and Caddy uses CertMagic+ACMEz to provide more robust, reliable certificate management that scales well.
I can guarantee CertMagic+ACMEz is the best cert management available in the ecosystem. It won't let you down.
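For a feel of how little code it takes, a minimal sketch using CertMagic's high-level API (domain and email are placeholders; it needs permission to bind ports 80 and 443):

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/caddyserver/certmagic"
)

func main() {
	// ACME account details (placeholders)
	certmagic.DefaultACME.Email = "admin@example.com"
	certmagic.DefaultACME.Agreed = true

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello over managed TLS")
	})

	// Serves HTTP->HTTPS redirects plus HTTPS with automatic certificates.
	if err := certmagic.HTTPS([]string{"example.com"}, handler); err != nil {
		panic(err)
	}
}
```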