Show HN: Caddy v2.5.0 (github.com/caddyserver)
269 points by francislavoie on April 26, 2022 | hide | past | favorite | 112 comments



Francis definitely gets to claim the "Show HN" on this one, as he authored more commits to Caddy during this timeframe than I did: https://github.com/caddyserver/caddy/graphs/contributors?fro...

He and several other maintainers, community helpers, distribution team members, and sponsors go a long way to making this project tick. Thanks for all you do!


Caddy is my server/proxy of choice. It replaced Traefik, but I have one complaint: the documentation looks very outdated. The examples in the official docs or elsewhere on the internet never work, and I always find myself going to Docker Hub and then browsing the source for hints of the correct configuration. This happens especially when configuring Let's Encrypt or ZeroSSL, where the names of the properties keep changing. There are other examples, but this is my main gripe.

Is there a place where we can contribute to more up to date docs?


Could you be more specific about these complaints? What examples don't work? We can't work on improving the docs if we don't get specific and actionable feedback. The docs are found at https://github.com/caddyserver/website if you want to propose any changes.


I agree. I really liked Caddy, when I used it a while back, but the documentation did stand out as being particularly in need of some love.


It's been in this state since v2 was released. You will also find either trivial examples in the simple Caddyfile format or very complex JSON. Very often I have to spend a long time figuring out whether a Caddyfile equivalent even exists for a given JSON option - basically googling for 30+ minutes.


Could you be more specific about these complaints? What examples don't work? We can't work on improving the docs if we don't get specific and actionable feedback. The docs are found at https://github.com/caddyserver/website if you want to propose any changes. You can also come ask for help on the forums https://caddy.community if you can't find what you're looking for.


Thank you all. Caddy is an absolute blessing!


Caddy is absolutely amazing.

I'm running it on an RPi that's hosting various home automation and hobby projects, all serving HTTP to the loopback interface only, with Caddy acting as a reverse proxy, and I've never even had to think about HTTPS. Adding a new service is adding a single line to my config file, and the rest happens automatically.

Finally, no more "trust this self-signed certificate" even for toy projects, no more port forwarding, copy-paste-ing dubious PEM and CRT files across services and Docker containers etc.
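For anyone curious what "one line per service" looks like in practice, a Caddyfile for that kind of setup might read like this (hostnames and ports here are made up for illustration):

```caddyfile
# Each site block gets automatic HTTPS; the apps listen on loopback only.
home.example.com {
	reverse_proxy localhost:8123
}

grafana.example.com {
	reverse_proxy localhost:3000
}
```

Adding a service is just another block like the ones above.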


I'm definitely going to put Caddy on my next NAS build, sounds like a delightful proxy to setup.


Setting that up with nginx and acme.sh is pretty easy as well.

One benefit I found with using nginx is that services are more likely to have nginx proxy example configs, and sometimes these are non-trivial, eg.:

https://docs.mattermost.com/install/config-proxy-nginx.html

I guess with caddy you're on your own, which doesn't look like fun.


Caddy's defaults usually mean your config ends up a quarter the size of the equivalent nginx config, or less. Often the 'reverse_proxy' directive's defaults are good enough, if the app you're proxying to is reasonably modern.

But if you need help, we're very active on https://caddy.community/ to answer questions :)


Check out the config adapters:

https://caddyserver.com/docs/config-adapters

Now you can use your nginx config with Caddy!


here is my example config for a caddy reverse proxy with automatic tls:

    sub.domain.com {
        reverse_proxy localhost:8080
    }


Would you mind sharing a (redacted if you like) copy of your config file in a Gist? Purely out of curiosity - your setup sounds really neat.


Sure, here's a redacted example:

https://gist.github.com/lxgr/303b1a3cd87005edb43b91545c5306b...

I unfortunately can't remember if there is some header/footer I'm omitting here – this file is managed by Ansible for my setup, but I believe it should effectively be the full Caddyfile.


That's really useful, thanks!


Caddy is such a delight! I can't recommend it fast enough when somebody mentions they're on nginx.

Small random feedback: I like the idea of JSON configuration, but most of the common docs/recipes are documented only in the Caddyfile format. I couldn't figure out the full JSON schema that was expected, so I wound up generating a Caddyfile string instead for my programmatic control. Maybe there is some translation tooling that could help?


Thanks for recommending it!

I hear ya. The JSON config is definitely not trivial. I wrote our JSON docs and strove to make them easy to follow. You can traverse into the module structure piece-by-piece here: https://caddyserver.com/docs/json/

There is also a Caddy plugin by @abiosoft that can generate a JSON schema for your custom Caddy builds, which can then be used by IDEs to give you autocomplete and validation: https://github.com/abiosoft/caddy-json-schema

I also sometimes recommend writing your config by hand in the Caddyfile, then using `caddy adapt` to get the JSON equivalent. (It might not always be the prettiest JSON, since the adapter is only so smart.) But then you can fine-tune the JSON a little easier, possibly. Hope that helps!
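As an illustration of that workflow: adapting a one-site Caddyfile like `example.com { reverse_proxy localhost:8080 }` with `caddy adapt --pretty` emits JSON roughly along these lines (reproduced from memory, so details such as the `srv0` key and the `subroute` wrapping may differ):

```json
{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":443"],
          "routes": [
            {
              "match": [{"host": ["example.com"]}],
              "handle": [
                {
                  "handler": "reverse_proxy",
                  "upstreams": [{"dial": "localhost:8080"}]
                }
              ],
              "terminal": true
            }
          ]
        }
      }
    }
  }
}
```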


Thanks! Somehow you knew that I was struggling with JSON config of a custom Caddy build (for the Cloudflare extension)

caddy-json-schema looks great, I'll try this next time!

not sure if I totally missed `caddy adapt` or if I had troubles with it.


A few complete, fully working example JSON configuration files for common use cases would be really useful as well.


Agreed. (Although, for now, you can run caddy adapt on your Caddyfile to get the JSON you're looking for.)


I'm a heavy nginx user - there's not much that I haven't already done with it and I know the documentation quite well. Why would someone like me change to Caddy? I can see there are some NGINX Plus features that are free in Caddy.


I wouldn't put effort into changing if you have no troubles with nginx. Caddy has some neat features like live-configuration and built in LetsEncrypt. Plus the JSON configuration is a great idea, once I get it working.

The real reason is the thing you highlighted: "I can see there are some NGINX Plus features that are free in Caddy." Sadly it seems that NGINX is now crippleware. Personally I find it risky to depend on open source organizations who refuse to accept important features into the project so they can sell those features as proprietary.


You could also see it another way - nginx has to make money to fund further development. By taking features that are not critical to 90% of users, and making them paid features, you help ensure the longevity of the project. This could be a good thing.


Back in the day, there was an issue with Caddy where you had to pay for it if you bundled certain plugins with it. I can't remember the details now - must have been years ago. This is why I ultimately landed on traefik and nginx as my go-to setup.

Is this still the case?


No. See https://github.com/caddyserver/caddy/issues/2786

And to be clear, only builds produced by the official Caddy website used to be commercially licensed (not anymore). But the code has always been open source, and you could always build from source for free for commercial use.


Thanks for the clarification! So they changed it 3 years ago. I will definitely consider it in future projects.

> But the code has always been open source, and you could always build from source for free for commercial use.

Back in the day there were some workaround projects that would build it for you. Caddy was relatively new then, so I abandoned it for my projects.


Single binary is nice, live config changes, let’s encrypt is built in.


And memory safety! No C at the edge.


nginx has a pretty decent security record, though.


OpenSSL does not, however. (see Heartbleed)


It seems that this command will translate the Caddyfile to json.

./caddy adapt


I've been using this Docker image in a Docker Compose configuration to run some apps on a VPS

https://github.com/lucaslorentz/caddy-docker-proxy

Pretty nice, handles routing to your containers and manages LetsEncrypt for you. Response times are super fast too (<= 100ms for an ASP Web API application which uses Postgres via Entity Framework).
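As a rough sketch of how that setup hangs together (image tag and label syntax are from memory of the project's README, so double-check them there):

```yaml
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Needs the Docker socket to watch container labels
      - /var/run/docker.sock:/var/run/docker.sock

  app:
    image: traefik/whoami
    labels:
      # caddy-docker-proxy turns these labels into a Caddyfile site block
      caddy: app.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"
```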


Best kept secret of Caddy+Docker.


Not a secret now haha!

I used to use this https://github.com/nginx-proxy/nginx-proxy (used to be under jwilder's GitHub account) which was also a good tool, and then used behind CloudFlare for free SSL certificates, but the Caddy container with automatic LetsEncrypt is fantastic also.


I just used Caddy for the first time for https://lists.sh and was super impressed by the automatic TLS. The config was also 2 lines of code. Great work!


Just wanted to say this is a cool little project. I hope you get some interesting lists on there!


Hey thanks so much! I'm having a blast publishing lists and discovering what others are posting.


very cool interface! login thru public key + ssh and lists contributed via scp


Thank you! I've been having a blast with the workflow. I think I'm going to make a sibling service that does the same thing but a more traditional blog platform with markdown support.


s/deciminate/disseminate/


Thank you! I’ve made the change and deploying shortly


I wish Caddy went all in on the caddyfile and dumped the JSON. The meagre examples are split between the two formats, it’s tough trying to figure out how to do something that’s not straightforward. Even worse when you find out some config isn’t supported by caddyfile and have to export to a giant JSON mess, I look at the result and think this is not what I signed up for.


JSON will always be the underlying config. The reason is that it's a 1:1 map to the Go structs. Also, plenty of users appreciate being able to change their config programmatically via the admin API.

We're doing our best with the Caddyfile to cover every usecase. If there's something you're struggling with, open an issue if it's clearly missing, or ask on the forums, and we'll get you sorted.


I should add I love the auto SSL, I have no idea why nginx hasn’t added this feature yet. I’ve used lua-resty and it isn’t exactly trivial to set up.


I was expecting this to be posted by mholt. :)

I use Caddy in my production systems with one exception: websockets. Yes it works, yes it is easier to configure compared to nginx. But there is this one bug that I can't really put my finger on, which makes it hard to debug and create an issue for.

Under high load, it seems there are messages that are dropped and then the connection gets lost. It even affects the origin host because it keeps the connections alive and they keep piling up. Killing Caddy brings the upstream service back to life.

With nginx I am not having these issues and the websockets are more stable. It is just harder to set up and needs multiple moving pieces for autossl instead of it being taken care of by Caddy.


Can you open an issue with more details? I don't think this has been reported before. So that's why it hasn't been fixed yet. But we'd like to look into it.


Will do, thanks. It has just been this phantom issue that I haven't had a chance to compile enough convincing evidence to open an issue with.


Love caddy. We use it 100% on a whole bunch of servers and products. It’s the only tool that can do what it does.

But that JSON config file?

It’s a mistake. And I’m being charitable and nice here.

Especially the part where there is an api to update the JSON file.

Jesus Christ Matt. Just use the caddyfile already. It’s easy, easy to read by everyone. Why make a config file hard to grok or follow by multiple team members?

I don’t get that part.


JSON is way easier to work with for "machines". The Caddyfile is a user-facing layer on top of that. The Caddyfile doesn't map as nicely to something usable at runtime. That was the entire point of the Caddy v2 rewrite; the v1 Caddyfile was inflexible and was turning into spaghetti.

Like Matt said, I don't understand the problem here. Just use the Caddyfile. What problem do you have with it specifically? If there's a feature you're missing or aren't sure how to use, open an issue or ask for help on the forums.


Loved the JSON file. I was a newcomer to Caddy and started using it when v2 just came out. The JSON config is really simple to write--as long as you have a huge screen :) Never really bothered with the Caddyfile syntax.

Also thanks for fixing the slow JSON schema docs pages, really helped a lot.


Glad you like it! I spent months designing the Caddy config and how the JSON structure would look/work.

The docs pages took longer to make than the core of Caddy 2 did, so I'm glad you like those too and find them helpful. That performance fix was a breath of fresh air for me too.

Anyway, nice to hear from another happy JSON consumer. I know several large companies using the JSON exclusively as well.


... I don't get it, why not just use the Caddyfile yourself? Lots of people need the JSON config. But if it doesn't help you, you don't even have to know or think about JSON when you use the Caddyfile.


Caddy doesn't support using the complete domain name for a site without messing with the configuration. By not supporting this out of the box the maintainer is causing a ton of sites to be faulty.

https://github.com/caddyserver/caddy/issues/1632


After more than 10 year nginx, I switched to Caddy and never regretted a moment.


Hey, as a nginx user for many many years, I don't understand why would I want to switch. Can you please give a few examples why it made sense for you? Especially with kubernetes environment I have a hard time trying to justify this.


In my experience Caddy is easier to work with on small projects. Easy to setup tls and routing and lets you focus on the project itself. On more advanced and performance sensitive use cases I still prefer HAProxy.


That's awesome!


Unfortunately it still does not come with good rate limiting out of the box. I found https://github.com/mholt/caddy-ratelimit, which is marked "work in progress" only - not clear if it's OK for production.

I really like the slickness of caddy, but I do not understand how a web server without rate limiting can be used for anything serious, you will need that for any website with some exposure.

Not sure if anything else is missing in comparison to nginx, I stopped research at this first very important point. I think streaming would be my next item on the list of things that you might realize nginx is not that bad at...


Really glad caddy is finally getting some love. It's such a great example of having great defaults, while not drowning you in magic.


Wow the new features sound great. Does tailscale run a local DNS server or provide AAAA or SRV records that the new dynamic upstream could connect to directly? The combo of dynamic upstream and dynamic HTTPS certs automatically from tailscale would be really nice. I'd love to just run tailscale and caddy locally on all machines and instantly have 100% proper browser HTTPS and DNS name access to all my tailscale services.

If tailscale doesn't provide it, anyone know of a DNS server that's caddy-like in simplicity and dead simple ease of running on localhost with minimal config, secure by default, etc?


You can learn about the Tailscale feature here: https://tailscale.com/blog/caddy/

Basically this is more about grabbing the TLS cert from Tailscale's daemon. How you handle the request is up to you. It's not related to the dynamic upstreams feature.


I remember being a newgrad at my startup it was a huge pain to get HTTPS up and running, until I ran into Caddy. Loved it!


Glad you enjoy it!


I'm seriously considering moving from Nginx to Caddy. There seem to be far fewer footguns involved in configuring it.


Caddy is one of the rare pieces of software I can recommend without caveats. It just makes my life easier, no drawbacks.


Is Caddy scriptable? I don't know much about it. But ever since I discovered nginx-lua-module and OpenResty it's been pretty much a game-changer for me. Curious if other popular webservers have done similar? Not really talking about things like mod_perl/php. Although I guess those are somewhat the same thing.


Kind of (indirectly); we don't have an interpreter built-in, but the JSON config is managed natively through an API: https://caddyserver.com/docs/api

So yeah, you can write scripts that control Caddy while it's online.

The nice thing about the API is you can code in whatever environment or language you prefer: no need to learn Caddy's scripting language or anything like that. (The config is JSON, so nearly every language can output it.)
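As a sketch of that kind of scripting (the `/load` endpoint and default admin port are per the docs; the config structure here is a hypothetical minimal example, not what any particular `caddy adapt` run would emit):

```python
import json
from urllib import request

# Hypothetical minimal Caddy JSON config: one HTTPS site reverse-proxied
# to a local app. "srv0" is just an arbitrary server label.
config = {
    "apps": {
        "http": {
            "servers": {
                "srv0": {
                    "listen": [":443"],
                    "routes": [{
                        "match": [{"host": ["example.com"]}],
                        "handle": [{
                            "handler": "reverse_proxy",
                            "upstreams": [{"dial": "localhost:8080"}],
                        }],
                    }],
                }
            }
        }
    }
}

body = json.dumps(config).encode()

def load_config(admin="http://localhost:2019"):
    """POST the whole config to a running Caddy's /load endpoint."""
    req = request.Request(
        admin + "/load",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)  # raises if Caddy rejects the config

# Without a running Caddy we can still sanity-check the document locally:
handler = config["apps"]["http"]["servers"]["srv0"]["routes"][0]["handle"][0]
print(handler["handler"])  # reverse_proxy
```

The point being: any language that can emit JSON and speak HTTP can drive Caddy this way.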


It's also very easy to write plugins for Caddy: https://caddyserver.com/docs/extending-caddy


True, that's one reason we haven't prioritized scripting too. Writing a plugin that is natively compiled in is just as easy and can perform even better.


Apache has built-in lua scripting support nowadays.


Did they get rid of the infamous Caddy-Sponsors header?


Yes, like the day after that post on HN, years ago.

You have to realize how painful and annoying it is to see this get brought up every time we have a post here.


It's a reputation you have built, and your future actions will be interpreted with that in mind. Just like some of will continue to remember what Microsoft has done to Linux.


I'm sorry, but that's ridiculous. Reputation is built from patterns. There was no pattern there. It was a single isolated event. And people choose to keep bringing it up.


How useful is the dynamic upstreams feature? It's new to me.


If your backends are registered via SRV records (e.g. Consul), this will be very useful for you! It also works similarly for A/AAAA records.

This feature is modular, so other methods of retrieving backends can be implemented as well.
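In Caddyfile terms, the SRV flavor looks something like this (syntax as I recall it from the v2.5 docs; the Consul-style service name is just an example):

```caddyfile
example.com {
	reverse_proxy {
		dynamic srv _api._tcp.service.consul
	}
}
```

Caddy then resolves the SRV record at proxy time instead of using a fixed upstream list.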


Sounds like it would allow automatic handling of either failed backend nodes or adding new capacity over time without restarting the proxy, but I could be reading it wrong.


Basically this, yeah. You can add or take down backends without having to keep web server config in sync and reload it. (Though config reloads are pretty light anyways.)


Does it have implications for long-lived connections like SSE and websockets? It seems like restarts would have been disruptive to those but now might not be?


Possibly; the long-lived connections eventually have to end when reloads (note: restarts and reloads are different) happen, to free up resources -- otherwise they leak. (The grace period is configurable and can even be disabled.)


This bit us in a recent project where we have very long ongoing downloads, where even a very high grace period wouldn't really help.

I wonder whether it's technically generally feasible (possibly depends on net/http2's capabilities) to handover the connection from the old to the new server instance, comparable to what systemd does in terms of FD passing for socket-activated services.


The problem is that existing connections are inside of async functions (goroutines) that have references to the old config. AFAIK, there's no way in Go's stdlib to kill those goroutines while keeping the connection and handing it to another fresh goroutine.

To be clear, the new config is running at the same time as the old is still tearing down; all new connections get handled by the new config. Listeners are shared and transferred between servers.

We're all ears, though, if someone with deep knowledge of Go's internals can point us in the right direction.


I like the idea behind Caddy, but concerned about performance compared to nginx. Anyone got any real-world production benchmarks?

One I found from 2020 shows nginx can handle more RPS in less time for HTTP/2 and HTTP/3: https://caddy.community/t/caddy-v2-http-2-http-3-benchmarks/...

A proxy benchmark from 2021 also shows nginx perform better than Caddy: https://github.com/NickMRamirez/Proxy-Benchmarks


Nginx is written in C and optimized to avoid dynamic memory allocation, whereas Go's net/http stack performs a lot of allocations, which I think is the main source of the observed performance difference (together with different approaches to socket handling and polling). There are libraries like fasthttp that can significantly speed up Go HTTP performance, but they have their own set of tradeoffs.

So yeah, if you worry about peak throughput on a CPU-bound system, Caddy probably isn't the right choice, though I'm not sure nginx is either. A hardware load balancer, at least as a pre-stage, is probably saner in those cases. For everything else I would always trade a bit of performance for ease of use. I would never dare touch the nginx codebase, for example, but writing a plugin for Caddy doesn't intimidate me at all.


Also curious to see production performance.

After testing my app with it in local dev I noticed what the benchmark above says: requests per second is about half of Nginx's. So I switched back to Nginx.


I utterly adore your project. Thank you so much.

Are there plans for SMTP or IMAP proxying?

Do you do SNI proxying too?

I would love for Caddy to be my one stop shop for all TLS termination.


"Caddy L4" aka "Project Conncept" might be what you're looking for:

https://github.com/mholt/caddy-l4

"Project Conncept is an experimental layer 4 app for Caddy. It facilitates composable handling of raw TCP/UDP connections based on properties of the connection or the beginning of the stream."


Yep, that module is exactly how to do it!

But Caddy's http server can also do basic logic around SNI if I recall correctly.


I'd rather use something that can be packaged and maintained properly by distributions: https://repology.org/project/caddy/versions

I use distributions because I care about security and I'll be downvoted to oblivion for saying this.


Yeah, if you're looking for a package, you'll want to stick to the Cloudsmith .deb since it's bound to Caddy's GitHub release workflow so guaranteed to stay up-to-date: https://cloudsmith.io/~caddy/repos/stable/packages/

P.S., from the guidelines:

> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

edit: and an official response https://news.ycombinator.com/item?id=31170881


That's correct. See https://caddyserver.com/docs/install for our recommended installation methods. Only the .deb via Cloudsmith is fully automated, every other method requires some humans to publish the release, for a variety of reasons.


Pulling packages from sources outside of your distribution is very risky for security, availability and even legal compliance.

No thanks.


How long does it usually take for first-party packages to be updated, e.g., https://copr.fedorainfracloud.org/coprs/g/caddy/caddy/ ?


We've notified the package maintainers, so it's in their hands now. It's not automated, it needs a human to do it.


FYI, COPR is updated with v2.5.0 (except openSUSE, needs extra attention)


Where does Caddy live in a Kubernetes world -- or does it? (I'm not the most knowledgeable with either, so genuinely curious how these pieces all fit together).


I think just like any other server, but with fewer things to set up (HTTPS, certificates, etc). We also have an ingress controller in the works: https://github.com/caddyserver/ingress

(Not a Kubernetes user, so my answer is probably not the best.)


Caddy at its simplest form is an HTTP server. So you could use it to front end your application that otherwise isn't well suited for taking direct HTTP requests. Caddy would bring you other features like TLS support.

However I think in K8s world Caddy would make the most sense as an Ingress Controller. There is even a project as such: https://github.com/caddyserver/ingress

All traffic would terminate first at Caddy. Handling TLS, HTTP1/2/3, etc. Then passing it back to your application service/pod.


In addition to what others have said, I think it’s a handy option for a lightweight http backend or proxy when you need it to glue something together. For example, quickly fronting an s3 bucket, or serving static content generated by another container in the same pod, etc.
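e.g. a hypothetical sidecar serving static content is only a few lines of Caddyfile (paths and port invented for illustration):

```caddyfile
:8080 {
	root * /srv/static
	file_server
}
```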


Does anyone use caddy in production? At what loads?


Yes. High loads, and high stakes (hundreds of thousands of dollars worth of transactions per hour, is one metric I do know).


Not super duper high load, but we use Caddy to serve HTTPS-encrypted dashboards to 25k users.


The Dynamic Upstreams feature looks good, but it doesn't seem to be as nicely integrated with Consul as Traefik is.


Could you elaborate on what you mean? For now we only ship with support for SRV and A/AAAA, but anyone can write a Consul implementation if they like.

I'm not sure how that would differ from SRV though, since Consul supports SRV anyways. I'm not a Consul user though.


Quoting the Consul docs [1]:

"Note that DNS is limited in size per request, even when performing DNS TCP queries.

For services having many instances (more than 500), it might not be possible to retrieve the complete list of instances for the service.

When DNS SRV response are sent, order is randomized, but weights are not taken into account. In the case of truncation different clients using weighted SRV responses will have partial and inconsistent views of instances weights so the request distribution could be skewed from the intended weights. In that case, it is recommended to use the HTTP API to retrieve the list of nodes."

[1] https://www.consul.io/docs/discovery/dns#service-lookups


Cool, sounds good.

I'll say that it's unlikely we'll work on this unless a sponsor funds the development. The dynamic upstreams feature was funded by a sponsor requiring improved SRV support (we did have SRV support before v2.5.0, but it was rudimentary and insufficient for many usecases).

If someone needs this and can spend some time developing it, essentially it's just a module which implements this interface: https://pkg.go.dev/github.com/caddyserver/caddy/v2/modules/c...


Why isn’t Caddy using the lego library for ACME instead of rolling its own? It seems lego has better support for all kinds of providers.


Because Lego maintainers wouldn't budge when Caddy needed changes made to increase ACME reliability. Matt wrote his own implementation https://github.com/mholt/acmez and started using that in Caddy soon after. There's a deeper explanation here: https://github.com/caddyserver/certmagic/issues/71


Interesting. I will have to try it out. From my experience lego is not very reliable so I understand their point.


Yeah, lego's API and unreliability were the ultimate driving factors behind writing my own ACME stack. Actually, lego was originally commissioned for the Caddy project -- but then Caddy's needs evolved to supporting enterprises with tens of thousands of domains, and lego buckled under that responsibility due to diverging priorities. Lego is primarily used by Traefik now, and Caddy uses CertMagic+ACMEz to provide more robust, reliable certificate management that scales well.

I can guarantee CertMagic+ACMEz is the best cert management available in the ecosystem. It won't let you down.


love caddy, it's all i ever wanted



