Lcl.host: fast, easy HTTPS in your local dev environment (anchor.dev)
247 points by todsacerdoti 9 months ago | 98 comments



Warning: This checks if it’s running the latest version and refuses to run if it isn’t up to date. They just released v0.0.16 15 minutes ago and the update hasn’t hit Homebrew yet, so it has completely disabled itself and won’t run. There doesn’t seem to be any option to skip the version check.

So don’t use this unless you don’t mind it breaking randomly whenever there’s an update.


Sorry about that, we're working on switching this to a warning instead of an error; that slipped by us before release. After the next update, it will only show a warning if you're not on the latest release.


We just released a fix for the version error, this will be the last one you see, we promise!


How about being able to disable the check entirely? I really dislike tools phoning home unless asked.


Thanks. The upgrade command that v0.0.15 suggested didn't work for me. In fact, the only method I found that would successfully update was to uninstall it, then untap your tap, then reinstall.

Also, I echo the other people saying that the typeface you chose for the website is very difficult to read.


When I first announced Caddy, our website downloaded everything as .gz due to high traffic load -- a lesson I learned very quickly and a mistake I never made again.

This probably falls in the same boat. :)


Sorry, maybe it's too early in the day but I don't get what the lesson was. Could you explain?


(IMHO) the lesson is that sometimes you're so excited to tell the world about what you built that you forget about some "other stuff"

i.e. I don't think the authors of this tool wanted developers to be forced onto the latest version of their tool.

It's probably that they didn't think about it when they wrote the code, but now they know, and hopefully this gets fixed in the next release?


Yeah, exactly. A lot goes into shipping, and sometimes things like this can be overlooked, especially on a small team that needs to deliver broad platform support right out of the gate.

There are some bugs that are just hard to find until they're out there.


I just don't understand what the thing about "downloading everything as .gz" means. It's not like a gz is a rare file format, it seems like a totally reasonable format to download something in.


Having once made a similar mistake myself, I assume what they mean is that it was downloading everything as a .gz—that is, the browser was asking users “where would you like to save index.html.gz?” instead of showing the homepage. (This happens when you precompress a static site for performance, but forget to tell the server that gzip should be negotiated as a Content-Encoding instead of a Content-Type.)


To clarify, every page load (.html file) was downloaded as a .gz file instead of being served as an HTML file and displayed as a web page in the browser.


When we first released cdnjs.com we used tools to check if the DNS had fully propagated... but it hadn't. The shame of a CDN being down in several places across the globe.


That's an absurd tactic I've not seen since last time I used Firefox.


you're completely free to run firefox from 5 years ago and it will not refuse to run.


No, you are not.

Mozilla bricked SSL certificates that are mandatory for everything, including Browser Extensions.

There is a flag in about:config to unbrick it. Problem is, though, that this lasts less than a second because of the remote settings service running in a loop. If you block that service's domain with a host firewall (like opensnitch), Firefox will spin in an endless loop at 100% CPU trying to request the shavar and other services' domains.

So, effectively, you cannot run an old Firefox version, and especially not a version that uses the old bundled Mozilla certificate (which all of the provided download variants do).

That's what the previous commenter was (likely) referring to.


Just for fun I went to https://ftp.mozilla.org/pub/firefox/releases/53.0/linux-x86_... and downloaded Firefox 53 from 2017.

Fonts are ugly but it loaded HN just fine.


> you're completely free to run firefox from 5 years ago

> Firefox 53 from 2017

not the same. I was talking about post-Quantum Firefox releases, which added the mentioned dependencies on Mozilla's SSL certificate.

(The grandparent's comment was also about post-Quantum, obviously)


The grandparent post is about some malicious tactic used by Mozilla to prevent you from using old Firefox releases. I think it's pretty clear that there isn't such intent - if there was, the very least Mozilla would have done is to prevent you from downloading old versions from their own servers. Having a few specific releases that broke over time doesn't mean anyone is trying to stop you from using an older version of Firefox. It's basically called software.


> Having a few specific releases that broke over time doesn't mean anyone is trying to stop you from using an older version of Firefox.

Snakeoil certificates in enterprise licensing might disagree with that statement; in my opinion it's pretty much identical to Mozilla's approach of controlling who is allowed to use their software, and when.

It's also not a few specific releases over time, it's all releases after browser extension signatures became mandatory, which made the rest of the browser rely heavily on their certificate management servers.

I'm not saying it's malicious intent, what I am saying is that this could've been implemented in a much better manner, which wouldn't rely on a centralized certificate signing service.


Yeah, they don't _have to require SSL_, but I think we can agree that would be worse, given the benefits of securing extensions and validating the server hosting them.


Gross. Thanks for the heads up!


Not affiliated with them, but it seems this was a mistake. https://news.ycombinator.com/item?id=39768685 -- maybe give it another shot?


They fixed this. It's now a warning.


I like the interactive setup. I think this is solid, but if you want something even faster and easier to use, try my project localias [0]. The parent project, lcl.host, has some annoying restrictions:

> This CA has some restrictions though: it can only issue certificates for subdomains of lcl.host and localhost, but that’s all you need for local development.

Localias, on the other hand, lets you use any custom domain you'd like. And if you use a domain ending in .local, it will broadcast over mDNS so that you can easily connect to that server from any other device on your wifi network (like your phone.)

Localias also allows you to share your configuration with your entire development team by committing a .localias.yaml file to the root of your git repo. This makes sharing links with each other super convenient.

Always nice to see another competitor in the space; if you're interested in this, please check out Localias as well!

[0] https://github.com/peterldowns/localias


Neat! I hadn't seen this before, and it uses Caddy :D


Yes! Thank you for making Caddy, it’s wonderful software and it was really easy to extend!


>This CA has some restrictions though: it can only issue certificates for subdomains of lcl.host and localhost, but that’s all you need for local development.

This sounds like a security feature, not (just) an annoying restriction. Though an attack model for a local CA is a bit flimsy.


This looks fantastic! Does it support node extra ca cert etc? I’ve had that issue with mkcert in the past and it’s easy to fix but another thing to keep track of in these already complex dev setups if you’re doing local https.


I don't understand what you mean, what is "node extra ca cert etc" and what is the issue with mkcert?

Localias wraps Caddy to handle all the cert provisioning; I believe Caddy uses mkcert. I haven't seen any bug reports about it yet, but if you give it a try and run into an issue I would be happy to help fix it.


This would be perfect if combined with Traefik's method of config via docker tags.


The Caddy Docker Proxy Module enables Caddy to act as reverse proxy for docker containers via labels: https://github.com/lucaslorentz/caddy-docker-proxy
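
A minimal compose sketch of the label style, from memory of that README (the app image, port, and proxy image tag here are illustrative, not prescribed):

    cat > docker-compose.yml <<'EOF'
    services:
      myapp:
        image: myapp:dev                 # whatever app you're developing
        labels:
          caddy: myapp.localhost
          caddy.reverse_proxy: "{{upstreams 8080}}"
      caddy:
        image: lucaslorentz/caddy-docker-proxy:ci-alpine
        ports: ["80:80", "443:443"]
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock  # lets it read the labels
    EOF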


Whoa, thanks. I had my eye on Caddy for a local reverse proxy but can't be doing with maintaining another big config file. With the labels method the reverse proxy becomes "set and forget" and you're probably going to be editing bundled compose files to get rid of port conflicts anyway (or using override like https://blog.gpkb.org/posts/multiple-web-projects-traefik/)


Oh man, I was up and running in my project in less than one minute. Thanks!


I'm glad it worked for you and was easy to set up. If you run into any trouble or have any feature requests, please file an issue on github!


It surprises me how few people dev/test against HTTPS, given that it isn't exactly hard to set up manually (with tools like this making it even easier). Just point a wildcard DNS entry at 127.0.0.1 or some other useful address if your dev copy is actually not that local, and chuck a web server there acting as a proxy to whatever apps, with a LetsEncrypt wildcard cert. It isn't zero work, but saves time in the long run as soon as you hit unexpected issues caused by small differences between dev and prod.
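
A rough sketch of that setup with nginx, assuming a wildcard cert for the placeholder domain *.dev.example.com has already been issued and lives where certbot would put it:

    cat > /etc/nginx/conf.d/local-dev.conf <<'EOF'
    server {
        listen 443 ssl;
        # any subdomain of dev.example.com, which resolves to 127.0.0.1
        server_name ~^.+\.dev\.example\.com$;
        ssl_certificate     /etc/letsencrypt/live/dev.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/dev.example.com/privkey.pem;
        location / {
            proxy_pass http://127.0.0.1:3000;  # whichever app you're developing
        }
    }
    EOF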


You don't even need to mess with a wildcard from Lets Encrypt, just use https://github.com/FiloSottile/mkcert
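
For anyone who hasn't tried it, the whole flow is roughly (myapp.test is a made-up name):

    mkcert -install                        # creates a local CA and adds it to your trust stores
    mkcert myapp.test localhost 127.0.0.1  # writes ./myapp.test+2.pem and ./myapp.test+2-key.pem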


I've always been a bit wary of a local trusted CA, especially with the signing cert on the same dev box as the certificates it signs which is how I've seen things done a lot. It feels like opening up a trust issue that could allow an uncooperative entity to play games with me… Maybe that is just paranoia from the practical jokes played back in CompSci at Uni!

Admittedly an external attacker getting close enough to sign a cert using such a CA, in order to trick me into something, means they probably have such high access already that they don't really need the CA to do that or worse, so perhaps it is unnecessary caution.


The way I handle this in the dev tooling I put together is to run a totally separate browser profile that (1) trusts the certificate and (2) can only connect to localhost. It also launches with a totally different colour scheme.

With chrome, that's something like:

    google-chrome \
        --user-data-dir="${HOME}/.config/ourlocaldev/google-chrome" \
        --install-autogenerated-theme=85,63,9 \
        --host-rules="MAP * 127.0.0.1, EXCLUDE localhost, EXCLUDE fonts.googleapis.com, EXCLUDE fonts.gstatic.com" \
        --ignore-certificate-errors-spki-list="0oKw9nasIS7qRQD1CYXe5bmi22/mnHjZP++f6G+VM88=" \
        "https://my.dev.thing"
This will launch a separate copy of chrome with a fresh/separate profile. It will have a different colour (RGB values set in there) so it's visually distinct when it's running and I don't get the windows mixed up. Any request made from the browser will be rewritten to connect to 127.0.0.1 (except a couple google font domains).

The danger if someone got their hands on my local key/cert is basically nil. They would only be able to MITM connections from this one specific browser window to localhost. And that browser is incapable of connecting to anything besides localhost. I can never accidentally open my banking site in there. Also fresh profile so no saved passwords, credit cards, or anything else.

(As an added benefit, I don't really need to worry about reconfiguring URLs for projects. If I open "testing.mysite.com" in that browser, it will force the connection to localhost, so I can just run my services at our test URLs and steal configs as-is from the testing environment. Taking it further, I then have a controller set up in k3s/Rancher Desktop that rewrites the service on all annotated ingresses to point to its own service, which runs nginx, which the controller then configures to proxy the requests on to the local service or the actual upstream testing service depending on whether the local container is running. It also configures CoreDNS to point the upstream URLs at the same proxy. End result is that from the browser or anything running in k3s you can hit our testing URLs and it will hit your local container if it's running or fall back to the testing environment if not.)

If you want to try the browser thing, you can generate the fingerprint for a certificate with:

    echo "" |
    openssl s_client -connect 127.0.0.1:443 -prexit 2>/dev/null |
    openssl x509 -pubkey -noout -in /dev/stdin |
    openssl pkey -pubin -outform der |
    openssl dgst -sha256 -binary |
    openssl enc -base64


This is an awesome trick, thanks, I'm copying you.


I figure you could create the CA, have your browser trust it, create and sign your localhost cert, and then nuke the CA private key so no other certs may be signed.

It'd be annoying if you need to make a new localhost certificate, but totally manageable.
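
A minimal sketch of that idea with openssl (names and lifetimes are arbitrary):

    # one-shot throwaway CA
    openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
      -subj "/CN=throwaway-dev-ca" -keyout ca.key -out ca.crt

    # leaf key + CSR for localhost
    openssl req -newkey rsa:2048 -nodes -subj "/CN=localhost" \
      -keyout localhost.key -out localhost.csr

    # sign it with a SAN (browsers ignore the CN on its own)
    openssl x509 -req -in localhost.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 825 -out localhost.crt \
      -extfile <(printf "subjectAltName=DNS:localhost,IP:127.0.0.1")

    # trust ca.crt in the browser, then destroy the CA key
    shred -u ca.key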


I think this is primarily FUD spread by SSL cert companies.

If you have appropriate permissions on the private keys, it would require the same level of access to read the private key as it would for the attacker to create their own CA and install it on your PC.

My general rule of thumb is to use private certificates unless a) users interact with it directly, cuz they won't install my cert, or b) financial or other highly sensitive data flows through it. I'm not convinced that commercial CAs are more secure, but for the price of an SSL cert, it's worth it to have it not be my fault if something happens.


You don't even need that. See https://simpatico.io/devops/deploy.sh#generateSelfSignedCert() and also #generateRootCA()

Then point your server at the output files. If you want, you can also modify `/etc/hosts` to point a "production name" at localhost (something I actually don't do and never wrote a script for). Far fewer moving parts than the OP. (Parts of Simpatico use subtle.crypto and so require https to run, even locally.)


> It surprises me how few people dev/test against HTTPS

For dev at least, it's mostly because web browsers treat localhost as a special domain that gets the secure-context treatment even when loaded over HTTP.

I have set up local HTTPS certs before now; I can't remember exactly what required it. But I still load most web projects on localhost over HTTP, just out of habit.


I've always thought local HTTPS was unnecessary for at least 90% of projects and brought no real value other than making the developer "feel" better.

localhost with HTTP should be sufficient for most things. The complications start when you do stuff like "app.localhost", as mentioned in the article, which despite containing "localhost" is not the same as http://localhost.

Develop the basics locally and do all the hostname + HTTPS work on an actual server, where you can easily route DNS to it and use something like letsencrypt.


There are a number of APIs, such as geolocation, that only work over https. It's also very common to accidentally generate non-https urls for assets, which is a bug most people would rather catch in local dev and not production when the browser refuses to fetch them.

I use traefik to serve my apps in dev, which generates self-signed certs by default if you don't hook it up to ACME. Slightly more cumbersome because it means clicking through a warning screen and disabling verification in CLI tools (actually those are where I do drop to plain http). I'm sure this lcl.host product smooths this process considerably, but when it comes to local dev tools, I have an absolute requirement that they actually run locally and don't tether me to some cloud service, so I'll be sticking with traefik regardless for now.


> There are a number of APIs, such as geolocation, that only work over https

And localhost

https://developer.mozilla.org/en-US/docs/Web/Security/Secure...


Might have changed, but last time I tried to use the Clipboard API it didn't work on localhost without https


I would be very surprised if that were the case. Localhost gets special privileges in most browsers


It is hard to set up. Creating a cert and trusting it is easy – a PowerShell script can do that in 3 lines.

But then, trouble begins. You have to configure every server to use the certificate (or use a proxy) and every client to accept the certificate. Sometimes the proxies eat headers, or they have trouble with WebSockets and hot reloading or whatever.

We also use IPs instead of domains to find our locally running prod servers so that our customers don't have to configure any DNS on their maybe-offline WiFi network. We also have to test with external devices, such as iPads. How do you automate getting your certificate onto them?

...And then your IP changes and all is lost again.

Yeah, if you have a 50 line Svelte "app" that you serve as a site it's easy. But who does that?


> It is hard to set up.

Really?

> You have to configure every server to use the certificate

A couple of lines in apache/nginx/other config? The same lines each time. You probably don't want your dev box to be doing the things required for LE renewal (allowing external traffic in for HTTP(S) validation, making public DNS changes for DNS01, …) but I have a small container doing that, and other boxes pull the resulting cert from there (a small cron task, again the same in each instance).

> and every client to accept the certificate

> external devices, such as iPads. How do you [get] your certificate onto them?

I suggested using a public name and an LE cert or similar. Clients will trust without extra effort.

> We also use IPs instead of domains

I did say “more devs don't” not “all devs don't”, there will of course be special cases. Though using addresses rather than names just feels broken.

> And then your IP changes and all is lost again

That is an argument against using address based config rather than name based config, not an argument against using HTTPS!


I never work locally. Always remote. Dev.xxx.com, stg.xxx.com etc


Whether you dev/test on localhost or a remote instance (or a local instance with a public name etc.), is separate from whether your dev/test instances present via HTTPS.


It's a bit easier to use letsencrypt on a remote host. In less than 5 minutes I can set up a remote host with domains and certificates attached to it. Localhost with certs and hostnames is always a pita. That's why I always work on a remote box. For my local editors it doesn't matter. Just attach remote disks to my local machine.


I think the relevant part here is that remote hosts have public IPs. I'm assuming you're doing HTTP-01 verification on Lets Encrypt, which does basically require a public IP.

FYI, you can get Lets Encrypt certs easily on non-public hosts by using the DNS-01 challenges. They rely on setting particular DNS records rather than HTTP responses, so they don't rely on public IPs
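
For example, with certbot (the domain is a placeholder):

    # issue a wildcard via the DNS-01 challenge; no inbound connectivity needed
    certbot certonly --manual --preferred-challenges dns -d '*.dev.example.com'
    # certbot then tells you which _acme-challenge TXT record to create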


I don't see OrbStack mentioned here much, but it's completely replaced Docker Desktop for me. Aside from having a better UI, it's faster and uses less battery, and it also gives you local https with custom domains for free [0].

[0] https://docs.orbstack.dev/features/https


OrbStack was mentioned often with all the Docker Desktop / Docker open-source organization drama in the last year or two. But that was when it was free for all; now it's paid for commercial licenses, which makes it less attractive for organizations to switch over. We're all on Colima. But the local https feature is a nice little feature for sure.


If you don't need a GUI, the following combo works pretty well:

- https://github.com/abiosoft/colima

- https://github.com/peterldowns/localias


It looks nice, but it is macOS only, and it is really expensive at $96 per year! Probably why it doesn't get mentioned often.


How is this different from something like Caddy, which supports https://localhost, instead of something like https://lcl.host?


localhost gives you a different security context in your browser than using a full domain name. Typically to use a full domain name locally you'd either need to mess with /etc/hosts (which can bite you later) or mess with DNS. lcl.host just makes things work out of the box.
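
For reference, the /etc/hosts route is just the following (app.myproject.test is a made-up name):

    # pin a dev name to loopback; remember to remove it later
    echo '127.0.0.1 app.myproject.test' | sudo tee -a /etc/hosts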


Let me add a bit of clarification here, since you specifically asked about https://localhost

Using http with localhost gets you a mostly complete secure context, but it has some quirks[0]. Using https://localhost will give you the full security context, but then you're typically stuck either dealing with certs manually or using a proxy, like caddy. lcl.host should simplify your setup.

We're big fans of caddy by the way and even sponsor the project. Matt's doing amazing work over there.

[0] https://web.dev/articles/when-to-use-local-https


Thanks! That reminds me we need to get your logo on our new homepage. I'll shoot you an email.


I want to point out that Anchor.dev sponsors the Caddy project and we're very grateful for that! Anchor devs have also made code contributions to Caddy and CertMagic.

Anchor has a neat product. They're making internal TLS more accessible to more developers who don't necessarily need a separate web server. They're doing local TLS just about as best as possible from what I can see.

IMO this has more utility than mkcert -- which is a great tool by Filippo and Caddy shares some of its underlying library for its own auto-trusted internal CA -- because, like Caddy, Anchor fully automates the certificates instead of just generating them and needing a cron job. It's more hands-off and higher-level, allowing you to get more done with less effort.


Some things I learned about trusted localhost HTTPS:

* Windows is the easiest... by far. There is only one trust store and it's extremely easy to access at different levels of trust. Firefox has its own trust store, so you can either add your certs to both the Windows store AND the Firefox trust store, or flip a config in Firefox to tell it to use the Windows trust store like everyone else.

* Linux is a challenge because you have to add your certificates to the OS trust store and then each browser has its own trust store (a sketch follows after this list).

* macOS is pretty close to impossible, at least fully automated. If the cert is not registered with a third party of the OS's choosing, the cert will not be trusted in the browser. The way around this is to manually add your localhost cert chain to the macOS keychain.
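
For the Linux case, a sketch assuming a Debian/Ubuntu layout and an NSS-based browser store (paths and the CA filename are illustrative):

    # OS trust store (Debian/Ubuntu flavour; other distros use different paths)
    sudo cp dev-ca.crt /usr/local/share/ca-certificates/dev-ca.crt
    sudo update-ca-certificates

    # Chromium and other NSS-based apps keep their own store; certutil can add to it
    certutil -d sql:"$HOME/.pki/nssdb" -A -t "C,," -n dev-ca -i dev-ca.crt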

If anybody wants an example, here is something I wrote a ways back in JS (but please be warned, it's specific to my application):

* Build the certificate chain - https://github.com/prettydiff/share-file-systems/blob/master...

* Install the cert by OS type - https://github.com/prettydiff/share-file-systems/blob/master...

That second sample also installs pcap so that I can serve on localhost over ports 80/443.


> * Linux is a challenge because you have to add your certificates to the OS trust store and then each browser has its own trust store.

I could be wrong, but I could have sworn Firefox trusts the OS' certificate store. Maybe it's just been too long since I've done it.


Firefox has its own cert store, even on Windows. You can make it defer to the OS trust store with the security.enterprise_roots.enabled option in about:config.


Yeah, I know about that part, but I could have sworn that Firefox on Ubuntu trusts the intersection of its trust store and the OS trust store (with Firefox's store having priority).

Were you using the package manager packages? I'm a little surprised if the distro packages don't configure Firefox to use the OS trust store. I would not be surprised if the binaries Firefox provides directly don't trust the OS trust store. They probably shouldn't, given that the path to the OS trust store is configurable. I think Ubuntu and Fedora use slightly different paths.

Seems like a security nightmare to try guessing at what directory has the OS trust store. Better to leave it to the package maintainers to specifically customize it for their distro's patterns.


Off topic but I find this font for the body very hard to read.


Why modify the trust stores with the personal CA? Is there any risk to just publishing a globally valid wildcard cert for *.lcl.host, since it always resolves to 127.0.0.1 anyway?


We install the CA certificates into the trust stores so that the certificates are trusted by your browsers and clients; otherwise they will (rightfully!) get connection errors. We also set the CAA records for all lcl.host subdomains to anchor.dev, so no public CA will issue certificates for *.lcl.host. The only valid certs for lcl.host subdomains you will encounter are for your account's CAs. If we gave everyone a cert+key for *.lcl.host, besides the security concerns, we'd have to keep redistributing them every ~45 days, but with lcl.host you can set up ACME to automatically renew certs before they expire.
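
(You can verify the CAA policy yourself, e.g.:)

    dig +short CAA lcl.host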


Some sites have tried this before, but I don't think they stay online long. The certificates are "leaked" when they are shared, so the CA will revoke them.

I think a better approach is to get a domain name and a Let's Encrypt certificate. There's lots of tooling for this, and it matches production. I built https://www.getlocalcert.net/ to act as a free, Let's Encrypt-compatible subdomain service specifically for these sorts of challenges.


Cool. I was hoping something like this existed, and glad to see you got it into the public suffix list. I'd been considering doing something like it for some time.


"Try for free", no mention of pricing anywhere.. I'll stick to what I'm already doing that's free.. :D


I have another personal solution [0]. It's a DNS server that also gets a wildcard certificate and makes it available with a secret. This is definitely in the convenience-over-security realm, but it resolves any pattern prefix-123-123-123-123-suffix.example.com to the enclosed IP (e.g. 123.123.123.123). It will resolve to 127.0.0.1 or any other IP happily. Now you just need to use the associated cert and enjoy https. Works great with k8s ingress, caddy, node... You don't have to fiddle with your trust store and it works for everybody. I took inspiration from https://nip.io/ for the DNS part.

[0] https://github.com/jpambrun/dnsssl
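
For example, with a hypothetical name following that pattern:

    dig +short app-127-0-0-1-dev.example.com   # resolves to 127.0.0.1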


I've used this service in the past: http://local-ip.co/ which provides a downloadable private key and certs, and they run a DNS service that resolves to any IP address.


We built something similar for docker-compose based projects: https://gitlab.com/hukudo/ingress


I’m skimming the docs looking for how “in containers” is handled and so far I can only see a one liner in the release notes. One problem in local environments I keep having and building tricks for is for services inside containers getting certs for other containers. From within the container, resolving to 127.0.0.1 isn’t helpful as that’s the internal loopback not the host.


We're going to say more about how lcl.host works between containers in the future, since it ends up pulling in Anchor's package features, but I can give a quick rundown of what we've done in the past with docker-compose: start a service in container A, expose port 44300, and configure the service with an ACME client to provision a `service-a.lcl.host` certificate. The clients in that container won't trust the cert, but that's no problem, since your system/browser will trust the cert if you've run `anchor lcl`. In container B, install an Anchor-built package for the language of the server, and set up the HTTPS/TLS client to use the set of CAs in that package. Now app B can connect to `service-a.lcl.host:44300` over HTTPS/TLS.


“Clients in that container won’t trust the cert”. Yeah, there’s the trick.

“service-a.lcl.host:44300”: so when inside the container, won’t that resolve to 127.0.0.1, which is the container’s internal loopback interface, not the docker host’s interface? Hence trying to connect to itself, not its sibling.


Right, it's the loopback, but I believe docker-compose can forward loopback ports to the host (and then back into the other container) using links, but I'm fuzzy on the details and may be misremembering.
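
An untested sketch of one way to wire it up, using Docker's host-gateway alias rather than links (service names and ports follow the example upthread):

    cat > docker-compose.yml <<'EOF'
    services:
      service-a:
        build: ./service-a          # runs the ACME client, serves HTTPS
        ports:
          - "44300:44300"           # published on the host
      app-b:
        build: ./app-b              # trusts the Anchor CA bundle
        extra_hosts:
          # make the lcl.host name resolve to the docker host instead of loopback
          - "service-a.lcl.host:host-gateway"
    EOF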


In my experience a lot of developers ignore the concept of dev/prod parity like this because a different team (e.g. deployment) ends up dealing with the integration issues it causes. That being said, it's also traditionally a balance between effort and output, so I can see a "make it easier" tool like this helping lower the barrier to actually get devs to use it.


I've just got me a domain and am using letsencrypt. The hostnames resolve to LAN addresses. Still unsure what I'm doing wrong.


Just use orbstack, it gives you https and hostnames.

Shameless plug: https://github.com/jrz/container-shell in combination with orbstack. Isolated dev environment, easy to use, local tools, https on https://xxxxx.orb


I just use cloudflare tunnels (cloudflared) - don't have to install any certificates, it's all handled by cloudflare. Yes, it exposes globally, but that's often convenient to share a link to my dev with colleagues. And it has been fast enough. Downside is that you need internet connectivity.
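
The quick-tunnel version is a one-liner (port 3000 is a placeholder for whatever your dev server uses):

    # prints a public https://<random>.trycloudflare.com URL proxying to your dev server
    cloudflared tunnel --url http://localhost:3000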


Can you not use Let's Encrypt locally? I think I actually saw something about that recently here on HN.

What is the difference between this and LE, if true?

I would like to use a .test domain. I use that currently and just click "proceed anyway" whenever the warning pops up, and it's pretty often lately.


Congratulations on shipping!

Just yesterday, I needed it for a hackathon, but had to switch to a cloud IDE instead.


I was working on something similar with https://github.com/cpendery/wock. Never finished it up, but this looks promising


I'm on Windows 11 and not sure if it is just me but your font is too hard to read. I thought it was just the blog but no, it's everywhere even on the landing page.


They lost me at "Signin to Anchor.dev".


I've been using mkcert for ages now for local development. Free, open source, and works offline.

Why use any service for this?


Any reason why these tools usually don't support Windows? Is it harder on Windows?


Any plans to add it to the public suffix list?


Were there some bugs that show up only in the HTTPS version of the app?


These tend to manifest themselves when you have mixed content on the page or you're using CORS. For CORS the issues can surface due to running different rules for dev vs prod/stg.


HTTP/2 quirks, e.g. header capitalization.

I think I've seen some different cache behaviors.

Secure cookies (and maybe same/cross site stuff?)


The inline code formatting is making my eyes bleed....


How do I get this working if we are 100% docker?



