In all honesty, if you write Go programs, provided you don't need CGo, you can write bare-metal services that bootstrap and run as pid 1. Try doing that with .NET or Java (ok, you technically can, but it's a challenge). Go + the Linux kernel and maybe musl is really all you need to run a Go web service. I wouldn't recommend it for production, but it will get you to 5 MB image sizes.
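
A minimal sketch of what that looks like (illustrative, not from the post): a stdlib-only Go server that compiles to a fully static binary with CGo disabled, small enough to drop onto a scratch image or an initramfs.

    // main.go -- stdlib only, no CGo, so
    //   CGO_ENABLED=0 go build -ldflags="-s -w" -o svc .
    // produces a static binary of a few MB that runs without a libc.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }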



> I wouldn't recommend it for production, but it will get you to 5 MB image sizes.

Former co-worker did this, and we still have tons of Docker images in production that lack basic debugging functionality like... a shell. Alpine Linux is only like 8 MB more :(


I prefer to run my service containers without a shell or much of an operating system. YMMV, but in a corporate world where every container is scanned for every CVE and some binaries are deep-inspected, the list of vulnerabilities on a more full-fledged container or VM can spiral out of control. The context being that many (maybe even most) of those CVEs probably don't pertain to how I use the software or package, and it's far easier to patch than to file for an exemption.

Thus, I make the investment up front: I get log streaming working, keep my logs concise, implement application monitoring, and demand host monitoring from the platforms I use. If I check all of those boxes, I generally don't have anything I need to do with a shell.


I fully agree with you about vuln scans, but as a counterpoint there have been dozens of times when I've saved hours of debugging with a well-applied strace or tcpdump. Logging and monitoring are great and necessary, but they'll only catch things you thought of ahead of time; using them to debug something ongoing is basically printf() debugging where compile time = the full length of your CI/CD pipeline.


Depending on how you run your containers, you should be able to run a debug container in the same namespace as your target container. That way you can keep your images lean and bundle all the debugging tools in a different image, which you run only when you need to.
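
On Kubernetes, for instance, ephemeral debug containers do exactly this (a sketch; the pod, container, and image names are placeholders):

    # attach a throwaway toolbox container that shares the target
    # container's process namespace, without touching the original image
    kubectl debug -it my-pod --image=busybox --target=my-container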


Yeah, that's definitely a valid take depending on your setup. If I have those kinds of problems with a container, I generally jump into the underlying VM or metal to use those tools, but that also implies a lot of knowledge about how the host system incorporates container networking, which arguably makes hard troubleshooting even harder. Headless systems usually come with some sort of privileged "admin" container, so the setup is the same.

Second to that, I have dev stages built from containers that do have those tools, and if I run into those kinds of problems I generally see them in dev first.


>tons of docker images in production that lack basic debugging functionality like... a shell

That's a commendable security practice. A whole class of vulnerabilities is mitigated (and others are much harder to exploit) if you don't add unnecessary junk to your images, like a shell.

It's also endorsed by Google via distroless: https://github.com/GoogleContainerTools/distroless
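
Roughly, the pattern looks like this (a sketch; the Go version tag and paths are illustrative, not copied from the repo):

    # multi-stage build: compile a static binary, copy it onto a distroless base
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    FROM gcr.io/distroless/static
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]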


If you have root access on the host machine you might get away with the host tooling (depending on your issue).

I (as a DevOps engineer) did that because the developers I was working with at the time (rightfully) didn't include some troubleshooting tools (like tcpdump), and the images were running as non-root anyway.

Look up the man page for nsenter; it's really all you need.
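
For example, to run the host's tcpdump inside a container's network namespace (a sketch; the PID lookup assumes a Docker runtime, and the interface name is illustrative):

    # find the container's main PID, then join only its network namespace
    PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)
    nsenter -t "$PID" -n tcpdump -i eth0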

Btw, tcpdump in production, hunting for the correct network interface on a Kubernetes cluster node… fond memories :)


why do you need a shell in production?


I don’t always debug my code, but when I do, it’s in production. /s


This is the wrong question. The better question is something along the lines of "why do you think you need a shell?", and the only things a shell buys you are an easier life for attackers and some theoretical cases where you may need to run shell commands against a service in anger. I don't personally think it's worth it.


Debugging, of course.


If you're debugging in production, I highly doubt running bare-metal services is in your wheelhouse. Your services should be logging to another system. If you need a shell to debug, you don't need bare-metal Go. Go with Alpine (pun intended).

While novel, you really need the engineering excellence in your org to be able to do structured logging to another system, blue/green deployments, etc.

At one place I worked that did bare-metal orchestration with containers, we had tooling for seeing which services were failing and where to go look at logs (and filter them), and we threw away the SSH keys to the AWS ECS hosts to force you into a CD deployment model. You'll never get Sherlock Holmes access to production. Not even to run SQL queries against your production database.


Please tell us where you work so we know where not to apply.


but Sherlock Holmes access to production is sooooo fun…


That's Go on top of the Linux kernel, not bare metal.

But I love small systems like this. Today's software ecosystems have grown to ridiculous sizes. "Back to basics" is refreshing no matter the purpose.


BOOTBOOT, to be precise, but yeah; you can take it all the way down to the asm bridge if you wish.

https://github.com/icexin/eggos



