
I still don't get why multi-user. I haven't seen a multi-user scenario that makes any sense for any Linux deploy in ages. Is this a shared prod server? Is it a shared dev server? Why would those be multi-user? For prod, why isn't it cattle where you don't ever SSH onto the server? For dev, if you can't just run it all on your local machine, why not do something like shipyard.build?

Somebody tell me what I'm missing.




Not docker specific, but there are still lots and lots of servers out there running multiple services. Especially in my home-lab! Each service runs under a service-specific user, sometimes with extra hardening applied by systemd. It's a tried-and-true method for gaining some semblance of security boundaries on a shared server without the additional administration overhead of kubernetes or similar.
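
For what it's worth, the systemd side of that is only a few lines in the unit file. A minimal sketch (the service name and user here are made up):

    # /etc/systemd/system/myapp.service (illustrative)
    [Service]
    User=myapp
    Group=myapp
    NoNewPrivileges=true
    ProtectSystem=strict
    ProtectHome=true
    PrivateTmp=true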

A use case more relevant to industry might be a CI machine that you want to get better utilization out of. Easy, just start multiple CI runners under different users. "Just use Kubernetes" I hear someone screaming? Well, sometimes you need CI on macOS.
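
On a Linux box that can be as simple as (the user names and runner path are hypothetical):

    sudo useradd --create-home ci-runner-1
    sudo useradd --create-home ci-runner-2
    sudo -u ci-runner-1 /opt/ci/runner/run.sh &
    sudo -u ci-runner-2 /opt/ci/runner/run.sh &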


I run a multi-user Linux server. It's IBM POWER9 hardware, and several of my friends in the open-source community appreciate having shell access to it: they care about testing on something other than x86, are interested in different architectures, or want to play with its unique capabilities such as quad-precision floating point implemented in hardware.


Let's say you have some centralized monitoring of various host metrics. The ingestion process needs some amount of privileged access, but you don't want to give it full root. Meanwhile, your actual service probably needs no privileged access, save for some secrets, which should also be inaccessible to the monitoring agent process. You may want to run both of these as containers.

I guess you may be using SELinux, but even so, users and groups are a natural part of expressing and enforcing such constraints.
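
For example, with plain docker run you can express a lot of that split with nothing but users, capabilities and read-only mounts (the image names and UIDs are made up):

    # the service: its own unprivileged user, no extra capabilities
    docker run -d --user 2001:2001 --cap-drop ALL my-service
    # the monitoring agent: a different user, read-only view of host metrics
    docker run -d --user 2002:2002 --cap-drop ALL \
        -v /proc:/host/proc:ro -v /sys:/host/sys:ro my-metrics-agent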


IMO, Kubernetes is going to be easier for an end user and more performant than rolling your own rootless container platform.


>> Why would those be multi-user? For prod, why isn't it cattle where you don't ever SSH onto the server?

I hate the cattle-not-pets analogy; do you know how well most cattle are taken care of?

If you're operating at Google/FB/Apple scale then yes, you can take this approach. There are lots of businesses that DON'T need, want, or have to scale to this level. There are lots of systems where this approach breaks down.

Docker is great if you're deploying some sort of mess of software. Your average node/python/ruby setup, with its runtime and packages plus custom code, makes sense to get shoved into a container and deployed. Docker turns these sorts of deployments into App Store-like experiences.

That doesn't mean it's the only, or even a good, approach... if your dev env looks like your production env and you're deploying binaries, then pushing out under a cgroup or even an old-school "user" is not only viable, it has a lot less complexity.

As a bonus you can SSH in, you have all the tools available on the machine, and you can tweak a box to add perf/monitoring/debug/testing tools to get to the source of those "it only happens here" bugs...


> If you're operating at Google/FB/Apple scale then yes, you can take this approach. There are lots of businesses that DON'T need, want, or have to scale to this level. There are lots of systems where this approach breaks down.

I used to work at Yandex, which is not Google, but it nevertheless had hundreds of thousands of servers in its runtime. So definitely cattle.

Still, the CTO of search repeatedly said things like: "It's your production? Then ssh into a random instance, attach with gdb and look at the stacks. Is it busy doing what you think it should be doing?"
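
Which in practice is just something like this (assuming one instance per host; the service name is made up):

    ssh some-prod-host
    sudo gdb -p "$(pidof my-search-worker)"
    (gdb) thread apply all bt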

Dealing with cattle means spending a lot of time in the barn, not in the books.


The "it only happens here" bugs are a symptom of mutable infrastructure.


Boundary conditions, behavior under load, behavior under dynamic load, behavior on real-world networks, with real-world latency...

If you don't give a shit about your customers (Google, FB), where errors at scale don't matter, then yes, you can believe that.

Most of us don't have that luxury, or don't want it.


Aren't most HPC clusters multi-user?


We have over a dozen production servers. Both our DevOps engineers and our developers have access to them in case something needs to be fixed or configured.


Shared web hosting is still done with multiple users.



