
That was put in strong words, but some of the underlying assumptions are fairly incorrect.

About packaging:

> Notice that output files of one software package are also inputs (as the filesystem) to the other software packages.

Yes and no. If you're serious about producing a package to distribute, rather than to install locally, you're going to build it in an isolated environment that has a completely controlled set of dependencies available. For deb packaging that's provided by pbuilder. This is comparable to what happens in NixOS at the build stage.
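For example, a clean deb build with pbuilder looks roughly like this (a sketch; the distribution and package names are placeholders):

    # create the clean base chroot once
    sudo pbuilder create --distribution bookworm
    # build inside a throwaway copy of that chroot, where only the
    # package's declared Build-Depends get installed
    sudo pbuilder build mypackage_1.0-1.dsc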

About deployment:

> Our tool needs to: - figure out what kind of package manager we're using on our Linux distribution

That's not a dynamic thing. This is solved using plugins, and the only operations you have are install (including choosing a version / upgrading) and uninstall. There are a lot of system-specific options you probably care about, but they come up because you're most likely running a general-purpose system. This means a desktop user or developer will want 'apt-get install mysql' to do everything and leave a running server at the end, while people doing automated server deployment will want it to install the binaries and stay far away from the config files, without restarting anything.
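A hypothetical sketch of that plugin dispatch in shell (the distro IDs and the mysql-server package name are just examples):

    # detect the platform once; after that the plugin only has to
    # implement install/uninstall
    . /etc/os-release
    case "$ID" in
      debian|ubuntu) pkg_install() { apt-get install -y "$@"; } ;;
      fedora)        pkg_install() { dnf install -y "$@"; } ;;
    esac
    pkg_install mysql-server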

> The problematic part of such system is the fact our tool had to connect to the machine and examine all of the edge cases that machine state could be in

If you're running at scale, no, you're not connecting and figuring stuff out. You want to run everything locally. There should be no edge cases either: if something doesn't work, go back to your dev environment and figure out what needs to happen. (Or, to put it another way, why are there edge cases at all if every server is set up from the same description to begin with?)
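That's what masterless setups do. For instance, ansible can apply the same description locally on each machine (the playbook name is illustrative):

    # run against localhost over a local connection; nothing logs
    # into remote machines to inspect their state
    ansible-playbook -i localhost, -c local site.yml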

> No dependency hell. Packages stored at unique $PREFIX means two packages can depend on two different openssl versions without any problem. It's just about the dependency graph now.

And managing security patches suddenly becomes harder. You may have multiple applications, each depending on a different version of the same library, and every one of those versions needs to be patched (and everything built on top of it rebuilt) instead of swapping out a single shared library.
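At least a content-addressed store lets you enumerate what is affected (the store path below is a placeholder):

    # list everything that still references this particular openssl build;
    # use --referrers-closure for the transitive set
    nix-store --query --referrers /nix/store/...-openssl-1.0.1e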

> Source and Binary best of two worlds.

Only if you can turn the source installation off for the whole system. I don't want package installation to trigger a compiler run on a live system. Ever.
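To be fair, Nix can be told to refuse local builds outright (the package attribute is just an example):

    # --max-jobs 0: never compile locally; fail instead of building
    # from source if no pre-built binary substitute is available
    nix-env -iA nixpkgs.openssl --max-jobs 0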

> Rollbacks. No state means we can travel through time. Execute --rollback or choose a recent configuration set in GRUB

This is not true for any non-trivial system. Can you roll back your database? Maybe, but it may not be able to read the data files anymore. Can you roll back your language runtime? Maybe, depending on whether your code uses new features. Can you roll back a library? Depends on what has been built on top of it already. Rolling back the binaries is what existing packages already provide, and it's the easiest part of the rollback.
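The binary part really is one command on NixOS:

    # switch the system profile back to the previous generation;
    # any on-disk state the new version already migrated stays migrated
    nixos-rebuild switch --rollback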

So after all this about separating yourself from OS assumptions, various kinds of state, etc., we end up with "mkdir -p ${cfg.stateDir}/logs". Why is that embedded in the package at all? What if I'm running on a R/O system and logging over the network?
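If anything, the deployment layer should own that step, so it can simply be skipped on a R/O box (the paths and user are examples):

    # run by the deployment tool, not baked into the package
    install -d -m 0750 -o myapp -g myapp /var/lib/myapp/logs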

The article does raise some great points, but then describes a system that either still has the same problems or trades them for something equally bad. We also still need something to tell all the new servers what "cfg.stateDir" and the other inputs should be, and it will need to know that it's running on NixOS. There are tools that can do that: chef, puppet, salt, ansible, ...
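For example (hypothetical playbook and variable names), that outer layer boils down to something like:

    # the orchestration tool supplies the inputs the Nix expression consumes
    ansible-playbook deploy.yml -e state_dir=/var/lib/myapp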



