
I think I find myself in the minority that thinks "sudo apt-get install nginx" is much simpler, and I don't care about edge cases. If there's an edge case, something is wrong with my machine and it should die.



Do you run multiple servers/server configurations? I found my mindset was the same as yours until I was stuck managing a cluster of servers for the first time.

Being able to bring a new server online and have it automatically install all the required software and set up all its configs just from its hostname is a beautiful thing.
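With a tool like Ansible, that mapping can live in the inventory: group membership drives what each host gets. A minimal sketch, with hypothetical hostnames and group names:

    # inventory.yml -- a new box picks up its role just by being
    # listed in the right group (hostnames here are made up)
    all:
      children:
        webservers:
          hosts:
            web01.example.com:
            web02.example.com:
        dbservers:
          hosts:
            db01.example.com: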


Why not both? Puppet and Ansible are relatively simple. (Puppet assumes a bit more programming experience; its config files are full of Rubyisms.)

They sit on top of the package manager. Ansible is more about running commands en masse; Puppet is more about ensuring a consistent state en masse.
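For instance, "sitting on top of the package manager" in Ansible terms looks roughly like this (the group name is hypothetical); the apt module drives apt-get underneath and adds idempotence and reporting:

    # site.yml -- declare the package, let the module run apt for you
    - hosts: webservers
      become: true
      tasks:
        - name: Install nginx via the OS package manager
          ansible.builtin.apt:
            name: nginx
            state: present
            update_cache: true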

What happens if you want to ensure that your server farm is all running the same version of nginx? What if you want to ensure the configuration files are all in a consistent state?

You can script it yourself if you really want to, but it's a solved problem at this point. Puppet's mission in life is to notice when a server has deviated from your specified configuration, report it, and haul the box (kicking and screaming if necessary) back into compliance.

Manual scripting doesn't scale beyond 100 servers or so. You don't have enough hours in the day.
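Puppet expresses that in its own declarative DSL; the same idea sketched as an Ansible task, with a made-up version string, would be:

    # Hold the whole farm at one exact nginx build
    - hosts: webservers
      become: true
      tasks:
        - name: Ensure every server runs the same nginx version
          ansible.builtin.apt:
            name: nginx=1.18.0-0ubuntu1   # hypothetical pinned version
            state: present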


Ansible can be used as a distributed command runner but it's also a configuration management tool like Puppet (and much more).

Its "playbooks" are by default meant to be idempotent -- you can run them over and over ensuring that a system is in a consistent state.


How are you going to manage the configuration of nginx?

What happens when the configuration of nginx needs to be slightly different on each server?

What happens when the configuration of nginx needs to change?

What happens when you need to install a custom version of nginx that's not in your OS repository?

What happens when you need more than one instance of nginx running on the server?
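One common shape of an answer to the per-host and config-change questions above, sketched in Ansible (the paths and variable name are hypothetical): render the config from one shared template, fill in per-host values from host_vars, and reload nginx only when the rendered file actually changes.

    - hosts: webservers
      become: true
      tasks:
        - name: Render nginx.conf from one template, per-host values from host_vars
          ansible.builtin.template:
            src: nginx.conf.j2            # can reference e.g. {{ worker_connections }}
            dest: /etc/nginx/nginx.conf
          notify: Reload nginx
      handlers:
        - name: Reload nginx
          ansible.builtin.service:
            name: nginx
            state: reloaded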


It works great but doesn't scale well, is all.

rdist was a good method in the old days that scaled a little better than one-off commands, but we're in the pull-vs-push world now.


How does it not scale? Apt scaled to however many millions of machines run Debian and its derivatives.

If you have a really large deployment, you can set up your own repository, or a mirror of existing repositories. If that is still not enough, you're Facebook.
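And pointing every machine at that internal mirror is itself just one more piece of managed configuration; a task sketch with a made-up mirror URL:

    - name: Use the internal apt mirror instead of the public repos
      ansible.builtin.apt_repository:
        repo: "deb http://mirror.internal.example/ubuntu focal main"
        state: present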


It's not Apt that doesn't scale, it's using apt-get manually on a large number of machines that's the problem.

By the time you're done putting all your apt-gets and your config files and whatnot in some shell script to automate it all away, you've reinvented a poor clone of puppet/ansible/chef.


I think what he meant by "does not scale well" is that SSHing into 1000 of your servers to manually run apt-get just doesn't work.

apt-get is fine; it's just that at some point you need an automation tool to trigger it. A lot of Chef recipes rely on the underlying package manager, and that's fine.


Typing apt-get install across a bunch of machines doesn't scale for the person doing it. The repository will scale.


My company runs 120 machines on AWS. For reasons that are technically defensible but irrelevant here, there are CentOS and Ubuntu machines in our stack. How do you propose I provision nginx on exactly the ones I want it on? How do you propose I do it so I'm not getting paged at 3AM?

Simple solutions regularly fail when you add zeroes.
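For what it's worth, this mixed-OS case is exactly what the tools handle by branching on gathered facts. A hedged sketch (the group name is hypothetical; ansible_os_family is a built-in fact):

    - hosts: nginx_servers
      become: true
      tasks:
        - name: Install nginx on the Ubuntu machines
          ansible.builtin.apt:
            name: nginx
            state: present
          when: ansible_os_family == "Debian"

        - name: Install nginx on the CentOS machines
          ansible.builtin.yum:
            name: nginx
            state: present
          when: ansible_os_family == "RedHat"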



