
Yeah but when you run a playbook it's running from a single machine which is calling out via SSH



Not necessarily from a single machine. It's pretty easy to divide your network and control each division from a separate git clone of your Ansible files.

Ultimately you could have a git clone for every machine and only ever run it against localhost.
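For what it's worth, that pattern is essentially what ansible-pull does: each host clones the repo and applies the playbook to itself. A rough sketch, with a made-up repo URL and playbook name:

    # run on each managed host (e.g. from cron); URL and playbook name are hypothetical
    ansible-pull -U https://git.example.com/infra/ansible.git local.yml

    # or, with a clone already on the box, apply it purely locally:
    ansible-playbook -c local -i localhost, local.yml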


Yes. The control host will sit at 100% CPU handling the hundreds of SSH connections.

I've been reconfiguring 300 to 800 hosts many times a day and never had a problem. I think it would take a few thousand hosts for performance to become noticeably slow, and I'm really not sure other tools or systems would handle it much better.
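For scale context: the control host only opens as many SSH connections as the forks setting allows (the default is just 5), so the load is tunable. A minimal ansible.cfg sketch, values purely illustrative:

    [defaults]
    # number of parallel SSH sessions the control node opens (default is 5)
    forks = 50

    [ssh_connection]
    # fewer SSH round-trips per task
    pipelining = True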


I know our SREs once screwed up the sshd config, and considered themselves very lucky that they had puppet on the machines and could push a fixed configuration (if they had relied exclusively on ansible, that would have been the end of it - no way to connect or to deploy a new configuration).

[edit] To clarify - ansible is great, and we use it. Just saying that, like everything, it still has (sometimes subtle) downsides in various scenarios. If it works well for you, great, but maybe others really were bitten by it.


There's nothing stopping you from having an sshd instance dedicated just to ansible, on a different port/different network, on every node. Whether that's simpler or more complex, I don't know.

But "have two ways in" is a basic principle of sys admin (typically via traditional network and some out of band console access).


When I worked with physical machines, they had embedded management systems, which were on a physically separate network from the machines' main interfaces, ran a little embedded SSH server, and would (among other things) give you a console on the machine.

Simpler machines should still have serial consoles, and you can get those on the network via a terminal concentrator or a serial-to-ethernet adaptor.

I would love it if Ansible could control machines over an interface like that, rather than via SSH. Then you wouldn't even need to run SSH on machines which don't need it, which is most of them.


Well, teach your sysadmin to use the system configuration tester when they edit a system configuration file.

Nothing to do with ansible really, except that ansible makes it easy to prevent exactly that.
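That's what the validate parameter on the template/copy modules is for: the new file is only installed if the check command exits cleanly. A sketch using sshd's built-in tester (template name and handler are made up):

    - name: deploy sshd config
      ansible.builtin.template:
        src: sshd_config.j2
        dest: /etc/ssh/sshd_config
        # %s is the temporary file; a non-zero exit aborts the copy
        validate: /usr/sbin/sshd -t -f %s
      notify: restart sshd   # assumes a handler defined elsewhere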


> Well, teach your sysadmin to use the system configuration tester when they edit a system configuration file.

Wrong. Teach your sysadmin not to overload a single service with different functions (debugging channel, user-facing shell service, running remote commands, file upload, and config distribution channel), especially not the one service that should not be used in batch mode, without human supervision.

When you write an application, you don't put an HTTP server in your database connection handling code, but when it comes to server management, suddenly the very same approach is deemed brilliant, because you don't run an agent (which is false, because you do run one; it's just not a dedicated agent).


Are you advocating running multiple sshd instances in this case?


Good heavens, no! You'd only end up with two instances of the same service that is already difficult to work with correctly.

For serving as a debugging channel and user-facing shell access, SSH is fine (though I've never seen it managed properly in the presence of nodes being installed and reinstalled all the time). But for everything else (unattended):

* you don't want command execution, port forwarding, or VPN in your file server

* you don't want a remote shell in your daemon that runs parametrized procedures -- but you do want it not to break on quoting of arguments and call results (try passing shell wildcards through SSH; see the sketch below)

* you don't want port forwarding and remote shell in your config distribution channel; in fact, you want the config distribution channel itself to be reconfigured as little as possible, so it should be a totally separate thing with no other purpose whatsoever

* you don't want to maintain a human-user-like account ($HOME, shell, etc.) for any of the above, since they will likely never see a proper account on the server side; you want each of these services to have a dedicated UID in /etc/passwd, its own configuration in /etc/$service, its own data directory, and that's it

Each of the tasks above has a daemon that is much better at it than SSH. The only redeeming quality of SSH is that it's already there, but that becomes irrelevant once the server's expected lifetime gets longer than a few days.
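To illustrate the quoting point from the list above: ssh hands the remote side one string to be re-parsed by a shell, so arguments have to survive two rounds of interpretation (paths here are just examples):

    # quoted for the local shell, but the remote shell re-splits the string it receives:
    ssh host touch 'file with spaces'      # creates three files on the remote end
    ssh host touch "'file with spaces'"    # double quoting gets you one file

    # same story for wildcards that were meant to be literal:
    ssh host find /data -name '*.bak'      # the remote shell may expand *.bak first
    ssh host find /data -name "'*.bak'"    # extra quoting keeps the pattern literal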


Yes, because everybody knows that testing eliminates all bugs.

(it's not that testing is useless - far from it; but I thought the HN crowd knows better than to respond to issues with "that's because you didn't do enough testing!")



