Hacker News
Fucking Shell Scripts (fuckingshellscripts.org)
385 points by danso on March 14, 2014 | 173 comments



Have to say I'm a bit puzzled by the claim of Ansible being "blow your brains out" difficult.

In many cases, Ansible is even easier than shell scripts. I wrote a post about this a few months ago: https://devopsu.com/blog/ansible-vs-shell-scripts/

I completely understand where the sentiment is coming from though. I wrote a comparison book on Puppet, Chef, Salt, and Ansible a few months ago and am currently finishing the 2nd edition ( https://devopsu.com/books/taste-test-puppet-chef-salt-stack-... ). Even for an experienced sysadmin, using Puppet and Chef for even a trivial project (replacing a ~10 line shell script) took a painful couple of days. Why? They're overly complex and have confusing, broken documentation (which mostly hasn't been corrected even 6 months after I gave them a full breakdown of the issues). Salt was pretty smooth, but Ansible was downright easy.

Using a shell script to set up a server generally indicates that it will then be managed manually afterwards (sadness and despair!).

A huge advantage of using a configuration management (CM) tool is that they're "idempotent". Idempotency basically means that you can run the directives over and over again safely.

An idempotent command will verify that the system is how you defined it and will only make changes to bring the system back into alignment with what you defined. That means you can define your system in the language of the CM tool and use it not only for initial system setup, but also for monitoring, updating, and correcting a server's configuration over the life of the server.
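As a hedged sketch of what that looks like in practice (ad-hoc Ansible invocation; the "webservers" group name is made up):

    # safe to run repeatedly: the module checks state before acting
    ansible webservers -m apt -a "name=nginx state=present"
    # first run:  "changed": true  (package was installed)
    # later runs: "changed": false (already present, nothing done)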

A CM tool can ultimately act like a self-healing test suite for your systems - neat!

Your systems are the "app" that your app runs on. They're the foundation. Not using a CM tool is like not having any tests for your app. Sure, it might seem faster at first, but you'll pay for it in chaos, slowness, bugs, and misery later.

Shell scripts are a great first step, but if you're serious about your systems, you need to be using a CM tool. Modern ones like Ansible are simple, easy, and have great docs, so there's little excuse left for not using one now.


> A huge advantage of using a configuration management (CM) tool is that they're "idempotent". Idempotency basically means that you can run the directives over and over again safely.

They're idempotent only as long as the configuration stays the same. If you removed this "apt: package=XXX state=present" line, XXX is not going to be magically uninstalled, but you may have a nasty surprise the next time you attempt to provision a second server with the same configuration and realize you're missing a runtime dependency (a concrete sketch of this follows the list below). That's what I find fascinating about NixOS/NixOps: you can describe the entire state of the machine in a single configuration file (if you use the declarative style of package management). Of course, it won't help you if:

- you already have a large number of non-NixOS systems

- you need many packages not present in NixOS

- you'd like polished package management (the Nix package manager still has a way to go before it reaches yum/apt ease of use)
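To make the present/absent asymmetry above concrete, a hedged sketch (XXX is the placeholder package from above):

    ansible web -m apt -a "name=XXX state=present"   # installs XXX
    # deleting that directive leaves XXX installed on existing hosts;
    # to actually remove it, you have to state the absence explicitly:
    ansible web -m apt -a "name=XXX state=absent"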


Until Nix becomes more popular, I've been thinking about ways to build an audit tool that would work with CM tools to handle this case:

http://stevenjewel.com/2014/03/puppet-undo-and-puppet-audit/


Are there VPS providers that offer NixOS OOTB?


No need for that. The configuration management of NixOS is baked right into the installation procedure. This makes it super simple to install a fresh machine completely from the configuration.nix file that you've developed over time.


Well, with an OpenVZ VPS, the most common variety these days, you'd have to have it offered by the provider.


Ahh, I would never use a VPS that didn't let me use my own OS install image.


I don't have any information about it, sorry.


If using a cloud, another often overlooked option is to just tear down and replace the instance rather than try to update/correct its configuration. In that case, a shell script or Userdata script will be more than enough and represents a single source of truth for how a server will be built (never upgraded).
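As a minimal sketch of that user-data approach (package choice illustrative; cloud-init runs a #!-prefixed script once at first boot):

    #!/bin/bash
    # build the instance from a blank image; it is replaced, never upgraded
    apt-get update
    apt-get -y install nginx
    echo "built $(date -u)" > /etc/build-stamp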


That's a great point. It's one reason Docker is so exciting - to be able to replace a running server in a sane and organized way is a killer feature.

For many cloud environments, it'd be costly (in time and energy, not necessarily $) to replace all 1000 virtual servers for an update. With Docker, you can essentially do that in a trivial manner. I'm still learning Docker and my understanding is still a bit weak, but it's an exciting development in this regard.


In my experience (playing with ansible, on and off, for quite a while) limitations or bugs in the tool often lead your ansible scripts to become .. shell scripts.

The more raw/command/shell modules you have to use, the uglier the whole approach seems to me - and maybe not worth the trouble in the first place. If it can't be done 100% 'right', should I bother?

I've put my efforts on hold for now. Ansible has improved greatly lately (especially the documentation), but .. it still feels cumbersome and hackish for my use cases.


Writing ansible modules is really easy, however. They're essentially just executables (usually python, but really any language) that take in arguments and output JSON.
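For instance, a rough sketch of a bash module (hedged: in the versions I've used, a non-Python module receives a file of key=value arguments as its first argument and must print a JSON result to stdout; this sketch assumes a single path= argument):

    #!/bin/bash
    # minimal "ensure this file exists" module
    source "$1"    # naive: a lone "path=/some/file" argument becomes a shell assignment
    if [ -e "$path" ]; then
      echo "{\"changed\": false, \"path\": \"$path\"}"
    else
      touch "$path"
      echo "{\"changed\": true, \"path\": \"$path\"}"
    fi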


> The more raw/command/shell modules you have to use

Yeah, that's a problem. It's gotten a lot better with the 1.5 release.

Example: we just made the light-speed jump from 1.1 to 1.5. While re-doing a role yesterday, I noticed we now have a module for ec2 snapshots.

So _next_ week I'm replacing a 50-line bash script with a five-line playbook. Which will be run by Jenkins.
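Presumably something along these lines (volume id and description made up; assumes localhost is in the inventory and AWS credentials are configured):

    # the five-line playbook, written out and run; Jenkins just invokes this
    cat > snapshot.yml <<'EOF'
    - hosts: localhost
      connection: local
      tasks:
        - ec2_snapshot: volume_id=vol-1234abcd description="nightly snapshot"
    EOF
    ansible-playbook snapshot.yml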


I used to work for a little company called The Internet Marketing Center. I'm not sure who taught you how to market this way, but I definitely giggled a little when I saw your site using IMC style long copy to sell tech ebooks. I may even buy a copy :)


Thanks Gary :)

I got most of my inspiration from Kathy Sierra and Why the Lucky Stiff. They're waaaay better than me in this regard, but they taught me that most brains love a little whimsy. I try to add a little since systems engineering can slip into dryness pretty quickly if you're not careful.

Little jokes like the borg cow can lighten things up: https://devopsu.com/newsletters/ansible-weekly-newsletter.ht...

An added benefit is that whimsy helps filter out trolls. Adorable puppies, kittens, squirrels, cows, etc have a way of turning trolls away or at least softening them up a little :)


I remember this genius edu. researcher (Alan Kay) talking about his dog: "...and my little dog, Watson..." I wondered why he would emphasize the diminutive nature of his adult dog. The closest guess I came to was: Even though this guy had no kids, his mind/body still is "programmed" to respond to baby-like features. That might explain why your cute pictures calm the trolls. Then again, I'm no neuro-socio-biologist.


> Have to say I'm a bit puzzled by the claim of Ansible being "blow your brains out" difficult.

Well, not "blow your brains out" difficult, but "learn a new syntax, behavior, rules, depend on a new package" difficult if a few shell commands is all you want to do.

I can see where they are coming from. I can go a long way just using shell script to configure (and yes, you can make them idempotent too).
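For example, a guard is usually all idempotence takes in shell - a minimal sketch:

    # only act when the state is wrong; running this twice is harmless
    dpkg -s nginx >/dev/null 2>&1 || apt-get -y install nginx
    id deploy >/dev/null 2>&1 || useradd -m deploy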

From your site:

> You already have a religious passion for a particular CM tool

Aren't you doing the same against Chef and Puppet, then? That's how it sounds. Oh, a "comparison" -- yeah, just use Ansible.


Good point: "if a few shell commands is all you want to do."

Agree totally. If you're doing something tiny, then a few shell commands are what is needed, not a CM tool.

I'm speaking mostly about serious systems that businesses run on.

Ansible is not for everyone. Each tool has strengths and weaknesses. I generally push Ansible because it's the easiest to get started with, but can also scale to 10K+ nodes. If something simpler/easier comes along, I'll recommend that instead.

I suspect some combo of Docker and Ansible will ultimately be the simplest setup (in the near term), but I'm still actively learning it and am not confident enough in it to suggest it to newbies.


I'm all Ansible & Docker on my personal projects / servers, what a blast! http://gerhard.lazu.co.uk/ansible-docker-the-path-to-continu...

More focus on the "why", less on the "how" http://thechangelog.com/ansible-docker/


Well, I'm a Salt user and I was communicating with Matt during the writing of his book. During that time he certainly gave Salt a solid, fair evaluation. He also spent a lot of time hanging out in #salt IRC asking key questions, and answering some as well.

So I would suggest that it's natural and OK for someone's evaluation process to yield a favorite (apparently in Matt's case, it's Ansible), without it becoming a "religious passion."


I use a Bash script to set up Arch with Btrfs on LUKS and optionally enable SSH [1]. Assuming you have a base machine running, config management tools are great and quickly start to make sense. If you're starting from scratch with a blank physical machine, I think the answer is still shell scripts and then add on CM afterwards.

1: https://github.com/atweiden/pacstrapit


"I think the answer is still shell scripts and then add on CM afterwards."

I'm genuinely curious. Why? Ansible, in particular, provides the same value (easy) but has existing modules to do a bunch of the things you'd have to script.


you gotta set up the ssh user, modify/install ssh keys, install python if it's not there, possibly adjust firewalls, set up the hostnames, possibly tell the machine where to find the private dns servers, etc.
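Roughly this kind of thing, as a hedged sketch (user name, key file, and hostname are illustrative):

    # minimal pre-CM bootstrap, run once from the console
    useradd -m -s /bin/bash deploy
    mkdir -p /home/deploy/.ssh
    cat deploy_key.pub >> /home/deploy/.ssh/authorized_keys
    chown -R deploy:deploy /home/deploy/.ssh
    chmod 700 /home/deploy/.ssh
    apt-get -y install python    # Ansible modules need python on the target
    echo app-01 > /etc/hostname && hostname app-01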


Isn't that what Redhat's kickstart (and its equivalents in other distros) is for? It gives you the minimum base you've decided your org wants systems installed with. It's trivial to add users, keys, initial firewall config, etc.

If you deploy more than a few systems a year and don't have a PXE boot environment (or at least the equivalent of a network accessible kickstart config to be manually selected) or a golden image to deploy from, then I can see how CM tools may seem a pain, because you haven't tackled the initial manual pain point yet, the actual install.

I haven't really used any CM tools, so maybe I'm getting the point where kickstart would traditionally leave off and ansible or chef would take over slightly wrong, but I can't see it being all that complex to automate configuring one after install.


No, you obviously have console access, all you need is the ansible scripts and ansible. If you want to push from a central repository, then yes, you'd have to have a way for ansible to reach the box. But we're taking as a given that there's a way to run commands on the box here (and get script files via the network).

There's a difference if you want to run the scripts automatically on install (non-interactively).


Actually, to use Ansible you need only ssh access (with password or public key) to the root user. Everything else can be done easily in a simple Ansible role.


In fact, you can specify the user; you can have a dedicated user with sudo, for instance.


In my experience, there's a huge amount of work involved in just getting Ansible (and other CM systems) to work right in the first place, especially if you're relatively new to CM.

Sometimes you're working on a scrappy prototype and it's just quicker to use shell scripts initially and then go back to solidify the setup into CM scripts afterwards, once you know what you actually need.

Of course that approach will fall to hell if one never gets back around to making the CM scripts.


Well, now you obviously already have a shell script that does what you need, so anything else is going to be more difficult to use. That aside, I'm not sure it's much easier than using Ansible.

For Ansible, you'd need python and Ansible. I'll assume the official iso already has python (it's a full iso after all) -- for ansible you'd need to install it to RAM (e.g. pip install, optionally in a virtualenv), or modify the iso (say with [1]). Then you'd need a script (or a set of scripts). Then run that script through ansible. So essentially everything could be the same, but with a lot of the logic of your current script handled by ansible.

[1] https://wiki.archlinux.org/index.php/archiso
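A sketch of the install-to-RAM route (the playbook file name is illustrative):

    # from the live environment: put ansible in a throwaway user prefix
    pip install --user ansible
    export PATH="$HOME/.local/bin:$PATH"
    # then run your playbook against the local machine using the local connection
    ansible-playbook -c local -i 'localhost,' setup.yml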


I have not started a system from scratch for a few months, but as far as I remember, ansible depends only on python and ssh.


I agree. If someone claims Ansible is hard, they should go RTFM. Then if that doesn't work, they should go back to grade school. Ansible is easier than shell scripts by virtue of not having to write shell code. Then once Ansible Galaxy is out, config management will be trivial.


Ansible is easy compared to Puppet/Chef specifically because it doesn't use Ruby. That doesn't make it good.


Coauthor here. Well, this is a surprise! This project was mainly borne out of our frustrations with Chef and then Ansible for configuring our servers.

A few thoughts: 1) This project is incomplete and is more of a concept than anything. We wrote it literally in an afternoon, and were giggling like schoolgirls the whole time.

2) There is value in what Ansible provides. It does a lot of things for you. FSS did not replace ansible where we work.

All that being said however, I feel like this (or a project like this) does have merit, if executed properly. I feel that once you get used to how FSS works, it might be a good solution for you. We intentionally tried to keep it as simple as possible.


It's pretty funny, but it's an example of where we do NOT want to be headed.

The Unix philosophy is a set of tenets based on assumptions that held in the 1970s and 1980s, but don't really hold today. One of these tenets is, "make the implementation as simple and as correct as possible; it is better for an implementation to be simple than to be correct."

This may have been true in an era when every site had to roll a homegrown solution for pert-near everything that didn't come with the base OS, but in this era of open source it is far more important for the implementation to be correct, because the Right Thing can be written once and everybody can use it, and save themselves the accumulated hours of frustration incurred by simple-but-subtly-incorrect implementations.

This is why the Linux world is standardizing on systemd -- to get AWAY from Fucking Shell Scripts and towards a more deterministic, declarative model of what we want done. In the case of configuration management, what you want is a tool that accepts a description of what the system configuration should be, diffs that against the current configuration, and enacts a plan of changes to get from point A to point B automatically. NixOS seems to be a good step in this overall direction.

Fun fact: I used to do instancing of robotic control computers running a specialized version of Debian with Fucking Shell Scripts (FAI, to be exact: http://fai-project.org/ ). It was pure hell. We would have killed for a more deterministic solution.


Simple is almost always Correct.

Conversely, complex is almost always incorrect.


"Simple enough is almost always correct". "Too simple for the problem domain" is almost always a synonym for "build a heap of absolutely terrible and verbose pile of complexity on top of the 'simple' system". See also sysvinit.


Strongly disagree. Often complexity in your implementation is necessary to present a simple interface to your user.


The primary user of your implementation is the next developer to maintain it.


The user of a piece of software is more important than its current or future maintainers.


Yes but current or future users want reliability, features, and speed. If you care about your users in the long term you should care about the project's maintainers.


I suggested a relative ordering of importance; nowhere did I imply that one was important and the other was not.


Thinking like this is why we have so much crappy software out there.


To quote Ryan Dahl: The only thing that matters in software is the experience of the user.

How many more times must it be said? How many more times must Apple win -- and win big -- before open source nerds get the message?


I thought the unix quote on simple vs correct was regarding things like "what should the kernel do when a process is processing a system call like read/write and needs to handle a signal: should it return an error of type EAGAIN, or somehow completely shield the user-space program from this condition?".

http://www.jwz.org/doc/worse-is-better.html

> This may have been true in an era when every site had to roll a homegrown solution for pert-near everything that didn't come with the base OS, but in this era of open source it is far more important for the implementation to be correct, because the Right Thing can be written once and everybody can use it, and save themselves the accumulated hours of frustration incurred by simple-but-subtly-incorrect implementations.

The BSDs and linux are open source, but we still must check for EAGAIN.


I knew this was a systemd advocate after the second sentence. Hilarious.


Uh, most of the Linux community consists of systemd advocates now. Systemd is now a fait accompli on all the major distros, and most of the users I encounter just want the bickering to stop. It would have stopped long ago were it not for a small but loud contingent of haters.


> One of these tenets is, "make the implementation as simple and as correct as possible; it is better for an implementation to be simple than to be correct."

Having been a Unix admin since long before Linux existed, I find this statement to be complete bullshit.


I think it's the Chinese-whispers effect applied to http://www.jwz.org/doc/worse-is-better.html [0] and as such more cult than information. As witnessed by the discussion around here.

[0] ("Correctness-the design must be correct in all observable aspects. It is slightly better to be simple than correct.")


Interesting theory. Considering the ignorance displayed, I think you're correct. Microsoft quality anti-Unix FUD.


I can imagine someone proposing this at some meeting with managers. "For our next product, we're going to use FUCKING SHELL SCRIPTS for configuration and deployment". What would the managers' faces look like...

I wonder if this is what you guys had in mind when you came up with the name


The name came about when we were writing the majority of our ansible code. We mentioned to each other in frustration multiple times, "Man, this would be so much easier if I could just use fucking shell scripts."



When was this? Ansible today is much smoother than a year ago.


Just say FSS, managers love acronyms!


hah, this is awesome! thank you.

i've used both chef and puppet. and i still have a puppet standalone bootstrap script that i use every now and then. but I too created a simple shell script bootstrapping process for one of the projects i was working on.

it pretty much boils down to this:

    # usage: bootstrap <host> <script> [arg] -- $2 is bundled into the tarball and then executed remotely
    tar cjf bootstrap.tar.bz2 VERSION library $2
    scp bootstrap.tar.bz2 myuser@$1:/tmp
    ssh -A -t myuser@$1 /bin/bash -c "
      mkdir -p /tmp/runscript
      cd /tmp/runscript
      . ~/.profile
      tar xjvf /tmp/bootstrap.tar.bz2
      bash /tmp/runscript/$2 $3
      rm -rf /tmp/runscript
      rm /tmp/bootstrap.tar.bz2"
so far i haven't really found a good reason why we can't just have a library of generic bash scripts doing things.

i have to apologize for the crappy function style and the inconsistent shebangs, but meh, whatever.

https://github.com/fishman/simple_deploy


I agree.

It does in fact make me smile a little bit when the puppet installation script is longer than the script that does whatever needs to be done to install the app itself. :)

Over time, it may make sense to move to a more deterministic solution, but to start there (IMO) is often a case of premature optimization.


Installation is mostly trivial. Maintaining a bunch of systems is not.

> Over time, it may make sense to move to a more deterministic solution, but to start there (IMO) is often a case of premature optimization.

I mostly agree with this though.


> why we can't just have a library of generic bash scripts doing things.

It depends on what you're doing, I suppose. Me? I wouldn't want to try to use "simple" bash scripts to manage a large (or even medium) app farm, especially when those servers need ongoing management.


You definitely win points for working over pure SSH, I love tools that do that. (Which include 'fabric', 'ansible', and similar.)

Me? I like perl and I decided I'd write something that just executed "primitives" locally. Then set up a flexible system of pulling them from git, via rsync, etc.

My tool is largely ignored, but modelled after CFEngine 2.x, and is here:

http://www.steve.org.uk/Software/slaughter/


I see the appeal of FSS; I've felt frustrated with the extra effort it takes to describe some operations using Ansible compared to shell scripts. However, your project suggests to me that what we may really need is a tool to translate shell scripts into (reasonably idiomatic) Ansible playbooks, perhaps interactively and with further review from the user. That, or to otherwise automate the creation of playbooks where possible.

Edit: removed speculation about an idempotent *nix shell.


Thanks for this.

Lately I've been researching a way to automate things for a bunch of vps I own (nothing important, mostly self-hosted services like tt-rss), and obviously I looked at Chef/Puppet/Salt/Ansible and the likes. While those are valuable and awesome tools, I feel that they're too complicated for my simple needs.

FSS seems exactly the right compromise between running everything manually and using a full blown management tool.


I love it - we got into a dialogue at work today with Ops and Eng on what to use for config mgmt (of our app)... and then this hit HN today...

I sent this along as a tongue-in-cheek 100% Solution.

And.... I actually read the site after I sent it along and realized it's actuallyfuckingcoolshellscripts.org

So... thanks!


If you'd like an alternative that is even simpler than ansible, yet still idempotent, try pave:

https://pypi.python.org/pypi/pave

https://bitbucket.org/mixmastamyk/pave


We're using Chef right now, and evaluating other options. They all seem to be either way too over-engineered or not flexible enough, with nothing in between. I'd love to see a tool like yours be brought to full maturity; then I think I'd love to use it.


As I mentioned in another place above, if you'd like an alternative that is even simpler than ansible, yet still idempotent, try pave:

https://pypi.python.org/pypi/pave

https://bitbucket.org/mixmastamyk/pave


Other co-author here: I couldn't agree with you enough. We settled on Ansible, but have had our fair share of speed bumps along the way.


Out of pure curiosity, what makes Ansible the best choice for you?

For my team, I decided on Ansible because it was the simplest option (no agents to install, pure ssh).


By the way, that's the same reason I'm investigating using Pallet[1].

[1]: http://palletops.com/


We were coming from Chef where no one knew what was going on. We were looking for something simpler. There weren't really many other options, unfortunately. We're not full-time devops people...hence much of our frustration.


What were your frustrations with Ansible?


  "Wanna just use fucking shell scripts to configure a server? Read on!"

  Step 0: Install the gem

  # gem install fucking_shell_scripts
  # gem: command not found
wat.


Exactly

If I want to use FSS to configure the server, I already do. I don't need the "latest fad in configuration tool" to do something FSSs already do!

Like fabric. If you're not using their advanced features, you're writing a shell script in python.

Thanks, I'll write a shell script


I agree. This would be far more awesome if you could install it with a fucking shell script. It is called fucking shell scripts. Not fucking ruby scripts.


Joking aside, there is something to be said about doing CM with the shell.

If somebody created a list of requirements for the perfect language for doing CM it would probably be similar to:

- Always available on every machine

- Proven reliable and stable

- Built to interface with the OS and other utilities

Hmm, that sounds a lot like the shell. If your application doesn't already depend on Ruby, bringing Ruby in as a dependency is a lot of extra overhead. There is something to be said for this concept, but it should be shell all the way down.


On the other hand, Python comes installed by default on Red Hat, CentOS, Scientific Linux, Gentoo, Debian, Ubuntu, Mac OS X, Solaris and OpenIndiana, at least, so a "full" CM system like Ansible is potentially more portable than FSS :)


Most people coming from chef would have done the drudgery of making sure ruby works.


chef installs as omnibus, so you're suggesting that people should install the massive chef stack, and then the FSS gem?

_brilliant_ !


Well this is clearly aimed at people who are frustrated with Chef, above and beyond the frustrations of figuring out how to install ruby (just install from source, rvm sucks).

If someone doesn't already have Chef installed, why would they even have clicked on this link?



Yeah, but like, that's not a shell script, man.


It's just fucking shell scripts, already! *

* Depends on Ruby.


Honestly, this is not a bad idea. Since a newly brought-up VM instance will always be in exactly the same state, the shell script is completely deterministic. If any line fails to run, you could simply log an error and automatically shut down the VM.

The bash script itself is also extremely straightforward and is easily testable with a built-in REPL (you know, bash). Anybody who vaguely knows unix is also going to be able to understand and maintain the shell script. Adding in new dependencies is simple. It's easily portable and you don't need to do any extra work to add in new features. I can't even think of a drawback.


Typically shell scripts are fine in the beginning, but over time the complexity rises and it becomes an unmaintainable mess, and you'll end up reimplementing it in a proper programming language.

Shell scripts are not easily portable either, unless by "portable" you mean "works in Linux". Node.js scripts, for example, really (mostly) work on Linux and other platforms like Windows.

Until you hit some nasty "path is longer than 254 characters" bugs. Oh well..


I'm making the assumption that you are running a bunch of AWS instances off the same AMI. If you're not, a shell script is not going to be deterministic or maintainable - but most companies these days are just using AWS with an ubuntu AMI and then running some software on top, x100 for each server. The common solution to this has been very complex deployment and management programs that you need a devops team to keep maintained.

For something like this, you'd start up an instance, ssh in, set up the server using bash, grab your commands out of bash history into a script, and then deploy 100 instances and have them all run the script (see the sketch below). Simple enough that anybody who has used unix now understands your whole ops setup. It doesn't have the perfect rigor that other solutions have - but sometimes that high learning curve and perfect rigor mean that you miss the forest for the trees.
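In other words, roughly (AMI id is made up; assumes the aws CLI is configured):

    # capture the interactive setup, then stamp out instances that run it
    history | sed 's/^ *[0-9]* *//' > setup.sh    # hand-edit before trusting it
    aws ec2 run-instances --image-id ami-1234abcd --count 100 \
        --user-data file://setup.sh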


Finally, someone who gets it.

Just because someone wrote something shiny and new that replaces 10 lines of shell with 20 lines of chef installation and another 10 lines of chef, doesn't mean you have to use it.

    for box in box1 box2 box3; do
    cat << 'EOF' | ssh $box
    # do something
    EOF
    done

easy. Even better, just pass some data into userdata with your autoscaling group. (That userdata field? Just start it with a #! line, just like a shell script, and it'll execute your shell, php, perl, python, etc. script!)


Shell scripts are extremely portable and should be the preferred method for a large set of tasks. Properly written, a shell script can run on a 20 year old Solaris machine, any version of Windows (with installed tools like Cygwin) and any modern Unix variant... claiming Node.js is more portable is ridiculous on so many levels.

The problem is that so many programmers don't take the time (or care to take the time) to learn to use the well-thought-out design of Unix tools, opting instead to see every problem as a nail corresponding to the latest trend in hammers (programming languages).

This has led a lot of programming types to create advanced tools for managing Unix systems which largely ignore the design of Unix.


It's hard to write a portable shell script. My dotfiles need to be portable, and they involve a lot of shell. Every time I introduce a new OS, I have to make changes. Various oddities get you. These, for example, look really innocent but aren't portable:

  find -iname 'foo*'  # [1]
  ... | sed -e 's/ab\+c//'  # [2]
  ... | sed -i -e 's/abc//'  # [3]
  tar -xf some-archive.tar.gz  # [4]
  python -c 'anything'  # [5]
Things like messing around with /proc are more obvious, but things like curl (is curl installed? what do we do if it isn't? try wget?) can be hard too.

[1]: find doesn't assume CWD on all POSIX OSs.

[2]: "+" isn't POSIX. You have to \{1,\} that.

[3]: -i requires an argument on some OSs.

[4]: This is stretching the definition of portable a bit; I've worked on machines where you had to specify -z to tar, given a compressed archive. (tar has been able to figure out compression on extraction for well over a decade now, so -z is usually optional, but some places are really slow to upgrade.)

[5]: Unless anything is a Python 2/3 polyglot, you'd better hope that you guess correctly that Python 2 was installed. (And it's really hard here: python is either python 2 or 3 on some systems, depending on age & configuration, with python2 and python3 pointing to that exact version, but on some machines, python2 doesn't exist even if Python is installed, despite PEP-394.)


It's a well known fact that GNU tools have plenty of extra features which you have to be careful about using if you want portability, AND that many of the legacy commercial Unix implementations have positively ancient implementations and feature sets. I wouldn't really say it is so very difficult though.

Every script you write isn't going to be portable, but it's not that much of a stretch to endeavor to keep your script simple, not make assumptions, and be mindful of the potentially missing features of some implementations.

I take special objection to [5]; `python -V` isn't difficult at all to run, so hoping and guessing are not necessary.

There's a good guide here: http://www.gnu.org/software/autoconf/manual/autoconf.html#Po...


Portability is a red herring anyway. If you pursue it, you'll always end up chasing the lowest common denominator.

YOU can control where the app is deployed (this is largely true even if you're selling your app just by having installation requirements or by selling appliances instead of installable apps).


> I take a special objection to [5], `python -V` isn't difficult at all to run, hoping and guessing are not necessary.

I mostly meant that in a simple statement of:

  python -c "code"
…you're probably forced to assume that it's Python 2 (or write 2/3-compatible code) and hope that your assumption is right. You can't run `python -V`: you're a script! The point is that it is automated, or we wouldn't be having this discussion.

Of course, you can inspect the output of python -V (or just import sys and look at sys.version_info.major) and figure it out, but now you need to do that, which requires more code, more thought, testing…


I'd argue that you should probably stick to one subset of things in your bootstrap script -- and I'd say grep, awk, sed and (ba)sh go together; anything "higher level" like python/ruby/perl/tcl does not fit within that. You might want to check for python with a combination of "python -V" and the dance described above -- and, as part of bootstrapping, make a symlink (or copy, if you need to support windows and/or a filesystem without symlink support) to e.g. python2. Save that tidbit as "assert-python2.sh" and then first run "assert-python2.sh", then "check-bootstrap-deps.sh", and finally "bootstrap.sh" :-)


Interesting; for 1 and 4, I immediately assumed those might break (as for tar, I'd generally prefer something like zcat (or, for scripts, gzip -dc) | tar -x ... it makes it easier to change format (both gzip to lzma and tar to cpio)). For 2 and 3, I'd be wary of sed for anything that needs to be portable in general. For 3, it seems prudent to use a suffix with -i anyway; explicit being better than implicit most of the time.

As for 5: how many systems have python 2 installed, but no python2 binary/symlink? (I've never had to consider this use-case for production.)

Note: a slight benefit of splitting tar into zcat | tar and replacing python with python2 is that you'll get a nice "command not found" error. You could of course do a dance at the top of your script to check for dependencies with "command -v"[1]. If nothing else, such a section will serve as documentation of the dependencies.

Something like:

    # NOT TESTED IN PRODUCTION ;-)
    checkdeps() {
      depsmissing=0
      for d in "${@}"
      do
          if ! command -v "${d}" > /dev/null
          then
            depsmissing=$(( depsmissing + 1 ))
            if [ ${depsmissing} -gt 126 ]
            then
              depsmissing=126 # error values > 126 may be special
            fi
            echo missing dependency: "${d}"
          #debug output
          #else
            #echo "${d}" found
          fi
      done
      return ${depsmissing}
    }

    deps="echo zcat foobarz python2"
    checkdeps ${deps}
    missing=${?}

    if [ "${missing}" -gt 0 ]
    then
      echo "${missing} or more missing deps"
      exit 1
    else
      echo "Deps ok."
    fi

    # And you could go nuts checking for alts, along the lines of
    # pythons="python2 python python3"
    # and at some point have a partial implementation of half of
    # autotools ;-)
[1] https://stackoverflow.com/questions/762631/find-out-if-a-com...


Ahh yes, the much vaunted Hammer Factory Factory.

http://discuss.joelonsoftware.com/default.asp?joel.3.219431....


Completely off topic, but very interesting -- the author of that post wrote a book about surviving the Costa Concordia! http://www.amazon.com/gp/product/B00AUYIKNK/ref=as_li_qf_sp_...

Seems like he really did need some tools.


This fills me with great joy and sadness.


>The problem is so many programmers don't take the time (or care to take the time) to learn to use the well thought out design of Unix tools opting instead to see every problem as a nail corresponding to the latest trend in hammers (programming languages).

Unix tools are arguably the best tools available to a modern user. That, however, does not mean that the Unix tools are well designed; many would argue that the Unix tools are extremely poorly designed or have no discernible design at all. S-expressions are a much more powerful and useful abstraction than a "stream of bytes". POSIX was hacked on many years later in an attempt to make sense of the mess that shell commands had become. Shell scripts are very fragile and have never been truly portable across various *nixes, although the situation is better than it was twenty years ago, when it was enormously difficult to port scripts across the various commercial Unix installations, because they would break in many different and subtle ways.

I recommend reading the out of date but still useful Unix Haters Handbook: http://pdf.textfiles.com/books/ugh.pdf


Portability always comes at a high complexity cost. If you don't see it personally, someone else in your org does. Instead, look at why you think you need portability.


This. So many ORMs written with the idea that "it'll make your code DB-independent". How many projects switch DBMSes midway through?


Why do you really need it to be portable outside of Linux? I don't imagine many services will evolve to run on other platforms in their lifetime without significant configuration changes anyway.

I can't really argue the point about complexity though.


Because there are other operating systems out there too, with their own features making them a better choice (or just a preference) for certain tasks over Linux, including FreeBSD, OpenBSD, DragonFlyBSD, SmartOS, Illumos... The world doesn't end with Linux, nor does it start with it ;)


Chances are that you don't have a mixed fleet for a particular application though. So if you have a bunch of FizzBuzz VMs that you need to bring online, you can safely write shell scripts that work on FreeBSD, because you know all FizzBuzz machines will be running FreeBSD. If you've also got your BazQux service that needs a Linux fleet, then for that fleet you create shell scripts that work on Linux.

There may be some overlap between things that must be done on both the FizzBuzz and BazQux fleets, but that overlap is probably in simple tasks.


I think the point was that it's not common for someone to switch from running a server on Linux to running a server on something else, and it's even less common for someone to do that without changing the configuration pretty extensively.


> Honestly this is not a bad idea. Since bringing up a new instance VM will always be in exactly the same state, the shell script is completely deterministic.

Wrong, kind of. There are two variables in a VM: 1. Everything not in the VM and 2. The script itself.

1 is mostly dependent on what you're doing. If you're just calculating digits of pi, then yes, it's quite probably deterministic; if you're deploying software that's being pulled from github, running some initialization scripts, attaching some storage, then you're going to run into variables. All of those aforementioned actions have failed before: github.com might be down (a rarity, but it happened this week!), your scripts contain new code that's not quite up to par, or the cloud provider says the storage is attached to the VM, but it doesn't actually show up.

2 is that the script is probably in a VCS, and people are changing it. Someone is bound to write a line that doesn't work. (In fact this seems to happen quite often when tests are absent…)

> I can't even think of a drawback.

I can. The biggest one is that bash's arcane syntax is a deathtrap. It's a great shell, but for stuff that needs to work and work reliably, it's riddled with holes. Take the article's script:

  sudo apt-get -y install build-essential zlib1g-dev libssl-dev libreadline6-dev libyaml-dev
  cd /tmp
  wget http://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p247.tar.gz
  tar -xzf ruby-2.0.0-p247.tar.gz
  cd ruby-2.0.0-p247
  ./configure --prefix=/usr/local
  make
  sudo make install
  rm -rf /tmp/ruby*
Several of these (apt-get, wget, tar, did you just install code downloaded over an insecure channel onto a server?!, ./configure, make, make install) can easily fail; if they do, your fucking shell script will keep plowing along as if nothing happened. Depending on the next action, this can be meh, or WAT. Since it ends with "rm -rf ...", I think that even if it does blow up horribly, it'll return success. You can put "set -e" at the top to make it bail sooner, but `set -e` won't catch failures in all commands (false | true). Fucking shell scripts.
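A hedged sketch of the minimal hardening I mean (same steps, but failures now stop the run):

    #!/bin/bash
    set -euo pipefail    # exit on errors and unset vars; catch pipeline failures too
    trap 'echo "failed at line $LINENO" >&2' ERR
    cd /tmp
    wget http://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p247.tar.gz
    tar -xzf ruby-2.0.0-p247.tar.gz
    # ./configure, make, sudo make install as before; any non-zero exit now aborts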

Don't get me wrong: shell scripts are great, especially if you need it to work NOW. One off stuff especially. But if it's going to stick around awhile, having something that automatically looks at and raises exceptions/errors when stuff fails is great.

The thing I miss from a lot of these automation libraries is being able to annotate dependencies between commands: that wget and apt-get can run together. (The rest pretty much must run serially.)

Libraries also let people who really know how to make this stuff sing build the low-level functionality in. That make could be make -j $(( $coeff * $number_of_cores )); make install could be similar. Maybe CFLAGS or CXXFLAGS could compile ruby with a few more options for a slightly more optimized install. We might extract the tar in a directory where rm -rf /tmp/ruby* won't inadvertently delete something (unlikely if you're on a new VM, but I find that's not always the case).

Shell scripts are a tool. They have a place. Nobody is saying get rid of them, nor is anyone saying get rid of them for deploys. I just want something a little more robust.


Thank you for explaining this with some clarity; it's early here, and I've been struggling to say this very thing.

Bash is fine, but it's not a sophisticated high-level language. For doing sophisticated things, a simplistic tool is not enough. We need processes that are deterministic and that can act intelligently. The "bash is fine" crowd, in my experience, tends to be the same crowd that thinks that servers are special snowflakes that we must feed and care for. Those days are over.


After 4 years of heavy Cheffing, I completely agree with most opinions expressed here. I'm pretty sure Puppet is just about the same, I have 0 experience with it though, I won't claim otherwise.

I got frustrated enough with the Capistrano, Fabric, Puppi lot that I wrote a pure bash deployment tool with a fitting name: https://github.com/gerhard/deliver.

Ansible, on the other hand, is something else. There is some learning curve, agreed, but it's not as bad as awk or sed. And seriously, if you know your bash, you will know both awk and sed. I consider my shell scripting to be above average, and I've attempted a pure bash Docker orchestration tool https://github.com/cambridge-healthcare/dockerize, but Ansible just makes the same job easier. It's not everyone's cup of tea, but before you dismiss it, give it a real chance. I should know, because I initially dismissed it thinking it was too complex, yet another Chef circus, yada yada, but trust me - it's worth it ; )


I spent the last year and a half cross-training from dev to ops.

One of the most important lessons I learned: use operating system packages for production; don't compile from source.

Compiling from source:

- Wait minutes for each instance to come up, as each time requires a fresh compile

- Have to download from ruby-lang.org -> you have a dependency on this site being up, bad idea when you hit a load spike, need to scale, and ruby-lang.org is having a bad day

- Loss of dependency management. Now you can't tell whether your environment has the right packages installed, nor can other things express a dependency on your code being installed

- Very difficult to remove packages for upgrades/maintenance/security fixes

Packages:

- Host your own repo, removing the need for external dependencies (you can do this with source as well, but it's much better when packages can be discovered automatically from a repository)

- Much cleaner rollback -- it's almost impossible to trash a system with apt-get/yum; they'll always leave you in a good state if the package fails to install, or other mayhem ensues


Agreed. Tracking all those versions for security vulns is a major pain as well. Unless you absolutely positively must have X new feature that's only in upstream, DON'T compile it yourself unless you're prepared to pay that price.


The nice thing about an approach like this is that you don't actually need to use shell for your scripts - you could use Perl, Python, etc. The point is you can just write in whatever scripting language you like/gets the job done and not need to worry about learning a new domain-specific configuration language.

Does anyone else here see value in a configuration management system that centers around installing files and running scripts, much like FSS? I'm contributing to, and using in production, a configuration management system that takes this approach. It's a lot more mature than FSS, but unfortunately the primary author isn't ready to open source it yet. If there's some demand for it, maybe I can convince him to hurry up.


This does not manage a server, it provisions a server. The whole point of configuration management software is to bring your software into a desired state from any actual state the server may be in. This rests on the notion that configuration drifts, i.e., for long-running services, a server eventually gets misconfigured for a variety of reasons.

Bottom line, if you are just running a small startup with a few servers and a few people, then by all means use FSS, but eventually you will need real CM.


The CM system I'm using "journals" every change it makes to the system, so if you later remove or change a file or script in your CM repository, it undoes the old change. This eliminates virtually all configuration drift, provided you aren't making changes by hand outside of the CM system (which you should never do anyways).

This system has been managing about 100 servers and several hundred desktops at a single site since 2009, so the approach does scale. It has some problems that declarative solutions like Puppet solve, but at the same time it solves problems that Puppet/Chef/others have.


Which CM system?


In the cloud, provisioning a server might be the only management that you do. In fact, arguably, SHOULD do.

Servers are now disposable.


I use (my own tool) Slaughter for configuring about fifteen servers. The vast majority of configuration changes are:

* Upload/update a configuration file, and if it resulted in a change then restart the affected service.

* Append a line to a file, if it is missing.

* Search/replace a pattern against a file and do something if that resulted in a change.

So yes, add the ability to install/remove a native package (be it a .deb, .rpm, or whatever) and you've got a lot of power.

http://www.steve.org.uk/Software/slaughter/


Having begun using Chef over the last year, I've found that my problem has not been so much with chef itself (a major issue with how it deals with when/whether to restart services aside) but with the community cookbooks. This is where the overcomplexity tends to come in, in my experience.

I'm not sure you can genuinely have a good provisioning script that treats all of these things as variables in a matrix:

- OS (sometimes including windows as well as linuxes and bsds)

- OS version

- Using system packages or building from source

- Package/source version

- Every single config variable the program/service can have

Every time we use a community cookbook, which can include several-hundred-plus lines of code to deal with platforms we will never ever deploy to, it ends in tears eventually.

Having a 20 line recipe that works on the Ubuntu LTS we've standardized on is much closer to what FSS can achieve, but even then it does often make simple tasks considerably more difficult.


> gem install fucking_shell_scripts

This fucking sucks.


So... this is basically just one module from Ansible, plus some obscenities?


Which is another way of saying it's Ansible without all of the stuff you don't need (plus obscenities).


We can add obscenities if you want :) We do have cowsay integration though. (There are some questionable cowsay modes!)

That all being said, the thing most obviously missing here, as in any of these tools, is the resource model -- and the templating system, and where you put variables and things to manage variance between systems. Thus, it will blast out some commands for you, but that is the easiest part of the equation -- not too much different than, say, doing something from Capistrano or Fabric.

Even in Ansible, that's the part we built first.

Achieving idempotence in shell scripts is the reason most people move away from shell scripts, and also ... well, the desire to program less :) Then you'll want the rolling update features, or provisioning, or dry run, or a way to pull inventory from cloud sources, or... and you'll steamroller a bit.

The balancing act for us is achieving the right level of features vs language complexity and keeping in that sweet spot.

I still think you should have an easy time getting started, see also things like http://galaxy.ansible.com for community roles to download to go faster -- most people should be able to do basic things in a few minutes. The script module in particular is a great way of pushing a f'n shell script :)

http://docs.ansible.com/script_module.html

(full disclosure: I created Ansible, but I did not shoot the deputy)


You just sold me on trying ansible.


Except all the modules you don't use aren't getting in your way. I'd be totally for this as a simpler solution for simpler use cases, if it was actually easier than Ansible to get installed and set up. Since it's not, you're paying the same upfront cost that you'd pay for the more powerful tool, except you'll hit a wall if you try to expand it later.


I too think the name is bad, but for a different reason than everybody else: without context, it sounds like the name is complaining about shell scripts, whereas the project itself endorses them. Before I clicked the link, I was expecting to come here and write a defense of shell scripts.


Every time I go to write a shell script I'm like "do I need to put quotes around this variable to interpolate it? How do I do looping again?" then I give up and use Python or Perl or something.


At my workplace, we just coined the phrase 'tickslexic' for when you can't remember what type of quotation character to use.


I guess you could load your favorite language runtime. That sounds like a one line change. In your shell script. Which is the point. I'm betting at some point you look back and realize that much of your config doesn't need a secondary language.


How would you take a template file, fill in some values, copy it to a certain path on the server, and give it the right permissions? With, say, Python, I'd have to read the file, use Mako or Jinja to fill in the values, scp the file to some temporary location, then somehow run Python code on the server over SSH to copy the file and give it the right permissions.

With Ansible, this is a single line.


Let's see... I would take the template file, fill in some values, scp it up to its final location on the server, and run chmod on the server.

I guess I could learn the "one-line" Ansible way to do it, but I'd also have to learn to set up Ansible. And I'm guessing it's not as flexible as, say, shell scripts.
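i.e., something like this sketch (names illustrative):

    # fill the template, push it, fix ownership and permissions
    sed -e "s/{{ server_name }}/example.com/" nginx.conf.tmpl > nginx.conf
    scp nginx.conf root@box:/etc/nginx/nginx.conf
    ssh root@box 'chown root /etc/nginx/nginx.conf && chmod 644 /etc/nginx/nginx.conf'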


The cool thing about Ansible is that there's no setup. You install it, write the config, and run "ansible-playbook <config>". It connects to the nodes using SSH and runs without having to be installed on them.

As for not being as flexible as shell scripts, I'd say that's technically impossible, since it has a command for running shell scripts :) Personally, I never had to use it, but it's there.


I really like the idea, but the name is somewhat off-putting, especially if I were in a position where I had to sell it to management (luckily I'm not). Will definitely give it a whirl, but I strongly suggest considering a name change.


It would be nice if there were a way to use this without having to deal with obscenities. Not everyone finds this funny. Useful idea though :) At least for early-stage projects; I can imagine it getting complicated quickly.


And one more thing: the point of a system management tool is to have no dependencies. If you need to install or use anything other than the provisioner to set up a vanilla distro, you've been hustled. Bash is guaranteed to work everywhere and that's what most love about it - myself included. But Ansible works on my Raspberry Pis, on my FreeBSD storage servers, and on my production Debians, Ubuntus and ArchLinuxes with no crutches or aids. I had bash scripts before, and was really close to open sourcing them, but then I realised there is a much better way ; )


1. You use unprofessional obscenities in your project title

2. I don't take your project seriously (perhaps unfortunately)


Not all of us get offended by the word fucking. We don't have to be super serious all the time.


Agreed. It's like some 17-year-old trying to sound cool.


Or someone trying to be Zed Shaw.


don't you mean kewl?


I originally thought this was a lament about how shell scripts suck, like along the lines of "man, those fucking shell scripts ..."

My surprise when I saw this is the exact opposite of that ...


This stuff is all fine and good for a few VMs. I wouldn't try to run a serious endeavor using them, unless I really wanted to spend a bunch of time diagnosing, debugging, and sweating.

The reason automation, CM, and devops have taken hold is because people are so goddamned tired of serving technology like it's some kind of god to which we owe worship. Screw that. I want the VMs to go make ME a sammich, not the other way around. Trying to build serious infrastructure using nothing but shell scripts is a quick trip to the temple of server-worship.


Got a good laugh out of me, seems a little hypocritical to depend on ruby (which chef et al. use)

So far I only have experience with puppet, and it seems really annoying to tie together phases like "install postgres" -> "reinitialize the database in UTF-8, in a way known to work on ubuntu" -> "update the configs so non-localhost connections can actually connect" -> "okay, now it can start."

The above problem is something I do not want to deal with when it comes to a provisioning system.


Author here: fair point. Ruby is only required for the client that's building machines. Ruby is our primary language so it's on all of our servers and laptops. Less about thought and more about lowest barrier to entry ;)


Seems like bash (or insert your favorite shell here) would be THE lowest barrier of entry on any Linux, BSD, whatever host--perfect for this project! :-)


Who in their right mind wgets and builds software on a production machine...?


The authors of Chef.


Uh ... not really.


I actually do this with one piece of software, and that is PostgreSQL. I know it in and out, it's easy to upgrade via source, and I prefer to put the database in a bundled location e.g. /data/postgres or /home/postgres... instead of /var/lib/postgresql/9.1/main, /etc/postgresql/9.1 et cetera.


Why not just build your own packages? fpm[1] is very simple to use.

[1] https://github.com/jordansissel/fpm


Hopefully: over https, not from the central repo, and only after verification of the sha256sum


Hahaha, I love that the install process is more complicated than the example shell script.


Please do not wget files over HTTP (no S) and just run them, as the example script does. Have the script run the downloaded file through sha256sum and check the output matches what you expect.

I know that was only an example script, but we don't want to be encouraging bad practices here.
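Something like the following, say (the checksum is a placeholder, not the real digest):

    wget https://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p247.tar.gz
    echo "<expected-sha256>  ruby-2.0.0-p247.tar.gz" | sha256sum -c - || exit 1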


Having managed tens of thousands of servers and multiple hundreds of configurations at a big web company for many years, I completely see where OP is coming from.

I think a huge problem with puppet/chef is that they try to do it all, i.e. deployment and OS state management, and make it work with vendor-specific package management systems.

Unfortunately, rpm/debian-based package mgmt systems are not well suited for complicated deployment strategies. Most companies I worked at came to a solution similar to:

  * Install everything you need in /package_versionstr (except, say, glibc)
  * Point a "current" symlink at the active version
This enables a simple, atomic rollback/rollforward and is much easier to reason about than complicated package state.
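As a sketch of the flip (paths illustrative):

    # roll forward; rollback is the same command with the previous path
    ln -sfn /myapp_2.3.1 /myapp_current
    # for a truly atomic swap on GNU systems:
    #   ln -s /myapp_2.3.1 /myapp_current.new && mv -T /myapp_current.new /myapp_current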

For simple OS config management (say usermgmt, sysctls), we used a system similar to the ideas expressed by OP.

1. Keep it simple. puppet/chef have horrendous DSLs that make it really complicated to reason about what's going on. I shouldn't have to debug a backtrace 15 levels deep to understand why a useradd didn't work.

2. Server-side logic. Don't try to do "intelligent" stuff based on client-side state; it will be almost impossible to get it right. All data needed for state needs to be derived from group membership. This helps in validating all changes upfront and in diffing state changes across a group of nodes.

3. No orchestration. Expect any changes to be applied at any time. This acts as a forcing function to make your scripts idempotent.


This actually looks like as much work to set up as Ansible, if not more. As soon as you start talking about required directory hierarchies, you've lost the "just f###### do it" feel. Then I still have to write a special YAML file, and then my scripts.

The Ansible setup for my personal server started as two files: an inventory file consisting of an ip address, and site.yml. My ignorance (and some installation issues) notwithstanding, it has scaled pretty smoothly from there to copying up config files, templated nginx config, and so on. I don't see much room for anything between that and a literal "just f###### shell script".


If you really want some fucking shell scripts:

http://www.nico.schottelius.org/software/cdist/


This is simply brilliant! I've dreamt about such tools. And now you have reached into my dream and turned it into a reality. We might as well be soul mates you and I.


I've been doing almost exactly this for years, always ashamed that I didn't take the time to learn a "real" tool like puppet. Solidarity.


There are two ways to debug something:

1) Step through the program, try to think of logic errors etc.

2) Do a binary search through your latest commits and see where something broke.

Those are your two clues.


As someone in the PHP community, I've made something similar (Vaprobash - "Vagrant Provision Bash Scripts") with the goal of helping people learn more about what goes on when setting up servers for use (installing web servers, databases, configuration, etc).

https://github.com/fideloper/Vaprobash


It's not a good name. Outside of the Valley, where youth and irreverence do not dominate, the name makes this a non-starter.


Actually, I think the name goes over rather well in London, where we like swearing (and shell scripts) quite a lot.


Don't get me wrong, I'm the biggest fan of shell scripts and chapeau for writing opinionated software.

However, not having idempotent configuration files seems to be not only missing some of the power of shell scripting but also missing a huge principle of "good" server configuration.


If you're going to use ruby, you may as well commit and use Sprinkle. You still get to use all the shell scripts you want, and you get the deployment power of Capistrano. I've really enjoyed using it for my small provisioning requirements.


The name is bad.


If you're going to go this direction, wouldn't it make more sense to just upstart another script that runs your scripts? How much time and code does this actually save?


A wrapper for shell scripts with a dependency on Ruby but lacking determinism, templating, etc.? In other words, a remote command dispatch and execution tool.


I gotta know...

Do the scripts reproduce with all that fscking going on?


I'm all for anything that makes managing servers easier, but why use swear words in the name? It is unprofessional.


another project that needs to discover rdist


I ♥ Ansible ...

'nuff said.


Why do I have to use ruby to install shell scripts? Why not wget or curl or GET?


Wait, a gem? A yaml file? What exactly is this doing with my scripts? Can I really expect this to make my shell scripts work across clouds?


Guiys, the fucking swearing in the name suks alot. Not all the managers like it. That's why we have perfesionalism, cause prfesionalism is good. Guiys, we haf to get along. We cant do it with swearing. Please, think about pleasing everyone and offending none. Otherwise ur milenials, ruining our designated corprate productivity spaces.


Shrug

If you want people to use your software -- and you do or you wouldn't be promoting it -- you should consider how you present it. Bad words don't offend me, but they do suggest immaturity and inexperience... qualities I tend to avoid in systems orchestration.


I think they were joking =)

> That's why we have perfesionalism, cause prfesionalism is good.


Yes, it's sarcastically making fun of people who complained about the name.


Wow, that fucking sucks. Installing rails to run a shell script? No thanks, I think I would rather blow my brains out.



