Cron in production is a double-edged sword (orchestrate.io)
104 points by neo2001 on Oct 16, 2015 | 57 comments



Most of what he's writing about, and much more, is made substantially easier with systemd timers.

E.g. want errors to cause e-mails, but everything else to just go to logs? Use a timer to activate a service, and make systemd activate another service on failure.

Want to avoid double execution? That's the default (timers are usually used to activate another unit; as long as that unit doesn't start something that double-forks, it won't get activated twice).

(Some) protection against thundering herd is built in: you specify the level of accuracy (default 1m), and on boot each machine randomly selects a number of seconds by which to offset all timers on that host. You can set this per timer or for the entire host.

And if you're using fleet, you can use fleet to automatically re-schedule cluster-wide jobs if a machine fails.

And the journal will capture all the output and timestamp it.

systemctl list-timers will show you which timers are scheduled, when they're scheduled to run next, how long until then, when they last ran, and how long ago that was:

     $ systemctl list-timers
    NEXT                         LEFT     LAST                         PASSED       UNIT                      
    Sat 2015-10-17 01:30:15 UTC  51s left Sat 2015-10-17 01:29:15 UTC  8s ago       motdgen.timer             
    Sat 2015-10-17 12:00:34 UTC  10h left Sat 2015-10-17 00:00:33 UTC  1h 28min ago rkt-gc.timer              
    Sun 2015-10-18 00:00:00 UTC  22h left Sat 2015-10-17 00:00:00 UTC  1h 29min ago logrotate.timer           
    Sun 2015-10-18 00:15:26 UTC  22h left Sat 2015-10-17 00:15:26 UTC  1h 13min ago systemd-tmpfiles-clean.timer
And the timer specification itself is extremely flexible. E.g. you can schedule a timer to run x seconds after a specific unit was activated, or x seconds after boot, or x seconds after the timer itself fired, or x seconds after another unit was deactivated. Or combinations.
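
A rough sketch of what that looks like on disk; the names and paths below are made up, and status-email@%n.service just stands in for whatever failure handler you wire up via OnFailure=:

    # cleanup.timer
    [Unit]
    Description=Periodic cleanup

    [Timer]
    # first run 15 minutes after boot, then every 6 hours after the last run
    OnBootSec=15min
    OnUnitActiveSec=6h
    # accuracy window (default 1m); widening it lets systemd coalesce/splay wake-ups
    AccuracySec=1m

    [Install]
    WantedBy=timers.target

    # cleanup.service
    [Unit]
    Description=Clean up old data
    # on failure, activate another unit (e.g. one that sends mail)
    OnFailure=status-email@%n.service

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/cleanup.sh

Enable and start the .timer (not the .service), and it shows up in the list-timers output above.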


I agree: I recently moved my scripts from crontab to systemd timers and there is no going back. Finally I have a proper way to debug and log. Also on NixOS I can have the unit file and timer generated in very few lines. Look at this one for example:

    "xkcd" = {
       description = "send latest xkcd comic"; 
       wants = [ "network.target" ]; 
       startAt = "Mon,Wed,Fri *:0/30"; 
  
       path = with pkgs; [ telegram-cli ];   
       serviceConfig = { 
         User = "rnhmjoj"; 
         Type = "oneshot"; 
         ExecStart = "${cabal}/bin/xkcd"; 
       };
     } // basicEnv;


It's a fun example, but for webcomics an RSS feed is probably a better solution than scheduling something on your machine.


This. While certain crowds like to hate on systemd, the many features beyond init are lost in the noise for casual observers. I love systemd timers.

The biggest shortcoming with systemd timers is that they don't have an easy way to notify admins of failures the way standard cron does.

I tried to hack around this[0], but it still feels wrong.

[0] https://github.com/kylemanna/systemd-utils#scripts
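
For reference, the pattern I've seen elsewhere (not necessarily what those scripts do) is a templated unit plus a tiny mail helper, roughly:

    # status-email@.service
    [Unit]
    Description=Send a status email for %i

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/systemd-email ops@example.com %i

    # /usr/local/bin/systemd-email (hypothetical helper; recipient is $1, failed unit is $2)
    #!/bin/sh
    systemctl status --full "$2" | mail -s "[$(hostname)] unit failed: $2" "$1"

Then any service can declare OnFailure=status-email@%n.service.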


Something we've found to be fairly lightweight (compared to e.g. Chronos), but incredibly featureful, is using Jenkins (the CI server) as a cron runner. We use http://docs.openstack.org/infra/jenkins-job-builder/ to configure it at deploy time, so it lives as part of the deploy rather than system config.

Here's a small list of things we're getting out of it:

- concurrent run protection (& queue management via https://wiki.jenkins-ci.org/display/JENKINS/Concurrent+Run+B... )

- load balancing (e.g. max concurrent tasks) and remote execution with jenkins slaves [sounds complicated, but really jenkins just knows how to SSH]

- job timeouts. No more hanging jobs.

- failure notifications via slack/hipchat/email/whatever. [email only on status change via https://wiki.jenkins-ci.org/display/JENKINS/Email-ext+plugin ]

- log/history management: rotation & compression.

- fancy scheduling: e.g. run this job once every 24h, but if it fails keep retrying in 5 minute increments (https://wiki.jenkins-ci.org/display/JENKINS/Naginator+Plugin ). You could also use project dependencies for pipelines, but we've been staying away from that.

- monitoring: we use the datadog reporter & alert on time since last success. Given how mature Jenkins is, this likely translates to whatever system you're using just as well.

It's worked incredibly well for us. We migrated to Jenkins from crontabs with cronwrap (https://github.com/zomo/cronwrap). We're never going back.
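
For the curious, a jenkins-job-builder definition for a cron-style job is only a handful of lines. This is just a sketch with made-up names; the timeout/retry/notification bits from the list above come from their respective plugins and are left out here:

    - job:
        name: nightly-cleanup
        project-type: freestyle
        triggers:
          # Jenkins' "H" hashes the job name to a minute, spreading load
          - timed: 'H 3 * * *'
        builders:
          - shell: |
              /usr/local/bin/cleanup.sh --verbose
        publishers:
          - email:
              recipients: ops@example.com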


Jenkins is nice but you should be careful.

Once I had a job that went astray and filled the disk with logs. Since Jenkins couldn't write to the disk anymore, it stopped working completely, so no jobs ran and, more importantly, no notifications went out. Funny thing: there was a job to monitor free disk space, but the stray app wrote ~100GB in less than 15 minutes (damn SSDs :p).

Another time (times, actually), I had the OOM killer kill a Jenkins-related process. Being a JVM-based app that starts at about 1GB of RAM use doesn't help, I guess. This led Jenkins to hang on a job; the timeout didn't work, and I couldn't even stop the job manually. Other jobs wouldn't start, and again no notifications were sent.


For those preferring a self-hosted OSS monitoring solution, Jenkins is a good multi-purpose choice (it does more than continuous integration!).

I inherited a legacy application with tons of cron jobs running scripts on the production server. Instead of risking moving our jobs to Jenkins, we're simply using Jenkins' POST endpoint to post job results from the cron jobs themselves. It's not perfect, and doesn't give us all the goodies listed above, but it does give us more visibility into the jobs themselves until we can move them all off reliably. +1 from me if you are in a similar situation.
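
Roughly the kind of thing we mean (job name, credentials and parameter names are placeholders, and your Jenkins may also want a CSRF crumb): a parameterized "results" job that the cron wrapper triggers over Jenkins' remote API.

    # at the end of the cron script
    rc=$?
    curl -fsS -X POST --user "cron-bot:$JENKINS_API_TOKEN" \
      "https://jenkins.example.com/job/cron-results/buildWithParameters" \
      --data-urlencode "SOURCE_JOB=nightly-backup" \
      --data-urlencode "EXIT_CODE=$rc" || true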


Yup, we do the same.

We made sure that Jenkins doesn't fiddle with the environment, so that everything is derived from the various networked user accounts.

Using @hourly (which Jenkins hashes to a per-job minute), the load is spread evenly over the hour to even out resource-starvation spikes.

We have Jenkins Job Builder (and its YAML) in a git repo to make sure that the delicate snowflake that is Jenkins is repeatable.


We use Jenkins for a cron-replacement too. We've noticed all the benefits you mention plus it's dead easy for others in the organization to (re)run tasks, even with different parameters.


I've been using Dead Man's Snitch[0] in production for a few years. It's been a life saver. Not affiliated, just a happy customer.

[0] https://deadmanssnitch.com/


Seconded. DMS is the easiest thing to just drop in on the Nth cron job you add. Eventually you might need something more complicated for monitoring/outages/etc., and that something is probably either a whole lot of Nagios and baling wire and/or PagerDuty, but DMS is perfect for "I really need Tarsnap backups to not just silently fail."

I also end up creating a lot of Twilio scripts which act as either a positive or a negative control for the call/SMS, depending on how critical the thing I'm monitoring is. For example, one of my sites updates an /api/healthcheck result with a timestamp every five minutes if everything is going peachy, and another box polling that endpoint blows up my phone if it fails to get HTTP 200 and a timestamp within the last five minutes. (This works, but I swear I need to tweak it just a wee bit, as today I had my once-quarterly woken-up-at-4-AM-because-gremlins-ate-a-single-HTTP-request.)
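
The polling side of that is only a few lines of shell. This sketch assumes the healthcheck endpoint returns a bare unix timestamp; URLs, phone numbers and credentials are all placeholders:

    #!/bin/sh
    # run every minute from cron on the watcher box
    LAST=$(curl -fsS --max-time 10 https://example.com/api/healthcheck) || LAST=0
    AGE=$(( $(date +%s) - LAST ))
    if [ "$AGE" -gt 300 ]; then
      # Twilio Messages API: page me by SMS
      curl -s -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_SID/Messages.json" \
        -u "$TWILIO_SID:$TWILIO_TOKEN" \
        --data-urlencode "From=+15005550006" \
        --data-urlencode "To=+15005550001" \
        --data-urlencode "Body=healthcheck is ${AGE}s stale"
    fi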


This reminds me of https://docs.google.com/a/gravitant.com/document/d/199PqyG3U... on how you should only wake up engineers when there really is a problem. I'd suggest logging based on error messages -- though I get it, if a problem occurs upstream, you wouldn't know it unless you'd polled for it too, as a data point. HN comments on that doc at: https://news.ycombinator.com/item?id=8450147


Shameless plug: https://healthchecks.io. Same idea, but open source.


Healthchecks.io looks really interesting, both because it's an open source Django project and because I was disappointed with Dead Man's Snitch. DMS forces me to live within their timing for running checks: if you have something that has to occur at 3am every morning, you won't know it failed until midnight UTC later that day, or when a customer calls to complain.

Healthchecks handles this a lot more sensibly. I might throw it on a linode and give it a shot. Thanks for releasing it.


Wow, that's awesome. That really is the biggest problem with DMS. I asked them about that feature a couple years ago, they said it was on the roadmap. Might ping them again.


I'll throw in a vote for DMS. I use it at work to verify that our cron jobs ran successfully. Dead simple and very effective.


This is not an argument against cron. It is a demonstration of people not abstracting code. One of the thousands I've come across.

Take all of the features he mentions and abstract them into a launch_from_cron.sh file. Make that file accept a script path as an argument and voilà! All of the safety added to cron, without the need for code duplication or the massive-overhead solutions listed in these comments.
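
A minimal sketch of such a wrapper (paths and the alert address are placeholders; it assumes bash for $RANDOM):

    #!/bin/bash
    # launch_from_cron.sh /path/to/job.sh [args...]
    set -u
    JOB="$1"; shift
    NAME=$(basename "$JOB")
    LOG="/var/log/cron/${NAME}.log"

    (
      # single-instance lock: bail out quietly if the previous run is still going
      flock -n 9 || exit 0
      # random splay to avoid a thundering herd at the top of the minute
      sleep $(( RANDOM % 60 ))
      if "$JOB" "$@" >>"$LOG" 2>&1; then
        date > "/var/run/${NAME}.last_success"
      else
        tail -n 50 "$LOG" | mail -s "cron failure: ${NAME} on $(hostname)" ops@example.com
      fi
    ) 9>"/var/run/${NAME}.lock"

Then every crontab entry is just launch_from_cron.sh plus the real command, and all the safety lives in one place.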


Did you not see the cron script at the end of the article? The author does exactly this.


I work for Yelp, and we use cron for purposes similar to those mentioned in this article, mostly synchronizing small bits of configuration or data that we want local to the machine. We're heavy Puppet users, and we made a module to assist us in the management of our crons [1]. If you're a Puppet shop, I highly recommend checking it out. It provides answers to each of the problems mentioned in the article, often using the same mechanisms. I especially like its integration with Sensu, which we use for monitoring the jobs.

We've found that deploying cronjobs onto individual hosts is quite powerful, and helps us fill a niche between configuration management tools (like Puppet) and specialized coprocesses (like Smartstack). We have cronjobs for downloading code deploys, showing Sensu state within the motd, reconfiguring daemons (especially the Smartstack ones), and (of course) cleaning up unused data.

Of course, there's also the separate problem of scheduling and coordinating tasks across an entire cluster. In most cases we don't use our cron daemons for this, although we do have some jobs that run on multiple hosts and enforce mutual exclusion by grabbing a lock in Zookeeper.

[1] https://github.com/Yelp/puppet-cron#puppet-cron


No one has mentioned Rundeck: http://rundeck.org/

I've been using it for two years now. This has replaced cron on about 200 nodes.

Not only does it do cron, it also helps deploy artefacts (integrated with Jenkins) through simple forms. We now have ops with zero Linux experience deploying code.


+1. Also, I replaced all my nagios event handlers with rundeck jobs, so nagios just calls the rundeck API. I get a full audit trail of when the job ran, with what parameters, how long it took, and its outcome.


Having local mailboxes on each server is not really useful in a cloud setup with hundreds of machines. But that's not a reason to silence the output; something bad might happen, and only stdout/stderr might give you an answer as to what exactly is going wrong.

Instead, use https://github.com/zimbatm/logmail. It's a `sendmail` replacement that forwards everything to syslog. Then forward all your syslogs to a central place and you can capture and analyze these messages.
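
For the "forward to a central place" step, a single rsyslog rule on each host is usually enough (hostname is a placeholder):

    # /etc/rsyslog.d/forward.conf -- "@@" forwards over TCP, a single "@" would use UDP
    *.* @@logs.example.com:514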


I use Jenkins instead of cron. I get an RSS feed of processes that exited non-zero; it captures the output but doesn't e-mail it to me. This is totally not what it's designed for, but it is closer to what I want than cron is.


I do this too. We basically use Jenkins as "cron that the non-engineering team can read with a web GUI, auto-archiving of files, configurable email notifications". It's ugly but it gets the job done.


That's actually a pretty damn brilliant use of a CI system. Can it be distributed, though? All timed jobs running from one box screams "single point of failure"


It's as distributed or not as cron.


I meant, if 5 boxes run cron and one box blows up, only the jobs on that box are affected. If one box is running all jobs and it blows up, all jobs are affected.


If 5 boxes run jenkins and one box blows up, only the jobs on that box are affected.


That's not really true. With Jenkins, the Jenkins master controls 100% of the execution of jobs on the slaves and Jenkins cannot be multi-master. With cron, each host is responsible for executing its own cron jobs, removing the single point of failure.


The problem isn't cron, cron is just a dumb execution tool.

The problem is that we don't have any way of alerting our monitoring systems from a cron job.

This is exactly what I've been implementing: a simple curl API call to our monitoring system when a cron job has run is all that we need. This puts cron monitoring into the same sphere as all other monitoring and puts the alert on a web page where it can eventually be found by our second-line or on-call personnel, instead of in someone's mailbox.

Edit: And you don't need a fancy REST based API for your monitoring system to do this, ye ol' nagios agent could do it with some hacks.

The hard part is having the discipline to fix all your cron jobs in this way, but adding || true is already tantamount to this.
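
Concretely it's not much more than this kind of wrapper; the endpoint is made up, substitute whatever your monitoring system accepts:

    #!/bin/sh
    # run the real job, then report its exit status instead of hiding it behind || true
    /usr/local/bin/nightly-sync.sh
    rc=$?
    curl -fsS -m 10 -X POST "https://monitoring.example.com/api/checks/nightly-sync" \
      -d "host=$(hostname)" -d "status=$rc" || true
    exit $rc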


This is basically the approach we take with Prometheus, with the option to add in additional stats like duration and processed records too.

http://www.robustperception.io/monitoring-batch-jobs-in-pyth... is the full Python version, and the simple version is a bash one-liner too.
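
Assuming the Pushgateway sits in the middle, the bash version is roughly:

    echo "my_batch_job_last_success $(date +%s)" \
      | curl -fsS --data-binary @- http://pushgateway.example.org:9091/metrics/job/my_batch_job

Then you alert in Prometheus when time() minus that metric grows larger than the expected interval.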


Don't systemd timers (https://wiki.archlinux.org/index.php/Systemd/Timers) address these issues?



These all seem like issues you'd run into with any task scheduler: error emails, overloading a central resource with many tasks. Most of these aren't particular to cron at all.


I dunno. Cron is particularly bad. Want a sane-looking cron? You'll probably end up writing a wrapper script to handle stdout/stderr. Every time I deal with an annoying dev or a proprietary binary, my crons turn into a total mess.

Also: /home/on_a_phone/parse_today.sh `date +%Y%m%d`

Will fail catastrophically because cron treats '%' as a newline character for some silly reason. Have fun troubleshooting that one!
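
(For anyone hitting this: the workaround is to backslash-escape the percent signs in the crontab entry, e.g.)

    0 1 * * * /home/on_a_phone/parse_today.sh `date +\%Y\%m\%d`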

Side note - clean your damn leap second crons, Steve!


    DATE=$(date +%Y%m%d)

    /home/on_a_phone/parse_today.sh "$DATE"


You can solve said problem by having a semi-decent CLI API with defaults, e.g. "--date=<date>, which defaults to $TODAY".


That's seriously, seriously easier said than done in about 10% of the cases where this sort of thing comes up. Especially when dealing with awful vendor code.

What about an application that takes an arbitrary date as input? Keep in mind that we're talking about production-level infrastructures with potentially many thousands of servers that might run 10 different distros, with many thousands of differences between machines, so falling back to "just install X" isn't a possibility.

Then again, there's something to be said for vetting an application/script for prod-use on its "cronability". I don't think that's the point you were going for, though.


Gee, this is a difficult concept:

how to get cron to send only important emails, and not one every time it runs.

You think maybe you should have just used

    > /dev/null 
and not

    > /dev/null 2>&1
Why is this a full blog post?


Very interesting discussion. I've been doing some related work where I needed to run some tasks in a non-overlapping way, and while flock was an initial option, I later moved to a Redis queue (i.e. an RPUSH & BLPOP mix) to guarantee a certain (and needed) order of execution. This is combined with a 'send email in case of error' check and so far it's doing fine, though I'll definitely look into Jenkins if I ever feel this approach isn't reliable enough.
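
For reference, the shape of that pattern with redis-cli (queue name and scripts are made up):

    # producer: enqueue work in the order it must run
    redis-cli RPUSH jobs "import:2015-10-17"

    # single consumer: pop and run jobs one at a time, preserving order
    while true; do
      job=$(redis-cli BLPOP jobs 0 | tail -n 1)   # redis-cli prints the key, then the value
      /usr/local/bin/run-job.sh "$job" \
        || mail -s "job failed: $job" ops@example.com < /dev/null
    done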


Great read, and I'll definitely keep this in my toolbox. The whole article explains why the line below is good when you need to use cron:

    15 * * * * ( flock -w 0 200 && sleep `perl -e 'print int(rand(60))'` && nice /command/to/run && date > /var/run/last_successful_run ) 2>&1 200> /var/run/cron_job_lock | while read line ; do echo `date` "$line" ; done > /path/to/the/log || true


Excessive use of crons is a devops (hate the word) smell. You get reliant on their side effects, and migrating to other solutions takes enormous amounts of testing and legacy interfaces. The most obvious downside to cron is the minimum 1-minute interval: on average you are waiting 30s for something that should already be there. Of course it's perfect for things like reporting, which makes sense at certain intervals. Using it for mail queues and the like... bad times.


Something else: the cron service is a one-hit wonder. All it does is schedule. It places the responsibility for handling output and setting semaphores for use by other applications on the person who wrote the command called by cron. You can't really blame cron if the command/script doesn't do these things. You just need to look to another type of scheduler/batch facility that provides a richer feature set for handling workflow, monitoring and reporting.


I've been pretty happy with shush[1], which is a similar script that helps with a lot of this--including random delays, locking to avoid overlapping runs, e-mailing only on errors (or other criteria as you see fit), and so forth.

[1] http://web.taranis.org/shush/


https://github.com/Yipit/cron-sentry is also quite nice as a wrapper to capture failing cron jobs and forward them to https://getsentry.com/


CFEngine also provides a scheduling capability that can be combined with other factors using boolean expressions, something like "run at midnight on Saturday if you are a production Linux server." The splaytime parameter can spread out the execution of a command across a cluster based on each host's name hash.



I just started using AWS Lambda for a cron-like job I needed. It supports scheduling now.


Nice blog explaining how someone can't use cron. Doesn't mean the rest of us are that incapable.


I admire anyone who puts their learning process out in public to help other people.


Linux is designed like a 1970's mainframe.

TempleOS is designed like a C64.

I don't see cron as useful for a C64 user.


Is there any good open source distributed scheduler that blends both timer-based tasks and event-based tasks?

Chronos is the only one I'm aware of, but I don't believe it supports event-based tasks.


Celery is a task queue that supports scheduling as well, if you're using Python. Backed by several different brokers including Redis.


Shameless plug: we replaced most of our cron jobs with Celery and django-celery-fulldbresult[1], which provides more info and features than the default django-celery integration:

- Save result as json, which is queryable in the database

- Save enough info in the task result to retry the task from the result.

- Retry a task from its result in Django Admin

- Run a periodic task now (e.g., to test your cron task)

[1] https://github.com/resulto-admin/django-celery-fulldbresult


JobScheduler by SOS is free, distributed, supports most *nixes and Windows, and covers most of the things I want in a scheduler.


Rundeck can trigger jobs based on Jenkins events, for example.


Another cron trick is to use chronic/cronic before your command. It silences the command except for error states - cron likes to report any text it sees, which you don't want for non-error states. It also detects errors better than just assuming all errors happen on STDERR.
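
(chronic ships with moreutils; the crontab entry is just something like this, with a made-up job path:)

    0 3 * * * chronic /usr/local/bin/backup.sh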



