I just so happened to use this page today and have used it many times in the past. It's very useful for just double-checking my syntax. It's not like cron has a complicated syntax but for some reason I can't ever seem to commit it to memory, especially for the slightly trickier things like doing something every 15 minutes on the second Saturday of every month.
Part of my problem with committing the syntax to memory likely stems from the fact that I've always had pages like this one to fall back on.
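That second-Saturday case is a genuine cron gotcha, incidentally: when both the day-of-month and day-of-week fields are restricted, cron treats them as OR, not AND. The usual workaround pins the date range and tests the weekday in the command itself (script path illustrative; note that % must be escaped inside a crontab):

```
# Every 15 minutes on the second Saturday of the month:
# days 8-14 guarantee "the second <weekday>", and the date test keeps only Saturday (date +%u prints 6 for Saturday).
*/15 * 8-14 * * [ "$(date +\%u)" = 6 ] && /usr/local/bin/job.sh
```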
I can't recall it either; and at 5 fields it's a case study of the problem of positional syntax. Even if we're a minority, we're a significant minority who find we have to check the man page to either read or write, thus defeating the savings from brevity.
I have things like that too. There's a lot of stuff I'd usually just commit to memory, but now I seem to recall only that I've got it bookmarked, or of course I've committed to memory the search phrase to find it. I'm not sure if that's a step back in personal memory management, or just being more efficient with it.
On Debian-based systems, at least, they use this to add an explanation and maybe some examples. I've never needed to check the man pages since they started doing this.
Cronitor isn't the first and probably won't be the last, but what is it with these stealth companies? "Crafted in Berkeley" is the only info that somewhat narrows down who is behind it ("somebody that was in Berkeley when this was built"), but nothing else does. Even "contact us" just gives you an email address. There's a phone number for support, but no company name, nothing.
Who trusts sensitive information like cron output to anonymous third parties? Is "but they are charging me, so I know I can trust them" the trigger here?
My name is Shane, and my co-founder is August. We did an indie hackers interview about Cronitor a few years ago and we’ve been pretty active here on HN.
I guess the only question is: why are you not actively communicating who is behind the tool? The impression it gives me is that either you don't want your names attached to the project, or that you fear that attaching your names to the project would hurt prospects.
Or is it just an oversight and you didn't intend to run the service anonymously?
On day one when we launched Cronitor (in 2014) I was a Sr Engineer at a tech company and August was the 1st Engineer at a YC-funded startup. Neither of us wanted to trade on that for our side project.
Over the years, as Cronitor grew into a successful business, we've shared a lot about it publicly with our names attached but I never felt a calling to put our faces on a webpage. I have always been more eager to collect customer testimonials and share what they have to say about us. Maybe I'm just shy.
It's not even about putting your face onto it, that's understandable. It's just that there's nothing. Sure, you can dig around, find the termly-page and then at the end find some info, but that looks so scammy to me, while the rest of the page doesn't give me that vibe at all.
Typically, I associate "we don't tell you who we are" with grey/black stuff. A DDOS service would only provide an email address for obvious reasons. But for a company? If somebody wants to sue you, they will find out who you are in any case. I wouldn't expect pictures, CVs etc, but a contact page that doesn't list a company/individual name and a street address triggers two feelings: a) scam or b) run from a bedroom. Neither makes me want to trust you with critical infrastructure (and here's where I'm irrational: seeing a company name and address is enough, unless there's some suspicion, I won't even dig into it to see whether the company or address actually exists).
As I mentioned, it's not specific to you at all, a lot of SaaS companies do this, and I don't understand it. Might be a cultural thing. In Germany, you're breaking the law if you're running a commercial site and are not clearly stating who you are, so maybe that's why alarms are ringing in my head when I don't see it.
The Java Spring documentation also claims to support the question mark as distinct from the asterisk; however, it seems that some versions treat them identically, even though the ? is supposed to be a random value and the asterisk is supposed to mean every value...
Not that I have a problem with systemd (I don't hate it, and I use it extensively, both in my customers' products and on my machine, since it's the default init system of the distro I use), but this thread illustrates a lot of why many pre-systemd Linux users just don't dig it.
Every time someone asks "why should I use systemd instead of cron" they get these (technically correct -- the best kind of correct!) answers about how systemd timers allow you to attach jobs to cgroups, define dependencies, test unit files separately vs. changing crontabs to 1 minute ahead (which I don't think is a "cron idiom" -- I've certainly never done it -- but we all have our weird hacks), and how it has built-in jitter options and per-job environment settings and whatnot.
These are all very relevant for real-life workloads of super advanced cloud systems, I'm sure. (Which I'm not saying in a derogatory way. I haven't written back-end code in like 12 years now, the things that are happening there today blow my mind. I know these are complex systems so I'm pretty sure they have complex problems which have complex solutions.)
But guys, I just need to run a bash script every day at 2 AM. 99% of us just need to run a bash script every day at 2 AM. This has worked reliably in cron since practically forever. I have grumpily learned the new way of doing it but I can't really justify the time I've put into it, nor the (very extensive) troubleshooting that was involved in it. I just woke up one morning and sat down in front of a Linux machine and I found out that there's a new way to run things at 2 AM.
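For the record, the new way of running that 2 AM bash script takes two unit files rather than one crontab line (names and paths here are made up):

```ini
# /etc/systemd/system/nightly.service (hypothetical name)
[Unit]
Description=Nightly maintenance script

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nightly.sh

# /etc/systemd/system/nightly.timer
[Unit]
Description=Run nightly.service every day at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
# Catch up on missed runs if the machine was off at 02:00
Persistent=true

[Install]
WantedBy=timers.target
```

Then `systemctl enable --now nightly.timer` instead of editing a crontab.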
If you want to woo me with cool systemd-timers features, tell me that it finally has an equivalent to cron's MAILTO, so that I don't have to write yet another systemd unit file that wraps a script written by me if I want to get a notification when a job has failed, because somehow that hasn't made its way into the "everything but the kitchen sink" list which otherwise includes things like attaching jobs to cgroups.
I suppose it would be quite nice to have a MailTo= directive in a service. When the service stops, systemd could retrieve whatever it logged to the journal and mail it to the specified address.
Until such a feature is implemented, you could try:
ExecStart=/bin/bash -c 'set -o pipefail; mycommand | mail -s "mycommand output" foo@example.com'
Certainly not as convenient as MailTo=, but a lot less complicated than using OnFailure= with a separate service that tries to mail you the output.
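For comparison, that OnFailure= route looks roughly like this (unit names are illustrative; it's essentially the pattern documented on the Arch wiki):

```ini
# myjob.service
[Unit]
Description=My job
# On failure, start an instance of the templated mail unit,
# with this unit's full name (%n) as the instance
OnFailure=notify-email@%n.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myjob

# notify-email@.service (template)
[Unit]
Description=Mail the journal of a failed unit

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'journalctl -u %i -n 50 | mail -s "%i failed" root'
```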
Huh. I never thought of that, although it was so damn obvious (I feel twice as stupid now that I realize I do a version of that on my machine, albeit not in order to send mail). I guess that's what I get for going all "Arch Wiki will you please just show me how you kids are doing it today and leave me be" on it.
But to be honest, not having to do <shell> -c <exec this or send me an email> ever again is one of the reasons why we're doing the whole systemd thing, and one of its big promises. If it comes to writing shell mantras again, I might as well use vixie cron (or one of its descendants) and benefit from 30+ years of bugfixes across more Unices than I can name, plus the acquired wisdom of the Interwebs.
Edit: plus... does this still allow you to get the other goodies, like being able to specify environment variables for a specific job?
Not sure I follow about the environment variables.
As for 'exec this or send me an email' -- I would also much prefer to see this be done by systemd itself. Fortunately bash's 'pipefail' feature plus bsd-mailx's '-s' flag make this reasonably quick and sane; I wouldn't even bother to attempt it if I were stuck with POSIX sh or if the mail command lacked the logic to not send the mail if stdin is empty.
Try systemd-cron https://github.com/systemd-cron/systemd-cron. Best of both worlds: simplicity of crontab and manageability of systemd. And it comes with built-in MAILTO support.
One more thing systemd tries to take over. Do not want.
Instead, try GNU mcron (guile-based).
"The mcron program represents a complete re-think of the cron concept originally found in the Berkeley and AT&T unices, and subsequently rationalized by Paul Vixie. The original idea was to have a daemon that wakes up every minute, scans a set of files under a special directory, and determines from those files if any shell commands should be executed in this minute.
The new idea is to read the required command instructions, work out which command needs to be executed next, and then sleep until the inferred time has arrived. On waking the commands are run, and the time of the next command is computed. Furthermore, the specifications are written in scheme, allowing at the same time simple command execution instructions and very much more flexible ones to be composed than the original Vixie format. This has several useful advantages over the original idea. (Changes to user crontabs are signalled directly to mcron by the crontab program; cron must still scan the /etc/crontab file once every minute, although use of this file is highly discouraged and this behaviour can be turned off.)"
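By way of illustration, an mcron job specification in Scheme looks roughly like this (an untested sketch following the conventions in the mcron manual; paths are made up):

```scheme
;; ~/.config/cron/backup.guile (illustrative path)
;; Run at the next time the hour is 2, i.e. daily at 02:00:
(job '(next-hour '(2))
     "/usr/local/bin/backup.sh")

;; mcron also accepts Vixie-style schedule strings:
(job "0 2 * * *" "/usr/local/bin/backup.sh")
```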
The cron syntax is used by a bunch of other things as well, e.g. k8s CronJobs, so sites like this are still useful even if you aren't using a traditional cron.
- Define a unit and run it individually to troubleshoot, vs. modifying the crontab to 1 min ahead
- Logging in journalctl
- Built-in jitter options to spread tasks across a window in a large fleet
- Ability to use the unit as a dep for other things
- Instrumented in systemctl list-timers
- Store config in the user's home dir (I think) for tasks owned by a user
These aren't really better; they're just alternatives to doing things the old-fashioned way.
(e.g. put all your logic in one script, test it with “env -“, lock exclusively with flock if you need it, or whatever else you have to hand in your language of choice.)
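A sketch of that old-fashioned way: one wrapper script that takes an exclusive lock (flock, from util-linux) and logs its own output, since cron provides neither. Names and paths here are hypothetical.

```shell
#!/bin/sh
# Cron-friendly wrapper: refuse to overlap with a previous run,
# and capture all output to a log file.
LOCK="${TMPDIR:-/tmp}/myjob.lock"
LOG="${TMPDIR:-/tmp}/myjob.log"

# Hold fd 9 open on the lock file; flock -n fails fast if another
# instance already holds it.
exec 9>"$LOCK"
if ! flock -n 9; then
    echo "previous run still active, skipping" >&2
    exit 0
fi

{
    echo "myjob started at $(date)"
    # real work would go here
    echo "myjob finished"
} >>"$LOG" 2>&1
```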
A nice feature of systemd timers is the ability to say something like "run once per 24 hour period, but I don't care when". The task runs at a random time during the period, which is nice for tasks that hit network services, since you don't get a zillion clients all hitting the server at the same time (i.e. at local midnight or UTC midnight).
Package repo updates and Let's Encrypt renewals are a couple services I've seen use this. There isn't a great way to do this with cron that I know of, besides running some script that sleeps for a random duration.
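The random-sleep hack usually looks like the commented crontab line below; a systemd timer replaces all of it with a single RandomizedDelaySec= directive. (The crontab line and script path are illustrative; $RANDOM is a bashism capped at 32767.)

```shell
# crontab (illustrative): midnight plus up to an hour of jitter
#   0 0 * * * sleep $((RANDOM % 3600)); /usr/local/bin/renew-certs.sh
# systemd-timer equivalent: RandomizedDelaySec=1h in the [Timer] section.
delay=$(( ${RANDOM:-0} % 3600 ))   # 0..3599 seconds
echo "would sleep for $delay seconds"
```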
That said, I still use cron for things that don't need this feature because it's portable and I'm just more familiar with it.
I think both have their use cases. For quick and dirty things (like manual logging/system performance), cron works just fine and is quicker to set up. For tasks that require dependencies or have to happen at very specific points in time (like immediately after the network services are started), systemd is more suited to the task.
There are of course quite a few people who dislike systemd for justifiable reasons (but I doubt it receives the disdain that SELinux does), here's a starting point:
That said, I very much like systemd and I hope it isn't going anywhere. I think it's improved on a lot of things from the SysVinit days and I hope and believe that its shortcomings will be addressed and improved in the future.
My problem with cron for short tasks is that pretty often I forget to add logging and/or locking. And then the machine runs out of resources because of all the parallel jobs, and there aren't even any logs.
I've got this website on a permanent spot in my bookmarks bar. It's kinda like tar - I use it enough that I should have absorbed the syntax by repetition and osmosis, but I still find myself looking up the syntax for cron "just to be sure" and running tar with --help.
I settled on LaunchControl instead because I was impressed by the semantic understanding of fields that it has. If a job goes wrong it frequently has a 1-click suggestion to fix it or to facilitate debugging.
> We created Cronitor because cron itself can't alert you if your jobs fail or never start.
That's a lie.
Mail is sent for failed jobs. The receiving mailbox is, by default, that of the executing user (so usually root).
There is also an entry in the system log, which should already be monitored.
The user might not get the notifications because they don't pay attention, but cron does notify.
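For anyone unfamiliar, redirecting those notifications is a one-liner at the top of the crontab (address and script illustrative):

```
MAILTO=ops@example.com
0 2 * * * /usr/local/bin/backup.sh
```

Any output from the job gets mailed to that address instead of the local root mailbox.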
Please don't use the lie word: it goes to intentionality. It's wrong, true: Cron has (for a very long time) been designed to mail and log on failure.
The thing is that a huge number of deployments these days don't run a functional mail setup, and so the failure notifications go to /dev/null (in effect).
Also, cron can fail to run: Linux moved cron to a different execution state from BSD, which has also introduced (on linux) the interaction between cron and anacron. Changing the state of cron.d/ doesn't always register in cron, and cron won't always send mail if it doesn't know new things have to be run. The older crontab -e method reliably made cron re-read the state of the file.
(I think this predated Vixie cron, cron existed since time immemorial in UNIXEN. Vix improved things, but the underlying system behaviour was established before he coded in this space)
> Please don't use the lie word: it goes to intentionality.
If someone built a whole product around it and uses it as their marketing pitch, when it takes like 30 seconds of man-page reading to find out that it's wrong, I'd argue you have some fairly good reason to assume intentionality.
(That said, yes, I agree that there are failure states in traditional cron that won't alert you with an email, so the service might of course be useful.)
Lie is a little harsh but absolutely - cron has had notifications built in for as long as I can remember and a Unix box that can't send email is a pretty rare beast and is probably broken.
I take it you are describing policy rather than capability. What I was getting at is that email is kind of built in by default with Unix-like systems. It is actually quite hard to install a Linux or BSD box without the capability to send email. SMTP is, at heart, very, very simple and can be conducted via telnet if necessary, although doing TLS via telnet by hand might be a bit tricky.
I have deployed many Ubuntu boxes and sometimes been surprised to receive emails - cron (logwatch) usually. Our central SMTP daemon has me as an alias for root, postmaster etc. Don't know why I'm surprised - I set the bloody things up to work - perhaps I'm going a bit senior.
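The "via telnet" bit, for the curious, looks like this on port 25 (hostnames and addresses made up; a modern server will want STARTTLS and will likely spam-filter a hand-rolled message):

```
$ telnet mail.example.com 25
220 mail.example.com ESMTP
HELO myhost.example.com
250 mail.example.com
MAIL FROM:<root@myhost.example.com>
250 2.1.0 Ok
RCPT TO:<admin@example.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: cron test

hello
.
250 2.0.0 Ok: queued
QUIT
```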
No -- sending external mail is very difficult and requires setting up at least DKIM. "Email" is built in and a LAN of *nix servers can usually send messages to each other but they are not set up as a state of the art public mail server would be. Even the LAN sending is being phased out, if you get a message it may be local delivery only.
If the messages are going out they probably use an MTA and are logging in to a hosted service, which is something that needs to be set up.
You are having a laugh - SMTP is not hard (for a given value of hard.) I've run Unix SMTP daemons and Exchange (and GroupWise and L Notes) for over 20 years. My weapon of choice is Exim with rspamd these days but I know Postfix, Qmail and sendmail and of course Spamassassin with extras.
DKIM is the second of the trifecta: SPF first, then DMARC after DKIM. DMARC is when things start to go horribly wrong, or wrongerer than normal, in the world of email. Mailing lists get funky shortly after you proudly publish a TXT record starting "v=DMARC1;" with p=<unwise choice>.
I'm not sure that state of the art is appropriate when describing SMTP. It is what it is and no more, nor less. It transfers a huge amount of data daily, without fuss or comment. Perhaps that is the state of the art - it is in my world: I like stable and boring and just works.
I'm (nearly) a Civil Engineer and I think of SMTP as an odd analogy for concrete. It's fundamental and very boring but can be surprisingly complicated and interesting if you get it wrong. Conc. can burn or get tricky in all sorts of ways if poured in excessive amounts - setting is an exothermic reaction and will work under water as well. Conc can suffer from concrete cancer which is where sea air and a few other factors causes "map cracking" and potentially failure. SMTP is another function that is dumped into the background, forgotten about and relied upon to just work until the wheels fall off.
Yes, yes, I've set up a working mail server also. My point is just that any SMTP daemon in a Linux installation is not really in a usable state by default and requires a lot of reasonably tedious setup. Every distro I have used in recent memory has not been able to do anything but local delivery, and even then sometimes an SMTP daemon is missing.
Every distro I have used in recent memory has only ever had local delivery working by default, and sometimes not even that. So the GP (or GGP) post about cron not having a failure detection mechanism is mostly true.
The only major distro like that is Debian. RHEL, SuSE, Alpine, even the default AWS images are configured to send outgoing mail by default. All four BSDs are too.
Local-only (outside of Debian) is generally only found on desktop- or hobby-oriented Linux things like Fedora and Arch -- which aren't really germane here.
Yes, but from memory that outbound mail can only be received by machines configured to receive it, with no authenticity checking, i.e. your other servers that you set up. The use case I usually see for this is that you set up one server as an MTA that forwards to GMail, etc., and all the other servers send mail to that MTA.
But that local MTA you have is not typically something you log in to check your mail, and all of these things are not set up by default.
... that the functionality people here seem to expect is not set up by default. Some of the people commenting don't seem to know what needs to be set up, thus the explanation for their benefit, and also an explanation of what exactly is not set up.
Your objection is that the computer does not magically precog the admin's delivery address? Or what? It works fine without DKIM or SPF or any of that stuff; at best you must whitelist the message sender.
I'm having a hard time understanding you because you refuse to specify "what needs to be set up" and when you have you've been mistaken.
LAN sending is not being "phased out". Email is not being "phased out". Nothing has changed with email policies on any of these distros for the better part of a decade.
I think it's fair to compare with the classic Dropbox vs ftp argument.
It takes some configuring to make sure cron sends emails to the right place, and without the email going straight to spam. Sure, you can set up filters, etc., but that's manual work. Not to mention if you'd rather have it text you, send a message to Slack, etc.
Also, cron can't alert you if you accidentally removed the cron job, the host is down, the host loses its connection, etc.
We have certain cron jobs that if they failed we could be losing thousands of dollars an hour.
Of course they're tested, and you're right they've worked fine for a looooong time.
But I don't care if they "normally work fine", I need to know immediately if one didn't run for any reason. We self-hosted a cron monitor; it was free and took less than a day to add and fully integrate, including text message and MS Teams notifications.