Scheduling a fixed pause between runs is often better (for jobs where a fixed frequency is not critical, which is most background jobs) because you then don't have to care whether a run completes within the period.
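Concretely, the pattern is just a loop like this (a minimal sketch; `do_work` and the one-hour pause are placeholders I picked): because the pause starts only after the work has finished, runs can never overlap, however long `do_work` takes.

```python
# Minimal fixed-pause loop: sleep a fixed interval *between* runs, so a slow
# run simply delays the next one instead of overlapping with it.
import time

PAUSE_SECONDS = 3600  # hypothetical one-hour pause between runs

def do_work():
    ...  # the actual job

while True:
    do_work()
    time.sleep(PAUSE_SECONDS)
```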
It is certainly simpler and less fragile than the various solutions involving a cron-like scheduler plus locking (incidentally, the implementation of cron itself is somewhat more complex than it might look, precisely because of this issue).
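For contrast, the cron-plus-locking variant looks roughly like this (my own sketch, not any specific implementation; the lock path and `do_work` are hypothetical): cron fires the script every period, and a non-blocking flock makes an overlapping run give up early.

```python
# Sketch of the cron-plus-locking alternative: cron invokes this script every
# period, and a non-blocking flock makes an overlapping run bail out.
import fcntl
import sys

LOCK_PATH = "/tmp/myjob.lock"  # hypothetical lock file

def do_work():
    ...  # the actual job

def main():
    lock_file = open(LOCK_PATH, "w")
    try:
        # Fail immediately if a previous run still holds the lock.
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        print("previous run still in progress, skipping", file=sys.stderr)
        return
    try:
        do_work()
    finally:
        fcntl.flock(lock_file, fcntl.LOCK_UN)

if __name__ == "__main__":
    main()
```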
One thing that somewhat surprised me about typical industrial automation is that running the logic in some variation of a do_work(); sleep()/yield(); loop is pretty common (a typical modern PLC works that way), and nobody seems to care much about the resulting latency jitter, which is totally horrible from a theoretical standpoint but insignificant in practice.
Ideally you'd use something modern that invokes your function every hour (cron? :P) so that the rescheduling is detached from the function. I think if generation takes X hours of raw CPU computation where `X >= 1`, then since roughly X runs will be in flight at any given time, as long as you've got C cores with `C > X` you should be OK?
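A quick back-of-the-envelope check of that claim (hypothetical numbers, nothing from the thread): if a job of duration X hours is launched at the top of every hour, at most ceil(X) of them overlap at any instant, so `C > X` cores is indeed enough, ignoring scheduling overhead.

```python
# Back-of-the-envelope check: how many hourly-launched jobs of duration X
# hours are ever running at the same time?
import math

def peak_overlap(duration_hours: float, horizon_hours: int = 48) -> int:
    """Jobs start at t = 0, 1, 2, ...; each one runs for duration_hours."""
    peak = 0
    for t in range(horizon_hours):
        # A job started at hour s is still running at time t if s <= t < s + duration.
        running = sum(1 for s in range(horizon_hours) if s <= t < s + duration_hours)
        peak = max(peak, running)
    return peak

for x in (1.0, 1.5, 3.0):
    print(f"X = {x} h -> at most {peak_overlap(x)} concurrent runs "
          f"(ceil(X) = {math.ceil(x)})")
```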
Out of curiosity, why is having the process scheduled tightly with (something akin to) cron ideal to you? atd is, to me, a perfectly reasonable alternative. I guess it depends on the environment a developer finds themselves in when implementing the feature. It might just be easier to set up the next scheduled job than to implement cron-like features in the system that executes the scheduled junk.
To me it's not much of a stretch of the imagination that this is what they're already doing, and the time between the scheduled task triggering and setting up the next one might take that much to begin with. The whole setup feels like systems with atd scripts that frequently rescheduled themselves (possibly based on some condition, or using intervals with some variability depending on system load or other such state).
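Something like this self-rescheduling pattern, say (purely illustrative; the use of Python rather than shell, the 60-minute delay, and the helper names are my assumptions): the job does its work and then queues its own next run with at(1), so there is no central cron-like scheduler and runs can't overlap.

```python
# Hypothetical self-rescheduling job: do the work, then queue the next run
# with at(1), so no central cron-like scheduler is needed and runs can't overlap.
import os
import subprocess
import sys

def do_work():
    ...  # the actual job

def schedule_next_run(delay="now + 60 minutes"):
    # at(1) reads the command to execute from stdin.
    script = os.path.abspath(__file__)
    subprocess.run(
        ["at", *delay.split()],
        input=f"{sys.executable} {script}\n",
        text=True,
        check=True,
    )

if __name__ == "__main__":
    do_work()
    schedule_next_run()
```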