Except when doing a pcap, I want the actual wall-clock time. Maybe in some cases you'd explicitly want an offset, but not in general. Really it's a plea to get your clocks synced up, so you aren't forced to choose between reporting an incorrect time and an incorrect duration. If I'm running a pcap and the system time drifts by several seconds over a day, I'd rather each packet carry the closest approximation of the right time than be increasingly wrong as time goes by.
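For concreteness, here's a minimal sketch of the two clocks in question (Python on Linux; clock_gettime is Unix-only). Packet timestamps are normally derived from the realtime clock, which NTP can step; the monotonic clock can't be stepped, but it also can't tell you what time it is:

    import time

    # CLOCK_REALTIME is wall-clock time: NTP (or an admin) can step it
    # forwards or backwards at any moment.
    # CLOCK_MONOTONIC only ever moves forward, but its epoch is arbitrary
    # (typically boot), so it's useless for stamping packets with
    # "the actual time it is".
    wall = time.clock_gettime(time.CLOCK_REALTIME)
    mono = time.clock_gettime(time.CLOCK_MONOTONIC)
    print(f"realtime:  {wall:.6f}  (seconds since the Unix epoch)")
    print(f"monotonic: {mono:.6f}  (seconds since an arbitrary point)")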
Not to mention: if the monotonic clock can keep such accurate timing, then everyone would just use that and NTP would not be so necessary.
Really: under what conditions do you have a usefully functioning system when the clock is so far off that you need multi-minute jumps? Even Hyper-V, with the utterly atrocious w32time, manages to keep it within a minute or two (and a Linux guest can easily have ~ms accuracy).
The leap second point is valid, but that's an argument against leap seconds, which serve no purpose in today's society other than introducing unnecessary problems. Even Google just gives up and deliberately runs their clocks slightly wrong for a day, smearing out the extra second so that when the leap second comes around they're synced again. A leap hour would be a far better solution: it's something many people are (unfortunately) used to from DST, and it wouldn't bother us for a dozen centuries.
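The smearing itself is simple arithmetic: spread the extra second linearly over a long window so no single second is noticeably wrong. A rough sketch (the window length and linear shape here are illustrative; Google documents their actual smear at developers.google.com/time):

    # Hypothetical linear leap smear: instead of inserting 23:59:60, make
    # every second of a 24-hour window 1/86400 longer, so the clock absorbs
    # the extra second gradually and ends the window synced again.
    SMEAR_WINDOW = 86_400  # seconds; illustrative, not Google's exact scheme

    def smear_offset(elapsed: float) -> float:
        """Deliberate clock error, in seconds, `elapsed` seconds into the window."""
        return min(max(elapsed, 0.0), SMEAR_WINDOW) / SMEAR_WINDOW

    for t in (0, 43_200, 86_400):
        print(f"{t:>6} s in: {smear_offset(t):.3f} s of deliberate error")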
> Under what conditions do you have a usefully functioning system when the clock is so far off that you need multi-minute jumps?
One example is embedded systems. Many don't have an RTC, or boot after the RTC has lost power. When a network connection finally comes up, NTP will instantly fast-forward the clock, possibly by years.
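To make the hazard concrete, this is why the usual advice for anything measuring elapsed time on such a system is to avoid the wall clock entirely; a small sketch, assuming Python on Linux:

    import time

    # If NTP steps CLOCK_REALTIME mid-measurement (say, fast-forwarding years
    # on an RTC-less box that just got network), a wall-clock "duration" can
    # come out absurdly large or even negative. CLOCK_MONOTONIC can't be
    # stepped, so use it for durations.
    start = time.clock_gettime(time.CLOCK_MONOTONIC)
    time.sleep(0.1)  # stand-in for real work; a clock step here is harmless
    elapsed = time.clock_gettime(time.CLOCK_MONOTONIC) - start
    print(f"elapsed: {elapsed:.3f} s")  # correct even if the wall clock jumped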
The simple solution would seem to be setting the clock first and then doing the packet capture, rather than setting the clock in the middle of the packet capture.
For Debian-based embedded systems, the fake-hwclock package is helpful here (it's a script that periodically saves the current time and restores it on boot). You'll still see big jumps after a power loss, but probably not years. It's also helpful if you ever change the motherboard on a regular system with an RTC.
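The package itself is just a script plus a cron job, but the idea fits in a few lines. A toy Python version (the path and details here are illustrative, not the package's actual implementation):

    import time

    STAMP = "/tmp/fake-hwclock.data"  # hypothetical path, not the package's

    def save_time():
        # The real package does this periodically from cron and at shutdown.
        with open(STAMP, "w") as f:
            f.write(str(time.time()))

    def restore_time():
        # Run early at boot, before NTP. Only move the clock forward, so a
        # box that lost power resumes near its last known time instead of
        # at some default epoch.
        with open(STAMP) as f:
            saved = float(f.read())
        if saved > time.time():
            time.clock_settime(time.CLOCK_REALTIME, saved)  # needs root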
> if the monotonic clock can keep such accurate timing, then everyone would just use that and NTP would not be so necessary.
Stable and accurate are not the same thing: a monotonic clock never jumps, but it still drifts.
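Back-of-the-envelope numbers for why drift alone sinks you, assuming a typical cheap crystal oscillator:

    # A hypothetical 50 ppm frequency error: perfectly stable and monotonic,
    # yet the absolute error grows without bound unless something (NTP)
    # disciplines the clock.
    DRIFT_PPM = 50
    SECONDS_PER_DAY = 86_400

    error_per_day = SECONDS_PER_DAY * DRIFT_PPM / 1_000_000
    print(f"{error_per_day:.1f} s of error per day")  # ~4.3 s/day
    print(f"{error_per_day * 365:.0f} s per year")    # ~26 minutes/year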
And if you have two NTP servers and one of them is off (the example from the article), then yes, multi-minute jumps do happen. In my case a misconfigured NTP server caused TCP sessions in a load balancer to drop and ping commands to just hang. That is not OK.
Side note: the misconfiguration in that example was that the NTP server ran on a virtual machine, where timer ticks from hardware to the VM drifted, and the time difference to upstream became so big that the NTP daemon stopped trusting it and ignored it. I'm not defending that design; I didn't build it, and I fixed it when I found it.
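That "stopped trusting it" behavior is a deliberate sanity gate. A toy version of the logic (the threshold mirrors ntpd's default panic threshold of 1000 s, tunable with `tinker panic`; other daemons differ in the details):

    # Past some offset, assume the reference is more likely broken than the
    # local clock, and refuse to apply the correction at all.
    PANIC_THRESHOLD = 1000.0  # seconds; ntpd's default

    def should_apply(measured_offset: float) -> bool:
        return abs(measured_offset) <= PANIC_THRESHOLD

    print(should_apply(2.5))     # True: a normal correction
    print(should_apply(7200.0))  # False: two hours off, ignore the server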