GPS time doesn't have leap seconds. More precisely: GPS time contains leap seconds, but it will never recognize any further ones. GPS time contains nine leap seconds, the nine that occurred before the creation of GPS time in 1980. GPS time is therefore 19 seconds behind TAI (9 leap seconds plus the 10-second TAI-UTC differential established in 1972) and, at the time of writing, 17 seconds ahead of UTC.
This number (GPS-UTC) will grow as more leap seconds are inserted in the future. That is not the entire picture, though, and it is important that people do not hard-code this assumption into their programs: leap seconds can also be removed, that is, GPS-UTC can also shrink. That has never happened, but there is no point in hard-coding incorrect ideas about time scales into your program.
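In code terms, a minimal sketch of that advice (the function and its interface are mine, not from any GPS library): treat the GPS-UTC offset as a live input, delivered by the receiver's nav data or a leap-second table, rather than a compile-time constant.

```python
from datetime import datetime, timedelta, timezone

# GPS epoch: 1980-01-06 00:00:00 UTC, at which point GPS time = UTC.
GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

def gps_to_utc(gps_seconds: float, gps_minus_utc: int) -> datetime:
    # gps_minus_utc is the *current* GPS-UTC offset (17 s when this
    # thread was written). It must come from the receiver or a
    # leap-second table, never from a constant baked into the source.
    return GPS_EPOCH + timedelta(seconds=gps_seconds - gps_minus_utc)
```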
> ntpd will act like all of its sources are insane, with huge offsets and jitter values approaching -17000 milliseconds.

Jitter values will not approach 17,000 milliseconds. Jitter is the variance between repeated measurements of a configured peer, not the size of the offset itself.
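A hedged illustration of the difference (an approximation of the statistic, not ntpd's actual clock-filter code):

```python
import math

def peer_jitter(offsets: list[float]) -> float:
    # Jitter in roughly the ntpd sense: the scatter between successive
    # offset samples from one peer, not the magnitude of the offset.
    # (ntpd's real statistic lives in its clock filter; RMS of first
    # differences is only an illustrative approximation.)
    diffs = [b - a for a, b in zip(offsets, offsets[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# A clock steadily 17 s ahead: huge offset, tiny jitter.
print(peer_jitter([17.001, 17.002, 16.999, 17.000]))  # ~0.002 s, not 17
```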
Regarding your first point: is GPS time even defined for dates before 1980? If not, it does not have leap seconds in the sense of skipping or inserting seconds at specific times; it just has a constant offset to TAI. I guess the offset was chosen to maximise the number of leap seconds that GPS can handle in its 1-byte leap-second field.
The GPS signal carries information about the GPS vs. UTC difference in a 1-byte field. As I said, my guess is that they wanted to maximise the use of this field by starting it at 0 instead of 19.
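For the curious, a sketch of decoding that field (the helper function is hypothetical, but the two's-complement encoding of delta t_LS is per IS-GPS-200):

```python
def decode_delta_t_ls(raw: int) -> int:
    # The nav message's leap-second field (delta t_LS) is an 8-bit
    # two's-complement integer, so it can express offsets from -128 to
    # +127 seconds. Starting GPS-UTC at 0 in 1980 leaves nearly the
    # whole positive range available for future leap seconds.
    return raw - 256 if raw > 127 else raw
```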
Probably a minor point, but I'm curious about the use of "fast" in the article. Wouldn't it be more accurate to say that the affected clock would be running ahead, but not necessarily fast?
To me, "fast" sounds as if the second counter in the clock advances at a rate greater than 1 tick per canonical second. In this case, the fast clock would diverge from actual time (whatever point of reference, it doesn't matter) such that the concept of being 17 seconds away from NTP time would be an accurate statement only at a single instant in time.
The article is not discussing the accuracy of clock counters, so I am ignoring the fact that no clock is perfect. "17 seconds ahead" captures the idea much better: the clock in question advances its second counter at the appropriate rate, but the value of that counter is 17 greater than NTP time's, and will remain so until the next leap second.
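A toy model of the two failure modes (the function names and numbers are mine, not the article's):

```python
def clock_ahead(true_time: float) -> float:
    return true_time + 17.0    # constant 17 s offset, correct tick rate

def clock_fast(true_time: float) -> float:
    return true_time * 1.0001  # rate error: divergence grows over time

# After one day (86400 s), "ahead" is still exactly 17 s off, while
# "fast" has drifted ~8.64 s and will keep drifting indefinitely.
```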
I'm not sure there's a single, standard usage. Among (mechanical) watch enthusiasts, who certainly have an axe to grind on the semantics of time, a "fast" timepiece is one that advances too quickly, usually on the order of a few seconds to a minute per day.
The watch enthusiasts are technically correct, but the colloquial meaning, while not technically correct, still communicates the idea effectively to a great many people. So it's technically wrong but language-ly correct.
My understanding is that NTP will NOT set your system time back and cause the same time to be seen over and over. That will not happen unless you force it manually. NTP's behavior is to slow down or speed up the local machine's clock until it is in sync with its time sources.
If the time differential is too extreme, however, it will step the clock forward or backward instead of just slewing it.
> ...The ntpd algorithms discard sample offsets exceeding 128 ms, unless the interval during which no sample offset is less than 128 ms exceeds 900s. The first sample after that, no matter what the offset, steps the clock to the indicated time.
> This may on occasion cause the clock to be set backwards if the local clock time is more than 128 [m]s in the future relative to the server. In some applications, this behavior may be unacceptable.
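Restated as a sketch (my paraphrase of the quoted rule, not ntpd's actual source code):

```python
STEP_THRESHOLD = 0.128  # 128 ms, from the quoted documentation
STEPOUT = 900.0         # 900 s, likewise

class ClockDiscipline:
    # Illustrative restatement of the quoted ntpd rule.
    def __init__(self):
        self.large_since = None  # when offsets first exceeded the threshold

    def handle(self, now: float, offset: float) -> str:
        if abs(offset) <= STEP_THRESHOLD:
            self.large_since = None
            return "slew"    # amortize the small offset gradually
        if self.large_since is None:
            self.large_since = now
            return "discard"
        if now - self.large_since > STEPOUT:
            self.large_since = None
            return "step"    # jump to the indicated time, possibly backwards
        return "discard"
```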
The double-negative issue makes me think that checkboxes are a UX antipattern. They should always be replaced with radio buttons or a "light switch" toggle with two named states that describe what each one does, e.g. "Ignore Corrections / Honor Corrections" rather than "Yes / No". Then you can also order and color them differently depending on which is the default or "safe" state, without resorting to double negatives to make every option's default state look the same.
The biggest reason I can think of is that you have a system that doesn't tolerate leap seconds, and you've decided to just omit them entirely for safety.
(Of course, time smearing is usually a better option in those cases since it eventually converges with UTC after a leap second occurs.)
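For concreteness, a sketch of a linear smear in that spirit (the 24-hour window and linear ramp are assumptions modeled on Google's published smear; other implementations differ in both):

```python
def smear_correction(t: float, leap: float, window: float = 86400.0) -> float:
    # Seconds to subtract from an unsmeared clock at time t, for one
    # leap second inserted at `leap`, spread linearly over `window`
    # centered on the leap. Before the window: no correction; after it:
    # a full second, so the smeared clock has converged with UTC.
    start = leap - window / 2
    if t <= start:
        return 0.0
    if t >= start + window:
        return 1.0
    return (t - start) / window
```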
Ah, true. I can think of microcontrollers without RTCs, but I usually handle that by ignoring time completely until it is acquired from the data-logging device (usually a computer with an appropriate sense of time).