Falsehoods programmers believe about Unix time (alexwlchan.net)
442 points by pplonski86 on May 15, 2019 | 268 comments



I read articles like this and come to the conclusion that UTC is flawed, not Unix time. Leap seconds seem mostly useless. People seem to think they are important for astronomy, but for every astronomical calculation I have ever done, your first step is converting from UTC to TAI. Move any jumps in time to once a century (or millennium). Such jumps have occurred in the past (Julian to Gregorian) and are easy to handle programmatically. People think time increases monotonically and it's confusing to push it any other way.

The idea that I don't know how many UTC seconds will pass between now and May 15, 2022 0:00:00 is absurd. The fact that a clock sometimes reads 23:59:60 is also absurd, as is the "possibility" of a 23:59:59 being a forbidden time on a certain date if we ever add a leap second of the opposite sign.


Well, the problem with this is that every simplistic view of time is... well... too simple. It's not as if people haven't tried; I especially like this article: https://qntm.org/abolish

That being said, the reason your approach doesn't work is that there are several hard astronomical or cultural definitions that you'd throw off by playing with time.

A day is defined as the rotation of the earth around its axis.

A week is a hard cultural and religious boundary for just about everything in life.

A month is roughly one orbit of the moon around the earth (although that definition is arguably the weakest and the first we could let go of).

A year is defined as the rotation of the earth around the sun.

Changing any of these will make the summer drift into the winter, or the night into the day, or whatever.

Time just isn't simple, and although most of these intervals _almost_ fit within each other, reality is that they don't and we'll always have artifacts.

If you enjoy philosophy on these kinds of imperfections as much as I do, I can heavily recommend this article about musical tuning: https://blogs.scientificamerican.com/roots-of-unity/the-sadd...


Your article is about time zones, not leap seconds?

The US has put forth a proposal to abolish leap seconds. It is now supported by China, Australia, Japan, and South Korea. The ITU keeps punting on actually voting on it - it's now scheduled for 2023.

https://en.wikipedia.org/wiki/Leap_second

A day already drifts... every single day in fact. Why is 0.9 sec the magic threshold for maximum drift? Why can't it be 1 minute of drift, or 1 hour of drift? We are putting out problematic corrections for a minor drift on the scale of years, when in my opinion they should be happening on the scale of centuries or millennia.


That's interesting, I stand corrected. TIL that it's actually a heavily disputed topic and much less clear than I painted it, I had quite a different impression before — thanks!


Happy to see it is not just me that enjoys getting rid of my ignorance :)


It's a lot easier to test and prepare for something that occurs at least semi-regularly than for a monumental event that is "so far into the future we don't have to worry about it"....


If we abolish leap seconds (and any other changes to UTC) and let the error accumulate until it hits 30 minutes then a 1-hour time zone change can adjust for it. Time zone changes happen at least semi-regularly so it wouldn't be monumental.


It would take millennia for such an error to accumulate. Leap seconds are inserted less than once a year and 30 minutes is 1800 seconds.


Falsehoods programmers believe about leap seconds: 1. The rate at which leap seconds are inserted will remain roughly constant. [...]

The Earth's rotation is slowing down (by about 2 ms per day per century), so the difference between TAI and UT1 is increasing quadratically, not linearly.


Wouldn’t that move the international date line?


I guess so, but I don't know why that matters.


The international date line isn't straight in order to avoid land. Moving it would be problematic.


think of it as ordinal: you're shifting left 1 unit, not moving the line, per se.


Still matters because some places want to make sure it's the same day (according to their local reckoning) as it is somewhere else in particular (usually a major trading partner).


Or the next town over.


The other boundaries are a lot straighter.

Imagine what would happen when the International Date Line ends up in the middle of the US.


1 minute of drift seems fine. 1 hour of drift is rather noticeable.

If the drift is about 0.5 second per year, about a minute of drift will have to be compensated on a century border.


The third principle of Continuous Delivery is "If something is difficult or painful, do it more often".

So, if leapseconds are actually painful for you, then maybe we need to contemplate making this kind of adjustment on a finer timescale, like milliseconds.

OTOH, if you think leapseconds are painful now, just you wait until you postpone this pain and do it even less frequently.


That principle is more of a koan than actual concrete advice.

For example, deleting your entire codebase and firing all your developers is painful, but even the most continuous deliverer wouldn't advise you to do that more often.


Do you also pee in front of "wet floor" signs?

Getting past pedantry, the advice is obviously about foreseeable, repeating parts of normal business, and it applies to more than devops.

A long time ago (tail end of the "desktop publishing revolution"), I was a production assistant, and then manager at a magazine. We published six times a year. Towards the end of my first year there, I realized we had the same problems, right down to our Advertising Director's emotional meltdown, every. single. issue.

After getting to know folks working at other magazines and people at our press, I noticed that the monthlies seemed to run smoother with less drama, and the weeklies were even better at it. Eventually I realized it was because they had to be. If there was some minimum amount of human drama that had to happen, it was forced into exhibitions that didn't disrupt the (tight) schedules. If last-minute changes from flakey advertisers came in, they didn't cause a firedrill, they just didn't run, because that issue is already on the press and we're talking about the issue after next now. And so on.

The general principle is actually very straightforward, and applicable all over the place, including your personal life. If you have high-friction processes, devoting time and attention to them is the way you make them lower-friction processes. And while it may be possible to do that without doing things over and over until you get there, it probably is not possible for you to get there without repetition, else you'd not have the problem in the first place.


Doesn't every company do this slowly as they replace code and developers?


> So, if leapseconds are actually painful for you, then maybe we need to contemplate making this kind of adjustment on a finer timescale, like milliseconds.

That is what a lot of organizations do, "smearing" the leap second since they know their systems can't handle the discontinuity. I think this shows that software in general has failed at handling leap seconds correctly. As another commenter said, I think leap seconds are unnecessary complexity given the frequency at which they happen.


On the contrary, I think that the smearing behavior is an indication that leap seconds poison an otherwise-useful model of reality. The discontinuity is an annoying edge case driven by the fact that "time error" accumulates, so it seems to me that a 1-day smear is a more natural approximation than an instantaneous discontinuity. That, and it's tuned to the attention span of human organizations.
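For concreteness, here is a minimal sketch of a linear one-day smear in Python (real deployments differ in window placement and length; this is not any particular vendor's exact scheme):

    def smear_correction(t, leap_instant, window=86400.0):
        """Fraction of an inserted leap second already absorbed at Unix time t.

        The smeared clock reads raw_si_count(t) - smear_correction(t, ...), so
        instead of jumping by 1 second at leap_instant it runs slightly slow
        across the window and ends up exactly 1 second behind a leap-ignoring
        SI-second counter.
        """
        start = leap_instant - window
        if t <= start:
            return 0.0
        if t >= leap_instant:
            return 1.0
        return (t - start) / window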


>The third principle of Continuous Delivery is "If something is difficult or painful, do it more often".

That only works assuming you need to do it at all.

If leap seconds are useless (to the majority of users) or insufficient, not doing them is preferable to doing them constantly.


That third principle is about facing (and implicitly, automating) your difficult-but-necessary processes. Outside of that narrow scope it's terrible advice. Being in a car crash and converting a country from one system of units to another are both difficult and painful and you should do either as seldom as possible.


The obvious correction is to change the duration of a second such that a year is exactly $integer seconds long. While this will make the duration of a second drift from year to year, that drift is so small that everyday devices like consumer-grade computers could just ignore it, provided they're NTP synced once every five years or so. Physicists can just keep using whatever they're using and call it by a different name. Ideal solution all-around.


The core issue is that the number of days per year is messy, so it's not possible to have a 'nice' number of seconds in a day and also in a year.

The best solution is probably just to use a nice number of seconds per day and accept the fact that in 100ish years summer will start in March, not December. It's not like we need to build our entire society around planting and harvest any more.


That's not anywhere close to "the best solution"


This discussion isn't going to be very meaningful unless you wanna tell us what 'best' means to you.


Are we to infer that your lack of criteria for "best" in your own post means that it too is meaningless? You made a breathtaking leap going from {our society isn't as dependent on planting and harvesting anymore} to {therefore it's "probably best" to radically alter universal concepts of temporality to partially solve an obscure technical problem}. Methinks the burden of proof lies firmly in your court on this one.


I'm not the one arguing with a statement in that post.


That's fair. I simply can't see the best solution relying on getting rid of deeply ingrained cultural norms that span thousands of years.


March is already summer here (Southern Hemisphere). We have all sorts of weird eccentricities due to "cultural norms" (we use snowflakes on our Christmas cards in mid-summer, whereas the Steam Summer Sale happens mid-winter).

Not to mention with climate change, we're probably going to have to deal with many other changes in season etc. anyway.

All of us deal with the fact that the day gets shorter and longer every year anyway (and daylight savings didn't seem to be a terribly good solution).

I think having the seasons slowly drift is a non-issue, provided it's just slow enough to not be noticeable from year to year.


Yeah, I can see that being an issue, at least until a decent proportion of humanity lives off-world. I can't see people living in the asteroid belts adding arbitrary leap-seconds.

Of course, once you start dealing with astronomical distances and speeds, the whole concept of a single universal measure of time kind of goes out the window anyway so it's probably going to get worse, not better. :P

(FWIW my definition of 'best' here is something along the lines of 'simple, no special rules or tinkering, gets translated to a human-readable local time on demand'. Which is pretty much how Unix epoch works, come to think of it.)


Yes... let's let future asteroid dwellers solve their own problems, which will be as much cultural as technical. Today's elegant universal solution is sometimes tomorrow's Esperanto: perfectly useless.


But we do build our entire society around sports seasons.


> Well, the problem with this is that every simplistic view of time is... well... too simple. It's not as if people haven't tried; I especially like this article: https://qntm.org/abolish

This has nothing to do with leap seconds. Leap seconds are the worst mechanism in timekeeping; there's not a single advantage to having them.

> A day is defined as the rotation of the earth around its axis.

And we can now measure time with precision high enough that tying it to Earth's rotation is not acceptable. We can still use timezones to fix the shift.

> Changing any of these will make the summer drift into the winter, or the night into the day, or whatever.

Yeah, and it will take a millennium for the Earth's rotation to shift enough for people to notice it. On the other hand, you'd have no problem with summer/winter time, I presume?


> Yeah, and it will take a millennium for the Earth's rotation to shift enough for people to notice it. On the other hand, you'd have no problem with summer/winter time, I presume?

No, I don't, as they are just views on a monotonically increasing time scale named UTC that keeps up with Earth's rotation, so that every single thing from climate diagrams to everything else humanity is syncing on keeps working and being comparable.

> We can still use timezones to fix the shift.

Or, we could not do that. Everyone is free to use TAI inside their own projects as they see fit if they need strictly monotonically increasing time. I would argue that leap second smearing has way fewer artifacts in practice than bolting time zones onto a shifting UTC time.


"Changing any of these will make the summer drift into the winter, or the night into the day, or whatever."

Leap days, sure.

Leap seconds - as the article points out, there have only been 27 so far since 1970. Less than a minute per century isn't much drift, and could be corrected in a bigger block later.


> A day is defined as the rotation of the earth around its axis.

When you get into the world of programming for most people, businesses especially, a day is defined as 24 hours. We like to think that our days and months and years are tied to vast astronomical forces, for romantic reasons, but when it comes down to it, everything comes back to the second, and the second is defined to be a single unvarying length of time.


Well, depends.

In TAI: A day is 24 hours (86400 seconds always), and a second is a SI second.

In UTC: A day is a rotation of the earth, and a second is a SI second. (And since these are incompatible, we have to introduce leap seconds).

In UT1: A day is a rotation of the earth, and a second is 1/86400 of a rotation of the earth.

Julia, interestingly, uses the UT1 notion, thus avoiding leap seconds, IIRC.
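To make the differences concrete, a quick sketch using Python's astropy (assuming it is installed; it bundles the leap-second table, and it fetches IERS Earth-rotation data the first time you ask for UT1):

    from astropy.time import Time

    t = Time("2022-05-15T00:00:00", scale="utc")
    print(t.tai.iso)  # 2022-05-15 00:00:37.000 -- TAI leads UTC by the 37 accumulated leap seconds
    print(t.tt.iso)   # 2022-05-15 00:01:09.184 -- TT = TAI + 32.184 s
    print(t.ut1.iso)  # within a fraction of a second of UTC, which is what leap seconds guarantee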


My point is this: Most people don't know about UTC/TAI/UT1 or any other standard. They know what's common knowledge, and what's common knowledge is that one day is 24 hours precisely, one hour is sixty minutes precisely, and one minute is sixty seconds precisely. Therefore, when writing software to interface with most people, including businesses with contracts defined in terms of those time definitions, one day is 24 hours or else you end up with problems.


That's why I advocate using UT1 in programming (like Julia), where all of the above is true. The cost we pay for that is that a UT1 second is not exactly a SI second, but I don't see how that matters except in GPS etc.


Well, just to nitpick, seconds do vary in comparative length from reference frame to reference frame...and that actually does start to matter in global computing.


Seconds don't vary, your approximation of a second varies.

If you want to nitpick, you're going to need to start with Cesium-133 oscillation periods as averaged from over nine billion samples.


I like the cut of your jib. But I suppose what I was saying was that, for the purposes of establishing “now”, “now” isn’t really standardizable except with respect to a single inertial frame, and any time-keeping that occurs outside of this inertial frame will not be absolutely in sync, even though it may have a highly accurate measure of time for its own inertial frame.

This matters when computers are bouncing records about time around satellites and across the globe.

Edit: I have no idea what I’m talking about so please do not hesitate to correct me on the particulars!


Seconds do vary because there are no privileged reference frames in the universe. You'll get a different measurement if your cesium measuring experiment is traveling at a different speed than you are.


That's the direction of your mass-energy vector, not seconds.

A bit like complaining that UTC is wrong because it doesn't match your local Solar time.


The earth doesn't travel the same course each year. It's chaotic.

Best to define a second as a certain distance light travels in a vacuum and use that to build minutes, hours, etc. Then, have a fixed geo reference point (like, say, geo 0:0) that holds an atomic clock as a main reference.

We are really only talking about local coordinate changes in the time dimension, so we might as well simplify.


> The earth doesn't travel the same course each year. It's chaotic.

I would prefer to call it quasiperiodic.



There have been a net +27 leap seconds since the 1970’s. That’s not enough to turn summer into winter or day into night. That’s roughly a full minute per century.


Your statement is both true and misleading.

The speed of the Earth's rotation is slowing over time. Therefore leap seconds are becoming more and more common. In a century, we should be averaging something like 1 a year. In 500 years, we should be averaging one every few months. And so on.

It took over a thousand years for the Julian calendar to fall apart. Our current timekeeping system will also fall apart eventually, and I would be surprised if it lasted as long.


500 years from now, if we're still alive, enough of us will be living in space colonies rather than on the Earth's surface that the Earth's rotation will not necessarily be immediately relevant to everyone's lives.

Regardless, we've had maybe half a minute of drift since the 1970's. If the drift is 1 minute in the 2000's, 1 second per year in the 2100's, 2 seconds per year in the 2200's, 3 seconds per year in the 2300's, etc., then the cumulative drift would add up to 60 + 100 + 200 + 300 + 400 + 500 = 1560 seconds by 2600. That's less than half an hour. If we didn't bother accounting for leap seconds, UTC midnight would be only half an hour off from astronomical midnight in 600 years, which is roughly the same error inherent in most time zones.


We still need a pseudo-day due to our sleeping rhythms, but perhaps people could choose that based on personal preference rather than just sticking with 24h.


The day would shift half an hour in the next 600 years. That’s half as much as it shifts twice a year for DST, except it would happen gradually over a period of time roughly equal to the time between the fall of the Eastern Roman Empire and today.


I was sort of nodding to the people who claim they have a 25 hour sleep cycle. Once we are in space and using artificial light, you can choose the sleep cycle that suits you. But predicting things 500 years out is just sci-fi, I guess!


I'm reasonably sure that any assumptions about what our anatomy will look like in 500 years are more wrong than right.


Even if there was a leap second every year... does it matter? It would still take several hundred years for the difference to be noticeable, at which point you can just change UTC by +-1h.


What it means is that the discrepancy between clocks and time of day is growing quadratically. It won't matter to us in our lifetimes. But eventually it will be a problem.

It will take on the order of a thousand years for the discrepancy to add up to an hour. In 5000 years, it will be around a day. The fact that it is currently growing about a minute per century is true, but not a good predictor of what will happen.

That said, I prefer if we just lose the astronomical basis for time keeping, and let our distant descendants figure out that they should use time zones, and not modify UTC. Hopefully by the time it becomes obvious that clocks are drifting too fast for time zones to change all of the time, we're a multi-planetary species and paying attention to what is happening on the Earth seems quaint.
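A rough back-of-envelope check of those figures, taking the thread's ~2 ms per day per century slowdown at face value:

    MS_PER_DAY_PER_CENTURY = 2.0   # assumed growth in the length of the day
    DAYS_PER_CENTURY = 36525

    def accumulated_drift_seconds(centuries):
        # Excess day length after t centuries is ~2*t ms, so the accumulated
        # clock-vs-sun offset is the integral of that: 2 * 36525 * T^2 / 2 ms.
        ms = MS_PER_DAY_PER_CENTURY * DAYS_PER_CENTURY * centuries ** 2 / 2
        return ms / 1000.0

    print(accumulated_drift_seconds(10) / 3600)    # ~1.0  -> about an hour after 1000 years
    print(accumulated_drift_seconds(50) / 86400)   # ~1.06 -> about a day after 5000 years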


In 5000 years the majority of humanity, assuming we haven’t gone extinct, will live in space and the astronomy will be moot anyway.


We should abolish local time too. There should be a single time for every country which is the number of milliseconds since either the beginning of human civilization, the beginning of earth or the universe.


So You Want To Abolish Time Zones

https://qntm.org/abolish


A. Why?

B. If you want to be super strict about it, as I imagine you do, that scheme would hit problems with relativistic effects. Local time is the only real time.


Modified Julian Dates (MJD) are the agreed standard for something like that. Abolishing local time has a number of issues, as linked in other comments. The time standards we currently have are all well suited to their purposes; it's just a case of picking the most appropriate one for the task. (Plenty of room for more TAI in modern technology.)


I like Swatch Internet Time.


>A day is defined as the rotation of the earth around its axis.

>A year is defined as the rotation of the earth around the sun.

Sounds like we need a generic geo-solar measurement.


I know it will never happen because of cultural reasons, but really, we need to separate the time (as in how I describe a point on the time scale for, let's say, a meeting, especially one that involves people in globally distributed locations) and the cultural aspect (at what time people are sleeping or eating lunch).

Right now those two things are intertwined, and that's bad. Not everyone eats lunch around noon or goes to bed at 23:00 anyway, so the cultural part is very fuzzy.

I should be able to describe time in an almost metric way. A unit that is the same no matter where I am in the world and doesn't change based on how fast a rock moves around a fireball. "Hey person in Japan, let's have a video meeting at 1,234,99" where that number means the same thing to them as it does to me. It's going to be dark for one of us and bright for the other.

Then we can have a different unit, let's call it "day offset". That is the offset from the first number that dictates what a particular area considers to be "the day": when the average person wakes up, has lunch, has dinner, goes to bed.

"Let's meet at 1,234,99. I know you're in Japan and your offset is +9,23 while here it's "-5,78" so it's a bit inconvenient, but hopefully you can make it!"

Yes, I realize I'm roughly describing how time and timezones work, with an important difference: with the current system, people live around the adjusted number. The number after the timezone is applied. After the leap seconds and all that crap are applied. Some people argue about daylight saving or not. Computers have to deal with all of it. Let's make the non-adjusted number what people use day to day, and make the "offset" a static number picked by the locals that includes stuff like daylight saving, leap seconds, and various other cultural adjustments. Its only purpose is to communicate the cultural or location difference between 2 people.

Obviously that will never happen, but I'll keep dreaming.


It's called UTC. You just want UTC with a different string format and a timezone format appended to it, or maybe separate from it.


Pretty much, yes.




> The idea that I don't know how many UTC seconds will pass between now and May 15, 2022 0:00:00 is absurd.

It's perfectly sensible though? 0:00:00 is a designation of an event. Namely, the event of midnight during a particular rotation of our planet. Every calendar day is one rotation, so May 15, 2022 is also an event -- it's that many rotations of Earth after today. But a second is a duration and it's independent of Earth's rotation. An asteroid could hit the earth and make the rotation shorter or longer (or just shatter the planet...) in the meantime. You obviously can't know how many seconds will pass until that particular midnight. It's inconvenient, but it makes perfect sense when you can't predict the future.


> come to the conclusion that UTC is flawed

The different time definitions exist to support different use cases. UTC isn't flawed, in fact the adjustments that keep it aligned to solar time are very helpful for its defined use cases.

It's just that time is complicated. If you think you will be doing time math where duration is paramount consider using TAI.

PS: for all those who think 'time' is complicated in fascinating ways, you should fall down the geodesy hole some time. Saying 'where' on earth something is located is... not simple.


Well, time is complicated, but not for reasons that really matter to 99.95% of engineers (relativity and such). At its most basic level, Time is really just a monotonically increasing counter of seconds (or ms or ns or whatever). That's it.

The problem is really the cultural and societal layers on top of it. That's not Time's fault, its Our fault. A "Day" has to be 1 rotation of the earth around its axis, a "Year" has to be 1 rotation of the earth around the sun, etc.

Every unit of time at the Second and below is Simple. Every layer above it is Complex, including the counter of seconds from any date in the past, because Humans are Complex, not because Time is Complex. If we stopped caring that it always has to be cold in Winter (in the US) or that 7am is always morning (well, sunrise changes w.r.t both time and location), or that days are always (usually) 86400 seconds long, our interactions with time would be much simpler.

And, really, given that EVERY SINGLE UNIT above the second has not just some but many special case rules that are instantly confusing, maybe we should stop caring so much.


Ah yes, the spherical cow of software development: it's much simpler to build software which interacts with nothing and especially nobody.


There's definitely something to be said about overengineering though. No human is going to notice if the clocks all drift together by 1 second per year.


Seconds are only Simple if they are defined as such. If they are defined as a subdivision of the day, then they need to be rebound to the slowing rotation of the Earth as the Earth slows and as relativistic differences due to altitude come into play.

If the second uses some other definition, you still need some centralized authority to define its length, for the same reason that its measurement will drift depending on your reference frame.


> Every unit of time at the Second and below is Simple. Every layer above it is Complex...

This is an excellent insight. Thanks!


A friend of mine was getting a real estate license and talked to me about the insane ways land ownership is measured because of the curvature of the earth. I believe that was just the tip of the iceberg. I'll have to check out geodesy.

I remember an ancient mathematician who described how all of existence must be a single geometric point (I think it was Euclid, but I can't find it). This all reminds me of that.


> The different time definitions exist to support different use cases. UTC isn't flawed, in fact the adjustments that keep it aligned to solar time are very helpful for its defined use cases.

The user above listed several explicit cases that seem needlessly complicated. Can you give examples where the adjustments to UTC are so helpful for its defined use case? Or elaborate what that case is?

I'm not trying to be obstinate - it's just that without examples your response boils down to "No it's not" without further support. I'm open to being convinced...convince me.


We humans almost always want our concept of "what time is it" to correspond to the relative motion of the sun in the sky. We ask "what time is sunrise?", and "when will the days start getting longer?". We like "noon" to be when the sun is highest in the sky. All those things are what UTC is designed for.


Solar noon varies from 11:53 in November to 12:23 in February where I live (it keeps closer to the hour during DST [1]); an adjustment of 37 seconds doesn't seem to be very big compared to the 30 minute swing we observe over the year. It seems more reasonable to wait until the gap gets large enough to coordinate an hour jump in clocks with enough advance notice that there's a good chance of having it happen simultaneously. At the rate of leap seconds in the last 60 years, that should be several thousand years away; hopefully none of my software is running then.

[1] I'll save the rant on DST.


We humans, without the use of a clock, do not have the faintest idea when midday or midnight is. We can't judge when the sun is highest in the sky, and we certainly can't judge when it's lowest under our feet.

The sun isn't even highest in the sky at noon! Firstly because of timezones (your noon is based on highest-sun in the middle of your timezone, not your current longitude), and secondly because of this whole epicyclic botheration:

https://en.wikipedia.org/wiki/Equation_of_time


I've been surprised at how "in tune" I've become to time, at times, when I've lived outside for substantial periods. Certainly it's not hard to have better than the "faintest idea" when midday or midnight is, if +/- 30 minutes qualifies? Plenty good to decide when to have lunch...

In the higher latitudes, it requires even less acclimation once you've got an idea of your orientation, and maybe what the stars look like. To the point that, at the poles, a clear sky is a direct-reading 24-hour clock.


We sure can judge whether it's before or after sunset though.


Nope. Is sunset when the lower limb of the sun touches the horizon? When it's halfway? When it disappears completely? And is that the actual horizon where the sun happens to be today? The treeline above it? The tower block in the way? Where sea level would be if there wasn't land there? Or just when it gets dark? But then is that civil twilight? Nautical twilight? Astronomical twilight? When the light from the sun gets dimmer than the light from the streetlights that happen to be in your area?


> We ask "what time is sunrise?", and "when will the days start getting longer?". We like "noon" to be when the sun is highest in the sky. All those things are what UTC is designed for.

The first thing is for time zones, inclusive of DST/Summer Time/etc. The second thing is for calendrical reform, which we've basically stopped doing now that the Gregorian calendar is as widespread as it's going to be. Leap seconds are for this:

https://tycho.usno.navy.mil/leapsec.html

> Currently the Earth runs slow at roughly 2 milliseconds per day. After 500 days, the difference between the Earth rotation time and the atomic time would be 1 second. Instead of allowing this to happen, a leap second is inserted to bring the two times closer together.

The "error" induced by the Earth slowing amounts to 12 minutes over the course of one thousand years.


And the leap seconds meaningfully alter that?


Yes. Leap seconds are there to align UTC (an atomic time) with UT1, which is defined based on mean solar time, ie "the day", since UT1 isn't uniform.

Without leap seconds, UTC will slowly drift out of sync with the Earth's rotation.


That's the point though. The above post said leap seconds cause more harm than good - a correction done once a century would fit well enough within the purpose you've defined here. If UTC is out of sync with the sun by less than a minute, no human will notice, and non-humans don't care about the sun and are happier with fewer disruptions.


I think most applications (including computers, except GPS receivers internally) should just switch to UT1. A rotation of the earth takes a day, and a day has 86400 seconds. Done. (Fine, a second is not exactly an SI second, but close enough for most practical purposes, and computers can handle it.)


"UTC isn't flawed, in fact the adjustments that keep it aligned to solar time are very helpful for its defined use cases."

What use case in the year 2019 is made easier by UTC getting leap seconds instead of treating it as a monotonically increasing scale (TAI)? I'm genuinely interested. The only suggestion I have ever heard is use of a sextant -- what an anachronism in the modern world...


See my comment on the other reply.

I'll add in; astronomers like to be able to find objects in the sky using calculations based on time. UTC (actually TT, which underlies UTC) is good for that.

Spend a little time on the various wiki pages for time standards, and think about why those things exist.

https://en.wikipedia.org/wiki/Universal_Time


I've spent a lot of time finding objects in the sky to high accuracy with astronomical algorithms. The first step is always converting from UTC to TAI. See Astronomical Algorithms by Jean Meeus if you don't believe me.

Your noon example works for a single longitude in a time zone, and the time between subsequent noons on two different days will only be exactly 24 hours 4 times a year. It seems unnecessarily complex to push annual leap second updates to preserve something as obscure as these 4 events, for the 24 time zones, exactly on one line of longitude, to the accuracy of a second.

See the Wikipedia on abolishing leap seconds, "that the drift of about one minute every 60–90 years could be compared to the 16-minute annual variation between true solar time and mean solar time, the one hour offset by use of daylight time, and the several-hours offset in certain geographically extra-large time zones"

https://en.wikipedia.org/wiki/Leap_second


> The first step is always converting from UTC to TAI

I believe you are actually converting to Terrestrial Time. If there is a correction of TAI+32.184 seconds in your calculation, that would indicate TT.


Yes, it is ultimately TT. The order goes: UTC -> TAI -> TT

I could argue the TAI-to-TT conversion is the second step. But let's put the pedantry aside: is your best example of something made easier in the modern world by leap seconds that solar noon happens at exactly 12:00:00 four times a year on one exact line of longitude per time zone? And 4 times a year would require a time zone that does not honor daylight time; otherwise it is twice.

I'm not some crackpot here talking about the absurdity of leap seconds. The US, China, Australia, Japan, and S. Korea are on board for discussions about abolishing them, scheduled for 2023.
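As a minimal sketch of that chain in Python, with a deliberately partial, hand-written leap-second table (real code should use a maintained library or the published IERS bulletins):

    # (Unix time at which the offset takes effect, TAI - UTC in seconds)
    LEAP_TABLE = [
        (1435708800, 36),  # 2015-07-01
        (1483228800, 37),  # 2017-01-01
    ]

    def utc_to_tai(unix_utc):
        """Valid only for instants the partial table above covers (mid-2015 on)."""
        offset = max(off for start, off in LEAP_TABLE if unix_utc >= start)
        return unix_utc + offset

    def tai_to_tt(tai):
        return tai + 32.184  # TT is defined as TAI + 32.184 s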


> I'm not some crackpot here talking about the absurdity of leap seconds

No question about that; leap seconds have been controversial since they were invented. The question is whether "the juice is worth the squeeze" which is of course an opinion rather than a fact.

The nice thing about time standards of course is that there are so many to choose from. All of them have flaws when you try to use them the way we do in civil time applications.


I have a theory that simple sets can become difficult to understand not because of anything in their nature, but because of how they are desired to be consumed.

In other words, if you have a simple to understand thing, then it might become hard to understand if it needs to be fed into something that is itself hard to understand. (Or even if the way in which you feed it into the other thing is hard to understand).

Additionally, if you have something which is easy to understand that is fed into multiple incompatible things (either the things are easy to understand or hard to understand), then the whole system becomes harder to understand.

Finally, if the starting point that is fed into all other things is not actually easy to understand but in fact hard to understand ... things can get pretty bad.

Even if time was easy to understand by itself, then I still think it would be pretty hard to deal with because it is used for multiple incompatible domains (astronomical vs how long a day is on earth for example).


Yes, the use-case and domain set the constraints. Using the wrong measurement type or abstraction either increases the complexity of the exceptions necessary to make it fit the constraints or increases the inaccuracy/errors... or both.

With regards to time specifically, I recall many years ago talking with an economics professor I had that was from Zimbabwe and he mentioned the frustration that western industrial nations had when attempting to establish business in Africa. Industrial nations long ago became dependent upon a higher resolution and accuracy of time and so when they say "2PM" they expect accuracy within a minute or two. But the African work force would often show up at 3PM and when confronted by this "lateness" they would reply "3 is the friend of 2." An agrarian or pre-industrial society has no real need for accuracy to the minute, but instead can operate quite well with human observation of the position of the sun. This impedance mismatch caused no end of frustration on both sides, since neither could really fully understand the other's conception of timeliness.


Some people think one year is one orbit around the sun, and not exactly 365 days. Leap years help to keep that reality.

Some people think that a day is one revolution of Earth's axis, and not exactly 86400 seconds. Leap seconds help to keep that reality.

The fact that axial rotation is less predictable than orbital paths is a quirk of nature.


> Some people think that a day is one revolution of Earth's axis, and not exactly 86400 seconds

One day is more than one revolution of the planet. The sidereal day is exactly one rotation of the planet, but that's about four minutes shorter than a civil day.

The reason is that the planet also moves around the sun, so in order to get the sun to the same position in the sky for noon the planet needs to turn a little bit more than 360 degrees, about 360/365 extra.


Leap years have pretty straightforward rules for when they happen: approximately every 4 years, except on 100 years, except on 400 years. Leap seconds, on the other hand, happen at the whim of the IERS.
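For reference, that rule in code, plus the average year length it implies (a trivial sketch):

    def is_leap_year(year):
        # Gregorian rule: divisible by 4, except centuries, except centuries divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Mean Gregorian year: 365 + 1/4 - 1/100 + 1/400 = 365.2425 days,
    # slightly longer than the ~365.2422-day tropical year.
    print(365 + sum(is_leap_year(y) for y in range(2000, 2400)) / 400)  # 365.2425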


Leap years are actually just slightly inaccurate. Leap years estimate an orbital period of 365.2425 days but the actual orbital period is 365.24217 days.

Also, it should be noted that basically nobody actually understands a "year" to mean an orbit of Earth around the Sun relative to the Local Standard of Rest, which is called a sidereal year (ca. 365.256 days). Instead, to most people, a year means a cycle of seasons which recreates the same angle between the Earth-Sun axis and the Earth's axial tilt, which generates seasonal temperatures and is called a tropical year. Because the axial tilt is variable, the tropical year also varies and eventually drifts away from the sidereal year. The tropical year and sidereal year differ by about 20 minutes, so today's year starts about a month "away" from the year Julius Caesar enacted the first true solar calendar.

The difficulty in this case is that spinning rocks in space can float around however they please.


A nice hack for a time system for Earth might be to add the leap seconds on the last day of leap years.

That would keep the time and date adjustment code together in one place.


You chose your units arbitrarily to fit your argument.

Here's my version of it:

Some people think one year is one orbit around the sun, and not exactly 365 days. Leap years help to keep that reality.

Some people think that a day is one revolution of Earth's axis, and not exactly 1440 minutes. Leap minutes help to keep that reality.

And now we are at less than one change every century!


This made me think of time in science fiction, eg. https://en.wikipedia.org/wiki/A_Deepness_in_the_Sky#Interste....

On a long enough horizon, I take it as a foregone conclusion that simple metric measurements will become standard.


For most practical purposes (like knowing when to show up at a meeting), you need to know local time.

Even if you got rid of leap seconds, you still wouldn't know for sure how many seconds there are between now and some date and time in 2022 because you don't know what the local timezone will do. It's not clear why this is a useful calculation? For scheduling events in the future, you need to store the timezone (or location) and local time anyway.

Unix time is a useful approximation for comparing and converting between local times.


Storing computer time as something related to UTC is flawed; we should use TAI. That solves the leap second problem: you're already converting time to display it to humans and to handle time zones, so correcting for leap seconds is just one more step.


> The idea that I don't know how many UTC seconds will pass between now and May 15, 2022 0:00:00 is absurd.

Not at all.

To quote Feynman: "Nature cannot be fooled".

The earth will rotate in a (slightly) chaotic fashion. Midnight has a definition which is based on astronomy.

Thus, you need to adjust the length of a second (which is not desirable) or the number of seconds in a day.


Astronomy has the need for both.

A notion of an instant in time that advances linearly without any ambiguities, which would be TAI, and a notion of the precise orientation of Earth, which would be UTC.

To record events, you will use TAI, to point your telescope to a celestial object, you will use UTC.


Astronomers don't really use UTC for pointing telescopes though, they use sidereal time. A sidereal day is 4 minutes different from a mean solar (UTC) day, so you still have to do a conversion.

You still need something that keeps track of the Earth's rotation, but presumably the IERS will still publish the difference between UT1 and UTC. I'm an (ex-)astrophysicist, but I'm not convinced that astronomy would actually be particularly impacted if leap seconds stopped being applied to UTC. You already need to keep a leap second table; it would just shift where you apply it.
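For the curious, the "4 minutes" falls straight out of the geometry: the Earth makes one more rotation per year than there are solar days (one extra turn to account for going around the Sun), so a sidereal day is shorter by that ratio. A quick check:

    SOLAR_DAY = 86400.0          # mean solar day, seconds
    TROPICAL_YEAR = 365.2422     # days

    # One extra rotation per year relative to the Sun -> sidereal day is shorter.
    sidereal_day = SOLAR_DAY * TROPICAL_YEAR / (TROPICAL_YEAR + 1)
    print(sidereal_day)                      # ~86164.1 s
    print((SOLAR_DAY - sidereal_day) / 60)   # ~3.93 -> the "about four minutes"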


> The idea that I don't know how many UTC seconds will pass between now and May 15, 2022 0:00:00 is absurd.

Why don't you use TAI then for this purpose?

I think the notion of time in relation to the earth's movements (for timekeeping on earth) is fair.


> Move any jumps in time to once a century (or millennium).

You hear a lot of people remarking that Let's Encrypt has made things better by requiring certs to be reconfigured more often rather than less. I know they're not exactly the same as time in general, but as a general idea, knowing that you need to do some fiddling often and automating it might be better than growing complacent because nothing needs to be done for 50 years.

Might be setting things up for a lot of work when that next change comes due.


It's almost as if we've tried something like that in the past, and the effort to avoid the worst issues was a monumental effort.


Confess I'm not quite clever enough to know what you mean or to what you're referring.


The Y2K problem, because someone at some point decided that two digits were enough to represent years. Thus when 99 came to an end, the clock would show 00 for the year, and how to interpret that was effectively undefined behavior.

The event was rather boring, because a huge effort went into making sure that critical infrastructure was "Y2K Ready".

So my point was that pushing the problem in front of us until it becomes too large to ignore is not a good strategy when we're talking (IT) infrastructure. Handling leap seconds every now and then is the lesser evil.


Usually the first step is to correct from whatever the local clock at the observatory said to actual UTC.

Then you convert from UTC to TAI.

And then you convert from TAI to the arrival time at the solar barycenter, removing the geometric light travel time to Earth (or more precisely the observatory) as well as the gravitational effects of Sun, Jupiter and Earth.


> Move any jumps in time to once a century (or millennium)

Instead of building it all into the protocol and time libraries, it's so far into the future that "we don't have to worry about that". We've had that issue before with Y2K.

> Such jumps have occurred in the past (Julian to Gregorian)

In a very different time. Today it would not be as easy - in fact since time and date are ubiquitous, it's not a simple task at all.

> The idea that I don't know how many UTC seconds will pass between now and May 15, 2022 0:00:00 is absurd.

If it matters for your use case, then don't use UTC time?


I don't mind having UTC with leap seconds, but I do mind it being the de facto standard. NTP would be more useful if it were synchronizing TAI.


Leap seconds are useful if you're an astronomer. Astronomers have always determined the current time. Therefore, we all get leap seconds.


How are leap seconds useful if you are an astronomer? I am genuinely curious what problem in the year 2019 is easier due to leap seconds.

I have worked on code that needed to do astronomical calculations to do things like:

position of sun, moon, Mars, Earth, and spacecraft

ECI <-> ECEF

All of these depend on a conversion from UTC to TAI. It's covered in books like Astronomical Algorithms in the intro: https://www.willbell.com/math/MC1.HTM


The only one I can think of is: determining the phase of the 24 hour day (as measured by atomic clocks) with respect to the Earth's rotation.

It does seem logical to use TAI for civil time. People interested in calculating the Earth's rotation to high precision could consult a regularly updated publication somewhere. Eventually the Earth's rotation will drift out of sync with the atomic clock timebase, but that won't be important until we accumulate several minutes/hours of error from TAI, which could be centuries from now.


Why TAI and not UT1? Who/what really cares about SI seconds?


People who want a predictable and monotonic time scale


SI seconds (in UTC) lead to leap seconds. With UT1, "1/86400 of a day" seconds lead to a predictable and monotonic time scale.


Unfortunately, electronic devices come with quartz crystal resonators, or rubidium or cesium resonators. After frequency calibration these will keep stable time relative to the atomic clock timebase.

There is no device available that allows you to derive a clock from the relative motion of the sun about the Earth's axis, such that the changing definition of days and seconds relative to realizable clocks can be tracked.

Furthermore it’s kind of nice to have a stable and standard definition of seconds and days. For instance, under the “ut1 system” you don’t know how long a present second is until the present day’s final observations are made.


Here's a falsehood I've seen a bunch of times: the idea that Unix timestamps need to be converted to your local timezone. Unix timestamps are the number of seconds since a specific date in a specific timezone (UTC)! If a user gives you a Unix timestamp and you know they're in the PDT timezone, you should not add or subtract 7 hours of seconds to "convert" the timestamp to UTC! It already is. Similarly, if your client receives a Unix timestamp from your server, you shouldn't modify the Unix timestamp to "convert" it to your user's local timezone. Unix timestamps are always UTC. Your platform's date handling APIs should already offer a way to display a Unix timestamp in the user's timezone or an arbitrary one, and maybe even have a class that represents a Unix timestamp paired up with a specific timezone. At no point should you be doing arithmetic on the Unix timestamp portion to convert it between timezones.
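A quick sketch of what "don't touch the timestamp" looks like in practice (Python, with an arbitrary timestamp value):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    ts = 1558050450  # seconds since 1970-01-01T00:00:00Z; no timezone is attached to this number

    # Same integer, two presentations. The timestamp itself is never "converted".
    print(datetime.fromtimestamp(ts, tz=timezone.utc))                     # 2019-05-16 23:47:30+00:00
    print(datetime.fromtimestamp(ts, tz=ZoneInfo("America/Los_Angeles")))  # 2019-05-16 16:47:30-07:00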


Unix timestamps have no inherent time zone at all.

They are a quantity of seconds (ignoring leap seconds & relativity) since a specific instant. That instant in time happens to conveniently line up with 1970-01-01T00:00 when described in UTC to make things easy for us. But it is equivalently defined as 1969-12-31T16:00-08:00 when described in another time zone. The elapsed quantity of seconds since that instant does not vary depending on how you describe the instant itself.


I think that's a better way of putting it. The main lesson is that there's no operation called "changing the timezone of a Unix timestamp", and any code trying to do that is wrong. A date string is a function of a Unix timestamp and a timezone, and if you need to change the timezone, you need to pass a different timezone parameter, not try to do something to the Unix timestamp part.


> A date string is a function of a Unix timestamp and a timezone

If offset or zoned. Sadly the average date string has no more inherent timezone information than a unix timestamp.


I meant that a datetime string is generated from the combination of a Unix timestamp and a timezone together, not that it lossily retains both of those inputs. For example, if I want to show the time of day of an event to the user, then I need both the Unix timestamp of the event (for example, 1558050450579) and their timezone (could be a UTC offset, like -7) in order to produce "4:47 PM". That's true whenever I'm trying to make a string from a Unix timestamp, regardless of whether the final date/time string explicitly says its timezone.


Some probing questions:

Alice and Bob both live in England and have planned a conference call at 15:00 on 4-jan. Now Alice happens to travel, and she is in America on 4-jan. What should her calendar do? Moreover, Alice also has a recurring event "Workout" every Friday at 9:00; what should that shift to? Then it turns out Bob is also in America; what time should the conference call be at now? Finally, for some reason England or America decides that the DST changeover will now happen on 3-jan.

There is no universal semantics of time that will deal with every case. Certainly 'store UTC and convert to the user's time-zone' is not universal, nor is 'store every timestamp with a time-zone'. The way people conceive of 'do this thing at this time' is very hard to capture. Moreover, I'd wager no user would actually fill out time with sufficient detail to deal with this: "What do you mean, UTC, time-zone, or local-time? I just wanna work out at 9:00 every day, and meet with Alice at 15:00 in a few days. I thought computers were meant to make things easy."


Unix timestamps have an extremely strict definition and should not be left up to interpretation.

You're describing UI/UX challenges with calendaring and appointments. Very real issues, but separate from Unix timestamps.


In your case, you're not doing the specific thing I prohibited. I only meant you should never do arithmetic on a timestamp to try to "convert" it to an equivalent representation of the same instant (as Java 8 defines it) in another timezone. However you're specifically doing arithmetic on the timestamp to calculate a new different instant, which is fine. (Your case isn't that different than the user pressing a "shift this event time by N hours" button.)

However, I saw a good tip once that you should only store timestamps of past events and events that happen at a fixed instant regardless of calendars and wall clocks as Unix timestamps. Timestamps for things like future calendar appointments (that may be affected by future changes in regions' timezone definitions) should be stored as a date and wall clock time and regional timezone, and only converted to a Unix timestamp when it happens. This makes it possible to see the timezone the user intended, let it be changed, and works well even if timezones themselves change before the event happens.
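A minimal sketch of that "store wall time plus region, resolve late" approach (the field names here are just illustrative):

    from datetime import datetime
    from zoneinfo import ZoneInfo  # Python 3.9+

    # Future appointment: keep the user's intent, not a precomputed instant,
    # because the region's zone rules could change before the event happens.
    event = {"local": "2022-05-15T09:00:00", "zone": "America/New_York"}

    # Resolve to a Unix timestamp only when you actually need an instant:
    when = datetime.fromisoformat(event["local"]).replace(tzinfo=ZoneInfo(event["zone"]))
    print(int(when.timestamp()))  # computed with whatever the zone rules say *now*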


> you should only store timestamps of past events and events that happen at a fixed instant regardless of calendars and wall clocks as Unix timestamps

I think this works more often than not, but it's hardly foolproof or without repercussions. Say, you can imagine Google Calendar having a list of holidays for the US. Say it's New Year's day. You're saying you'd replicate that into 6 epoch timestamps (one per time zone in the US) per year in the past, instead of just storing it as "January 1, 00:00:00, recurring every year"?


This is a trick question - for whole-day events, the best way to handle them is to record the calendar day you want them to happen on, not the timestamps of the start and end of the day in some particular timezone. See what iCal does with DATE versus DATE-TIME (which must be UTC or include TZ): https://tools.ietf.org/html/rfc5545#section-3.3.4


"Whole-day event" is a red herring. You could've just as easily wanted an event for the first hour of New Year's day rather than for the whole day.


Yes. This sounds a bit silly for New Year in particular because its timing is slightly less arbitrary than other holidays', but holidays move around: https://en.wikipedia.org/wiki/Uniform_Monday_Holiday_Act


It's not "a bit silly", it's ridiculous. It's one uniform holiday in the entire country, recurring at a particular time on a particular calendar day. That's how it's defined, so that's how you record it. If it gets moved one day then you change the recurrence rule from that point forward. The rule you wan't isn't "turn past timestamps into epoch time", it's "record timestamps with their correct semantics for the situation".


I really agree, especially with "There is no universal semantics that will deal with every case." When I enter a time for an appointment, then travel to a different TZ, but know I'll be home when the meeting occurs, I have to tolerate the Macintosh calendar converting my meeting to the local TZ, then back to my local TZ, which ends up being odd at least, and confusing and annoying when it first happened.


> Here's a falsehood I've seen a bunch of times: the idea that Unix timestamps need to be converted to your local timezone.

That's less of a "falsehood", more of a complete misunderstanding of what Unix time is.


I think it's a common misunderstanding worth bringing up here that probably affects more people than the falsehoods listed in the article. In roughly 3 out of the 4 times I've had a coworker run into a timezone issue (a different person each time), their first impulse was to do something like what I described. (I stopped them early most of the time, so maybe they would have figured out it was the wrong way to go about it.)

Unix times show up in a lot of APIs and aren't always explicitly called "Unix times". People just see that at some point there's an integer value representing a timestamp, and the time string displayed to the user is some number of hours off, so they think they need to change that integer.


Yep, a faulty belief is not the same as ignorance.


I wouldn't be surprised if someone told me 10-15% of our bugs were because of Eng/PM misunderstanding this.


I've argued this, and the issue comes when you try to target another timezone from your own.

So for example, if you're in EST (-5) and need to target CST (+8), you can't give the Unix time for the other timezone, because that value is relative to yours when talking about another timezone.


> But it’s unsatisfying to say “this is false” without explaining why

Got my upvote. I can't stand the "falsehoods programmers believe" articles that make a point out of not backing up any of their claims.


I blame (perhaps unfairly) this idea that people write a lot of programming related articles and blogs as resume fodder, and not really for an audience beyond that.

I've run across so many "how to do" articles where it's the same example code, and you can see where it has skewed over time... and evolved into something that the author really doesn't understand.

Same goes for some of the fundamentals and gee-whiz "falsehoods" type articles.

I appreciate the folks taking the time to write it but if they're not going to tell me why:

1.) I don't know if they even know / gave me the right idea.

2.) If I can't grok why then I'm not going to really understand it...


Talking about resume fodder, my favorite is how everyone seems to have to write an article about bloom filters. Or at least every month or so someone posts an article about them on HN from their blog[1]. They're brain-dead simple data structures, and a lot of the time the author leaves off the actual hard bit about them: deciding how big they should be and (after reading up about them a bit while writing this comment) how many hash functions they should have.

[1] https://hn.algolia.com/?query=bloom%20filters&sort=byDate&pr...


This is all overdramatized myth busting -- one could simply say they are all true except for the occasional leap second adjustment. No need for the dramatic "they are all false."


Everything is true, except when it's not.

;)


Yeah, the standard falsehoods listicle would have ended right before that line.

Timekeeping is strange. Makes me want to store everything in TAI and then convert it for display.


As is done in qmail.


Interesting—I wondered if anyone was doing that.


Is this really a problem? The "falsehood programmers believe" articles I remember reading all list things that were either obvious, or obvious in retrospect.


Take the original one about names, which claims that sometimes children don't get names.

Should I be flagging mandatory name fields as an I18N concern? How many people are affected? In which regions does this warrant UI hints or changes? Will they have names by the time COPPA stops applying?

I've casually Googled this and found nothing, so whatever point the author hoped to make was lost.


Yes. Only a small minority of Falsehoods articles so much as point people at relevant resources, much less at better practices. Fewer still back up their claims.

The Falsehoods listicles that were actually obvious were Falsehoods Most Programmers Don't Actually Believe.


The article is confusing unix time, which is an integer that increments every second, with its UTC representation. That representation is an interpretation of the number. In theory the integer could be decremented, but this has never happened. In practice what happens is that we add leap seconds, which means we allow the UTC interpretation of the incrementing integer to contain an extra second, not that we move the integer forward by two seconds.

The reason this is not a problem is that most hardware clocks are pretty awful to begin with and need frequent automated corrections (through ntp). Those corrections do in fact cause the integer to jump forward or backward locally, and far more often than once every few years. This too is mostly a non-issue. For practical purposes, time moves forward, and when you access time using one of the many high-level APIs you get a fresh interpretation of the system clock's integer. A much bigger problem is the Y2038 problem, when that integer overflows. I believe work is underway in the Linux kernel to address that.


I disagree about obvious. Some are not, IMO, obvious, and some are actually true within reasonable bounds.

From one of the previous lists[2]:

> DST is always an advancement by 1 hour

I'd point to this one as not "obvious": it is false. Some extreme northern/southern locations (where the seasonal day lengths get really long/short, since you're so close to the pole) adjust by 2 hours. (E.g., the aptly named Antarctica/Troll[4].)

> Months have either 28, 29, 30, or 31 days.

> The day of the month always advances contiguously from N to either N+1 or 1, with no discontinuities.

I usually presume that I'm either a. working in the proleptic Gregorian calendar, b. working within a range of time (e.g., company start to foreseeable future) in which the Gregorian calendar is a safe assumption, or c. working in a situation where I know I'm not working with the Gregorian calendar (e.g., the app/job has specific requirements around, say, a lunar calendar), and I'll know I'm in one of those exceptional cases when they apply / the falsehood won't be believed to be false.

Within that assumption, that "falsehood" isn't false. It's true: there is no month within the Gregorian calendar that does not have 28, 29, 30, or 31 days.

The trouble is the transition onto the calendar. Some languages — in particular, some parts of Java[3] do this — choose a date at which the calendar was adopted and actually expose that in their routines. (But note, however, that the Gregorian calendar was not adopted overnight; it started in 1582 and continued until 1923!)

For example, when Great Britain[1] adopted it:

> Through enactment of the Calendar (New Style) Act 1750, Great Britain and its colonies (including parts of what is now the United States) adopted the Gregorian calendar in 1752, by which time it was necessary to correct by 11 days. Wednesday, 2 September 1752, was followed by Thursday, 14 September 1752.

Again using Java as an example, I believe it chooses the Spanish transition (1582), and since there's a jump there, too, it skips 10 days, resulting in a very short October, making the two falsehoods listed above actually falsehoods.

But just work in the proleptic calendar. IMO, what calendar you're using is effectively part of the datetime type. Having a datetime type that uses multiple calendaring systems is like having a text/string type that also lets you put random bytes in the middle of the string.

Of course, the next falsehood was,

> There is only one calendar system in use at one time.

…which is our stated assumption of "within the Gregorian calendar". So, by definition, this is true!

There were a few others, e.g.,

> Non leap years will never contain a leap day.

That are just more fallout.

Assumptions need to be stated, and explanations would be more enlightening to the surprised reader.

[1]: https://en.wikipedia.org/wiki/Adoption_of_the_Gregorian_cale...

[2]: http://infiniteundo.com/post/25509354022/more-falsehoods-pro...

[3]: https://docs.oracle.com/javase/7/docs/api/java/util/Gregoria...

[4]: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones


The original purpose of Unix time was decoupling local time representation and internal timekeeping of the system. Unix time could be the "One True Time" of the system, it always increases monotonically. When you need local time, all the tricky and nasty details, including DST, mandated calendar changes, etc, are processed by the tzinfo system library/database. If the calendar has changed, at least in principle one does not and should not have to update the system clock or most unrelated applications, simply update tzinfo and you're done.

Unfortunately, Unix time did not consider the effects of leap seconds, which broke the very foundation of Unix time and nullified all the benefits it had. A UTC change (a leap second) will force you to update the system clock.

There is a way out: we should stop keeping time in UTC and instead do the timekeeping in TAI. And we provide a system-wide facility, a "utcinfo" database, to handle TAI-UTC conversion. Just like tzinfo, but much simpler: it only needs to store all the leap-second events. All problems solved! I'm aware that the leap second still causes some issues (the kernel still has to be notified of an upcoming leap second for its UTC APIs), but it's still better.
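
A minimal sketch of what such a "utcinfo" lookup could look like in C; the table here is hand-written, heavily truncated, and only illustrative (as is the pre-table offset):

  #include <stdio.h>
  #include <time.h>

  /* Hypothetical "utcinfo" table: the TAI-UTC offset in force from a given
     UTC instant, expressed as a Unix timestamp.  Truncated for brevity. */
  struct leap_entry { time_t utc_since; int tai_minus_utc; };

  static const struct leap_entry utcinfo[] = {
      { 1435708800, 36 },   /* 2015-07-01 */
      { 1483228800, 37 },   /* 2017-01-01 */
  };

  /* Map a UTC-based Unix timestamp onto a TAI-like seconds count. */
  static time_t utc_to_tai(time_t utc)
  {
      int offset = 35;      /* offset in force before the first entry */
      for (size_t i = 0; i < sizeof utcinfo / sizeof utcinfo[0]; i++)
          if (utc >= utcinfo[i].utc_since)
              offset = utcinfo[i].tai_minus_utc;
      return utc + offset;
  }

  int main(void)
  {
      time_t now = time(NULL);
      printf("UTC-based: %lld  TAI-based: %lld\n",
             (long long)now, (long long)utc_to_tai(now));
      return 0;
  }

The point is the same as tzinfo: the leap-second history becomes data you can update, rather than something baked into the clock itself.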

The question is, why don't systems and libraries treat TAI as the first-class citizen of timekeeping? Why aren't we doing it right now? Because it's incompatible with Unix time, or it's something else?


Obviously, given the article and the discussion, I am wrong, but what you propose sounds to me exactly like the definition of unix time:

The number of seconds since January 1, 1970 UTC.

Even if UTC counts the same second twice, this does not change how many seconds have elapsed since the unix epoch.

If you click through the articles reference to wikipedia and again from there, you will end up on "The Open Group Base Specifications Issue 7, 2018 edition"[1]

Which says:

> The relationship between the actual time of day and the current value for seconds since the Epoch is unspecified.

Which again sounds like there is absolutely no reason to double count or skip seconds in unix time.

However, I am not quite sure of the implications of the formula written there, but I fear it says that the number of seconds since the epoch is defined not as the actual number of elapsed seconds, but by the UTC representation of the current time.

[1] http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_...


Thanks for the interesting link, good to know! So we are in fact, pretty close to a monotonically-increasing, TAI-like Unix clock.

The original article said,

> Each Unix day has the same number of seconds, so it can’t just add an extra second – instead, it repeats the Unix timestamps for the last second of the day.

Your POSIX link says,

> As represented in seconds since the Epoch, each and every day shall be accounted for by exactly 86400 seconds.

I'm not an expert on timekeeping, but it appears the only problem here is that Unix time is bounded by an 86,400-second day, which I guess was meant to make a Unix day predictable, so we still have to double-count or skip seconds. It seems the only thing we need to make Unix time monotonic is to remove the 86,400-second day from the specification.

On the other hand, it means a Unix day would be unpredictable: it would be impossible to calculate a Unix time without a database, and difficult to calculate future Unix times from calendar time. So TAI doesn't automatically solve every problem; everything comes with a tradeoff.

But I think an unpredictable Unix day should be fine for the purposes of an internal system clock, so perhaps it's still not a bad idea.


> Unfortunately, Unix time did not consider the effects of leap seconds

The change to UTC to include leap seconds was in 1972, which was after unix time came into existence.


That has nothing to do with my argument. There could have been awareness, to the extent that the Unix architects had seen it in the news, but still, it was not considered. I don't believe leap-second support was built into Unix and honored properly until the 1990s, when NTP became prevalent.

Also, the fact that Unix time starts before leap seconds were introduced makes timekeeping in TAI an even stronger case. And I'm still waiting for some insight on why it's not being done.


I deeply disagree with point 3.

Unix Time actually never goes backward: it just stagnates during a leap second. The article uses fictional fractional seconds to argue the contrary, but I don't think that makes much sense. Unix Time is represented as an integer and has no concept of such a fractional unit.

That's an important distinction, because it means that if you use Unix Time as a timestamp you can actually be sure that an event with a smaller stamp happened earlier. You can't say anything about the ordering of events having the same timestamp, but that remains true with or without leap seconds.


POSIX defines gettimeofday [1], which fills a timeval with integer (time_t) seconds and integer (suseconds_t) microseconds.

Is your concern over whether Unix Time is a time_t or a timeval?

A time_t shouldn't go backwards (in normal operation), but a timeval does.

[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/ge...


> Is your concern over whether Unix Time is a time_t or a timeval?

Yes, you could put it that way. I view Unix Time as referring strictly to the time_t part (seconds since Epoch) but I might be the one in the wrong. I didn't remember that the timeval part existed in the standard.


gettimeofday is obsolescent, but its replacement clock_gettime has the same issue.

gettimeofday gives you a struct timeval with microsecond resolution.

clock_gettime (which takes an extra argument specifying which of several clocks to use) gives you a struct timespec, with nanosecond resolution.

https://pubs.opengroup.org/onlinepubs/9699919799/functions/c...
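
For illustration, a minimal C snippet reading both; gettimeofday() still works on most systems even though it's marked obsolescent:

  #include <stdio.h>
  #include <sys/time.h>  /* gettimeofday, struct timeval */
  #include <time.h>      /* clock_gettime, struct timespec */

  int main(void)
  {
      struct timeval  tv;
      struct timespec ts;

      gettimeofday(&tv, NULL);             /* microsecond resolution */
      clock_gettime(CLOCK_REALTIME, &ts);  /* nanosecond resolution */

      printf("gettimeofday:  %lld.%06ld\n", (long long)tv.tv_sec, (long)tv.tv_usec);
      printf("clock_gettime: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
      return 0;
  }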


Technically speaking, a double positive leap second would break the monotonicity of time_t as well. Fortunately for us, there is no provision for double (positive or negative) leap seconds [1]; ISO C's range for tm_sec, which allows for 61, was a mistake [1; "not really UTC in 1989/1990" section].

[1] https://www.ucolick.org/~sla/leapsecs/timescales.html#UTC


You can set unix time to any value, including the future or past. If timestamps are recorded during those time-traveling epochs, you will indeed see time go backward. Anyone writing time-aware processing needs to take this into account, or they will eventually suffer for it.


Sure, you can manually force Unix Time to go backward, but that's very different from what is argued in the article. If you do time-aware processing and mess with the clock you are using, you can expect trouble. That's in no way unique to Unix time.

Still, unless you actively mess with it, Unix Time actually never goes backward.


It happens all the time. Go talk to ops/sre types who run large fleets and ask them for their time stories, also ask them how much infrastructure and scaffolding they have to have in place to ensure that such things are as rare as possible.

The consequences show up in large ways, as well, especially in distributed systems. Calculating an order of events, for instance, based on some notion of time is an obvious flaw. Even within a single system, assuming that a time stamp can be relied on to indicate order of events is wrong.

If you work in domains where these things are important you will eventually come to understand that simplistic and naive statements such as "unix time never goes backwards" are the swords that programmers eventually fall upon.


I don't understand this one:

> If I wait exactly one second, Unix time advances by exactly one second

How does UTC jumping around affect this? If a leap second is removed it doesn't mean you've waited 0 seconds.

I feel like this is wrong too:

> If there’s a leap second in a day, Unix time either repeats or omits a second as appropriate to make them match.

It's not Unix time doing that. It's UTC.


Unix time is based on a hard calculation of UTC seconds, minutes, hours, days, and years. So UTC jumping causes a discontinuity in Unix time.

I'm pretty sure the second graph is mislabeled (the UTC second after 23:59:60 should be 00:00:00), but Unix time takes 23:59:60 to mean the same as 00:00:00. So 23:59:60.5 is also the same as 00:00:00.5, and so on. If you parsed the Unix time into a readable timestamp, it would tell you it's the first second of the next day for two seconds.


> How does UTC jumping around affect this? If a leap second is removed it doesn't mean you've waited 0 seconds.

No, but it means UNIX time does not advance by exactly one second per elapsed second. Instead it advances by either 0 or 2.

> It's not Unix time doing that. It's UTC.

It's also unix time. unix time is (86400 * days_since_epoch + seconds_since_midnight). A leap second means a day is not 86400 seconds, and thus you'll either get a skip or a repeat on midnight rollover.
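
Spelled out, this is the arithmetic from the POSIX "Seconds Since the Epoch" definition (with the year and leap-day bookkeeping included); a sketch in C, taking a broken-down UTC time:

  #include <time.h>

  /* The POSIX "Seconds Since the Epoch" arithmetic: every day contributes
     exactly 86400 seconds, and leap seconds are nowhere to be found.
     struct tm uses years since 1900, and tm_yday counts from 0. */
  long long posix_seconds_since_epoch(const struct tm *t)
  {
      long long y = t->tm_year;   /* years since 1900 */
      return t->tm_sec
           + t->tm_min  * 60LL
           + t->tm_hour * 3600LL
           + t->tm_yday * 86400LL
           + (y - 70)   * 31536000LL
           + ((y - 69)  / 4)   * 86400LL
           - ((y - 1)   / 100) * 86400LL
           + ((y + 299) / 400) * 86400LL;
  }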


> How does UTC jumping around affect this? If a leap second is removed it doesn't mean you've waited 0 seconds

The OP did not make this claim. He said that waiting 1s does not necessarily increase the Unix time by one second - not the other way around.

Or how my highschool math teacher put it: Every time it rains, the street gets wet. But not every time the street is wet, it rained - maybe my dog just took a leak....


UTC jumping around affects it if you're basing your time-elapsed measure by checking "time.now() - previouslySavedTime". That can give you a negative value, if e.g. your 1/2-second elapsed time crossed a leap second.
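
Which is why, if all you want is elapsed time, you're usually better off with a monotonic clock than with wall-clock subtraction; a quick C sketch:

  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  int main(void)
  {
      struct timespec start, end;

      /* CLOCK_MONOTONIC is unaffected by leap seconds, NTP steps, or
         someone running date(1); CLOCK_REALTIME is not. */
      clock_gettime(CLOCK_MONOTONIC, &start);
      sleep(1);  /* stand-in for the work being timed */
      clock_gettime(CLOCK_MONOTONIC, &end);

      double elapsed = (end.tv_sec - start.tv_sec)
                     + (end.tv_nsec - start.tv_nsec) / 1e9;
      printf("elapsed: %.3f s\n", elapsed);
      return 0;
  }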


One thing to note is that both Google and Amazon smear out the leap seconds https://developers.google.com/time/smear so that this is no longer a falsehood on AWS or GCP. I suspect, over time, more organizations will adopt this approach.
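
As a rough sketch of the arithmetic behind a 24-hour linear smear (the noon-to-noon scheme Google describes, not their actual implementation): the window contains 86401 SI seconds mapped onto 86400 smeared seconds, so each smeared second is about 11.6 microseconds too long and the accumulated offset grows linearly to one full second.

  /* Cumulative offset (in seconds) of a linearly smeared clock behind a
     straight count of SI seconds, t SI seconds into a 24h smear window
     that absorbs one positive leap second.  Illustrative only. */
  double smear_offset(double t)
  {
      if (t <= 0.0)     return 0.0;
      if (t >= 86401.0) return 1.0;
      return t / 86401.0;
  }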


Leap smearing makes #3 no longer false, but #1 and #2 remain false


Also worth noting that even though it "solves" leap seconds, it does nothing to solve normal clock corrections. Clocks drift normally and need to be corrected eventually, and nothing that I'm aware of slews across more than a couple of seconds / within NTP limits.

I.e. disconnect from a time server for a few days/weeks/months/etc. When you reconnect, the clock could be off by seconds/minutes/hours. Odds are pretty good that your system will just jump to the correct time immediately, rather than slewing it out over many times longer than the difference.

The same is true any time you lose your internal clock, e.g. if your machine is shut down and its battery dies. If it were to slew across "all time since jan 01, 1970", it'd probably catch up well after the expected lifetime of the computer.


The article is wrong or misleading.

POSIX (and ISO) time_t is not supposed to "see" the additional leap second at all. POSIX time_t is defined to effectively always have exactly 86400 seconds per day, with no fractional parts. The seconds as defined by POSIX then can't all last exactly as long as atomic seconds, even on the days where a leap second occurs.

Wikipedia article confirms:

"Every day is treated as if it contains exactly 86400 seconds"

But the fact that those seconds don't all last the same, and therefore aren't "the same" as the "atomic clock" seconds, should not matter to normal users.

The graphs in the article with the fractions of the second going backwards just reflect poor implementations in some specific operating systems, libraries or programs. It's not something the POSIX standard prescribes is supposed to happen.

The confusion of common programmers, like the writer of the article or those who implemented the "backwards" behavior, comes from not understanding what they work with. Most users of most computers don't have an atomic clock, so they also can't count atomic-clock seconds. What "normal" computers have are clocks which are much less precise. It's exactly for that kind of use that time_t is designed by POSIX to have exactly 86400 seconds per day -- the absolute error is at most one "atomic" second per roughly half a year, while the error of all the clocks directly available to normal users in their normal computers is bigger.

So "normal" programs which do common human-related scheduling should not even try to care about the leap second. Use something like a "smeared" time as a reference:

https://developers.google.com/time/smear

The SI seconds in that article are the "real atomic clock seconds" -- but caring about them isn't even needed for normal human related computing tasks. If you have a real atomic clock, by all means synchronize it with other atomic clocks. If you have a normal computer, use the smeared time. There will be no "jumps" at all then.

Leave the leap second to the astronomers and others who are doing the "hard" time tasks, they have to care, and they have their own software for that.


I really don't think POSIX intends seconds to vary in size. And while it's hard to measure the difference over a year with a typical clock, it's very easy to measure the skew over a single day. Skewing over a day is not something for only atomic clocks to worry about. And skewing over more than a day would imply that you want to get dates wrong, which is stretching the language quite a bit...


Agree. In other words, computers should use UT1 (where one day is a rotation of the earth, and 86400 seconds; and consequently a second is not an SI second).


Correct, the time_t second is already, in practice, not an "SI second", the latter being defined by the "counts" of "atomic" clocks:

"the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" (at a temperature of 0 K)" https://en.wikipedia.org/wiki/Second

That SI second is what I refer to when I mention an "atomic clock second."

The time_t second is effectively simply one 86400th of a day.


I was aware of leap seconds after working for an online auction website. How to deal with all the auctions that may be ending right at or before the leap second?

Our solution was simple: temporarily pause all the auctions :)

We already had site-wide "auction pause" code as a result of people DDOS'ing the site.


Well, well, I was working in a major bank, in the HFT dept. We had teams all over the world. Our Cisco switches were not able to deal correctly with the leap second, so basically we had to switch them off just before the leap second and switch them back on right after.

Everything went as planned on our side of the earth. But our local team in Japan, managing several colocations in Asia (JP/HK/Singapore/India...), thought this leap second was at midnight LOCAL TIME.

I just remember the mess they had to deal with the day after. Because midnight UTC is not midnight in Japan, and HFT is over-sensitive to time coordination: shifting forward or backward 1 sec can close your connection to the market.


One thing that this article evokes is the 'I'm wrong and my belief is deeply held' aspect which technology people occasionally fall in to. Looking at the comments, with incorrect beliefs expressed as dicta ranging from just plain wrong, to incompletely specified, to ordinary confusion is giving my tech-PTSD a workout.

There are topics in which the ordinary or common sense understanding of a thing actually interferes in understanding how that topic actually acts in reality when looked at closely or under complex conditions.

The concept of time is one of those things. The best thing a naive developer can do when reasoning about time is to first know that almost everything they assume is wrong, and they don't even know what their assumptions are.

The concept of 'location' is another of these topics.

I would like to close this comment with a helpful link to a concise introduction for people to start with in clearing out the 'common sense' assumptions but I haven't ever found one, and haven't invested enough time to write one. Sorry. Links to same will be gratefully received.


Windows (since recently) is probably the only operating system with actual leap-second support: https://techcommunity.microsoft.com/t5/Networking-Blog/Top-1...



Love it. You will continue to capture my heart and my upvotes whenever you post anything about the many nuances of storing and representing time. Or something about Unicode / character sets.


Having spent some time studying time keeping in computers, I've come to the conclusion that nothing needs to be changed. In particular, UTC is exactly what it should be and should be left as it is. However, there are some things that need to be added:

* Every standard library needs properly implemented and properly documented functions for converting between UTC and TAI.

* NTP should (at least optionally) tell the user TAI and UTC (like GPS already does).

* When mounting a legacy file system there should be an option to specify whether timestamps should be interpreted as TAI or UTC.

* New filesystems should have a field that specifies TAI or UTC. It would probably be a single bit for the whole filesystem rather than per timestamp.

* The CLOCK_UTC proposal should be implemented, with tv_nsec in the range 1000000000 to 1999999999 during a leap second.
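
On that last point, the appeal is that a leap second becomes representable instead of being a repeated timestamp. Purely hypothetically (CLOCK_UTC is only a proposal, not an API you can call today), formatting such a value might look like:

  #include <stdio.h>
  #include <time.h>

  /* Hypothetical: render a CLOCK_UTC-style timespec where tv_nsec in
     [1000000000, 1999999999] means "inside a positive leap second",
     i.e. 23:59:60.xxx rather than a repeated 23:59:59. */
  void print_utc_with_leap(struct timespec ts)
  {
      int  in_leap = ts.tv_nsec >= 1000000000L;
      long nsec    = in_leap ? ts.tv_nsec - 1000000000L : ts.tv_nsec;

      struct tm tm;
      gmtime_r(&ts.tv_sec, &tm);

      printf("%04d-%02d-%02d %02d:%02d:%02d.%09ld UTC\n",
             tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
             tm.tm_hour, tm.tm_min,
             tm.tm_sec + (in_leap ? 1 : 0),   /* 59 -> 60 during the leap */
             nsec);
  }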


Leap seconds definitely add way more complexity/uncertainty when dealing with timestamps in the future. I once made a program that would output the amount of time remaining until some time in the future (with the future time represented as a Unix timestamp). I realized that it is simply not possible to report the number of seconds until an event >6 months in the future because we simply don't know if there might be a leap second or not between now and the time in the future. Perhaps the best approach for users would be to smear any leap seconds that are announced so it never has a hard jump, but it's still not ideal because if you really want to count down to the future time you simply can't.


These are nasty little corner-cases. I do wonder if the first two are worth worrying about: For (1) I can't see a use-case where it would be important. For (2) Timing of this granularity is likely going to be done through nanosleep() and the POSIX.1 specification says that discontinuous changes in CLOCK_REALTIME should not affect nanosleep(). For (3) smearing, as Google and Amazon do, will handle it, as pointed out by others: https://developers.google.com/time/smear


A reality that most of these types of blogs don't mention is that you may well not have any applicable data. E.g. users registered before 2000, or any dates at all that care about second precision.

Humans are also "wrong" but happy with that, celebrating birthdays independent of timezones. Some celebrate birthdays independently of the actual date the birth occurred, like the Queen or Jesus or anyone born on the 29th of Feb.

It's far more likely your clock goes backwards because you fuck up your ntp config than for any other reason.


So, if I adopt smearing for my NTP, everything works fine.


Except then, a second isn't actually a second.


But that's always true anyway because of clock drift, skew, etc. Otherwise we wouldn't need NTP in the first place.


But it pretty much is.

Einstein showed us seconds aren't seconds anyway, unless you happen to share the same inertial frame.


I’m not convinced this article is correct.

Posix defers to ISO C where they differ: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf

See page 391. The encoding of Unix time is explicitly unspecified there.

Posix goes on to say: http://pubs.opengroup.org/onlinepubs/9699919799/

“ The time() function shall return the value of time [CX] [Option Start] in seconds since the Epoch. [Option End]”

So, Unix time is optionally seconds since the epoch, with no further guidance about leap seconds.

Also, the spec makes it clear that time_t needs to be converted into the appropriate time zone, which suggests it does not reflect leap seconds.

I’d be convinced by source code or documentation for both BSD and Linux showing they’re intentionally not posix compliant on this front, and apply leap seconds to Unix ticks and not their time zone conversions.


> If I wait exactly one second, Unix time advances by exactly one second

There's a more insidious problem here - that a computer's internal representation of a second actually falls in line with an actual second. Quartz clocks are, at best, approximations. Temperature adjusted approximations at that.

Without NTP and its ilk, computers would be a complete disaster when it comes to keeping regular time.


chrony[1] neatly solves these and a host of related issues by guaranteeing time increases monotonically on a given host, speeding up or slowing down the clock appropriately. It's a nicer alternative to ntpd. Also, AWS recommends it if you are using their Time Sync service (which is GPS-locked atomic clocks in every region with leap second smearing).

[1]


The reference implementation of NTP from ntp.org will do clock slewing by default, if the offset is within certain limits of reasonableness.

And it has done so for probably at least ten or fifteen years.


> It's a nicer alternative to ntpd.

define nicer?


ntpd is a pretty weird daemon in the same way that bind is. The config format is insecure and impossible to understand; if you try to change it you'll probably break it.


  Unix time assumes that each day is exactly 86,400 seconds long (60 × 60 × 24 = 86,400), leap seconds be damned
I don't think I understand this claim. Unix time has no concept of a "day". Leap seconds increment UTC time, but don't add anything to the number of seconds that have elapsed since the unix epoch.


That's one of the falsehoods. Unix time is not equal to the number of elapsed seconds since the epoch, it is equal to N x 86400 + s, where N is the number of days that have elapsed since the epoch, and s is the number of seconds that have elapsed since midnight.


Perhaps I don't fully understand the reasoning that went into these things when they were decided, but I think the following would be more sane:

- Hardware clocks track Terrestrial Time, and TT is used for timestamps and all timekeeping that doesn't care about where exactly on the planet you are

- Leap seconds are treated as part of the timezone data. UTC is treated as just another timezone, with the appropriate leap second offset given by the timezone data for that date and time

- NTP keeps hardware clocks synchronized to TT and also carries updates to timezone data (including leap seconds)

This doesn't solve the problem of hardware clocks jumping backwards or forwards in time - hardware clocks can drift or be misset etc. and get updated - but I can't help thinking that much of the pain around time and timezones is caused by basing our timekeeping on UTC rather than TT.


just an anecdote about UTC vs GPS time. GPS time doesn't have leap seconds.

So my team was testing a system with some devices, one of which was a GPS and the main system had UTC from NTP. We had a big display that showed all our data including both times, so we could monitor what was going on. So the two displayed times were 13 seconds apart (the number of leap seconds then). Our program manager was a smart guy but gaffe prone. So in a demonstration of our system he blurted out to the whole room of observers, 'hey something is wrong, those two times are different'. We cringed and explained, but it sounded like we were covering up an error. But he would go on to repeat the gaffe again to a different group.


Perhaps a stupid question: Why isn't there a time standard that is monotonic and defined simply in terms of seconds, without attempting to match the movement of the earth (no leap seconds, no negative seconds, no daylight savings, no complicated calendar politics)?

If such standard existed, wouldn't it be the best to use for programming, with "simple" conversions to/from the all the other standards?

Basically, I want a monotonic clock that starts at an arbitrary point (I would suggest Isaac Newton's birthday), is able to go all the way back to the big bang, and forward until the heat death of the universe, with millisecond or better precision.


Not a stupid question...it's the central question here.

Unix time is set up to allow programmers to assume every day has the same number of seconds. Is this the best approach, or would it have been better to try to educate everyone not to make that assumption and to use a standard library for all UTC calendaring?


TAI is what you want.


I can’t WAIT until we have to deal with time dilation problems on networks and certificates.


I think we should have a scientific definition of time -- something that is highly static and precise (such as the time it takes for an atom to vibrate a certain number of times, or the time it takes light in a vacuum to travel a certain distance) -- and a cultural definition of time that is looser than UTC (there is always the same number of seconds in a day, but maybe some days have shorter seconds than other days).

Cultural time is fine and dandy for human level stuff. Keep that simple. Scientific time for business, engineering and scientific stuff.


Time might not be too hard to define (relativistic effects aside), but the problem is days and years. The reason for inserting and removing seconds is that the rotation of the earth is NOT static and precise. It changes, so seconds are added and removed to keep the day from drifting.

https://en.wikipedia.org/wiki/Day_length_fluctuations


> Cultural time is fine and dandy for human level stuff. Keep that simple. Scientific time for business, engineering

Business and engineering are chock-full of human-level stuff. And scientific stuff that isn't human-level either doesn't use UTC (astronomy) or is not impacted (because it deals pretty much only with sub-day durations and everything works based on the SI second only).


I've been thinking about datetime APIs and how most of them become extremely tedious when you have to account for daylight saving, countries changing time zones et cetera. I was actually planning to write my own implementation for $language using unix timestamps as internal representation and requiring a timezone whenever parsing or printing. But then I considered leap seconds.

I don't know if there are libraries that can handle leap seconds, or if everyone is just counting on NTP sync fixing things whenever a leap second occurs.


> I was actually planning to write my own implementation for $language using unix timestamps as internal representation and requiring a timezone whenever parsing or printing

That does not and cannot work when trying to represent future local events, which are the vast majority of them, as an event normally happens relative to a specific geographic location.

Astronomical events are more or less the only ones which actually routinely get planned in TAI / TT, and astronomical software is thus the only one for which this model could actually work. And then you wouldn't be using unix timestamps (because it's UTC).


Yes. Almost everything is subtly broken, and I realized my attempt would be broken too. Didn't feel like a fun hobby project once I realized that.



Is it also possible for Unix time to go backwards with NTP? I assume that's much more common in practice than leap seconds?


ntpd will only move the clock backwards if the gap is too big and you run a command. Otherwise, it will only slow down the clock and possibly alert you that it's having trouble catching up.


My Unix time question is, when I do “sudo date <DDMMhhmmyyyy>” and then get a sudo password prompt, is the time supposed to take effect before or after the sudo is authenticated?

Empirically it seems to be after, but that seems wrong; shouldn't the result of the command be the same regardless of how long it takes to type the sudoer's pw?


After, because sudo won’t run any command until it authorizes that you’re allowed to run the command.


> Unix time is the number of seconds since 1 January 1970 00:00:00 UTC
> If I wait exactly one second, Unix time advances by exactly one second
> Unix time can never go backwards

I think it’s okay to say that these things are generally true with exception of leap seconds. Leap seconds don’t make these statements untrue.


Why does Unix time first travel forward one second, then go backwards one second, versus "pausing" (a flat line on the graph) for one second? Both have their pros and cons; if the graph accurately depicts how Unix time is implemented, why was this decision made versus the other?


I found this out some years ago and for some reason I felt a profound sense of betrayal. :D


For those interested in this kind of topic, I cannot recommend anything more than subscribing to the time-nuts mailing list: http://leapsecond.com/time-nuts.htm


Another falsehood, although it goes against the POSIX standard: the epoch is midnight in UTC. There is at least one obscure Unix-like OS (can't remember the name) where the epoch is 1970-01-01 in the local timezone.


To see the current value for different time standards (e.g. UTC, GPS, TAI): http://leapsecond.com/java/gpsclock.htm


This is why I use smalltime when storing time values.

https://github.com/kstenerud/smalltime


A falsehood that Alex Chan believes about UNIX:

No-one uses the Olson "right" TZ data files.

What is stated in M. Chan's article is only true when using the "posix" TZ data files. But that's not the only option.

* https://unix.stackexchange.com/a/327403/5132

* https://unix.stackexchange.com/a/334029/5132

* https://unix.stackexchange.com/a/294715/5132



A whole second of leap? So grainy. Planck scale or go home.


The author uses graphs with quarter-second increments because they make it look weirder. It's not that weird for time to stand still for 1 second.


It is when your video and audio buffers need to be updated 60 and 200 times a second. And it'll cause all kinds of exciting effects on databases or very active network connections.


Who cares if we're off by 1 second so long as we're all off by the same amount? Maybe we should all wait until this lag adds up to 1 minute before we adjust things instead of perpetually reaching for abstract perfection


Or conversely why don't we add leap deciseconds, or centi or milliseconds?


It seems to me that the one-second adjustment is exactly the right compromise. It's small enough to be ignored for most practical purposes (many clocks are out by a second anyway), and it means that the offset between TAI and UTC is a round number (37 s rather than 36.852 s). Leap seconds are frequent enough for people to get a bit of practice at handling them, yet rare enough that you can avoid them if you're not confident about handling them: don't schedule any rocket launches for 00:00 on Jan 1 or Jul 1.


Falsehood #2 is false even for inserted leap seconds. If you wait one second from 23:59:60.00 to 00:00:00.00, Unix time has advanced by zero seconds.



So when we convert Unix time to Y-m-d format, are those conversion algorithms aware of the leap seconds?

Is it hard coded in there somewhere?


Question: would anything go wrong on a Unix system if you used TAI as your timezone, rather than UTC? TLS, maybe?


TLS doesn't care about a 30 second difference in clocks, unless you're running your certs really close to the notBefore/notAfter. There's enough broken systems out there that it makes sense to not use certs that don't have at least 24 hours of margin on either side.


Never knew about this... interesting article!


a fun command to run: cal 9 1752


Basically, everyone who has a clue about time standards uses TAI for internal representation in non-relativistic applications.

In other news: most software is brain-dead and most software engineers lack basic education in pretty much everything other than composing tons of terminally boring code out of a few LEGO shapes provided by programming languages.


...oh, and there's also the issue of time domains and time sources having different spectral shapes of their noise.


This guy is confusing Unix time with local time. This statement:

> Unix time is the number of seconds since 1 January 1970 00:00:00 UTC

Is true regardless of the calendar or leap seconds. Think of seconds in terms of some physical phenomenon, like how many times a certain atom trapped in a crystal lattice vibrates, and you see that doesn't depend on the calendar. Converting Unix time to local time obviously has to take that into account, but we still need an absolute measure of our progress along the timeline, which is what Unix time provides.

This is why we use Unix time, it's the same everywhere and nothing short of relativity can affect it.


> > Unix time is the number of seconds since 1 January 1970 00:00:00 UTC

> Think of seconds in terms of some physical phenomena, like how many times a certain atom trapped in a crystal lattice vibrates and you see that doesn't depend on the calendar

Unix time is NOT the number of physical seconds since 1970 UTC. It's the number of Unix Seconds since 1970. Every day has 86400 Unix Seconds. Some UTC days have more than 86400 physical seconds. Unix time cannot represent the seconds beyond 23:59:59 on a UTC day, but otherwise attempts to match UTC.


Unix seconds ARE physical seconds.

> Every day has 86400 Unix Seconds.

Except a day with a leap second in it.

> Unix time cannot represent the seconds beyond 23:59:59 on a UTC day, but otherwise attempts to match UTC.

Err...it literally represents ~49 years beyond 23:59:59 on Day1 of UTC.


What's the Unix Time for the UTC second 1985-06-30 23:59:60? My understanding is this physical second is not representable in unix time.

How about the UTC seconds before and after that one, 1985-06-30 23:59:59 and 1985-07-01 00:00:00? My understanding is that the first is 489023999, and the second is 489024000.

There is one unix second, but two physical seconds between the start of the two times.


> What's the Unix Time for the UTC second 1985-06-30 23:59:60? My understanding is this physical second

That's not a physical second, it's a calendar second.


Every UTC calendar second represents a physical second.

On most days, every unix second corresponds with exactly one UTC second and they all correspond with exactly one physical second, and each one could be measured as the number of vibrations of some particular atom.

On a day with a positive UTC leap second, it's different. At 12:00:00 UTC, it's 12:00:00 unix time, the next day at 12:00:00 UTC, it's also 12:00:00 unix time; 86401 physical seconds have passed, and UTC has counted 86400 calendar seconds, one of which was a leap second, but unix has only counted 86400 seconds.

If you're running leap smearing, all of the unix seconds in that day are a little bit longer than the physical seconds (the exact details depending on your smear technique). If you're using classical techniques, 23:59:59 will be two physical seconds long, and the fractional second will reset to zero as the second physical second starts and count up again.

In contrast to UTC, and Unix Time, TAI always has exactly 86400 physical seconds per day, but after a UTC leap second, both UTC and Unix Time will be offset from TAI by an additional second.
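
A concrete example, using the leap second at the end of 2016 (the 23:59:60 row is the one a plain Unix timestamp can't distinguish):

  UTC label              Unix time
  2016-12-31 23:59:59    1483228799
  2016-12-31 23:59:60    1483228799  (typically repeated, or smeared away)
  2017-01-01 00:00:00    1483228800  (TAI-UTC steps from 36 s to 37 s here)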


If I understand what people are saying correctly, this is actually not true; leap seconds cannot be referenced in Unix time and are not included in the unix timestamp value. They're just ghost seconds during which the counter stopped for one second. Which is not the way the system should have been designed, in my humble opinion. All seconds should be referenceable.


> leap seconds can not be referenced in Unix time

UNIX time does not represent/reference calendar time, it represents the number of physical seconds since 1970-01-01 UTC. How you translate that into local calendar time is up to you!


Isn't this the opposite of what the article says?


Yes.


The article's assertion that Unix time treats days as 86,400 seconds appears correct to me. Both Python and Javascript report 12-01-2019 UTC as the Unix timestamp 1575158400, 12-01-2000 UTC as Unix timestamp 975628800, and the difference between those is 599529600, which is exactly 19*365 days + 4 leap days as 86,400-second days.

The leap seconds added in 2005, 2008, 2012, 2015, and 2016 are all uncounted there. If Unix timestamps were the count of physical seconds that had passed since 1970/1/1 UTC, then the end result of my test should have been 599529605 instead of 599529600.
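
For what it's worth, the same check in C gives the same answer; this sketch assumes timegm(), a non-standard but widely available glibc/BSD extension for UTC conversion:

  #define _DEFAULT_SOURCE  /* for timegm() on glibc */
  #include <stdio.h>
  #include <time.h>

  static time_t utc_midnight(int year, int month, int day)
  {
      struct tm tm = {0};
      tm.tm_year = year - 1900;
      tm.tm_mon  = month - 1;
      tm.tm_mday = day;
      return timegm(&tm);   /* like mktime(), but interprets tm as UTC */
  }

  int main(void)
  {
      time_t a = utc_midnight(2000, 12, 1);   /* 975628800  */
      time_t b = utc_midnight(2019, 12, 1);   /* 1575158400 */
      /* 6939 days * 86400 = 599529600; the 5 intervening leap seconds
         are nowhere to be found. */
      printf("diff = %lld (= %lld days exactly)\n",
             (long long)(b - a), (long long)((b - a) / 86400));
      return 0;
  }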


If what you said were true, then UNIX time would be at a fixed offset from TAI. But it's not.


Are you saying Unix time doesn't get adjusted when there are calendar changes in the box?


Correct. When was the last time you changed UNIX time due to summer time/daylight savings?


It follows UTC, so that doesn't apply. From what I understand, it does follow things like leap seconds, though.


> So far, there’s never been a leap second removed in practice (and the Earth’s slowing rotation means it’s unlikely)

Oh, okay. Thanks!


Earthquakes and other geological events can speed up rotation, but so far never enough to require removing a leap second.


tl;dr: you should know about leap seconds.


Can this cause another "millennium bug" chaos?


Unix time is the number of seconds since 1970. Seconds are a physical thing: Unix time references the number of "cycles of the radiation produced by the transition between two levels of the cesium 133 atom". UTC measures time. This article is silly.


> Unix time is the humber of seconds since 1970.

It's not, and it never has been. The original unix time was "the time since 00:00:00, 1 January 1971, measured in sixtieths of a second"; this got modified multiple times until it settled on the number of UTC seconds since 1970-01-01T00:00:00Z, meaning it's non-monotonic, non-continuous, and not based on "physical" seconds.



