Well, 3600 seconds can be expressed much more simply than that: 'PT1H', which is more human readable than '3600'. The point of these interval formats is a compromise between human and machine readability: we can more easily grasp long periods of time expressed in unit values rather than in milli/microseconds.
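To make the tradeoff concrete, here's a minimal sketch (my own, not anything standardized; a real program should use a proper ISO 8601 library) that turns a simple time-only duration like 'PT1H' into seconds with just the Python standard library:

import re
from datetime import timedelta

# Handles only the time portion (PT...H...M...S); year/month/week/day parts are left out on purpose.
DURATION_RE = re.compile(r"^PT(?:(?P<hours>\d+)H)?(?:(?P<minutes>\d+)M)?(?:(?P<seconds>\d+)S)?$")

def parse_simple_duration(text):
    match = DURATION_RE.match(text)
    if match is None:
        raise ValueError("unsupported duration: %r" % text)
    return timedelta(**{k: int(v) for k, v in match.groupdict(default="0").items()})

print(parse_simple_duration("PT1H").total_seconds())        # 3600.0
print(parse_simple_duration("PT3H45M15S").total_seconds())  # 13515.0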
> 'PT1H', which is more human readable than '3600'
Not to mention six years, four months, four days, three hours, forty-five minutes and fifteen seconds, which is nothing short of 200072715 seconds (which my brain definitely wants to parse as two billion and something).
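For contrast, here's what you can do with the raw seconds count alone (a quick sketch of mine): divmod gets you as far as days, but recovering years and months needs a calendar anchor, which is exactly what the spelled-out ISO 8601 duration already gives you.

total = 200072715
minutes, seconds = divmod(total, 60)
hours, minutes = divmod(minutes, 60)
days, hours = divmod(hours, 24)
print(days, hours, minutes, seconds)  # 2315 15 45 15 -- days at best; years/months are anyone's guess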
Nonsense, it's very readable. You could tell someone who can't code to say "period" for "P", "hour" for "H", "time" for "T" etc. and they'd be able to read out the period exactly and accurately every time. They would also know how to write their own forms of this. Humans can also look at it and know, intuitively, without a calculator, how long it is.
Trying to get them to multiply out seconds (incl. all the fun with leap seconds!) would be hard.
What is all this "Human Readable" nonsense? Who the hell is reading all these date-times?
If two pieces of software are passing dates around, use a UNIX UTC timestamp. If a human wants to read it, they're probably a programmer and know how to parse a timestamp. If they're not a programmer, then you shouldn't be showing them unformatted date-times anyway.
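For what it's worth, here's that workflow in a couple of lines (my sketch, nobody's production code): one side emits a plain Unix timestamp, the other side's programmer turns it back into something readable.

import time
from datetime import datetime, timezone

ts = int(time.time())  # producer: seconds since the epoch, implicitly UTC
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())  # consumer: back to a readable form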
(1) dates/times before 1970 cannot be represented using a Unix timestamp
(2) you'll have much fun with leap seconds
(3) a Unix timestamp is just a number (represented as floating point), not much different from all other numbers. A standard date representation, on the other hand, has a unique format that's distinguishable from all other data types. This comes in handy not only for humans reading it, but also for parsers of weak formats such as CSV (see the sketch after this comment).
Also, data lives longer than both the code and the documentation describing it, which is the main argument for text-based protocols.
(4) programmers are people with feelings too and would rather read human readable dates (since they are so common) than look up the boilerplate for converting a Unix timestamp to a human readable format. Here's the Python version, which I know by heart just because I had to deal with a lot of timestamps:
import time
from datetime import datetime

timestamp = time.time()  # now
print(datetime(*time.gmtime(timestamp)[:6]).strftime("%c"))  # expand (Y, M, D, h, m, s) from the UTC struct_time
Oh wait, that gives me the date in UTC. Ok, that date is meant for me, the programmer that knows what he's doing, so here's how I can read it:
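Something like this local-time variant, presumably; a sketch of mine with the same standard-library calls:

import time
from datetime import datetime

timestamp = time.time()
print(datetime(*time.localtime(timestamp)[:6]).strftime("%c"))  # same boilerplate, local time this time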
Great, now I should write a blog post about it. Or you know, just use standard date formats that are human readable, because there's a reason why our forefathers used the date representations that we use today, instead of counting scratches on wood or something ;-)
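On point (3) above, a tiny sketch (mine, with made-up column names) of why that distinguishability matters for a weak format like CSV: the ISO 8601 string is recognizably a date, while the raw number next to it could be anything.

import csv, io
from datetime import datetime

# Made-up CSV: one column is an ISO 8601 date, the other just a number.
data = io.StringIO("created,amount\n2009-02-13T23:31:30+00:00,1234567890\n")
for row in csv.DictReader(data):
    created = datetime.fromisoformat(row["created"])  # unambiguously a date
    amount = float(row["amount"])  # 1234567890 happens to be the very same instant as a Unix timestamp, but nothing in the file says so
    print(created, amount)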
True, but many libraries still don't know how to deal with negative timestamps. For instance, PHP on Windows prior to 5.1.0. MySQL's FROM_UNIXTIME also wasn't working with them last time I tried it.
And many applications and scripts can break, like those that store and manipulate timestamps assuming a certain format (e.g. positive integers).
The Unix timestamp was created to deal with timestamps, not with date representations. Therefore this requirement (dates prior to 1970) was not an issue that had to be dealt with explicitly.
This is good general advice, but it breaks for things that don't fit the Unix timestamp assumption of second-level accuracy: I need ISO 8601 because I handle dates which at best have day-level precision and often just a month or a year. If I'm formatting dates, there's an implied precision difference between displaying '1923-1927' and 'January 1st 1923 to January 1st 1927'.
If you need to handle variable precision, time zones, ranges, etc., you can either invent your own format or use ISO 8601, which at least has the virtue of being more human readable and more likely to have been encountered by people before.
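To show the precision point with code (a sketch with made-up dates; the comments, not any API, carry the precision distinction): the same pair of dates can be rendered at year or day precision, and ISO 8601 notation covers both, using '/' for ranges.

from datetime import date

start, end = date(1923, 1, 1), date(1927, 1, 1)
print("%s/%s" % (start.strftime("%Y"), end.strftime("%Y")))  # 1923/1927 -- year precision, no claim about exact days
print("%s/%s" % (start.isoformat(), end.isoformat()))        # 1923-01-01/1927-01-01 -- day precision is now implied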