
Probably preparing for the year 9999 issue early.



Most systems don't have a Y10k issue: year numbers are variable-width already. Forcing a fixed-width 5-digit representation is creating a Y100k issue.
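A rough sketch of the point in Python (the language is arbitrary, just for illustration; none of this is from the thread): a variable-width decimal year keeps working past 9999, while anything that assumes a field of exactly five digits breaks at year 100000.

    for year in (1999, 9999, 10000, 99999, 100000):
        variable = str(year)      # grows as needed: no Y10k problem
        fixed5 = f"{year:05d}"    # zero-padded to 5 digits
        print(year, variable, fixed5, len(fixed5))

    # At year 100000 the "5-digit" field silently becomes 6 characters,
    # overflowing any storage or parser that assumed exactly 5.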


It never made sense to me how any software would treat the year value as _two characters_ when an integer works far better. There's even a 0 epoch date, for crying out loud.
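A minimal sketch of why the integer wins, again in Python for illustration only: a two-character year makes 1900 and 2000 indistinguishable, while an integer count of seconds since an epoch (here the Unix epoch, 1970-01-01) keeps every date distinct.

    import datetime

    for year in (1900, 1999, 2000):
        # two characters: 1900 and 2000 both collapse to "00"
        print(year, "->", f"{year % 100:02d}")

    # An integer epoch timestamp has no such ambiguity.
    dt = datetime.datetime(2000, 1, 1, tzinfo=datetime.timezone.utc)
    print(int(dt.timestamp()))  # 946684800 seconds since 1970-01-01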


Possibly the origins of the practice predate computers entirely, going back to their predecessors in business data processing, unit record equipment. As such, it may be like many other traditions - it made complete sense when it was invented, yet people still clung to it long after it ceased to do so.

Unit record equipment generally didn't use binary integers, instead using BCD (binary-coded decimal), and most early business-oriented computers followed its example. Binary integers were mainly found on scientific machines (along with floating point) until the mid-1960s, when the hard division between business computing and scientific computing faded away and general-purpose architectures, equally capable of commerce and science, began to flourish.
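A minimal sketch of the encoding difference, in Python rather than anything period-accurate: packed BCD stores each decimal digit in its own 4-bit nibble, so the stored bytes mirror the decimal digits, whereas a binary integer does not.

    def to_packed_bcd(n: int) -> bytes:
        """Pack the decimal digits of n into 4-bit nibbles, two per byte."""
        digits = str(n)
        if len(digits) % 2:
            digits = "0" + digits
        return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                     for i in range(0, len(digits), 2))

    year = 1999
    print(to_packed_bcd(year).hex())      # '1999' -- the decimal digits show up in hex
    print(year.to_bytes(2, "big").hex())  # '07cf' -- the same year as a binary integer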


Because when the Y2K problem was created, computers had only recently been invented, every byte counted, and there was no history of software best practices.

Even by 1980, well before the Y2K problem was on most people's radar, an IBM mainframe of the kind a university might have would come with perhaps 4 MB of main memory, which had to support many concurrent users.

Such computers would have been unable to load even a single typical binary produced by a modern language like Go or Rust into memory - yet they supported dozens of concurrent users and processes, doing everything from running accounting batch jobs, to compiling and running programs in Assembler, COBOL, FORTRAN, PL/I, or APL, to hosting interactive sessions in languages like LISP or BASIC. Part of how they achieved all that was by not wasting any bytes they didn't absolutely have to.


I think this reflects a generational divide from when computers were significantly more limited. If you look at older data formats, they are nearly all fixed width, whether due to punch cards, memory constraints, performance considerations, or just bandwagoning. From today's perspective, prematurely optimizing the date representation seems silly.


Early mainframes read data from punch cards or tape files, which were character-based (or possibly binary-coded decimal). Two digits were used for the year to save memory.


I was under the impression it was a quirk of COBOL.


no



