
What does this mean? Are you okay if you're using 64 bits? I just briefly skimmed the 2038 Wikipedia page; it mentioned 32-bit.



If you store time as seconds since the Unix epoch (1 Jan 1970), you'll overflow a 32-bit unsigned integer in 2038 (around March, iirc) and time will suddenly be back in 1970 again. I believe the Linux kernel did some work to circumvent this on 32-bit systems in a recent release, but if you're running an old 32-bit system you're probably out of luck.


Actually, it's signed 32-bit integers that overflow in 2038. Signed integers have been used because people wanted to store dates earlier than 1970 too.
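
For illustration, a minimal C sketch of where the signed counter runs out (the 32-bit time_t is modelled explicitly as int32_t so the example behaves the same on a 64-bit machine):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        int32_t last = INT32_MAX;   /* 2147483647 seconds after the epoch */
        time_t t = (time_t)last;
        printf("last representable second: %s", asctime(gmtime(&t)));
        /* Prints Tue Jan 19 03:14:07 2038 (UTC). One second later, a real
           32-bit signed time_t would wrap to INT32_MIN, back in December 1901. */
        return 0;
    }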


And probably because signed integers are the default choice in certain languages, and maybe on certain architectures.

Java, for example, famously doesn't even have an unsigned 32-bit integer primitive type. (But it has library functions you can use to treat signed integers as unsigned.) Ultimately not a good design choice, but the fact that it actually wasn't that limiting and relatively few people care or notice tells you that many people have a mindset where they use signed integers unless there's a great reason to do something different.

Aside from just mindset and inertia, if your language doesn't support it well, it can be error-prone to use unsigned integers. In C, you can freely assign from an int to an unsigned int variable with no warnings, and you can do a printf() with "%d" instead of "%u" by mistake. And I'm fairly sure that converting an out-of-range unsigned int to an int results in random-ish (implementation-defined) behavior, so if you accidentally declare a function parameter as int instead of unsigned int, thereby doing an unsigned-to-signed and back-to-unsigned conversion, you could corrupt certain values without any compiler warning.
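
A small C sketch of the pitfalls described above; the names are made up for illustration, and the concrete values assume a typical 32-bit int on a two's-complement platform:

    #include <stdio.h>

    /* Parameter accidentally declared int instead of unsigned int. */
    static void process(int offset) {
        printf("processing offset %d\n", offset);
    }

    int main(void) {
        unsigned int big = 4000000000u;  /* does not fit in a 32-bit int */

        int copy = big;   /* implicit unsigned-to-signed conversion; no warning
                             by default (gcc/clang need -Wsign-conversion) */

        printf("big = %d\n", big);  /* "%d" where "%u" was meant: prints a
                                       negative number on typical platforms */

        process(big);     /* the same silent conversion at the call site;
                             copy and the parameter typically come out as
                             -294967296 */
        return 0;
    }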


> Ultimately not a good design choice, but the fact that it actually wasn't that limiting and relatively few people care or notice

It was a horrific design choice, and people do notice and do hate it.

Not so much because of how it applies to ints, but because the design choice Java made was to not support any unsigned integer types. So the byte type is signed, conveniently offering you a range of values from -128 to 127. Try comparing a byte to the literal 0xA0. Try initializing a byte to 0xA0!

In contrast, C# more sensibly offers the types int / uint, short / ushort, long / ulong, and byte / sbyte.


> out of range unsigned int

AFAIK, that doesn't exist? Otherwise, unsigned would also have UB?


I think he's referring to an unsigned integer value that's out of range for the signed integer type of the same width—usually a bit pattern that would be interpreted as a negative number when cast to a signed integer type, but which negative number is not defined by the C standard because it doesn't mandate two's complement representation.
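
A tiny C illustration of that conversion (the concrete result assumes a two's-complement implementation; the standard only promises something implementation-defined):

    #include <stdio.h>

    int main(void) {
        unsigned int u = 0xFFFFFFF0u;  /* out of range for a 32-bit int */
        int s = (int)u;                /* implementation-defined result; on a
                                          two's-complement machine this is -16 */
        printf("%d\n", s);
        return 0;
    }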


I'm reading that as, the value of the uint is out of range for the int.


Ah yes, thanks for the correction :)


One of the biggest mistakes in IT ever, in my opinion.

I'd even go so far as to say that defaulting to signed instead of unsigned was also one of the biggest blunders ever. I would never have defaulted to a type that inherently poses the risk of UB when another type exists that doesn't.

Though it's also possible that precisely that was the reasoning for it.

Some more thoughts on unsigned vs. signed: https://blog.robertelder.org/signed-or-unsigned/
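
For reference, a minimal C sketch of the asymmetry behind that risk: unsigned arithmetic is defined to wrap modulo 2^N, while signed overflow is undefined behaviour:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned int u = UINT_MAX;
        printf("%u\n", u + 1u);  /* well defined: wraps around to 0 */

        int s = INT_MAX;
        printf("%d\n", s + 1);   /* undefined behaviour: the compiler is
                                    allowed to assume this never happens */
        return 0;
    }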


Related is NTP, which overflows in 2036 because it uses an unsigned 32-bit count of seconds since 1900.


That's technically correct, but NTPv4 has the concept of eras [1]: the era number gets incremented when that 32-bit field would normally wrap.

[1] https://tools.ietf.org/html/rfc5905#section-6
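
A rough C sketch of the era arithmetic (the 2,208,988,800-second constant is the gap between the NTP epoch of 1900 and the Unix epoch of 1970; the variable names are illustrative, not taken from any NTP implementation):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Seconds between the NTP epoch (1 Jan 1900) and the Unix epoch (1 Jan 1970). */
    #define NTP_UNIX_OFFSET 2208988800ULL

    int main(void) {
        uint64_t ntp_secs = (uint64_t)time(NULL) + NTP_UNIX_OFFSET;

        uint32_t era        = (uint32_t)(ntp_secs >> 32);  /* 0 until early 2036 */
        uint32_t era_offset = (uint32_t)ntp_secs;          /* the wrapping 32-bit
                                                              seconds field */
        printf("era=%u offset=%u\n", era, era_offset);
        return 0;
    }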



