Actually, it's signed 32-bit integers that overflow in 2038. Signed integers were used because people wanted to be able to represent dates before 1970 too.
And probably because signed integers are the default choice in certain languages, and perhaps on certain architectures.
Java, for example, famously doesn't even have an unsigned 32-bit integer primitive type. (But it has library methods, like Integer.compareUnsigned and Integer.toUnsignedLong, that let you treat signed integers as unsigned.) Ultimately not a good design choice, but the fact that it actually wasn't that limiting and relatively few people care or notice tells you that many people have a mindset where they use signed integers unless there's a great reason to do something different.
Aside from mindset and inertia, if your language doesn't support them well, unsigned integers can be error-prone to use. In C, you can freely assign an int to an unsigned int variable (or vice versa) with no warning under default compiler flags. You can do a printf() with "%d" instead of "%u" by mistake, which is formally undefined behavior. And converting an out-of-range unsigned int to an int gives an implementation-defined result, so if you accidentally declare a function parameter as int instead of unsigned int, thereby doing an unsigned-to-signed-and-back round trip, you can silently corrupt values without any compiler warning.
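A minimal sketch of the kind of silent trap I mean (halve() and the specific numbers are made up for illustration; assumes a typical two's-complement platform):

```c
#include <stdio.h>

/* Hypothetical helper: the intent is "halve an unsigned timestamp",
 * but the parameter was accidentally declared int instead of unsigned int. */
static unsigned int halve(int t)
{
    return t / 2;   /* signed division on a value that was meant to be unsigned */
}

int main(void)
{
    unsigned int big = 3000000000u;  /* a perfectly valid unsigned value, but > INT_MAX */

    int s = big;          /* accepted silently under default compiler flags */
    printf("%d\n", big);  /* wrong specifier for an unsigned int: formally undefined behavior */

    /* On a typical two's-complement machine, big turns into a negative int inside
     * halve(), so the signed division yields 3647483648 instead of the intended
     * 1500000000, and the compiler says nothing by default. */
    printf("%u\n", halve(big));
    printf("%u\n", big / 2u);        /* what was actually meant */

    (void)s;
    return 0;
}
```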
> Ultimately not a good design choice, but the fact that it actually wasn't that limiting and relatively few people care or notice
It was a horrific design choice, and people do notice and do hate it.
Not so much because of how it applies to ints, but because the design choice Java made was to not support any unsigned integer types. So the byte type is signed, conveniently offering you a range of values from -128 to 127. Try comparing a byte to the literal 0xA0. Try initializing a byte to 0xA0!
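To keep the examples in C: signed char has the same semantics as Java's byte (on a typical platform), so the trap looks like this sketch, and the fix in Java is the identical & 0xFF dance:

```c
#include <stdio.h>

int main(void)
{
    /* signed char behaves like Java's byte: 8 bits, range -128..127 */
    signed char b = (signed char)0xA0;   /* without the cast, 160 doesn't fit */

    /* b is promoted to int (-96 here) before the comparison, so this is false
     * even though the stored bit pattern is exactly 0xA0. */
    if (b == 0xA0)
        printf("matches\n");
    else
        printf("does not match: b is %d\n", b);   /* prints -96 */

    /* The usual workaround: mask back into the unsigned 0..255 range first. */
    if ((b & 0xFF) == 0xA0)
        printf("matches after masking\n");

    return 0;
}
```

In Java, `byte b = 0xA0;` doesn't even compile without an explicit `(byte)` cast, and the comparison against 0xA0 fails for the same sign-extension reason.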
In contrast, C# more sensibly offers the types int / uint, short / ushort, long / ulong, and byte / sbyte.
I think he's referring to an unsigned integer value that's out of range for the signed integer type of the same width: a bit pattern that would usually be read as a negative number when converted to the signed type, except that exactly which negative number isn't specified by the C standard, which historically didn't even mandate two's complement representation and still leaves that conversion implementation-defined.
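For example (the value is arbitrary, and the -10 result is what a two's-complement machine gives you, not something the standard promises):

```c
#include <stdio.h>

int main(void)
{
    unsigned int u = 0xFFFFFFF6u;   /* fine as an unsigned value: 4294967286 */
    printf("%u\n", u);              /* 4294967286 */

    /* The value doesn't fit in int, so the result of the conversion is
     * implementation-defined. On two's-complement machines it comes out
     * as -10, but that's the platform talking, not the standard. */
    int i = (int)u;
    printf("%d\n", i);              /* typically -10 */

    return 0;
}
```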
One of the biggest mistakes in IT ever, in my opinion.
I'd even go so far as to say that defaulting to signed instead of unsigned was also one of the biggest blunders ever. I would never have defaulted to a type that inherently carries the risk of undefined behavior on overflow when another type without that risk was available (see the sketch below).
Though it's also possible that precisely that was the reasoning for it.
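For the record, the UB in question is overflow: unsigned arithmetic is defined to wrap modulo 2^N, signed overflow is undefined, and optimizers genuinely exploit that. A small sketch (always_grows() is just an illustration):

```c
#include <limits.h>
#include <stdio.h>

/* Because signed overflow is undefined behavior, the compiler may assume
 * it never happens and fold this whole function to "return 1". */
static int always_grows(int x)
{
    return x + 1 > x;
}

int main(void)
{
    unsigned int u = UINT_MAX;
    printf("%u\n", u + 1u);                  /* well defined: wraps to 0 */

    /* printf("%d\n", INT_MAX + 1); */       /* undefined behavior; anything goes */

    printf("%d\n", always_grows(INT_MAX));   /* commonly prints 1 at -O2 */
    return 0;
}
```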