Yet null is considered "not an object", but virtually every language other than SQL implements null == null as true.
Because x == x is falsified by NaN, it's not obvious to a human auditor or compiler-optimization writer that you can't just blindly fold this expression to true without being very careful about data types.

I would sum it up by saying that x == x yielding false violates the principle of least surprise. It'll trip up every beginner who hasn't been explicitly taught about NaN's behavior.
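A quick Rust sketch of why the fold is only valid per-type:

```rust
fn main() {
    let x = f64::NAN;
    // IEEE 754 mandates that NaN compares unequal to everything,
    // including itself, so x == x cannot be folded to `true` for floats.
    assert!(x != x);

    // For integer types the fold *is* valid: y == y always holds.
    let y: i64 = 42;
    assert!(y == y);
}
```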
NaN doesn't represent a number that doesn't exist; it represents a value that hasn't been computed because the computation yielded an error that hasn't been handled yet.
Imagine having error values (or exception objects) that get allocated every time an error happens. You don't necessarily expect to be able to compare two instances of an exception object; instead you'll call a function that tells you e.g. which type of error it is.
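As a Rust sketch of that analogy (using std::io::Error, which deliberately doesn't implement PartialEq):

```rust
use std::io::{Error, ErrorKind};

fn main() {
    // Two separately allocated error objects.
    let a = Error::new(ErrorKind::NotFound, "record missing");
    let b = Error::new(ErrorKind::NotFound, "record missing");

    // You can't write `a == b` -- io::Error has no PartialEq impl.
    // Instead you interrogate the error, e.g. ask for its kind:
    assert_eq!(a.kind(), b.kind());
}
```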
Now, NaNs aren't really quite like that. For one, it's unusual to stuff the error object in the same place as the actual value and also to treat it as if it were the same type as the value. Furthermore, reassigning the error object to another variable implicitly "clones" it into a new identity.
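To make that concrete in Rust: a NaN is just bits in the same f64 slot as any other value, and copying it copies those bits with no identity attached:

```rust
fn main() {
    let nan = f64::NAN;
    let copy = nan; // "reassigning" just copies the bit pattern

    assert_eq!(nan.to_bits(), copy.to_bits()); // identical bits...
    assert!(nan != copy);                      // ...yet unequal by IEEE rules
}
```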
It's possible to imagine an alternate universe where each arithmetic operation that produces a NaN allocates a new unique number (there are many available bits, so you'd get quite far without a wrap-around). But the additional complexity in hardware was deemed not worth it.
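A toy Rust sketch of that alternate universe, stamping each fresh NaN with a unique payload (purely illustrative; no real hardware or language does this):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

static NEXT_NAN_ID: AtomicU64 = AtomicU64::new(1);

// Quiet-NaN bit pattern for f64: exponent all ones plus the quiet bit,
// leaving the low 51 mantissa bits free as payload.
const QNAN_BITS: u64 = 0x7FF8_0000_0000_0000;
const PAYLOAD_MASK: u64 = (1 << 51) - 1;

fn fresh_nan() -> f64 {
    let id = NEXT_NAN_ID.fetch_add(1, Ordering::Relaxed) & PAYLOAD_MASK;
    f64::from_bits(QNAN_BITS | id)
}

fn main() {
    let a = fresh_nan();
    let b = fresh_nan();
    // Each NaN now carries a distinguishable identity in its payload bits,
    // even though IEEE `==` would still report them unequal.
    assert_ne!(a.to_bits(), b.to_bits());
}
```

With 51 payload bits you could mint about 2^51 distinct NaNs before wrapping around, which is the "many available bits" alluded to above.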
Null, and nullability in general, is also a mistake that many languages carried over from C, though it predates C: Tony Hoare introduced the null reference in ALGOL W and later called it his "billion-dollar mistake". So it doesn't surprise me that there are other problems with null, like, for example, comparing two nulls.
Removing nullability from references doesn't mean removing the notion of absence; it means properly typing it as a Maybe/Option sum type so that code paths handling the non-existence case are forced to diverge (see the sketch below). A language without a sufficiently advanced compile-time type system can't have these, to say nothing of untyped languages. You can't just remove null/none/nil/undef from JS, Python, etc.
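A minimal Rust sketch of that forced divergence (find_user is a hypothetical lookup):

```rust
fn find_user(id: u32) -> Option<String> {
    // Hypothetical lookup: None models the non-existence case.
    if id == 42 { Some("alice".to_string()) } else { None }
}

fn main() {
    // The compiler rejects this match if either arm is missing,
    // so the absence path can't be silently ignored.
    match find_user(7) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```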
NaN self-inequality comes from IEEE 754's unordered comparison semantics, which the old x87 FPU design exposes through the FCOM/FUCOM instruction group; the decision is probably questionable (or rational, if you dig deeper) with regard to a programmer's convenience. It carried over to all languages because it's fundamental hardware behavior that can't be overridden without overhead.
But this is orthogonal to the equality property: the common alternative to using null/nil values is to use algebraic types (e.g. Option<T>), and with them, too, None == None is true.
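In Rust, for instance:

```rust
fn main() {
    let a: Option<i32> = None;
    let b: Option<i32> = None;
    // Unlike NaN, the "absent" variant of an algebraic type equals itself...
    assert!(a == b);
    // ...and is distinguishable from any present value.
    assert!(Some(1) != None);
}
```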