I love Eric Lippert, but he's just wrong here. This isn't SQL. He's treating null to mean "unknown", whereas it really means "nothing" (aka "absent", aka "missing value"). The opposite of an absent value is a present (but unknown) value, which has no equivalent type in C#. (Or I guess you could say the opposite of "no value" is "every value".) Given that those aren't options, the closest choice available is: true.
If you insist on treating null as "unknown" then you just destroy everything you know about the language. e.g., (null == x) and (null != x) would both have to always evaluate to null. Which is (thankfully) not how C# works.
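For anyone who wants to check, here's a minimal sketch of C#'s actual behaviour: lifted equality on a nullable type returns a plain `bool`, never a third truth value.

```csharp
using System;

class NullEquality
{
    static void Main()
    {
        int? x = null;

        // Unlike SQL, C#'s == and != on nullable types yield a real bool:
        Console.WriteLine(x == null); // True
        Console.WriteLine(x != null); // False

        x = 5;
        Console.WriteLine(x == null); // False
        Console.WriteLine(x != null); // True
    }
}
```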
That's indeed the biggest problem with null: it can mean many different things, often several at once. It can be the third state of some tri-state logic, either absorbing ("unknown", NaN) or absorbed (the `Maybe` monad in Haskell) or something else. Or it can be an object that responds to any message but does nothing (a "black-hole" object). Or it can be just a placeholder for an actual value to come, with no particular operation in mind, that fails on any use. All of these properties are implicit whenever null is used.
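Interestingly, C#'s `bool?` already ships the "absorbing" flavour for its lifted `&` and `|` operators (a small sketch; note that `&&` and `||` don't lift and won't even compile on `bool?`):

```csharp
using System;

class TriStateLogic
{
    static void Main()
    {
        bool? unknown = null;

        Console.WriteLine(unknown & false); // False - false absorbs unknown
        Console.WriteLine(unknown | true);  // True  - true absorbs unknown
        Console.WriteLine(unknown & true);  // prints nothing - still unknown
        Console.WriteLine(unknown | false); // prints nothing - still unknown
    }
}
```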
Specifically, for NaN (and for definitions of NULL that act similarly), it breaks pretty much all useful math by violating the axiom that `x = x`. All the other weird things it does are totally fine, but even an absorbing element isn't allowed to break reflexivity.
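A quick illustration in C# (the `Equals` line is the language patching reflexivity back in so hashing and sorted containers still work):

```csharp
using System;

class NanReflexivity
{
    static void Main()
    {
        double nan = double.NaN;

        Console.WriteLine(nan == nan);      // False - the x == x axiom violated
        Console.WriteLine(nan != nan);      // True
        Console.WriteLine(nan.Equals(nan)); // True - reflexivity restored here
    }
}
```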
Ehhh. I increasingly believe that implicit casts are a mistake, and therefore that a null value is neither true nor false. Treating a pointer- or reference-typed variable holding null as a boolean is (or should be, imho) a compile error.
If you want a bool then perform an equality operation or call a function. It’s slightly more verbose. But soooo much pain and suffering comes from implicit cast bullshit that I increasingly believe it is the way.
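For what it's worth, C# already works this way for `bool?` specifically; a small sketch of the explicit forms:

```csharp
using System;

class ExplicitBool
{
    static void Main()
    {
        bool? maybeFlag = null;

        // if (maybeFlag) { ... }  // compile error: no implicit bool? -> bool

        if (maybeFlag == true)  Console.WriteLine("definitely true");
        if (maybeFlag == false) Console.WriteLine("definitely false");
        if (maybeFlag is null)  Console.WriteLine("no value"); // this prints
    }
}
```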
> Ehhh. I increasingly believe that implicit casts are a mistake.
I wasn't disagreeing with that part, just with the part where he tries to justify it with "Treating null nullable Booleans as false leads to a number of oddities... neither Foo nor Bar is executed." That portion is not true, but that's fine - there are plenty of other reasons to avoid making null implicitly cast to bool.
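To spell that out: if null did implicitly convert to false, the else branch would run, so "neither Foo nor Bar is executed" can't happen. A sketch emulating that hypothetical rule with `GetValueOrDefault()`:

```csharp
using System;

class NullAsFalse
{
    static void Foo() => Console.WriteLine("Foo");
    static void Bar() => Console.WriteLine("Bar");

    static void Main()
    {
        bool? condition = null;

        // Under a "null converts to false" rule, Bar executes:
        if (condition.GetValueOrDefault()) Foo(); else Bar(); // prints Bar
    }
}
```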
For me, as a digital artist, the difference between null and zero comes up often. In the digital colour domain, black = zero. However, for a designer/artist, black is as active as white or any of the hues. In most digital colour spaces there is nothing that corresponds to null, i.e. 'nothingness', i.e. transparency. For that, pre-multiplied alpha must be employed.
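A toy sketch of the distinction (with a hypothetical `Rgba` type, not from any real graphics library): under premultiplied alpha, opaque black is an active value, while a zero-alpha pixel is true "nothing":

```csharp
using System;

// Hypothetical premultiplied-alpha colour with "source over" compositing.
readonly record struct Rgba(double R, double G, double B, double A)
{
    public Rgba Over(Rgba dst) => new(
        R + dst.R * (1 - A),
        G + dst.G * (1 - A),
        B + dst.B * (1 - A),
        A + dst.A * (1 - A));
}

class Compositing
{
    static void Main()
    {
        var white       = new Rgba(1, 1, 1, 1);
        var opaqueBlack = new Rgba(0, 0, 0, 1); // zero, but very much present
        var nothing     = new Rgba(0, 0, 0, 0); // null-like: pure transparency

        Console.WriteLine(opaqueBlack.Over(white)); // black covers the white
        Console.WriteLine(nothing.Over(white));     // white shows through
    }
}
```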
This is not new thinking. Goethe often argued, against the Newtonian understanding, that black is an active presence (i.e. paint squeezed from a tube) rather than a mere absence of light.
I know Oracle is the odd DBMS out, but when you live strictly in Oracle land, this decision makes sense. Yes, you could theoretically use an empty string to mean "we know this value is blank" and a NULL to mean "we don't know the value", but I've never had to encode that distinction in my schemas. In exchange, I've never had to worry about data being misinterpreted because a NULL didn't survive a round trip to the client, or because someone filtered out empty strings but forgot about NULLs.
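For the client side, a sketch of the branching that the distinction would otherwise force (plain ADO.NET `DbDataReader`; in Oracle land only the null branch ever fires, since '' collapses to NULL):

```csharp
using System.Data.Common;

static class NullRoundTrip
{
    // Map a possibly-NULL text column to a C# string?.
    public static string? ReadText(DbDataReader reader, int column) =>
        reader.IsDBNull(column) ? null : reader.GetString(column);
}
```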