Perhaps because I don't understand type theory, this doesn't seem especially hard to me. Every type should be a subtype of boolean, right? Otherwise you couldn't evaluate truth or falsity on something like a string in the first place.
Wouldn't the 'not' function signified by your operator just return a basic boolean (for any argument which was a subtype of boolean, which would be any argument...)?
[To be clear, "go read 'Book That Will School You' " is an acceptable answer to me, if it really will]
It is a great book, but a little dense. Its example language is a typed lambda calculus. This is both good and bad. The good: you can evaluate most expressions in your head or with a pencil. The bad: sometimes it is difficult to read without evaluating the expressions.
It is heavy on proofs and mathematics, but it does what it says: it introduces you to the basics of type theory as applied to programming languages.
I second the commenter who recommended Types and Programming Languages, but the short answer is, "Sure, if your language has subtyping, and you want to arrange the inheritance hierarchy in that way." It would solve it a little bit better because the type of
x || (y && z)
would default to being the 'most general' type of all its arguments, which could be Bool but might be more specific in certain circumstances. But this of course only works if your language has some kind of subtype relation, which is not necessarily true of every language—it is not, for example, true of Go, or most typed functional languages that I know of—and it still ends up having typing rules like
t1 : A    t2 : B    A <: C    B <: C
------------------------------------
            t1 || t2 : C
It also assumes that the Boolean class is implemented as a single class with no subtypes for True and False, i.e. not like Ruby's TrueClass and FalseClass; you'd have an instance variable or something that tells you whether an instance is True or False. If you implemented it with singleton instances of a True class and a False class, you'd bifurcate your whole object hierarchy. And it assumes that there can be no type 'more general' than Boolean, because if there is something 'above' Boolean in the hierarchy, then you'd have to rewrite the rule as
t1 : A    t2 : B    A <: C    B <: C    C <: Bool
-------------------------------------------------
                  t1 || t2 : C
Instead of using a Boolean, use a type that means what you actually want:
You want to short-circuit combine two (or more) values, returning the first one that is valid, without evaluating the later ones.
You need a type that represents a possibly-unavailable value, and "Boolean" is not that type. "Maybe" is Haskell's type-safe "Nullable" type. It has two kinds of values: "Nothing", or "Just a", where "a" is a value of any type.
Some quick simplified definitions of Haskell terms:
"Control.Monad" is Haskell's generalization of "flow control"
"mplus" is Haskell's generalization of "or" (as it means in Perl/Python).
backticks are used to make a regular prefix function into an infix operator.
"undefined" is like Perl's "die" or Java's uncatchable Exception, used here to show where short-circuit happens.
ghci is a Haskell interpreter.
% ghci
> import Data.Maybe
> import Control.Monad
> undefined `mplus` Just 1
*** Exception: Prelude.undefined
> Just 1 `mplus` undefined
Just 1
> Nothing `mplus` (Just 1) `mplus` undefined
Just 1
> Nothing `mplus` (Just 1) `mplus` (Just 2)
Just 2
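To make the short-circuiting concrete: for Maybe, mplus is a left-biased "first Just wins" choice. Here is a minimal sketch of that behavior as a standalone function (the name orElse is mine; the real definition lives in the MonadPlus instance for Maybe in Control.Monad):

```haskell
import Control.Monad (msum)

-- A hand-rolled version of what mplus does for Maybe:
-- keep the first available ("Just") value, fall through on Nothing.
orElse :: Maybe a -> Maybe a -> Maybe a
orElse (Just x) _ = Just x   -- left side wins; right side is never inspected
orElse Nothing  y = y

main :: IO ()
main = do
  print (Just 1 `orElse` Just 2)           -- Just 1
  print (Nothing `orElse` Just 2)          -- Just 2
  -- msum folds mplus over a list, so the first Just wins there too
  print (msum [Nothing, Just 1, Just 2])   -- Just 1
```

Because the first clause never pattern-matches on its second argument, laziness gives you the short-circuiting for free: `Just 1 `orElse` undefined` evaluates to `Just 1`, which is exactly what the ghci transcript above demonstrates with mplus.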