> Why not? Strings are perfectly valid in boolean context.
That's not a good reason for anything; by that yardstick Array#length could return a float and NilClass#nil? could return an array. And worse, the `?` postfix in Ruby normally means the method returns an actual boolean, not merely something valid in a "boolean context" (convention over configuration only works if the convention is respected). You can XOR two booleans, but you can't XOR a string with a boolean: String defines no `^`, although XOR-ing a boolean with a string does work, because TrueClass#^ and FalseClass#^ accept any object.
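Concretely, in irb (exact error messages vary by Ruby version):

    true ^ "foo"    # => false; TrueClass#^ only looks at the argument's truthiness
    false ^ "foo"   # => true
    "foo" ^ true    # NoMethodError; String does not define #^
    true ^ false    # => true; two booleans XOR cleanly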
> Why not? It is possible to shoot yourself in the leg, but otherwise a very useful feature.
I'm not sure, but I think he phrased it badly and is asking why "interpreter-provided" mutables (e.g. Array) can be used as hash keys: while "user objects" are also hashable by default in Python, built-in collections and the like are specified as unhashable, since the semantics are not really sensible:
    >>> {{}: 1}
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unhashable type: 'dict'
    >>> {set(): 1}
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unhashable type: 'set'
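For contrast, Ruby happily accepts a mutable Array as a hash key (irb):

    irb> h = { [] => 1 }
    => {[]=>1}
    irb> h[[]]
    => 1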
To be fair, this was one side of that argument. A lot of people adopted the boolean convention, not the truthy one. And the few examples like "defined?" and "nonzero?" really seem more like quirks than conventions. In five years of doing Ruby, those examples built into the language and the Rails shitstorm are the only places I've seen this truthy convention adopted.
Original intent aside, if your language prides itself on convention, then it has to be open to the convention changing.
One of the quotes in the linked blog post is pretty much the official statement of the language's creator. defined? and nonzero? are core language methods and unlikely to change. The fact that many ruby programmers have a misconception about what predicates were supposed to imply is sad, but won't change things. In short: all ruby devs need to learn how predicates were intended, or they're in for a surprise.
Btw: The mantra "convention over configuration" is a rails mantra and not a ruby mantra.
I'm not expecting core methods to change either, as that would undoubtedly break programs. Incidentally, this would be kinda funny, because the argument that your program is poorly written if it breaks under such a change would be turned on its head. But, in any event, to pretend that Ruby is a perfectly designed language that couldn't possibly have warts is weird.
Also, I never said "convention over configuration," since there's nothing to configure here. I was talking specifically about the convention of what a "?" should return. In that same quote Matz also says that predicates typically return a boolean value, but it's not required. That seems to both imply and endorse a convention.
I think we may be in more agreement than either of us is letting on. However, in most arguments on this matter, the "should" part seems to just get ignored.
Now in that whole Rails hoopla, it turned into "it's not required and neither defined? nor nonzero? do it," ignoring the whole "should" part. And now people are pointing at Rails as another example, reinforcing their own bias.
I agree with you that it might be nice if you could actually rely on it, but OTOH I have never personally encountered an error caused by a non-boolean predicate.
However, it's not only nonzero? and defined? that don't return boolean values; see http://news.ycombinator.com/item?id=5074676 for more examples, and you'll find still more if you read through the core library documentation. You just cannot rely on predicates returning true/false in all cases, so you either learn not to rely on it at all, or memorize every case where it doesn't hold. So just don't rely on it.
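A few of the core examples, for the record:

    1.nonzero?          # => 1 (truthy, but not true)
    0.nonzero?          # => nil (falsy, but not false)
    defined?(String)    # => "constant"
    defined?(whatever)  # => nil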
I have long (6+ years) returned truthy values from my predicate methods, and rarely coerce them into explicit true or false values. (There are times when it's necessary, but these are extremely rare.)
I'd say that justifying it because you can still determine the truthiness of a string is missing the point; defined? (http://ruby-doc.org/docs/keywords/1.9/Object.html#method-i-d...) can return one of a number of string values, or nil, depending on what expression you pass to it. The actual return value can be useful depending on what you're doing, or you might be happy enough just accepting that it returns a value other than nil (which is to say you don't care what it is, just that yes, something matching the passed expression is defined).
While the actual return value can be useful, it would be just as useful if the method had a more sensible name, rather than one hinting it's a predicate, which it is not.
What would such a more sensible name be? I look at this as a trade-off between code readability and consistency. Particularly when assuming it is a predicate (without greater knowledge of the method) is not a dangerous assumption, I'd say the trade-off is a good one.
token_type or lexical_type would make more sense to me. On top of not returning a boolean, defined? has other confusing semantics. In particular, it's the only keyword I can think of that appears to exhibit call-by-name semantics. I don't know if this is just a special case in the parser, but it doesn't evaluate its argument despite looking like a method call. So the return value can be doubly confusing.
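A quick illustration of that non-evaluation:

    defined?(1/0)           # => "method"; the division never runs, no ZeroDivisionError
    defined?(no_such_name)  # => nil; and no NameError raised either
    defined?(@unset_ivar)   # => nil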
No. "Weak Typing" doesn't mean jack shit, and there are very few under its thousands of different and incompatible identities which Ruby would match, save for the very least useful of them.
It works because the returned string is implicitly converted to a boolean. That's weak typing, at least under one (I guess the most common) interpretation of weak typing.
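The snippet being referred to didn't survive the thread; presumably it was something like a string used directly in a condition (a hypothetical reconstruction):

    s = "value returned by some predicate"
    if s                # any String, even "", is truthy in Ruby
      puts "this runs"  # only nil and false are falsy
    end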
> If you couldn't use mutable objects as hash keys a bunch of things would become _way_ harder.
    data[[v1, v2].freeze] = something
Meh. Hell, Hash could even freeze its parameters on entry if they're not already frozen.
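For what it's worth, Ruby already does something like this for String keys: Hash#[]= stores a frozen dup of a string key unless it's already frozen:

    key = "mutable"
    h = {}
    h[key] = 1
    h.keys.first.frozen?      # => true; the stored key is a frozen copy
    h.keys.first.equal?(key)  # => false; the original is untouched
    key.frozen?               # => false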
Meanwhile, if anybody gets hold of the key and happens to mutate it (not necessarily for nefarious purposes, just because it solves their problem), you can quite literally lose your pair:
    Rebuilds the hash based on the current hash values for each key. If
    values of key objects have changed since they were inserted, this
    method will reindex hsh.
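A minimal demonstration of losing (and then recovering) a pair:

    a = [1, 2]
    h = { a => :val }
    a << 3       # mutate the key in place; its #hash changes
    h[a]         # => nil; the entry is filed under the stale hash code
    h.key?(a)    # => false; the pair is effectively lost
    h.rehash     # reindex using current hash values
    h[a]         # => :val; recovered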
On one side I must admit that the behavior is a bit surprising; on the other, I haven't ever encountered it in the wild. It's not that easy to get hold of a hash key at a random point in the code where you don't know that it is a hash key.