
> What you are proposing would double the required number of collection classes and all of their traits, because it would require separate ones for Collection[E <: AnyRef] and for Collection[E <: AnyVal].

Less than double, because not all collections make sense for both - e.g. a sorted set or sorted map only makes sense if the keys are values.
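A sketch of why sorting and identity-ful keys mix badly (Box and its Ordering here are hypothetical, just for illustration): mutating a reference-typed key in place silently breaks the tree's invariant.

```scala
import scala.collection.mutable

// Hypothetical mutable reference type used as a sort key
final class Box(var v: Int)
object Box:
  given Ordering[Box] = Ordering.by(_.v)

@main def sortedKeyDemo(): Unit =
  val s = mutable.TreeSet.empty[Box]
  val b = Box(1)
  s += b
  s += Box(5)
  b.v = 10                  // mutate the key in place
  println(s.contains(b))    // false: the tree can no longer find the mutated key
```

A type system that restricts sorted-structure keys to value types rules this out statically.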

> There is literally no reason for introducing this complexity. Go has demonstrated how poorly this idea has worked out in practice.

It eliminates a common class of errors. All type-level distinctions add a bit of complexity, but we often consider them worthwhile to make.
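One such error class in today's Scala: arrays inherit Java's identity-based equals, while the other collections compare structurally, so == silently means different things.

```scala
@main def equalityTrap(): Unit =
  // Arrays fall back to reference identity for ==, a classic source of bugs:
  println(Array(1, 2) == Array(1, 2))            // false: compares references
  println(List(1, 2) == List(1, 2))              // true: structural equality
  println(Array(1, 2).sameElements(Array(1, 2))) // true: explicit content comparison
```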

> Additionally, this approach would make it nearly impossible to migrate reference types to value types, because it would break all users of the code.

Changing from one to the other is a radical change that should force the user to reexamine code that deals with them.

> That's a complete non-option. You might not like the IEEE's definition of equality, but this is what it is. Messing with it would break all existing code using floating point numbers.

Java already deviated from the IEEE definition with Float and Double. The sky didn't fall. Maybe strict IEEE semantics could be offered in their own type where needed, and that type would be neither value nor identity-is-meaningful. (This would mean the type system wouldn't allow you to use the strict-IEEE type in any standard collection, which I think is correct behaviour; compare e.g. Haskell, where for a long time you could corrupt the standard sorted set structure by inserting two NaNs.)
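Concretely, the existing deviation: java.lang.Double.equals treats NaN as equal to itself and distinguishes +0.0 from -0.0 — the opposite of IEEE-754 comparison on the primitives.

```scala
@main def ieeeDeviation(): Unit =
  val nan = Double.NaN
  println(nan == nan)                                  // false: IEEE-754 primitive comparison
  println(java.lang.Double.valueOf(nan).equals(nan))   // true: boxed equals deviates from IEEE
  println(0.0 == -0.0)                                 // true per IEEE-754
  println(java.lang.Double.valueOf(0.0).equals(-0.0))  // false: boxed equals distinguishes the zeros
```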

> Their identity is the bits they consist of, just like identity on references is the bits of the reference.

That's a low-level implementation detail that may not even be true on all platforms. The language semantics should make sense.

> We already have that: AnyRef and AnyVal.

No, those are just implementation details of how they're passed around. Many AnyRef types have value semantics.
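For instance, String is an AnyRef, yet its == is a value comparison; identity is an entirely separate question:

```scala
@main def stringSemantics(): Unit =
  val a = new String("ab")   // force distinct references (avoids literal interning)
  val b = new String("ab")
  println(a == b)            // true: value semantics via equals
  println(a eq b)            // false: two distinct references
```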

> That doesn't make any sense. The case keyword is basically just a compiler built-in macro to generate some code. It is already doing way too much, and overloading it with even more semantics is not the way to go.

Well, what I'd like in an ideal language is: no universal equality, opt-in value equality with derivation for product/coproduct types. As for references... I'm not really convinced there's a legitimate use case for comparing references, especially the implicit invisible references that the language uses to implement user classes. If we need reference comparison at all I'd rather something a bit more explicit - either an opt-in "the identity of this class is meaningful", or a notion of explicit references that were much more visible in the code (something a bit like ActorRef), or both.
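Scala 3 already gestures in this direction: under strictEquality, == stops being universal, and value equality becomes something a type opts into (here via the derived CanEqual instance):

```scala
import scala.language.strictEquality

// Opt in to comparability for this product type
case class Point(x: Int, y: Int) derives CanEqual

@main def strictDemo(): Unit =
  println(Point(1, 2) == Point(1, 2))   // true: structural equality, explicitly opted in
  // Point(1, 2) == "not a point"       // would not compile: no CanEqual instance
```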

> What I'm proposing improves the consistency across value and reference types so that it's always obvious which kind of comparison happens:
> - identity: Low-level comparison of the bits at hand. Built into the JVM and not overridable.
> - equality: High-level comparison defined by the author of the type.

That's very inconsistent at the language-semantics level: which things are "the bits at hand" is a low-level implementation detail that should probably be left up to the runtime to represent however best suits a particular code path. At the language level, 'does 2L + 2L equal 4L?' is the same kind of question as 'does "a" + "b" equal "ab"?', and both of those questions are quite different from any question to which reference comparison would be the answer.




This hardly makes any sense, is not practical to implement, and is theoretically questionable.

It makes decisions that break existing code and introduce pointless complexity, while failing to offer any tangible benefits in return.



