Java Generics FAQs (angelikalanger.com)
58 points by xvirk on April 16, 2015 | 65 comments



TIL why

    class Foo<T> {

        void test(Foo<T> f) {
            // ...
        }

        static void boom() {
            Foo<?> f = new Foo<String>();
            f.test(f); // <--- boom! compile error
        }
    }


If anyone is curious about this, from the FAQ:

"Assignment to a field of the "unknown" type is rejected because the compiler cannot judge whether the object to be assigned is an acceptable one. Read access to a field of the "unknown" type is permitted, but the field has no specific type and must be refered to through a reference of type Object ."

http://www.angelikalanger.com/GenericsFAQ/FAQSections/Techni...?
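
To make that concrete, here is a minimal sketch of the asymmetry the FAQ describes (the class and field names are illustrative):

    class Box<T> {
        T value;

        static void demo() {
            Box<?> box = new Box<String>();
            Object o = box.value;  // read access works, but only as Object
            // box.value = "hi";   // compile error: the compiler cannot
            //                     // prove any value fits the unknown type
            box.value = null;      // null is the one exception
        }
    }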


There isn't really a good reason. The compiler just isn't sophisticated enough to support it.

You could always write a generic method to do it for you:

    static <T> void testFoo(Foo<T> f) {
        f.test(f);
    }

    static void boom() {
        Foo<?> f = new Foo<String>();
        testFoo(f); // works just fine
    }
So the problem isn't that what you're doing is fundamentally unsafe, it's just that Java has no syntax to capture `?` except through a generic type parameter on a method or class.


Actually, it's not in general decidable whether f.test(f) is safe, because f gets read twice and the value might change in the meantime. In the example, a simple flow analysis will tell you it doesn't, but the general case is equivalent to the halting problem. The generic method solves the problem because now there is only one read in boom(), and the concrete type in testFoo ensures that f can't change in an incompatible way between reads.

A more sophisticated type system could allow access if it can be proven safe with simple flow analysis, but wildcards are already complicated; that would make them much more so for not much benefit.


> Actually, it's not in general decidable whether f.test(f) is safe because f gets read twice and the value might change in the meantime.

Might it? I can't see any way that actions in the current thread can change the value of f in between those two uses. Actions in another thread can be ignored until the receiving end of a happens-before relationship is encountered, and there isn't such a point inside that expression.

More generally, yes, two uses of the same wildcarded variable in one method might not yield objects with the same actual type argument. But in this case I think it's safe.


What I said didn't really make sense -- f.test(f) is a concrete example, so how am I generalizing? What I meant was: in the general case of expressions that contain multiple references to the same variable, it might change. For instance:

    f.testTwo(f = newF, f)
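
Spelled out as a sketch, assuming a hypothetical testTwo declared as void testTwo(Foo<T> a, Foo<T> b) on Foo<T> (none of this compiles, which is the point):

    static void unsound(Foo<?> newF) {
        Foo<?> f = new Foo<String>();
        // If the compiler naively assumed all three reads of f share one
        // capture, the embedded reassignment would falsify that: the
        // receiver is the old Foo<String>, the last argument is newF.
        f.testTwo(f = newF, f);
    }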


First, I wasn't speaking about a general case, I was speaking about the case presented.

Second, as of Java 8, the compiler knows whether a local variable is "effectively final". In the example given, f qualifies as effectively final. In earlier versions, my point stands that it's simply because the compiler isn't sophisticated enough.


The point I don't agree with is that it's really about the compiler. Had it been an optimization that doesn't affect which warnings or errors you get, then yes, that's all the compiler. But what you need here is for the types of variables to incorporate flow analysis, such that f gets compatible types on both reads if it doesn't change, and incompatible ones if it does. That would be an incredibly complex type system, and people already struggle with how generics in Java work.


This is a rude response. The reason is quite reasonable, since there is no way that would work in all cases. A case is easily made where Foo<Number> extends Foo<Integer> makes as much sense as Foo<Integer> extends Foo<Number>.

So, which would you have chosen?


It didn't come across as rude to me. And I think it's entirely accurate.

Moreover, that pattern - of sending a variable with a wildcard type argument into a method with a type parameter so that the type argument gains a name - is something I've seen and used many times, and it is useful to know. I don't know if there is a name for this technique, but I call it "taming a wildcard".


In the example case, though, it isn't about giving it a name. It is about using the name in the first place. It is no different from:

    class Foo {
        void test(String arg) { /* ... */ }

        public static void main(String[] args) {
            Foo foo = new Foo();
            Object s = "sd";
            foo.test(s); // Boom: Object is not a String
        }
    }
Note that if that were instead:

    class Foo {
        void test(Number arg) { /* ... */ }

        public static void main(String[] args) {
            Foo foo = new Foo();
            Long n = 4L;
            foo.test(n); // fine: Long is a subtype of Number
        }
    }
Then things work, because Long is a subtype of Number, so the subtyping is chosen for you. In the generics case, there is no relation between Foo<Number> and Foo<Integer>. If there were, then the capture could potentially treat Foo<?> as Foo<Object> and then, depending on whether you went co- or contravariant, could allow the call.
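
That invariance is the same one the collection types exhibit (a minimal sketch):

    import java.util.ArrayList;
    import java.util.List;

    class Invariance {
        static void demo() {
            List<Integer> ints = new ArrayList<>();
            // List<Number> nums = ints; // compile error: no relation
            //                           // between the two List types...
            // nums.add(3.14);           // ...because this write would
            //                           // corrupt ints with a Double
        }
    }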


I'm both sorry and baffled you found my response rude. As for your question, I'm afraid I don't understand it whatsoever. What am I supposed to be choosing? What does your case have to do with the comment I was replying to?


I'm assuming it's because the T variable isn't generalized over within the definition of Foo, so you can't use (test) at arbitrary types until Foo is closed?


In concrete terms, imagine Foo looks like this:

  class Foo<T> {
      private T value;

      void test(Foo<T> f) {
          f.value = this.value;
      }
  }
You could then assign anything to a member that should only hold a String.

This is the same reason why arrays should not be covariant (even though they are in Java). This compiles:

  Object[] arr = new Integer[2]; // This should be a compile error, but the language spec allows it
  arr[0] = "abc"; // This blows up with an ArrayStoreException at runtime


Your example presents zero unsafety for the usage presented by eridal. The only reason it's considered unsafe is because the compiler isn't sophisticated enough to compare captures of wildcards, and in this case, might not be sophisticated enough to know that `f` isn't being reassigned.


I understand the covariance bit, but I think I just fail to understand the Java type semantics more broadly. I'd have to just study it and become more familiar.


The key thing to remember is that generics are just for the compiler. There isn't anything actually embedded in the bytecode that knows what type a generic is.


This is only partially true. Generic types are accessible through the reflection API. What you are referring to is type erasure.

See for yourself: https://gist.github.com/nicobn/8135faeda058fef5b6b9


Generics are captured in various places in bytecode - most importantly in type definitions. This is how the common type token pattern is exploited. Where generics are erased is in local variable declarations.
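
For example, a field's generic signature survives erasure and can be read back through the standard reflection API (a minimal sketch; names are illustrative):

    import java.lang.reflect.Field;
    import java.util.List;

    class Holder {
        List<String> names; // the String is recorded in the class file
    }

    class SignatureDemo {
        public static void main(String[] args) throws NoSuchFieldException {
            Field f = Holder.class.getDeclaredField("names");
            System.out.println(f.getGenericType());
            // prints: java.util.List<java.lang.String>
        }
    }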


I love Java-style generics! Any other popular languages have anything similar? Besides C# of course.


I can't stand Java generics. They're better than nothing, but type erasure really limits their utility.


If you need reification, then you are doing a runtime look up of type information.

Which, by definition, has nothing to do with generic types.

By the way, this is another reason why erasure is the only sane way of doing this: supporting reification means that you let developers second guess what the compiler already proved for you. Just say no to reification.


What? No, I need reification because:

1) I do not want the compiler to create extraneous call-site code, bridge methods, and other cruft to ensure that a value is really of the declared type (or derived from it). For instance, when you iterate over a (generic) collection, the compiler will insert type checks and downcasts into the call-site code. With reification the collection would be guaranteed type-safe, and no extra cruft with its incurred overhead would be needed.

2) Type erasure rules out using primitive types (or value types) as generic type parameters. Reification allows the actual type to be used with generics, regardless of the "kind".

3) Type erasure forces me into the underworld of automatic boxing/unboxing conversions and suboptimal code. Reified generics allow the compiler to verify the code at compile time but generate actual optimized (with respect to the type) code when JITting.

4) Type erasure has a large number of counter-intuitive consequences and gotchas. Example: all realizations share the static members, even though they should be separate classes (sketched below). Type erasure resembles a macro scheme rather than a function that returns classes.
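
A minimal sketch of that static-sharing gotcha:

    class Counter<T> {
        static int count = 0; // one field for all parameterizations
        Counter() { count++; }
    }

    // After new Counter<String>() and new Counter<Integer>(),
    // Counter.count == 2: erasure leaves a single Counter class,
    // not one class per type argument.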

There's much more. You are actually the first person I have ever encountered who argues that type erasure is desirable. I can understand type erasure from a compatibility point of view, but I have never seen anyone defend type erasure without referring to the migration compatibility that was achieved with it. That includes Neal Gafter, one of the main forces behind generics in Java.


What you wrote is pretty much my point of view, so I'm not going to elaborate. :) Type erasure also hobbled the type system in Scala, annoyingly.

Type erasure was a compromise to allow migration; it is not a design choice that would've been made otherwise.


Type erasure is the very reason that Scala could adopt a more advanced type system. Otherwise it would be absolutely forced to essentially conform exactly to Java's type system since the targeted bytecode would require all sorts of type annotations which would have to be correct-according-to-java.

Scala wouldn't even be possible in its current form if Java had used reified generics.

(Scala's type system is constrained by the need for interop with Java, but that's an unrelated issue.)


> Type erasure is the very reason that Scala could adopt a more advanced type system.

No. Having a byte-code interpreter which is essentially an assembly language allows you to create any type system you want. It is when you require interop with an existing type system that you are constrained.

>Otherwise it would be absolutely forced to essentially conform exactly to Java's type system since the targeted bytecode would require all sorts of type annotations which would have to be correct-according-to-java.

One could argue that it would be prudent to base your type system on the type system of the platform you choose. Essentially, Scala chose the route of not integrating deeply with the platform's type system. That choice could have been made regardless of how advanced the type system of the platform was.

So if the platform had offered reified generics, Scala could still have chosen to ignore those and gone with type erasure. But it could also have chosen to support reified generics. I'm interested in what concrete constraints that would have imposed on Scala's type system, apart from interop with Java.

Ironically, the chosen design has forced Scala into adorning its output with all sorts of type annotations to ease interop with Java: it has to lie about the generic type argument in the type metadata the same way Java does, i.e. "consider this a collection of Shapes; note that it may contain other object types, so you must perform downcasts yourself".


> No. Having a byte-code interpreter which is essentially an assembly language allows you to create any type system you want. It is when you require interop with an existing type system that you are constrained.

Not with anywhere near decent performance.


#1: The JIT can optimize this out in many cases.

#2: No it doesn't. You can still specialize as appropriate when you know the types at compile time, see e.g. miniboxing for Scala.

#3: This was added by the Java language designers for convenience. It would be possible to require explicit casting/specialization, but they deemed it too verbose. Of course then we ended up with advice like "explicitly unbox" as seen in Effective Java...

The last bit is an appeal to authority. There are many people, esp. in the Scala community, who argue convincingly against reification.


> The JIT can optimize this out in many cases.

It can optimize some of the cases. The problem persists and is now even less predictable.

> No it doesn't. You can still specialize as appropriate when you know the types at compile time, see e.g. miniboxing for Scala.

Yes it does. The subject here is generics in Java, not Scala kludges to compensate for the poor generics design.

> This was added by the Java language designers for convenience.

Strawman. You can point to any language feature and call it convenience, basically anything above a Turing Machine is convenience. The fact is that they realized that the type erased generics would appear severely hampered if they could not be used for the most used types at all. Hence, they swept the problem under the rug, by allowing implicit (automatic) conversions. The problem is still there, there's a big bulge on the rug and the linked FAQ is essentially a map to guide you around it so that you do not trip.

> The last bit is an appeal to authority. There are many people, esp. in the Scala community, who argue convincingly against reification

I'd be interested in hearing those arguments, specifically those that do not invoke interoperability with Java. Do you have a source?

My source: http://gafter.blogspot.dk/2004/09/puzzling-through-erasure-a...


Why do people find that type erasure is a problem? I'm happy, for instance, writing in languages wherein type erasure is the obvious right answer.


I don't often find myself needing to get T, but sometimes I do. Consider the case of a Handler<T extends Foo> interface that several classes implement. If I have several of these, for various Foo subtypes, I might want to put them in some sort of collection that I can look into later after receiving a Foo message, so that I can dispatch it appropriately. Why do I have to do this dispatch myself? The types of all the Handlers go away, so I need to hold on to them somehow, and match at runtime.

The clear way to do this is to put them in a Map<Class<T>, Handler<T>>. Now how do these Handler objects declare that they're ONLY able to handle a specific Foo subtype (let's call it FooBar)? It'd be REALLY nice if I could just say "Hey handler, what's your generic type? Is the message I just got an instanceof your generic type?" Java won't let you do this due to erasure.

Okay we can get around this. Each Handler<T> has to declare a method (say getType()) that returns the Class<T>. Since I'm generically declaring my class FooBarHandler<FooBar>, I can protect myself from returning a BazBat in this method, but there's NO way for me to abstract this method away. Each Handler has to declare a "public Class<T> getType()" that returns T.class. But since T isn't a thing past compile time, I have to repeat this same method implementation for each concrete type. Gross. This has the added irritation of forcing all the parent classes, if they implement this interface, to be abstract, since they can't implement the method appropriately. This isn't necessarily a bad thing, but the limitation is annoying.

In an ideal world, I can declare some ParentHandler<T> that has this method, and have all my handlers just extend it, with no duplication.
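
A minimal sketch of the dispatcher being described (all names are illustrative):

    import java.util.HashMap;
    import java.util.Map;

    class Foo {}
    class FooBar extends Foo {}

    interface Handler<T extends Foo> {
        void handle(T message);
        Class<T> getType(); // the method every concrete handler must repeat
    }

    class FooBarHandler implements Handler<FooBar> {
        public void handle(FooBar message) { /* ... */ }
        public Class<FooBar> getType() { return FooBar.class; }
    }

    class Dispatcher {
        private final Map<Class<?>, Handler<?>> handlers = new HashMap<>();

        <T extends Foo> void register(Handler<T> h) {
            handlers.put(h.getType(), h);
        }

        // The unchecked cast re-asserts what register() guaranteed;
        // erasure left no runtime link between key and value types.
        @SuppressWarnings("unchecked")
        <T extends Foo> void dispatch(T message) {
            Handler<T> h = (Handler<T>) handlers.get(message.getClass());
            if (h != null) h.handle(message);
        }
    }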


> It'd be REALLY nice if I could just say "Hey handler, what's your generic type? Is the message I just got an instanceof your generic type?"

No, it wouldn't be nice, it would be unsafe. If one day you end up adding a new type in your container, you need to update your runtime check as well or your code will fail in mysterious ways.

Erasure keeps you honest by asking you to think carefully about the types so that they can be checked by the compiler, and once the compiler has done this verification, you are guaranteed that your code will work.

Any language feature that encourages the use of reflection, such as reification of types, should not be supported by a language that wants to claim to be sound.


It's not necessarily unsafe, but it is very difficult to do safely. The design suggested is certainly unsafe, however: there's no way to ensure that the values don't lie about their self-reported type, so using that to trigger a coercion is liable to blow things up.

If the compiler provides a couple of things:

    data TypeRepr
    typeReprEq :: TypeRepr -> TypeRepr -> Bool
    
    typeOf :: Typeable a => a -> TypeRepr
such that TypeRepr cannot be generated (e.g. faked) by users, typeOf is guaranteed to be genuine, and (this is the hardest part) such a thing doesn't violate parametricity, then you can use that interface to write

    safeCoerce :: (Typeable a, Typeable b) => a -> Maybe b
which is guaranteed to only allow the coercion when `a` and `b` actually happen to be the same type.


Ah, gotcha. This "project out of a polymorphically typed heterogeneous container" problem shows up in Haskell from time to time, too. Generally, people just learn to do without it, even to the point of considering it a bad practice to try. I'm not going to claim that this is the right way for Java... but it's interesting to note that this is a pain point that's perceived as a problem with type erasure in the Java world. In the Haskell world it's considered to be a good design principle enforced by the "obvious" step to erase types.


It's not necessarily that people regularly see it as a problem, it's just that sometimes you get yourself put into a type corner and the lack of it causes you to have to repeat yourself. I'm not sure what the alternative world would look like, but I feel like this means the type system simply isn't expressive enough to cover this particular problem.

Upon thinking about it a bit, this problem seems to be functionally equivalent to pattern matching on the type of the message (which Java also lacks). I'm not a Haskell guy, but the immediate solution I'm seeing is to just have separate cases for each one. This is still an inferior solution in this particular case, because it forces you to modify the code in two places; luckily the type system helps here and makes sure you do.

But consider a problem of a different form:

Imagine I have an object Bar<S, T> and I want to have a frobnicate(S s) and a frobnicate(T t) method. Since Java erases, I can't do this. frobnicateS and frobnicateT it is! This is particularly annoying because you know S and T are different, but you can't express it to the type system! This seems like it'd be solvable by some sort of disjoint union, but again, Java lacks this. Fun fact: it does have a way to bound a generic by multiple classes: Bar<T extends Baz & Bat>. It would be natural to add a | for this sort of operation, if they ever figured out how to support it.
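
A sketch of the clash (bodies elided):

    class Bar<S, T> {
        void frobnicate(S s) { /* ... */ }
        // Does not compile: both methods erase to frobnicate(Object),
        // so javac reports a name clash ("same erasure").
        void frobnicate(T t) { /* ... */ }
    }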


You can take advantage of lambda structural typing to work around the frobnicate overload issue http://benjiweber.co.uk/blog/2015/02/20/work-around-java-sam...

I don't think the same-erasure problem is actually a consequence of type erasure - it is just defined that way in the spec.


I think generally the point is that "forgetting" a generic type is a dangerous operation, because recovering that forgotten information is risky. It's inconvenient, but the alternatives are worse. Reflection, especially used universally, blows up the amount you can trust your types very significantly.

Your Frobnicate example is interesting since you want an extra level of polymorphism in there, but getting this can be bad for inference (see MultiParamTypeClasses without functional dependencies). In any case, it's not clear why you ought to expect it to work.


> It'd be REALLY nice if I could just say "Hey handler, what's your generic type?"

There are better ways of resolving type arguments, such as on Handler implementations, at runtime.

See https://github.com/jhalterman/typetools


Right. And now you're pushing what should be a compile-time error to runtime, and slowing down performance with runtime checks to boot.


Indeed, but it allows library authors to write nicer APIs without having to pass Class references around - they can be resolved.


One common case that I've encountered:

There's no (good) way of going "this is a Comparable<Foo> and a Comparable<Bar>, but not a Comparable<Baz>" that's detectable at runtime. (Also true with other interfaces.) Especially for trait interfaces.
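
For instance (Foo and Bar standing in for any two types):

    class Foo {}
    class Bar {}

    // Does not compile: Comparable cannot be implemented twice with
    // different type arguments, because both erase to raw Comparable.
    class Amount implements Comparable<Foo>, Comparable<Bar> {
        public int compareTo(Foo o) { return 0; }
        public int compareTo(Bar o) { return 0; }
    }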

There are also nasty cases where you end up crashing randomly somewhere else in your code at runtime because somewhere something passed a List<Foo> and it expected a List<Bar>, but somewhere in between the type information got lost.

Ditto, I find myself wishing to be able to do instanceof T / T[] (that's an array of T) annoyingly often.

Look at the mess that is EnumMap for an illustration of why type erasure can be frustrating. The amount of reflection done at runtime for what should be (and is, if you write a non-generic implementation, or if you're using a language that's sane) purely compile-time work is... frustrating. (And while you're at it, the fact that you can even attempt to pass in a non-enum into EnumMap. I thought Java was supposed to try to push such errors into compile time?)


> There's no (good) way of going "this is a Comparable<Foo> and a Comparable<Bar>, but not a Comparable<Baz>" that's detectable at runtime

It's a shame that various libraries/frameworks have resorted to building APIs around Type Tokens - where users have to create anonymous classes - to create something that effectively reifies some generic type information. It's a lame solution, but a solution.
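
A minimal sketch of the idiom (often called a "super type token"): the anonymous subclass records its generic supertype in its class file, where it survives erasure and can be read back.

    import java.lang.reflect.ParameterizedType;
    import java.lang.reflect.Type;
    import java.util.List;

    abstract class TypeToken<T> {
        private final Type type;

        protected TypeToken() {
            // The anonymous subclass's generic superclass carries T.
            ParameterizedType superclass =
                (ParameterizedType) getClass().getGenericSuperclass();
            this.type = superclass.getActualTypeArguments()[0];
        }

        Type getType() { return type; }
    }

    class TokenDemo {
        public static void main(String[] args) {
            // The trailing {} creates the anonymous subclass whose
            // signature captures List<String>.
            Type t = new TypeToken<List<String>>() {}.getType();
            System.out.println(t); // java.util.List<java.lang.String>
        }
    }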


It's a dangerous solution as well given that it relies on user land extensible code to be correct in order to ensure type safety. Generally this is true whenever you have coercion, but type tokens done wrong give an (invalid!) excuse to coerce.


People, including those who gave us Java's generics, have always argued that erasure is a problem. It wasn't something that was desired, but rather a necessary evil for easier backwards compatibility.


As I understand it, it turned out to be useful for more than backwards compatibility (though that was entirely the reason for it) -- it's why you could have Scala with its richer type system and a good interop story with Java, whereas the attempt to provide the same thing on .NET faltered in large part because of the .NET platform's reification of generics.


It was never about "backwards compatibility". It was always entirely possible to introduce generics and still have both the compiler and the runtime perfectly backwards compatible. C# did this going from version 1 to 2 of the CLR. You can still run any bytecode compiled for version 1 on the version 4 of the runtime to this day.

No, the issue was a much more esoteric one (and an invented and self-inflicted constraint by the designers themselves) of migration compatibility. Consider entity A using compiled libraries (no source available) from vendor R and from vendor S in the same product. Vendor R releases a new version of his library and uses the fancy new generic collections - and refuses to maintain a non-generic version. Vendor S releases a new version of his library but refuses to provide a version that use generic collections. Library S calls code in library R. All of these need to be in place to require migration compatibility.

> it's why you could have Scala with its richer type system and a good interop story with Java, whereas the attempt to provide the same thing on .NET faltered in large part because of the .NET platform's reification of generics.

Do you have a source for this? In my view it is the opposite: reification is the right solution, as it makes reified types first-class parts of the type system with no strange corner cases. It is type erasure that treats realized generic types as second-class citizens, with strange constraints bubbling up through the entire system.


> Do you have a source for this?

There's a lot of second-hand discussion of this readily locatable on Google, but I haven't saved and can't readily find the posts from people actually involved in implementing Scala on .NET that I saw years ago that directly pointed to this as a problem.


Surely this is sarcastic?


What is it that makes Java style generics valuable to you? Many other languages have, ah, generic generics.


There's a common misconception that erased generics are wrong. It's just that, a misconception: erased generics are just a bad fit for introspectable and runtime-heavy languages like Java. The GHC Haskell implementation also uses erased generics.


Can anyone read Angelika Langer's FAQ, with all the strange limitations and corner cases, and still call it good design?

I can understand it as a compromise for migration compatibility. I disagree with the goals of JSR 14 - and history has shown that the concerns that led to the migration compatibility constraint were a non-issue.

Taken as an isolated type system where you want orthogonality and the principle of least surprise (among others), it is not good design. And in that sense, yes, it is wrong.


> Can anyone read Angelika Langer's FAQ, with all the strange limitations and corner cases, and still call it good design?

Again, you're confusing erased generics and erased generics applied to java.


I'd understood this subthread to be spawned from someone saying they love "Java-style generics". Focus on Java therefore seems appropriate, although I think we're still waiting for G*P to let us know what aspects they were actually referring to...


Scala was designed by one of the original GJ team members.


Public, bare getters/setters! Hisss!!!!!

To elaborate, this is one of the most common annoyances I have with Java ecosystems. In an attempt to make everything Objectionable (ho-ho!), coders write classes with their internal data structure marked as private but then make getters and setters (which do nothing other than set or retrieve the value) which are public.

The result is that their internal data structure is actually public (since its advertised through the getters/setters), but with the added bonus of extra function calls and member offset resolutions (i.e., it makes execution slower while single-handedly destroying the main benefit that OOP offers—backwards compatibility through isolation of the internal data structure).

While it doesn't surprise me to see new-to-OOP coders making this kind of mistake, I find it troubling that many FAQs/tutorials/classes actually advocate for these patterns.

On the bright side, the web page looks nice :)


Their internal data structure is not public, since the details of how the value returned by the getter is calculated can be changed later, transparently to calling code. This is the point of encapsulation. Getters and setters can be made to access member variables in thread-safe ways, which you can't do with bare member fields. If you start with bare member fields and then suddenly realise that you need to encapsulate (for example, to obey some internal concurrency invariant) you have a major refactoring on your hands. If the clients of that code are not under your control, you're screwed.
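
As a sketch of that point (the class and its representation change are illustrative): the getter's contract stays fixed while the representation behind it changes.

    class Temperature {
        // v1 stored fahrenheit directly; v2 stores celsius instead.
        private double celsius;

        // Callers compiled against v1 keep working: same signature,
        // different computation behind it.
        double getFahrenheit() { return celsius * 9.0 / 5.0 + 32.0; }
    }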

The performance impact is total FUD - the JVM eats that stuff for breakfast.


Well, actually, since the getters/setters advertise the type of the internal data structure, it is public. True, you could change their return type (which would require just as much refactoring as changing a public internal data structure) or change the internal data structure and cast to the return type, but many times that will make the getter/setter meaningless.

This is all not to mention that since the setter is public, you don't have any protection of your internal data structure anyway since anyone can modify it.

It's true that getters/setters offer thread-safety; I'm happy to concede that.

However, the performance impact is not FUD (or at least, I did not mean it as such); ask any systems programmer and they will happily tell you that one of the most common optimizations you can make is removing function calls where they are unneeded. Another comment mentioned that the JVM can JIT this problem away through inlining; I honestly do not know if that is true. If it is, then great; that would be a significant benefit over the same kind of formulation in, say, C++.

Finally, though, I don't really understand why Java programmers cling so tightly to backwards compatibility, when their own language shows you what happens when you commit to never removing anything to avoid breakage (i.e., you build in a huge amount of cruft).

Keep in mind, none of my posts are meant to be flamey here; I was just giving my opinion of one of the patterns displayed in the FAQ that I find distressing.


> since the getters/setters advertise the type of the internal data structure

No, they don't - that's the whole point. You can change the type of your internal data structure however you want, but as long as the getters and setters accept and return the same type, your contract remains the same with your clients.

> since the setter is public, you don't have any protection of your internal data structure anyway since anyone can modify it

Of course you do. You can check invariants in your setter before applying any changes, and throw an exception if the passed value violates them. You can take a lock in your setter, to ensure that the passed information is applied in a thread-safe way. Again, this is the whole point of encapsulation.

> Another comment mentioned that the JVM can JIT this problem away through inlining; I honestly do not know if that is true. If it is, then great

The JVM can do this, and much much more, at runtime. Basing your optimisation advice on what a systems programmer might have told you 20 years ago is a really bad idea. All good JITs and compilers have inlined small functions for a very long time now. The JVM is particularly impressive in that it can do that for virtual calls too.


I agree on the performance impact. Where I think parent is right (but this is in no way unique to Java) is that it's much better to hide a class' structure as much as possible. In particular, it is important to avoid the traditional bean pattern, with an empty constructor and plenty of setters. It makes it impossible to reason about which fields are mandatory for the object, and makes it possible for the method foo(yourObject) to change the object behind your back to something which doesn't make sense. Passing all fields through the constructor and/or using a builder pattern is much preferable.
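
A minimal sketch of the difference (names illustrative):

    // Bean style: which fields are mandatory? Anyone can mutate later.
    //   Account a = new Account(); a.setOwner("x"); a.setBalance(10);
    //
    // Constructor style: mandatory fields are explicit and immutable.
    final class Account {
        private final String owner;
        private final long balance;

        Account(String owner, long balance) {
            this.owner = owner;
            this.balance = balance;
        }

        String getOwner() { return owner; }
        long getBalance() { return balance; }
    }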


I can't imagine the minor performance penalty of calling a trivial getter makes any noticeable difference in 99.99% of cases.

JIT can inline them anyhow.


I think this illustrates one reason why Go doesn't have generics.

Generics add a lot of complexity to a language. There are two main ways to implement them that I know of--the Java style, with type erasure, and the C++ style, where you essentially generate code for each instantiation--both have significant drawbacks. Either way, it complicates the syntax, complicates the type checker, complicates all the tooling around the language (IDE support, debugger, profiler, linter).

OTOH, if you don't have generics, you either just use concrete types or you essentially do type erasure yourself, explicitly (eg with interface{} in Go or void* in C). It costs you a few casts and maybe some code duplication.

In return, you get a simple language, where it's easier to write robust tooling.

I think erring on the side of simplicity and avoiding generics altogether is more than worth it.


Java generics aren't the best example of how to implement generics. C# and Rust, which both have reified generics, are better in this regard. Generics solve a practical problem identified decades ago; ad-hoc code generation or void* aren't the solution. I don't think the omission of generics has any bearing on how easy it is to write tooling.

You can always omit them, if you're prepared to deal with code duplication, but I think you're downplaying its significance: it is not just "maybe some" code duplication, nor is it a "a few casts". It's sometimes a deep, murky swamp you're forced to trudge in, and it's not pleasant.


To be fair, generics are only needed for as long as you insist on static typing. Languages with dynamic typing where types are checked at runtime do not need generics.

And generics are a complicated topic. I am all for static typing, especially for large code bases. But sometimes, when I venture too far out into generic land, I find myself wondering if it is worth it. A well-designed library that uses generics prudently can be bliss to use. But the implementation code can get near-unreadable.


> There are two main ways to implement them that I know of--the Java style, with type erasure, and the C++ style, where you essentially generate code for each instantiation--both have significant drawbacks.

There is also the C#/.NET style, which keeps the type-system aspect of Java while adding the run-time reification and efficiency of C++.

> OTOH, if you don't have generics, you either just use concrete types or you essentially do type erasure yourself, explicitly (eg with interface{} in Go or void* in C). It costs you a few casts and maybe some code duplication.

Exactly like Java pre-generics, I remember the JDK 1.0.2/1.1 days also.
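
Which looked like this (a sketch in pre-Java-5 style: raw collections plus a cast at every read, checked only at runtime):

    import java.util.ArrayList;
    import java.util.List;

    class PreGenerics {
        public static void main(String[] args) {
            List names = new ArrayList();     // raw type: erasure by hand
            names.add("Alice");
            names.add(Integer.valueOf(42));   // nothing stops this
            String s = (String) names.get(1); // ClassCastException at runtime
        }
    }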

> I think erring on the side of simplicity and avoiding generics altogether is more than worth it.

They don't need it on day one (neither Java nor C# had it in 1.0), but they'll eventually need to add something for generic programming.


I was dealing with this a couple of days ago: List<List<List<Map<String, Object>>>>

Sigh.



