> I suspect this would be disastrous for performance. I believe Haskell uses a similar approach though.
>
> Sometimes you want to store 20 million very small values in an array. Forcing use of bigint would preclude doing this efficiently (in the absence of very smart compiler optimisations that is).

I suspect that it would. I also suspect that I don't care. :p

We're talking about Java. Yes, you can write high-performance Java and I wouldn't want to take that option away. But look at the "default" Java application. You have giant graphs of object instances, all heap-allocated, with tons of pointer chasing. You have collections (not arrays) whose maximum size you don't have to guess.

If you're storing 20 million small values in an array, then go ahead and use byte[] or whatever. But that should live in some kind of high-performance package in the standard library. The "standard" Integer type should err toward correctness over performance, for the very same reason Java decided to be "C++ with garbage collection".

I'm also not literally talking about the BigInteger class as it's written today. I'm talking about a hypothetical Java in a parallel universe where the built-in Integer type is just arbitrarily large. It could start at a default size of 4 or 8 bytes, since that's a sane default. Maybe the compiler would have some analysis that sees a number could never actually be large enough to need 4 bytes and compiles it down to a short or byte. These things should be immutable anyway, so maybe the plus operator can detect overflow (or better, the JVM could use some kind of lower-level exception mechanism so the happy path stays optimized) and upsize the returned value. Remember, integer overflow doesn't actually happen very often, which is exactly why people don't typically complain about it or ever notice it (except me ;)), so it's okay if the JVM burps for a few microseconds on each overflow.
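
To make that concrete, here's a rough sketch, in today's Java, of what overflow-triggered promotion could look like. The AutoInt class is purely my own illustration (nothing like it exists in the JDK), but it shows the fast-path/slow-path split I have in mind: cheap 64-bit math until an addition overflows, then a one-time promotion to BigInteger.

    import java.math.BigInteger;

    // Purely illustrative: an integer that stays a primitive long until an
    // addition would overflow, then promotes itself to BigInteger.
    final class AutoInt {
        private final long small;     // valid while big == null
        private final BigInteger big; // non-null once promoted

        private AutoInt(long small, BigInteger big) {
            this.small = small;
            this.big = big;
        }

        static AutoInt of(long value) {
            return new AutoInt(value, null);
        }

        AutoInt plus(AutoInt other) {
            if (big == null && other.big == null) {
                try {
                    // Happy path: plain 64-bit addition, throws on overflow.
                    return of(Math.addExact(small, other.small));
                } catch (ArithmeticException overflow) {
                    // Rare path: fall through and promote.
                }
            }
            return new AutoInt(0, toBig().add(other.toBig()));
        }

        private BigInteger toBig() {
            return big != null ? big : BigInteger.valueOf(small);
        }

        @Override public String toString() {
            return big != null ? big.toString() : Long.toString(small);
        }
    }

Obviously catching an exception on the hot path like this isn't how a real JVM-level implementation would do it; that's exactly the "lower-level exception mechanism" caveat above.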

All this doesn't matter because it'll never, ever, actually happen. I just think they made the wrong call, and it has unfortunately led to lots of real-world bugs. It's hard to write correct, robust software in Java.




I suspect the performance penalty would be so severe it might undermine the appeal of Java. I don't have hard numbers on this, though; perhaps optimising compilers can tame it somewhat, as Haskell's presumably does.

A more realistic change might be to have Java default to throwing on overflow. The Math.addExact family of methods gives this behaviour in Java today. In C# it's much more ergonomic: you just use the checked keyword in your source, or else configure the compiler to default to checked arithmetic (i.e. throw-on-overflow). This almost certainly brings a performance penalty though.
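
For anyone who hasn't run into them, the difference looks like this in current Java (values purely illustrative):

    // Ordinary addition wraps silently:
    int wrapped = Integer.MAX_VALUE + 1;               // -2147483648, no error anywhere

    // Math.addExact throws instead of wrapping:
    int checked = Math.addExact(Integer.MAX_VALUE, 1); // ArithmeticException: integer overflow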


Yeah, I don't have any real intuition about the performance cost, either. But real-world Haskell programs do fine, as you said. And Haskell has fast-math libraries that, presumably, give you the fast-but-risky C arithmetic.

I also agree that a "more realistic" option is to just throw on overflow by default, the same way we throw on divide-by-zero.
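
The existing asymmetry is easy to show (again, illustrative values):

    int n = Integer.MAX_VALUE;
    int zero = 0;

    int overflowed = n + 1;    // silently wraps to Integer.MIN_VALUE
    int divided    = n / zero; // throws ArithmeticException: / by zero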

But that won't happen either. :/



