
As someone who comes from the .NET world, this is something that pissed me off about configuring Java applications like Elastic.

If I've got a box with 512GB of ram, it seems I'm supposed to spin up multiple instances to satisfy this, all because the JVM has a hissy fit if you go over ~30GB. This then means worrying about replication and ensuring we don't have both the primary and replicas sitting on the same box.

It seems insane that this is an actual issue in 2017.




Can't you just turn compressed object pointers off, or am I misunderstanding the issue?
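Yes, compressed oops can be switched off explicitly. A sketch of the relevant HotSpot flags (`app.jar` is a placeholder for your own application; the flags themselves are standard HotSpot options):

```shell
# Run with compressed oops explicitly disabled and a heap above 32GB:
java -XX:-UseCompressedOops -Xmx40g -jar app.jar

# Check whether the JVM would enable compressed oops for a given heap size:
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```

Disabling the optimisation avoids the hard cutoff, but every object reference then takes 8 bytes instead of 4, which is what the heap-sizing advice below is about.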


> If I've got a box with 512GB of ram, it seems I'm supposed to spin up multiple instances to satisfy this, all because the JVM has a hissy fit if you go over ~30GB.

What is the actual technical reason why the JVM cannot (easily?) address more than 32 GiB of RAM?


I don't believe that's the case (another commenter notes they run Solr processes up to 160GB). However, they may have run into the compressed-oops optimisation, or more precisely the end of it: because Java is a very pointer-heavy language, when the maximum heap size is under 32GB most 64-bit JVMs use a variant of tagged pointers, storing what are effectively 35-bit pointers in 32 bits (since objects are 8-byte aligned, the lower 3 bits are always zero, so they can be shifted in and out).

Except once you breach the 32GB limit, your 32-bit pointers grow to 64 bits, and depending on your application you might need to grow your maximum heap into the high 40s (GB) just to get back the effective room for objects you had below the limit: https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-j...
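The arithmetic behind that 32GB cutoff can be sketched in a few lines (a minimal illustration, assuming HotSpot's default 8-byte object alignment and hence a 3-bit shift):

```java
// Why compressed oops top out at 32 GiB: a 32-bit stored pointer,
// shifted left by 3 bits (8-byte alignment), yields a 35-bit address.
public class CompressedOops {
    public static void main(String[] args) {
        int storedBits = 32;     // compressed oop occupies 32 bits on the heap
        int alignShift = 3;      // objects 8-byte aligned => low 3 bits always zero
        long addressable = 1L << (storedBits + alignShift); // 2^35 bytes
        System.out.println(addressable / (1L << 30) + " GiB"); // prints "32 GiB"
    }
}
```

Past that boundary the JVM falls back to full 64-bit references, so the same object graph simply takes more memory.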


The ES documentation goes into more detail on this.

https://www.elastic.co/guide/en/elasticsearch/guide/current/...


Thanks.



