
> New versions of Java have made great progress in the memory area.

Elasticsearch uses huge amounts of memory (a 30GB heap is the norm). That's not a problem by itself, but Java garbage collections at this size can often take 10 seconds or more to complete on modern hardware. During this time the server is basically down; the only workarounds are to increase timeouts or rely on replica servers.
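
For reference, a heap that size is normally pinned with fixed -Xms/-Xmx values in Elasticsearch's config/jvm.options. The snippet below is only an illustrative sketch of that kind of configuration; the 30g figure just mirrors the number above and is not a recommendation:

    # config/jvm.options (illustrative sketch, not official guidance)
    # Fixed heap so the JVM never resizes it at runtime;
    # staying below ~32GB keeps compressed object pointers enabled.
    -Xms30g
    -Xmx30g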




> but Java garbage collections at this size can often take 10 seconds or more to complete on modern hardware.

For heaps of this size you can use Shenandoah GC and get pauses well below 100ms. Your post is a perfect example of the kind of 'historical reasons' (or historical FUD) the GP talks about.
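
For anyone who wants to verify that claim, turning Shenandoah on is just a JVM flag; a minimal sketch (the JDK version split follows the JEPs, and the logging flag is only there to make pause times visible):

    # JDK 15 and later (Shenandoah is a production GC there)
    -XX:+UseShenandoahGC
    # JDK 12-14 (and Red Hat builds of 11): still experimental, must be unlocked
    -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC
    # Unified GC logging to check the actual pause times
    -Xlog:gc*:file=gc.log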


> For heaps of this size you can use Shenandoah GC and get pauses well below 100ms. Your post is a perfect example of the kind of 'historical reasons' (or historical FUD) the GP talks about.

This GC is relatively new, and Elasticsearch doesn't support it (https://discuss.elastic.co/t/support-for-shenandoah-gc/16237...)

It's a potential solution, but hardly a proven one. The concern is not FUD; it's legitimate.
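
To make the "unsupported" part concrete: trying it today would mean hand-editing the GC flags in config/jvm.options, which, per the linked thread, is not a supported configuration. A hedged sketch of such an experiment (the exact default GC lines vary by Elasticsearch version, so treat this as illustrative only and run it on a test node):

    # config/jvm.options -- swapping the bundled collector is unsupported
    # comment out whatever default GC lines your version ships (e.g. G1)
    # -XX:+UseG1GC
    # opt into Shenandoah instead (requires a JDK build that includes it)
    -XX:+UseShenandoahGC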



