
Why do you think the Go GC is better than any of the JVM options? From what I've seen, while the Go GC is well tuned for low latency, by picking the right JVM GC parameters you can on balance get a better throughput/latency tradeoff. I'm just wondering if you have any reliable benchmarks or evidence to support what you're saying? I don't use either language for work, so I think you might have better information than I do.


I talk about this in the presentation I linked in another subthread (https://www.cockroachlabs.com/community/tech-talks/challenge...). The key to getting good performance out of any GC is to generate as little garbage as possible, and in our experience Go's better use of stack allocation and value types keeps many objects out of the garbage-collected heap. We've found that idiomatic Go programs tend to produce less garbage than comparable Java programs, and in the presentation I discuss some tricks we use to drive that even lower on critical paths. Admittedly, we're not JVM tuning wizards, so maybe there's more that could have been done on the JVM side.
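To make that concrete, here's a minimal sketch of the kind of trick I mean (illustrative only, not actual CockroachDB code; the names key, bufPool, and encode are made up): keep data in plain value types, and recycle hot-path buffers with sync.Pool instead of allocating a fresh one per call.

  // A minimal sketch of two common garbage-reduction tricks in Go: keep data
  // in plain value types, and recycle buffers on a hot path with sync.Pool
  // instead of allocating a fresh one per call.
  package main

  import (
      "fmt"
      "sync"
  )

  // key is a plain value type; passing it by value or embedding it in other
  // structs generally keeps it out of the garbage-collected heap.
  type key struct {
      table uint64
      id    uint64
  }

  // bufPool recycles byte slices so the hot path below does not produce a
  // new allocation (and hence new garbage) on every call.
  var bufPool = sync.Pool{
      New: func() any { return make([]byte, 0, 64) },
  }

  // encode writes a key into a pooled scratch buffer and returns a copy.
  func encode(k key) []byte {
      buf := bufPool.Get().([]byte)[:0]
      for _, v := range [...]uint64{k.table, k.id} {
          for shift := 56; shift >= 0; shift -= 8 {
              buf = append(buf, byte(v>>uint(shift)))
          }
      }
      out := make([]byte, len(buf))
      copy(out, buf)
      bufPool.Put(buf[:0]) // hand the backing array back for reuse
      return out
  }

  func main() {
      k := key{table: 1, id: 42} // value type: no per-key heap allocation
      fmt.Printf("% x\n", encode(k))
  }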


As I understand it, Java needs a complicated GC implementation because, by design, it produces a huge number of heap allocations -- lots of very short-lived little objects.

Much of Java's GC focus has been on correctly partitioning the heap so that long-lived objects can be collected less aggressively than short-lived ones. (An example of a challenging long-lived object is the entire set of classes used by a program, all of which need to be available to the runtime for reflection. For many bigger apps, the class hierarchy alone takes up many megabytes of RAM!)

Go can make use of the stack to a much larger degree (structs and arrays can be passed by value), and so it can get by with a much less advanced GC. As a result, the Go team's main focus has been on reducing pause times more than anything else.
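A tiny illustration of that difference (hypothetical code; add and addBoxed are invented names): a struct passed and returned by value never has to touch the garbage-collected heap, while returning a pointer to it forces a heap allocation, which is the only option an ordinary Java object gets. Build with "go build -gcflags=-m" to see the compiler's escape-analysis decisions.

  // Illustrative only: add works entirely with values, so nothing it touches
  // becomes work for the garbage collector; addBoxed returns a pointer, which
  // forces a heap allocation. Build with "go build -gcflags=-m" to see the
  // escape-analysis decisions.
  package main

  import "fmt"

  type point struct{ x, y float64 }

  // add receives and returns point by value; no heap allocation is needed.
  func add(a, b point) point {
      return point{x: a.x + b.x, y: a.y + b.y}
  }

  // addBoxed forces its result onto the heap. The noinline directive keeps
  // the compiler from optimizing the allocation away in this tiny example.
  //
  //go:noinline
  func addBoxed(a, b point) *point {
      p := point{x: a.x + b.x, y: a.y + b.y}
      return &p // escapes to the heap
  }

  func main() {
      p := add(point{1, 2}, point{3, 4})      // value result, no GC work
      q := addBoxed(point{1, 2}, point{3, 4}) // *point escapes to the heap
      fmt.Println(p.x+p.y, q.x+q.y)
  }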


We got around this by writing our own GC management: https://deeplearning4j.org/workspaces

We write our own GPU algorithms, our own Java Native Interface transpiler (e.g. we generate JNI bindings), as well as our own memory management.

We've found the JVM to be more than suitable. Granted - we wrote our own tooling and had reasons we can't move (those customers are a neat thing most people don't think about :D)
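For anyone wondering what workspaces actually buy you, the underlying pattern looks roughly like this (sketched in Go to match the rest of the thread, not the actual Deeplearning4j Java API; the workspace type is invented): preallocate one scratch region, carve per-iteration buffers out of it, and reset it afterwards so the steady-state loop produces no new garbage.

  // A rough sketch of the workspace pattern, written in Go rather than the
  // real Deeplearning4j Java API: allocate one scratch region up front, carve
  // per-iteration buffers out of it, and reset it so the steady-state loop
  // makes no garbage.
  package main

  import "fmt"

  // workspace is a hypothetical bump allocator over one preallocated buffer.
  type workspace struct {
      buf []float64
      off int
  }

  func newWorkspace(capacity int) *workspace {
      return &workspace{buf: make([]float64, capacity)}
  }

  // alloc hands out n float64s of scratch space from the preallocated buffer.
  func (w *workspace) alloc(n int) []float64 {
      if w.off+n > len(w.buf) {
          panic("workspace exhausted")
      }
      s := w.buf[w.off : w.off+n : w.off+n]
      w.off += n
      return s
  }

  // reset makes the whole buffer reusable for the next iteration.
  func (w *workspace) reset() { w.off = 0 }

  func main() {
      ws := newWorkspace(1 << 16) // allocated once, reused every iteration

      for iter := 0; iter < 3; iter++ {
          a := ws.alloc(1024)
          b := ws.alloc(1024)
          for i := range a {
              a[i] = float64(i)
              b[i] = 2 * a[i]
          }
          fmt.Println("iteration", iter, "b[3] =", b[3])
          ws.reset() // scratch space is recycled, not collected
      }
  }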

I understand why you guys went with Go, though. Congrats on pushing the limits of the runtime.



