Hacker News

Yes, that sentence is quite vague and doesn't give proper info about the topic; you'll just have to take my word for it (or not). I'm not going to write a blog post about this, sorry.

It means higher throughput with lower jitter, which in our case was what we were measuring at the time. To keep GC pressure down we were using Netty's PooledByteBufAllocator to recycle output streams, but memory usage was not a concern in the benchmark as long as GC didn't affect throughput. That said, I also believe the Go http stdlib's memory usage is better than Netty's; sorry for not being more helpful on this topic.
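Not our actual setup, but a minimal stdlib sketch of the recycling idea behind PooledByteBufAllocator: reuse buffers so steady-state traffic allocates almost nothing and the GC stays quiet. The class name and sizes here are made up for illustration.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Toy buffer pool (analogy only, not Netty itself): byte[] buffers are
// handed back after use instead of being left for the garbage collector.
public class BufferPool {
    private final ArrayBlockingQueue<byte[]> pool;
    private final int bufSize;

    public BufferPool(int capacity, int bufSize) {
        this.pool = new ArrayBlockingQueue<>(capacity);
        this.bufSize = bufSize;
    }

    public byte[] acquire() {
        byte[] b = pool.poll();          // reuse if one is available
        return (b != null) ? b : new byte[bufSize]; // allocate only on a miss
    }

    public void release(byte[] b) {
        pool.offer(b);                   // silently dropped if the pool is full
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(16, 4096);
        byte[] a = pool.acquire();
        pool.release(a);
        byte[] b = pool.acquire();
        System.out.println(a == b);      // prints "true": recycled, not reallocated
    }
}
```

The real allocator does much more (arenas, size classes, reference counting), but the GC-avoidance principle is the same.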




Thanks for going into more detail; I'm intrigued that you got lower jitter. My experience with high-performance JVM networking stacks is that they achieve incredible throughput but are quite memory-hungry (with a high baseline too), and that the p99 can be not so great due to GC. I'll have to give Netty another look. I'd be interested in the RSS of each process after warmup.
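One low-tech way to sample that RSS from inside the process (assuming Linux; the /proc layout is OS-specific) is to read the VmRSS line from /proc/self/status after the warmup phase:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssAfterWarmup {
    public static void main(String[] args) throws IOException {
        // Linux-only: /proc/self/status exposes VmRSS, the resident set size
        // of the current process, in kilobytes.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS")) {
                System.out.println(line);
            }
        }
    }
}
```

For a cross-process comparison you'd read the same field from /proc/&lt;pid&gt;/status of each server, or just use `ps -o rss= -p <pid>`.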


In my experience, tuning a JVM application for better throughput is far from a trivial job; the JVM ecosystem requires a much more experienced developer to reach the same level of performance that Go's simplicity delivers with far less effort. For instance, choosing an appropriate concurrent algorithm and measuring it properly, e.g. AtomicInteger vs. a per-thread counter, is not easy (especially in more complicated scenarios).
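A minimal sketch of that tradeoff, using the stdlib's LongAdder as the striped, per-thread-style counter (thread and iteration counts here are arbitrary): both give the same total, but under contention the shared AtomicInteger serializes on one cache line while LongAdder spreads increments across cells.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.LongAdder;

public class CounterComparison {
    // Returns {atomicTotal, adderTotal} after `threads` threads each
    // increment both counters `perThread` times.
    static long[] run(int threads, int perThread) throws InterruptedException {
        AtomicInteger atomic = new AtomicInteger(); // one hot cache line, CAS retries under contention
        LongAdder adder = new LongAdder();          // striped cells, cheap contended increments
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    atomic.incrementAndGet();
                    adder.increment();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return new long[] { atomic.get(), adder.sum() };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] totals = run(4, 100_000);
        System.out.println(totals[0] + " " + totals[1]); // prints "400000 400000"
    }
}
```

The catch, as above: LongAdder's sum() is only a snapshot, so picking the right counter still depends on measuring your actual read/write mix.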

We have high-throughput, critical microservices in both Java 8 and Go with excellent p99 latency, and in general memory is not a concern (as long as we fine-tune the GC and don't have any memory leaks). For the really critical and portable solutions we generally choose Java over Go (unit testing and library versatility are big factors in that decision).



