Hacker News

Tried gRPC in a production application where one microservice had to make over a million gRPC connections to another microservice. We experienced a ton of memory leaks and switched over to HTTP/JSON, which has been working well. Implementation was done in Scala with Akka. Curious to know if others have had similar experiences, or if there is another best practice with gRPC that we're missing.



Obviously a million open sockets use more memory than a stateless HTTP backend. However, having worked on the very largest RPC service deployment ever fielded, I feel it is safe to say that there is no reason to have a million open gRPC channels: a single channel multiplexes many concurrent requests over one connection and is meant to be shared across stubs.
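The reuse point can be sketched without gRPC at all. The snippet below (a minimal stand-in using Python's stdlib HTTP client and server, not the poster's Scala/Akka setup) pushes a hundred requests over one shared keep-alive connection, the same way one gRPC channel is shared by every stub in a process instead of opening a socket per call:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class EchoHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive: one socket serves many requests

    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One shared connection, reused for every request -- analogous to sharing
# a single gRPC channel across all stubs rather than one channel per call.
conn = HTTPConnection("127.0.0.1", server.server_port)
results = []
for i in range(100):
    conn.request("GET", f"/req-{i}")
    results.append(conn.getresponse().read().decode())

print(results[0], results[-1])  # /req-0 /req-99
conn.close()
server.shutdown()
```

The analogy is loose (gRPC channels also multiplex concurrent streams over HTTP/2), but the resource math is the same: requests scale, open connections don't have to.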


The problem that gRPC solves for you is versioning your messages between your services.

As your JSON payloads evolve, you're going to encounter pain keeping your services in sync, whether it comes in the form of writing parsing code that cracks open payloads and does conditional error checking based on the version (and expected fields), or whether it comes operationally in how you actually deploy updates to running services.
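That "crack open the payload and branch on version" pain looks something like this. A hedged sketch (the field names and version scheme are hypothetical, not from the thread) of the conditional parsing that accumulates when two services disagree about the JSON schema:

```python
import json

# Hypothetical payloads from two versions of an upstream service:
# v1 sent a flat "amount" in cents; v2 nested it with an explicit currency.
v1_payload = '{"version": 1, "amount": 1299}'
v2_payload = '{"version": 2, "amount": {"value": 1299, "currency": "USD"}}'

def parse_amount(raw: str) -> tuple[int, str]:
    """Branch on the payload version -- the conditional checking that
    piles up when JSON schemas drift between services."""
    msg = json.loads(raw)
    if msg.get("version", 1) >= 2:
        amt = msg["amount"]
        return amt["value"], amt["currency"]
    return msg["amount"], "USD"  # v1 implicitly meant USD

print(parse_amount(v1_payload))  # (1299, 'USD')
print(parse_amount(v2_payload))  # (1299, 'USD')
```

Every schema change adds another branch like this to every consumer, which is exactly the churn a shared IDL with explicit field evolution rules is meant to absorb.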


That's solved by Protocol Buffers, not gRPC.


...why did one service need to make a million connections to another service?




