I was worried some might not make it that far into the post and come away with that thought, but such is the nature of long-form prose :). This is more than idle WS connections. Every channel client connects to the server (via WS), then subscribes via our PubSub layer to the chat room topic. Once we connect all 2M clients, we use the real chat app to broadcast a message to all 2M users. Broadcasts go out to all WS clients in about 2s. 2M people in the same chat room might not be a real-world fit, but it stressed our PubSub layer to its max, which was one of the goals for these tests. We were actually shocked at how well broadcasts hold up at these levels.
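To make that concrete, here's a minimal sketch of what the broadcast side of a channel looks like (module, topic, and event names are illustrative, not the benchmark's actual code):

    defmodule ChatWeb.RoomChannel do
      use Phoenix.Channel

      # Every client joining this topic becomes a subscriber in the
      # PubSub layer -- 2M of them in the test above.
      def join("rooms:lobby", _params, socket), do: {:ok, socket}

      def handle_in("new_msg", %{"body" => body}, socket) do
        # broadcast!/3 fans the event out through PubSub to every
        # subscriber of the socket's topic.
        broadcast!(socket, "new_msg", %{body: body})
        {:noreply, socket}
      end
    end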
What this benchmark shows is how lightweight the per-websocket memory footprint is. 2M users is really impressive.
2M users per machine is great if they are mostly idle. This is the use case of WhatsApp, and their stats are [1]:
> Peaked at 2.8M connections per server
> 571k packets/sec
> >200k dist msgs/sec
Not every app is meant to have mostly idle users. Whether a real-time MMO FPS can be done in Phoenix is the question (let's limit each user's neighbourhood to the 10 closest players).
I'd be very interested in the other corners of the envelope for Phoenix: a requests/second benchmark over websockets, with the associated latency HdrHistogram, like [2].
> Not every app is meant to have mostly idle users though. Whether a real-time MMO FPS can be done in Phoenix is the question (let's limit each user's neighbourhood to the 10 closest players).
Yes, and the next phase for our tests will explore these kinds of use cases. To give you an idea, our PubSub layer can support 500k messages/sec on a MacBook. These tests were specifically around max clients and max subscribers, but more tests are needed for hard numbers around the use cases you laid out, which we'll be a great fit for. I think gaming will be a huge target for Phoenix.
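For a rough idea of how the PubSub layer can be exercised directly, here's a minimal sketch (assuming a PubSub server started under the name MyApp.PubSub and the phoenix_pubsub 2.x subscribe/broadcast API; the module name and message count are illustrative):

    defmodule PubSubBench do
      # Broadcasts n messages on a topic we're subscribed to and
      # times how long it takes to receive them all back.
      def run(n \\ 100_000) do
        :ok = Phoenix.PubSub.subscribe(MyApp.PubSub, "bench")

        {micros, :ok} =
          :timer.tc(fn ->
            for _ <- 1..n do
              Phoenix.PubSub.broadcast(MyApp.PubSub, "bench", :ping)
            end

            drain(n)
          end)

        IO.puts("#{n} messages in #{micros / 1000} ms")
      end

      defp drain(0), do: :ok

      defp drain(n) do
        receive do
          :ping -> drain(n - 1)
        end
      end
    end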
I am a very happy user of Phoenix myself, and also an MMO game programmer, and unfortunately I can say with confidence that you won't implement a real-time MMO FPS on top of Phoenix channels anytime soon.
Not because of Phoenix itself, but because all modern lag-compensation techniques rely on specific properties of UDP that are not implementable over TCP (and thus not over WebSockets or HTTP long poll, which are the currently supported Phoenix Channel transports). In particular, TCP's in-order delivery means one lost packet stalls everything behind it until retransmission (head-of-line blocking), while a fast-paced game would rather drop the stale update and send a fresh one.
Not to diminish the value of Phoenix: the framework is really pleasant to use, both on the dev side and the ops side. And you could use it today to implement the server side of most games, even some "massive multiplayer" ones, as long as latency is not your primary concern.
You're right that our default transports (WS/LongPoll) are not well suited for FPS requirements, but just to be clear, transports are adapter-based and you can implement your own today for your own esoteric protocol or serialization format. Outlined here:
http://hexdocs.pm/phoenix/Phoenix.Socket.Transport.html
The backend channel code remains the same and the transport takes care of the underlying communication details.
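For a sense of the shape of that contract, here's a stripped-down echo transport sketch (following the Phoenix.Socket.Transport behaviour documented at the link above; authentication, serialization, and supervision details are elided):

    defmodule MyApp.EchoTransport do
      @behaviour Phoenix.Socket.Transport

      # This sketch spawns no extra processes, so return a no-op child spec.
      def child_spec(_opts) do
        %{id: __MODULE__, start: {Task, :start_link, [fn -> :ok end]}, restart: :transient}
      end

      # Runs on connect, before the socket process starts; this is where
      # you'd inspect params and authenticate.
      def connect(state), do: {:ok, state}

      # Runs inside the process that owns the socket.
      def init(state), do: {:ok, state}

      # Echo incoming text frames straight back to the client.
      def handle_in({text, _opts}, state), do: {:reply, :ok, {:text, text}, state}

      def handle_info(_msg, state), do: {:ok, state}

      def terminate(_reason, _state), do: :ok
    end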
2.8M connections in a real-world scenario? Wow. I was wondering what kind of technology they used to get that, and I guess I shouldn't be surprised to see that it's Erlang.