I believe the local IP address matters too, since a machine may have multiple network interfaces with different IP addresses. At least that's something I've read being used in a demo of one million simultaneous network connections.
What are the factors that made this a hard problem? Isn't having 1,000,000 open TCP connections kind of similar to having a 1,000,000-row key-value database? As long as it fits in RAM it doesn't seem too terrible.
You could even do the TCP protocol in userspace and literally use a key-value database to store all state, I think.
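To make that "key-value database of connections" idea concrete, here's a minimal sketch (my own illustration, not from the thread) of a userspace connection table keyed by the classic 4-tuple. The `ConnState` fields and names are hypothetical, just enough to show that bare per-connection state is tiny:

```python
# Hypothetical sketch: a userspace "TCP" connection table where each
# connection is a small row keyed by the
# (local IP, local port, remote IP, remote port) 4-tuple.
from dataclasses import dataclass

@dataclass
class ConnState:
    state: str       # e.g. "ESTABLISHED"
    snd_nxt: int     # next sequence number to send
    rcv_nxt: int     # next sequence number we expect to receive

# The "key-value database": 4-tuple -> connection state
conns: dict[tuple[str, int, str, int], ConnState] = {}

def open_conn(local_ip: str, local_port: int,
              remote_ip: str, remote_port: int) -> None:
    key = (local_ip, local_port, remote_ip, remote_port)
    conns[key] = ConnState("ESTABLISHED", snd_nxt=0, rcv_nxt=0)

# A million of these small rows fits comfortably in RAM.
for i in range(3):
    open_conn("10.0.0.1", 40000 + i, "203.0.113.5", 443)
print(len(conns))  # → 3
```

Of course a real userspace TCP stack (à la mTCP or the TCB in the RFC 9293 sense) tracks much more than this, which is exactly the point the next reply makes.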
The problem with TCP is usually that you also have send buffers holding data the client hasn't acked yet, and receive buffers holding data the userspace process hasn't read yet. That state is a lot larger than just a tuple stored in a database.
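A back-of-envelope calculation shows how much the buffers dominate. The buffer sizes below are assumptions I picked for illustration (16 KiB each way is in the ballpark of common defaults), not numbers from the thread:

```python
# Back-of-envelope: per-connection memory for 1M TCP connections.
# All sizes here are assumed for illustration.
connections = 1_000_000
tuple_state_bytes = 64        # assumed: 4-tuple plus minimal metadata
send_buf_bytes = 16 * 1024    # assumed send buffer (unacked data)
recv_buf_bytes = 16 * 1024    # assumed receive buffer (unread data)

bare_state_gib = connections * tuple_state_bytes / 2**30
with_buffers_gib = (
    connections * (tuple_state_bytes + send_buf_bytes + recv_buf_bytes)
    / 2**30
)

print(f"bare state:   {bare_state_gib:.2f} GiB")    # → bare state:   0.06 GiB
print(f"with buffers: {with_buffers_gib:.2f} GiB")  # → with buffers: 30.58 GiB
```

So the key-value rows themselves are trivial, but with even modest per-socket buffers you're into tens of GiB, and that's before tuning buffer sizes up for throughput.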
Naive question: is FreeBSD actually a big factor in this capability (compared to Linux)? I'd imagine that Erlang (/BEAM) is the biggest contributing factor. But this is coming from someone who hasn't used FreeBSD.