We had a Node.js app where each inbound request went out over a small set of HTTP connections to the back-end CouchDB. When concurrency reached around 1,000, that meant 1,000 requests being funneled through 10 pipelined CouchDB connections, and the web requests started getting slower and slower. Think of a 6-lane road merging into a 1-lane road: congestion.
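For what it's worth, here's roughly what that bottleneck looks like in code. This is only a sketch: the host, port, and view path are made up, and Node's stock http.Agent queues requests rather than pipelining them, but the congestion has the same shape, a shared pool of 10 sockets that 1,000 concurrent requests all have to squeeze through.

    // Sketch only (illustrative names/ports, not the actual app): every
    // inbound web request is proxied to CouchDB through one shared agent
    // capped at 10 sockets, so at ~1,000 concurrent requests most of them
    // sit queued behind that small pool.
    var http = require('http');

    var couchAgent = new http.Agent({ maxSockets: 10 }); // the 10-connection pool

    http.createServer(function (req, res) {
      http.get({
        host: 'localhost',
        port: 5984,                               // CouchDB's default port
        path: '/mydb/_design/app/_view/by_date',  // some slow-ish view
        agent: couchAgent                         // every request shares this pool
      }, function (couchRes) {
        res.writeHead(couchRes.statusCode);
        couchRes.pipe(res);                       // stream the view result back out
      }).on('error', function () {
        res.writeHead(502);
        res.end();
      });
    }).listen(8000);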
So your assertion is that if you bumped up the CouchDB connection limit to 1,000 (believe it or not, Erlang--in addition to Node--is capable of this), then this slowdown would not have occurred?
My point is that the thing you are connecting to in the back-end has to scale at least as well as Node, if not better. I'm sure making independent, non-pipelined requests to CouchDB will help, but if CouchDB views are slow or it can't handle the concurrency, then the web requests to Node.js will start slowing down anyway, resulting in a pipeline stall.
Sure, if you're trying to shove 1,000 requests through 10 connections you're going to get contention. But you could open 10K pooled connections to each of several different backends and still stay under the ~64K ephemeral-port limit per backend; that should be fine up to a concurrency of ~100K. Beyond that, there's always SCTP or SPDY...
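Something like this is what I mean (hosts and numbers are purely illustrative): one keep-alive agent per backend, each with its own large pool, so the per-destination port ceiling applies per backend rather than in aggregate.

    // Sketch only: one large keep-alive agent per backend. The ~64K
    // ephemeral-port ceiling applies per (source, destination) pair, so
    // three backends give you roughly three times the headroom of one.
    var http = require('http');

    var backends = ['couch-1.internal', 'couch-2.internal', 'couch-3.internal'];

    var agents = {};
    backends.forEach(function (host) {
      agents[host] = new http.Agent({
        keepAlive: true,     // reuse sockets instead of reconnecting per request
        maxSockets: 10000    // per-backend pool size; tune to what the backend can take
      });
    });

    function queryBackend(host, path, onResponse) {
      var req = http.get({ host: host, port: 5984, path: path, agent: agents[host] }, onResponse);
      req.on('error', function (err) {
        console.error('backend error:', host, err.message);
      });
      return req;
    }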
The author doesn't seem to understand that more connections to a database do not automatically mean higher throughput.