The point I'm trying to make is that you can't share resources easily between all of those processes, even though they're on a single machine, so you usually open a lot more database connections than you would need with a single shared connection pool. So, people often end up dealing with PgBouncer and other inconveniences much earlier than they would otherwise need to.
Trying to share a much smaller number of connections between a larger number of threads with fine-grained checkin/checkout is a nightmare, in my experience. You end up with all sorts of difficult resource and lock contention issues. As soon as you need a simple transaction, you're stuck holding the connection for its duration anyway.
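To make that concrete, here's a minimal Go sketch (the driver, DSN, and accounts table are placeholders): with database/sql, a *sql.Tx pins one pooled connection from Begin until Commit or Rollback, so fine-grained checkout stops helping the moment a transaction starts.

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // placeholder: any database/sql Postgres driver
    )

    func transfer(db *sql.DB) error {
        tx, err := db.Begin() // checks one connection out of the pool
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op after a successful Commit

        // Every statement on tx runs on that same held connection.
        if _, err := tx.Exec(`UPDATE accounts SET balance = balance - 10 WHERE id = $1`, 1); err != nil {
            return err
        }
        if _, err := tx.Exec(`UPDATE accounts SET balance = balance + 10 WHERE id = $1`, 2); err != nil {
            return err
        }
        return tx.Commit() // only here does the connection go back to the pool
    }

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
        if err := transfer(db); err != nil {
            log.Fatal(err)
        }
    }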
In my experience, it's all handled transparently behind the scenes... there is no headache. In Rust, checking a connection out is a single function call on the pool, which is easily shared among all threads, and it will automatically get checked back into the pool when the connection goes out of scope... you don't have to do a single thing to check it back in. In Go, the connection pooling is all handled transparently behind the scenes, such that you don't even need to know it's happening. I actually had to do some googling when I started using Go, as I was concerned that no one was recommending the use of a connection pool... creating and tearing down a connection per request is just wasteful when connection pools are so nice to use. It just turns out that Go embraces connection pools so deeply that I don't know of an easy way to avoid pooling your database connections.
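For the Go side, here's a minimal sketch of what I mean (the DSN and table are made up): the *sql.DB you get from sql.Open is the pool, and every query transparently borrows a connection and gives it back.

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // placeholder driver
    )

    func main() {
        // sql.Open returns a *sql.DB, which IS the connection pool. Share it
        // freely across goroutines; there is no explicit checkin/checkout.
        db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // This borrows a pooled connection, runs the query, and returns the
        // connection automatically.
        var n int
        if err := db.QueryRow(`SELECT count(*) FROM users`).Scan(&n); err != nil {
            log.Fatal(err)
        }
        log.Printf("%d users", n)
    }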
If your application gets bottlenecked by the number of connections in your pool, it's easy enough to increase the number, but the more independent pools you have, the more overprovisioned connections (connected but not being used) you will have scattered throughout those pools. It's also usually possible to run a connection pool without an upper limit, if you trust your database to handle a large number of connections gracefully.
Rust and Go's connection poolers will also automatically scale down the connection pool when connections are idle for a given period of time, which is nice.
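Concretely, these are just setters on Go's stdlib pool (the setter names are real stdlib methods; the numbers are arbitrary):

    package dbutil

    import (
        "database/sql"
        "time"
    )

    // configurePool shows the stdlib knobs for the behaviour described above.
    func configurePool(db *sql.DB) {
        db.SetMaxOpenConns(50)                 // hard cap; 0 means no upper limit
        db.SetMaxIdleConns(10)                 // idle connections kept warm in the pool
        db.SetConnMaxIdleTime(5 * time.Minute) // close connections idle this long (Go 1.15+)
        db.SetConnMaxLifetime(time.Hour)       // recycle connections after this age
    }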
I can't think of any nightmares or headaches that I've encountered with those connection poolers. It all "Just Works"... except for PgBouncer, the ultimate connection pooler. PgBouncer doesn't work with prepared statements or transactions unless you run it in transaction mode, and even then a prepared statement only survives within a single transaction, so you have to run every query in a transaction to use prepared statements.
I'm definitely not suggesting that you try to serve 1000 concurrent requests with 10 connections or something silly like that, but that is what often happens with large Ruby deployments: they would attempt to establish more connections than Postgres can handle, so you route them through PgBouncer, which funnels them down to a small fraction of that number of actual connections.
But, this is pretty off-topic at this point. I didn't mean to point the conversation in this direction.
> Rust, checking a connection out is a single function call on the pool
Still a pain in the arse when you are making function calls inside a transaction and dealing with the connection reference lifetime.
> Go, the connection pooling is all handled transparently behind the scenes
This actually has a few nasty properties. Firstly, two simple queries in seemingly sequential Go code can actually execute in parallel. Secondly, it's possible for Go's connection pooling to cause some very nasty failures. Rather than timing out at the first of a bunch of normally fast but now unusually slow queries (because of a lock, etc.), Go will keep spawning new connections and parking queries that are running but have not yet timed out, until everything is on fire. Max connections is definitely a good idea.
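The usual defence, sketched in Go (the numbers and query are made up): cap the pool and time-box queries, so a slow statement fails fast instead of parking a goroutine while fresh connections stack up.

    package main

    import (
        "context"
        "database/sql"
        "log"
        "time"

        _ "github.com/lib/pq" // placeholder driver
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Without a cap, each goroutine blocked on a slow query can trigger
        // a brand-new connection.
        db.SetMaxOpenConns(20)

        // Time-box the query so it errors out instead of parking forever.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        rows, err := db.QueryContext(ctx, `SELECT id FROM jobs WHERE locked = false`)
        if err != nil {
            log.Fatal(err) // a hung query surfaces as "context deadline exceeded"
        }
        defer rows.Close()
    }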
> It's easy enough to increase the number
Only if you can restart your DB. Which, if you're trying to scale up under load, is the last thing you want to do.
> automatically scale down the connection pool when connections are idle for a given period of time
PgBouncer has supported this since release.
> PgBouncer doesn't work with prepared statements
Prepared statements themselves work fine. The problem is many ORMs do fragile, non-deterministic things with caching named prepared statements to improve throughput in simple scenarios.
Using named prepared statements can also cause other issues, because it signals to PG that it's OK to use a generic query plan in some cases. It might not be!
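For example (a Go sketch; whether the driver backs this with a named server-side statement is driver-specific, though lib/pq does):

    package dbutil

    import "database/sql"

    // A long-lived named prepared statement, the kind ORMs cache behind your
    // back. After several executions Postgres may switch it to a generic plan,
    // which can be far worse than a custom plan when parameter values are skewed.
    func countEvents(db *sql.DB, tenantIDs []int64) (int64, error) {
        stmt, err := db.Prepare(`SELECT count(*) FROM events WHERE tenant_id = $1`)
        if err != nil {
            return 0, err
        }
        defer stmt.Close()

        var total int64
        for _, id := range tenantIDs {
            var n int64
            if err := stmt.QueryRow(id).Scan(&n); err != nil {
                return 0, err
            }
            total += n
        }
        return total, nil
    }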
I'm talking about client-side connection count maximums, not server-side. It's just a setting in connection pools like Rust and Go have.
> Prepared statements themselves work fine. The problem is many ORMs do fragile, non-deterministic things with caching named prepared statements to improve throughput in simple scenarios.
Postgres specifically supports unnamed prepared statements as a feature, and PgBouncer's model cannot do anything to help those. One connection creates this statement, and another tries to execute it. In fact, PgBouncer's docs specifically say that they do not support prepared statements, and not to use them, so your claim is contrary to the docs.
I really don't want to even bother with your Rust and Go comments, since they are just nonsense. Lifetimes are not a problem with function calls involving transactions in Rust. At all. I work with Ruby, Rust, and PostgreSQL professionally at my current full-time job. I've written a lot of queries, and many of those involved transactions.
Go will not execute two seemingly sequential queries in parallel. It will execute them sequentially. When you run a query, it's a synchronous process, unless you specifically launch that query in its own separate goroutine... in which case, it is absolutely not a surprise that it runs in parallel, because you did that. Your slippery slope argument is completely nullified by this property. If you don't set a maximum database connection limit and your web server receives another request that requires a connection, it's no surprise that it tries to open another database connection to help service that web request. From the beginning, it appeared you were making the argument that connection pools should not be used, and therefore each request just handles its own connections... which would also be unbounded just like this. Fortunately, Go and Rust database pools provide an option to limit the upper bound. I worked with Go and MySQL professionally at my previous full-time job.
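To spell it out with a trivial Go sketch (the table is made up):

    package dbutil

    import "database/sql"

    func sequentialThenConcurrent(db *sql.DB) error {
        // These block and run strictly one after the other.
        if _, err := db.Exec(`UPDATE counters SET n = n + 1 WHERE id = 1`); err != nil {
            return err
        }
        if _, err := db.Exec(`UPDATE counters SET n = n + 1 WHERE id = 2`); err != nil {
            return err
        }

        // Only an explicit goroutine makes a query concurrent, and then it is
        // no surprise that it runs on its own pooled connection.
        done := make(chan error, 1)
        go func() {
            _, err := db.Exec(`UPDATE counters SET n = n + 1 WHERE id = 3`)
            done <- err
        }()
        return <-done
    }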
Then... you're defending PgBouncer?! I thought you hated connection pools? PgBouncer is great at what it does, but what it does is a painful headache to deal with, because it breaks half the features any normal Postgres client expects to work seamlessly. You can't just prop it up in front of a database and expect things to "just work".
You're presenting information like you have all this experience, but my experience clearly indicates that what you're saying is just plainly wrong. I don't see any benefit to either of us in continuing this discussion further. I'm out.
No, you're just being unnecessarily rude. I just said I thought that fine-grained checkin-checkout like Go and Rust encourage is somewhat overrated. There's no need to be hostile.
> Postgres specifically supports unnamed prepared statements as a feature, and PgBouncer's model cannot do anything to help those.
PQexecParams (a single-phase prepared statement) works fine over PgBouncer. That's what I'm talking about. This is different from PREPARE & EXECUTE. You can't actually do a single-phase prepared statement from psql, AFAIK, only via client libraries. https://www.postgresql.org/docs/11/libpq-exec.html
I agree, the PgBouncer documentation could be clearer. I think they just don't want people trying it and then filing bug reports. PREPARE and EXECUTE can actually work even over statement pooling, but you need to make sure your connection-setup statements prepare all the necessary statements on every connection.
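In Go terms, the contrast looks like this (a sketch; the exact wire behaviour is driver-specific, and the users table is made up):

    package dbutil

    import "database/sql"

    func lookups(db *sql.DB) (string, error) {
        // One-shot parameterized query: parse/bind/execute in a single round
        // trip against the unnamed statement -- the PQexecParams shape, which
        // transaction-pooled PgBouncer copes with.
        var name string
        if err := db.QueryRow(`SELECT name FROM users WHERE id = $1`, 42).Scan(&name); err != nil {
            return "", err
        }

        // Explicit two-phase PREPARE / EXECUTE creates named server-side
        // state. Wrapped in a transaction here so both statements are
        // guaranteed to hit the same connection -- exactly the state
        // PgBouncer can hand to the wrong backend when it isn't pinned.
        tx, err := db.Begin()
        if err != nil {
            return "", err
        }
        defer tx.Rollback() // no-op after a successful Commit
        if _, err := tx.Exec(`PREPARE get_user (int) AS SELECT name FROM users WHERE id = $1`); err != nil {
            return "", err
        }
        if err := tx.QueryRow(`EXECUTE get_user(42)`).Scan(&name); err != nil {
            return "", err
        }
        return name, tx.Commit()
    }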
> it's a synchronous process, unless you specifically launch that query in its own separate goroutine
You're right, I'm getting two things mixed up here. It is a while since I dealt with this problem.
1) The auto-checkout can mean you end up executing related statements on different connections. IMHO this is highly confusing to have as default behaviour, and I prefer the Rust approach (see the pinning sketch after this list).
2) By default, Go will just keep piling up goroutines blocked on slow queries and opening more DB connections until it kills the DB server.
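The workaround for 1) is to pin explicitly (a Go sketch; the statement contents are made up):

    package dbutil

    import (
        "context"
        "database/sql"
    )

    // Pin one pooled connection so related statements share a session,
    // instead of relying on auto-checkout to keep them together (it won't).
    func onOneSession(ctx context.Context, db *sql.DB) error {
        conn, err := db.Conn(ctx) // held until Close
        if err != nil {
            return err
        }
        defer conn.Close() // returns the connection to the pool

        // Both statements below are guaranteed to run on the same session.
        if _, err := conn.ExecContext(ctx, `SET application_name = 'report-worker'`); err != nil {
            return err
        }
        var n int
        return conn.QueryRowContext(ctx, `SELECT count(*) FROM reports`).Scan(&n)
    }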
> The point I'm trying to make is that you can't share resources easily between all of those processes, even though they're on a single machine, so you usually open a lot more database connections than you would need with a single shared connection pool. So, people often end up dealing with PgBouncer and other inconveniences much earlier than they would otherwise need to.
But you're right, it's not necessarily forking.