Coro is just a cooperative asynchronous framework on par with, say, POE. You do not get to use multiple CPUs. You do not get to use a database. You do not get to use a normal multi-threaded webserver.
I would be suspicious of anyone seriously trying to create a high volume Perl website using Coro.
You are right about multiple CPUs. For that, fork and load-balance. Instead of each forked process handling one connection, it handles a few thousand.
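Something along these lines (a rough, untested sketch; worker() here is just a stand-in for your per-process accept loop):

    use strict;
    use warnings;

    my $workers = 4;    # roughly one per CPU core

    for (1 .. $workers) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # child: handle a few thousand connections cooperatively
            # inside this one process (e.g. a Coro/AnyEvent accept loop)
            worker();
            exit 0;
        }
    }

    # parent: just reap children as they exit
    1 while waitpid(-1, 0) > 0;

    # stand-in for the real per-process event/coro loop
    sub worker { sleep 60 }

The load balancing across children can be as dumb as a shared listening socket that each child accepts on, or an external balancer in front.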
If your app is CPU intensive, I have to wonder why you'd use Perl for that. You get a 2x speedup by using multiple cores, but a 50x speedup by switching to Haskell or Common Lisp for the critical section. (You could also use C or C++ or Java, but that's just being crazy.)
As for databases, most real databases have non-blocking interfaces; this means database queries won't stall your threads. (Postgres and BDB are known to work well. MySQL requires hacks.)
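For example, with Postgres you can issue the query through DBD::Pg's asynchronous support and yield while it runs. An untested sketch; a real app would watch $dbh->{pg_socket} with an event watcher rather than polling like this:

    use strict;
    use warnings;
    use DBI;
    use DBD::Pg ':async';
    use Coro;

    my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '', { RaiseError => 1 });

    my $query = async {
        my $sth = $dbh->prepare('SELECT now(), pg_sleep(1)',
                                { pg_async => PG_ASYNC });
        $sth->execute;

        # yield to other coros until the backend has an answer;
        # polling with cede is crude -- watching $dbh->{pg_socket}
        # from the event loop is the grown-up version
        cede until $dbh->pg_ready;
        $dbh->pg_result;

        my @row = $sth->fetchrow_array;
        print "got: @row\n";
    };

    # other coros keep serving requests while the query is in flight
    $query->join;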
And yes, you don't get to use a normal multi-threaded webserver. I'm not sure how that would work anyway.
Yes, you can use some databases. But I don't think you can use the classic DBI interface.
More problematic, though, is that a single poorly coded function call on a seldom-hit page can seriously impact responsiveness for a large fraction of your website. With real threads or processes you can use resource limits to protect against a function with a memory leak. (On Unix systems you can set that up from Perl with BSD::Resource.) If you try that with Coro you risk taking down a large fraction of your website every time a bad function runs.
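Something like this at worker startup (a sketch; the right limit and the right RLIMIT_* constant depend on your platform -- RLIMIT_AS on Linux, RLIMIT_DATA or RLIMIT_VMEM elsewhere):

    use strict;
    use warnings;
    use BSD::Resource;

    # Cap this process at roughly 512 MB of address space.  A runaway
    # leak then kills only this worker; the rest of the site keeps going.
    my $limit = 512 * 1024 * 1024;
    setrlimit(RLIMIT_AS, $limit, $limit)
        or die "setrlimit failed: $!";

There is no per-coro equivalent: the limit applies to the whole process, so one leaky coro takes all of its siblings down with it.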
If you have a small team and absolutely trust their work, this works. But as you scale up in complexity, mistakes will happen. You will have problems. And your choice of cooperative multitasking will look worse and worse.
Cooperative multitasking is not a new idea. In fact it is usually the first thing people try. Windows did it through 3.1 (and it continued under the hood in the 9x line). Apple did it through OS 9. Ruby did it through the 1.8 line. Yet in every case people ran into the same problems over and over again and concluded that they were better off with preemptive multitasking. Even badly done preemptive multitasking such as Windows 95 or Ruby 1.9.
Again. I would be suspicious of anyone trying to create a high volume website these days in Perl relying heavily on cooperative multitasking. I'm not going to say that they won't succeed. But they are setting themselves up for problems down the road.
Furthermore, it isn't as if there is a real problem that needs solving here. A single decent server, properly set up, can serve enough dynamic content to put you in the top few thousand sites. Buy more machines and you can scale as far as you want.
Also available from CPAN is EV::Loop::Async, which allows events to be handled even when Perl is busy. (It uses a POSIX thread for this.)
(The key to success with Coro is using the right libraries. You write what looks like blocking code, but the libraries make it non-blocking.)
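For instance, Coro::LWP from the Coro distribution patches LWP's socket layer so that ordinary-looking code yields whenever it is waiting on the network. An untested sketch:

    use strict;
    use warnings;
    use Coro;
    use Coro::LWP;              # load before the LWP modules it patches
    use LWP::Simple qw(get);

    my @urls = ('http://example.com/', 'http://example.org/');

    # each fetch looks blocking, but only its own coro waits --
    # the others keep running while the socket is idle
    my @coros = map {
        my $url = $_;
        async {
            my $page = get($url);
            printf "%s: %d bytes\n", $url, length($page // '');
        };
    } @urls;

    $_->join for @coros;

Coro::Socket, Coro::Handle and friends cover the lower-level cases the same way.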
Anyway, the end result is that you use a lot less memory to handle a lot more clients. This may not be an issue if every request is CPU-bound, but you'd be surprised how often your process is blocking on IO, and how many resources a process-per-connection model consumes.
That looks cool, but that patch does not appear to be in the CPAN version. I'm not sure how much I'd trust it, particularly if you'd loaded some badly behaved XS code or run a disastrous regular expression. For instance, I ran into one last week which took down Perl 5.8. Losing one mod_perl process occasionally was only an annoyance. Losing a good fraction of my site capacity would be much worse.
EV::Loop::Async lets you handle events, but won't solve the problem of, "I loaded an external library, and it didn't return control for 10 seconds."
Neither addresses the problem of protecting yourself against badly behaved functions that have a fast memory leak.
BTW you're wrong to assume that I'd be surprised at how often my processes are blocked on IO or how many resources they take. I am painfully aware of both factors. However, it is easy to plan for that. I've personally seen 2 servers pump out a million dynamic pages/hour with real traffic on a website with only obvious optimizations. I know for a fact that the application code had memory leaks, bugs, and the occasional segfault. I'm happy to buy 4x the RAM to go with an architecture that makes those non-issues for the overall function of the website.
Maintaining a hacked-up-piece-of-shit is a different problem from starting from scratch and Doing Things Right. In the situation that you're in, you probably made the right decision -- throw RAM at the problem so you never have to think about it again.
When writing an app from scratch, though, you have some control over the quality of the code, and can aim to serve more users with less hardware. System administration is hard, and the fewer systems to administer, the better.
Assuming that your codebase will continue to be a work of elegance is challenging. Particularly if you're loading CPAN modules that are written and maintained by other people to a different standard. Of course if you reject those CPAN modules, then what's the point of writing Perl?
But, you say, we'll just limit ourselves to high quality CPAN modules? The real standard ones that everyone uses? Surely nothing will go wrong?
Fine. Last week I ran into a segfault coming from the Template Toolkit triggering a regular expression bug in Perl. (I am waiting on the bug report until I get official permission to submit the patch with the bug. I'm careful about copyright these days...) That's about as standard as you can get. Assume that an extremely popular pure Perl text manipulation module on top of Perl works as documented and enjoy the core dump.
The moral is that unless you are personally writing the whole software stack you're using, you never know what will trigger a bug somewhere. And no sane web company is going to rewrite their whole software stack. (For the record the most painful bugs in the application I described previously were at the C level, and none of that code was touched by anyone in that organization.) However there are architectures that let you mitigate classes of problems before they come up. What is that protection worth to you? Given how much traffic you can get per server, what lengths do you need to go to to optimize?