> Consider the challenge of running PHP programs on servers. We have two primary options:
> 1. Wrap the PHP interpreter with a layer that instruments each HTTP call
> 2. Use the existing php-cgi program and simply compile it to Wasm
> Option 2 is not only faster, but it also enables any web application on Wasmer more efficiently.
I’m confused. This seems to suggest that php-cgi, which has to initialise the PHP environment for every request, would be faster than the likes of php-fpm, which I understand to have significantly less overhead per request (though I’ve never benchmarked it).
I have PHP 5.6 installed on my VPS for one old site, and it takes around 27ms to start¹ (compared to under 30μs for just plain `echo`, as a closer indicator of actual process spawn overhead). PHP 8.2 might be faster, but it’s still going to be much slower than `echo`.
By simply compiling php-cgi to WASM, it will surely be doing all that initialisation for every request. Because CGI starts everything from scratch for each request, it’s inherently less efficient. In theory you could coordinate a time to snapshot the process/VM/whatever, forking from that point, but that’s not CGI any more.
All up, what they’re claiming is so completely contrary to what I would expect (and without any explanation or justification whatsoever), and kinda follows the “dust off something old to laugh at it again” trope, that I’m honestly having to check that it’s not the first of April any more (the article is dated the 6th).
So as I say, I’m confused. Option 2 seems very clearly slower and much less efficient, by the very nature of CGI. No one targets CGI (it’s been basically dead for… I dunno, close to twenty years?), because CGI is considerably worse than the alternatives. Can someone enlighten me? Have I missed or misunderstood something?
—⁂—
¹ Measured by running this in zsh and reading the “total” figure (across sixteen runs, I got between 2.671 and 3.032 seconds):
`time ( for i in {0..100}; do php56 <<<'<?="."?>'; done )`
The comparative echo test uses `echo -n .` and takes one thousandth as long.
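If you’d rather script the comparison than eyeball zsh’s `time` output, a rough equivalent can be sketched in Rust (the iteration count and binary names here are placeholders; substitute whichever PHP build you’re testing):

```rust
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};

// Time `runs` sequential spawns of `program`, discarding its output so we
// measure start-up cost rather than terminal I/O.
fn time_spawns(program: &str, args: &[&str], runs: u32) -> Duration {
    let start = Instant::now();
    for _ in 0..runs {
        Command::new(program)
            .args(args)
            .stdout(Stdio::null())
            .status()
            .expect("failed to spawn");
    }
    start.elapsed()
}

fn main() {
    let runs = 100;
    // Plain `echo` as the baseline; swap in e.g. `php -r 'echo ".";'`
    // for whichever PHP binary you're comparing against.
    let baseline = time_spawns("echo", &["-n", "."], runs);
    println!("echo: {:?} total, {:?} per spawn", baseline, baseline / runs);
}
```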
To me this seems a little closer to the architecture of AWS Lambda than OG CGI, though that is not a perfect analogy either since this is in a WASM runtime within their server process, rather than a separate process. But the programming interface is a handler function you provide with an interface that looks like this in Rust:
`fn handler(request: Request) -> Response `
My understanding is the main function is called only once and registers that handler. So `main` is where you’d initialize the majority of the environment, and no, that is not truly CGI; definitely no process is being created for each request, but it may be that this is more like FastCGI, where you have a pool of single-threaded runtimes, all set up that way, that can handle requests.
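As a rough sketch of that shape (the `Request`/`Response` types, `init`, and the request loop here are hypothetical stand-ins, not Wasmer’s actual API):

```rust
// Hypothetical stand-ins for the runtime's types; the real API differs.
struct Request { path: String }
struct Response { status: u16, body: String }

// State built once per instance, in main, not per request.
struct AppState { greeting: String }

fn init() -> AppState {
    // Imagine expensive setup here: loading config, connection pools, etc.
    AppState { greeting: "hello".to_string() }
}

fn handler(state: &AppState, request: Request) -> Response {
    Response {
        status: 200,
        body: format!("{} {}", state.greeting, request.path),
    }
}

fn main() {
    // main runs once per instance: initialise, then serve many requests.
    let state = init();
    // A real runtime would register `handler` and drive the loop itself;
    // this just simulates two requests hitting the same warm instance.
    for path in ["/a", "/b"] {
        let resp = handler(&state, Request { path: path.to_string() });
        println!("{} {}", resp.status, resp.body);
    }
}
```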
This still seems inefficient compared to a threaded or event-polling process that can handle multiple requests concurrently without having to marshal data back and forth, but I’d think it can get closer to that than FastCGI or Lambda do.
You’re misunderstanding it. This is just recompiling CGI-speaking binaries to WASM, meaning that it’s effectively spawning a new process for each request. Being WASM it’s not a new native process, just a new instance of the WASM module, but in practice process spawning is not the slow part: the slow part is everything that happens inside main, which is run for every new request.
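To make that concrete: under CGI the whole program, initialisation included, runs once per request, with the server passing request metadata through environment variables and reading the response from stdout. A minimal CGI-style responder (a sketch of the protocol, not what Wasmer ships) looks like:

```rust
use std::env;

// Build the CGI response: headers, a blank line, then the body.
fn respond(method: &str, path: &str) -> String {
    format!("Content-Type: text/plain\r\n\r\n{} {}\n", method, path)
}

fn main() {
    // The web server sets these per request and re-runs this entire
    // program each time, so anything done in main (think PHP's engine
    // start-up) is paid on every single request.
    let method = env::var("REQUEST_METHOD").unwrap_or_else(|_| "GET".into());
    let path = env::var("PATH_INFO").unwrap_or_else(|_| "/".into());
    print!("{}", respond(&method, &path));
}
```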