> Because on most Unix systems Apache httpd runs under the root user, any threat actor who has planted a malicious CGI script on an Apache server can use CVE-2019-0211 to take over the underlying system running the Apache httpd process, and inherently control the entire machine.
Maybe I am conflating things or mixing something up, but I was under the impression that it only used root privileges to obtain access to restricted ports and immediately afterwards lowered its privilege level to something sane/non-risky.
And if that is how it operates, then this exploit should not be effective.
"this is a bit of embellishment on the part of the exploit author"
Apache is saying the same thing.
"In Apache HTTP Server 2.4 releases 2.4.17 to 2.4.38, ...code executing in less-privileged child processes or threads (including scripts executed by an in-process scripting interpreter) could execute arbitrary code with the privileges of the parent process (usually root)"
This exploit lets an unprivileged child worker (for example running mod_php on regular shared hosting clients' .php files) hack back into the root parent process and thus escalate from unprivileged to root.
I still don't get it. CGI scripts would run with reduced privileges. The CVE description says "code executing in less-privileged child processes or threads (including scripts executed by an in-process scripting interpreter) could execute arbitrary code with the privileges of the parent process (usually root) by manipulating the scoreboard"
It’s basically the list the master process keeps of the worker processes it has spun up. The workers can report back some stats that the master tracks about each. Apparently something in that report back process could be exploited to run arbitrary code.
While it might not be relevant to this particular issue (though it might, I didn’t dig too deeply), there’s a long standing history of running everything inline with Apache. I mean look at mod_php... sure we’ve all (hopefully) learned our lesson there, but it’s a hell of a hole to dig back out of as long as things keep working.
It’s one of the reasons I’m always suspicious any time I run into a company that runs Apache rather than nginx or one of the many other alternatives. There is absolutely nothing wrong with running Apache, but it always makes me wonder exactly why they are... is it because they’ve got this one app that won’t run on anything else because of crap like this? More often than not...
I think it's been very common for 15 years to use Apache's "suexec" to run each CGI process as the individual user rather than as a user related to Apache.
There's also suPHP which allows you to do the same thing for PHP processes.
You can't really have a setuid "script" anyway. But you can, at the bare minimum, launch CGI scripts via suEXEC. This prevents them from being able to attack the httpd worker processes, since they won't be running as the same user.
Because, depending on your configured concurrency model, it may need to be able to spawn new processes/threads as other users, and this is not possible unless you have at least one process running with some privileges beyond that of the normal users. This usually means root. Even if not root, it needs to be a user/group privileged enough to be able to impersonate the other users, or it can't do its job.
The code running with greater privilege is kept to a minimum but at least some needs to be there, and this exploit potentially gives a route through to manipulate it.
FastCGI and similar can help here - it can push the creation of user specific processes away from Apache, making it harder to cross the barrier.
Thanks. I never used that feature, nor have I ever heard of anyone using it. IMO there should be some configuration option to turn that feature off, which would allow running all processes as a single designated user. Or, if those users could be enumerated at startup, it could spawn one process for each user.
The usual reason for this sort of design is so you can support loading a new configuration without having to restart the whole thing, if the new config might need you to do additional things that require privilege (maybe bind to new low ports, or read key files).
I would expect a restart to bind to new ports. It's not something that should be supported on the fly (it's a good thing to have, of course, but not at the expense of keeping a root process around). Reloading key files is useful, but I think it could be implemented by other means (e.g. a reload program running under root could read those files and pass them via some kind of IPC).
Maybe I am conflating things or mixing something up, but I was under the impression that it only used root privileges to obtain access to restricted ports and immediately afterwards lowered its privilege level to something sane/non-risky.
And if that is how it operates, then this exploit should not be effective.
Is that wrong? Have I been misled?