Probably a stupid question, but are the version #'s mentioned here ( "From version 2.4.17 (Oct 9, 2015) to version 2.4.38 (Apr 1, 2019), Apache HTTP suffers from a local root privilege escalation vulnerability due to an out-of-bounds array access leading to an arbitrary function call." ) the only ones affected, or is it possible earlier versions are affected as well?
Also, I just want to throw out there that the name of this one is great:
"Why the name ?
CARPE: stands for CVE-2019-0211 Apache Root Privilege Escalation
DIEM: the exploit triggers once a day
> but are the version #'s mentioned here ... the only ones affected
Usually when a range like that is given, yes.
> From version 2.4.17 (Oct 9, 2015) to version 2.4.38 (Apr 1, 2019)
This case implies that they know the bug was introduced in a particular change, which first shipped in version 2.4.17, and that 2.4.38 was the last affected release (the fix landed in the next one, 2.4.39).
The only earlier or other versions that I would expect to see affected are dev/alpha/beta branches.
Basically they don't consider a script author exploiting the interpreter to be a security vulnerability. That seems a bit dubious, but I can see where they are coming from in treating the script author as a trusted party.
> Because on most Unix systems Apache httpd runs under the root user, any threat actor who has planted a malicious CGI script on an Apache server can use CVE-2019-0211 to take over the underlying system running the Apache httpd process, and inherently control the entire machine.
Maybe I am conflating things or mixing something up, but I was under the impression that it only used root privileges to obtain access to restricted ports and immediately afterwards lowered its privilege level to something sane/non-risky.
And if that is how it operates, then this exploit should not be effective.
"this is a bit of embellishment on the part of the exploit author"
Apache is saying the same thing.
"In Apache HTTP Server 2.4 releases 2.4.17 to 2.4.38, ...code executing in less-privileged child processes or threads (including scripts executed by an in-process scripting interpreter) could execute arbitrary code with the privileges of the parent process (usually root)"
This exploit lets an unprivileged child worker (for example running mod_php on regular shared hosting clients' .php files) hack back into the root parent process and thus escalate from unprivileged to root.
I still don't get it. CGI scripts would run with reduced privileges. The CVE description says "code executing in less-privileged child processes or threads (including scripts executed by an in-process scripting interpreter) could execute arbitrary code with the privileges of the parent process (usually root) by manipulating the scoreboard"
It’s basically the list the master process keeps of the worker processes it has spun up; it lives in shared memory, and the workers write per-worker stats back into it for the master to track. Apparently a worker can write values into that shared region that the master later trusts, and that’s what gets exploited to run arbitrary code.
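Very roughly, the vulnerable pattern looks like the sketch below. This is a simplified illustration of the bug class, not the actual httpd code; the struct and variable names are made up for the sketch, though the shape (a worker-writable shared "scoreboard" holding an index that the root parent later uses unchecked) follows the exploit writeup.

    /* Simplified illustration only -- names like worker_slot and bucket_entry
     * are invented for this sketch, not taken from the httpd source. */
    #include <stdio.h>

    typedef void (*init_fn)(void);

    static void legit_child_init(void) { puts("parent: running child init as root"); }

    struct bucket_entry { init_fn child_init; };   /* parent-private array of listener buckets */
    struct worker_slot  { int bucket; };           /* lives in shared memory, worker-writable */

    static struct bucket_entry all_buckets[1] = { { legit_child_init } };
    static struct worker_slot  scoreboard[4];      /* pretend this is the shared scoreboard */

    int main(void) {
        /* A compromised, unprivileged worker scribbles an out-of-range index
         * into its own scoreboard slot. */
        scoreboard[0].bucket = 42;

        /* Later (e.g. the daily logrotate-triggered graceful restart) the root
         * parent walks the scoreboard and trusts the worker-supplied index: */
        int idx = scoreboard[0].bucket;
        all_buckets[idx].child_init();   /* out-of-bounds access, arbitrary call with root's privileges */
        return 0;
    }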
While it might not be relevant to this particular issue (though it might, I didn’t dig too deeply), there’s a long standing history of running everything inline with Apache. I mean look at mod_php... sure we’ve all (hopefully) learned our lesson there, but it’s a hell of a hole to dig back out of as long as things keep working.
It’s one of the reasons I’m always suspicious any time I run into a company that runs Apache rather than nginx or one of the many other alternatives. There is absolutely nothing wrong with running Apache, but it always makes me wonder exactly why they are... is it because they’ve got this one app that won’t run on anything else because of crap like this? More often than not...
I think it's been very common for 15 years to use Apache's "suexec" to run each CGI process as the individual user rather than as a user related to Apache.
There's also suPHP which allows you to do the same thing for PHP processes.
You can't really have a setuid "script" anyway. But you can, at the bare minimum, launch CGI scripts via suEXEC. This prevents them from being able to attack the httpd worker processes, since they won't be running as the same user.
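For what it's worth, once mod_suexec and the suexec helper binary are in place, the per-vhost bit is only a couple of directives -- something like this (the user, group, and paths here are placeholders):

    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /home/exampleuser/public_html
        # CGI under this vhost runs as exampleuser:examplegroup instead of
        # the httpd worker's own user.
        SuexecUserGroup exampleuser examplegroup
        ScriptAlias /cgi-bin/ /home/exampleuser/public_html/cgi-bin/
    </VirtualHost>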
Because, depending on your configured concurrency model, it may need to be able to spawn new processes/threads as other users, and this is not possible unless you have at least one process running with some privileges beyond that of the normal users. This usually means root. Even if not root, it needs to be a user/group privileged enough to be able to impersonate the other users, or it can't do its job.
The code running with greater privilege is kept to a minimum but at least some needs to be there, and this exploit potentially gives a route through to manipulate it.
FastCGI and similar can help here - it can push the creation of user specific processes away from Apache, making it harder to cross the barrier.
Thanks. I never used that feature, nor have I ever heard of anyone using it. IMO there should be some configuration option to turn that feature off, which would allow running all processes as a designated user. Or, if those users could be enumerated at startup, it could spawn one process for each user.
The usual reason for this sort of design is so you can support loading a new configuration without having to restart the whole thing, if the new config might need you to do additional things that require privilege (maybe bind to new low ports, or read key files).
I would expect a restart to bind to new ports; it's not something that should be supported on the fly (it's a good thing, of course, but not at the expense of keeping a root process around). Reloading key files is useful, but I think it could be implemented by other means (e.g. a reload program running as root could read those files and pass them via some kind of IPC).
Almost everywhere I’ve worked ends up turning off SELinux rather than learning how to actually configure/use it properly, which of course is ridiculous given it’s not actually that hard to learn or configure, but such is the reality of many places it seems.
For something this severe, there doesn’t appear to have been coordination with large shared hosting providers to ensure the patch was applied before public disclosure of a PoC. Is there a system in place for this?
Well, most shared hosting providers are also running cPanel on CentOS 7 and are getting their httpd packages from EasyApache 3/4, not from the CentOS base repo.
My guess is because they're on an older version (2.4.6) that isn't affected. But that all depends on what patches have been applied to the RHEL/CentOS version.
You're sure? In my experience, shared hosting is dominated by Debian/Devuan and Ubuntu, typically with a Debian-style Apache installation with modular config-file directories and the enmod/ensite utilities for management.
> Vulnerabilities always seem to be related to CGI scripts somehow.
To the contrary: CGIs, by running a separate per-request process, are one of the few mainstream mechanisms that create per-request isolation transparent to the host's security infrastructure and process monitoring. You have to go to great lengths to achieve similar isolation if you're starting with, e.g., FCGI-like multithreaded or evented dispatch in a single process [1].
Perhaps you've got that impression because of the wild-west shared hosting scene (made possible by CGIs and that isolation in the first place), e.g. popular PHP-based packages like WordPress, Drupal, Joomla, etc. Then I'm with you - the security record of these plugin monstrosities is truly in a league of its own. Just so we're on the same page, the most recent WP wtf involves theme developers DDoSing their competitors (who are reselling their themes) via your site.
I'm surprised FastCGI isn't more popular for that reason. Using PHP as an example, Apache + mod_php executes all code under Apache's user/group. If, instead, you use PHP-FPM, you can lock down the execution environment much more.
PHP-FPM supports an arbitrary number of named pools, each of which may be configured with its own user/group pair. Ideally, each virtual host gets its own pool.
e.g.,
    [poolname]
    user = poolowner
    group = poolgroup
    listen = /run/php/php7.2-fpm-poolname.sock
    listen.owner = www-data
    listen.group = www-data
    ...
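On the Apache side (assuming mod_proxy_fcgi, and reusing the socket path from the pool above), each vhost then just points .php at its own pool, roughly:

    <VirtualHost *:80>
        ServerName poolname.example.com
        DocumentRoot /var/www/poolname
        # Hand .php files to this vhost's dedicated FPM pool over its socket.
        <FilesMatch "\.php$">
            SetHandler "proxy:unix:/run/php/php7.2-fpm-poolname.sock|fcgi://localhost"
        </FilesMatch>
    </VirtualHost>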
> I'm surprised FastCGI isn't more popular for that reason.
FastCGI is more faff to configure, whereas mod_php is more likely to be configured out of the box. Most shared hosting environments are configured entirely out-of-the-box, as the market is so saturated and margins so low that there is no way to justify anything else. Potential customers who care much about security will most likely be looking for their own dedicated server or VM instead, so there isn't really much of a market for a more secure shared host.
Also, if you're packing as many users as possible into as little hardware as possible, which is the only way to make any margin in shared hosting these days, you'll find mod_php more efficient by that measure. You don't have a pool of processes that only specific users can take advantage of, each taking up some amount of RAM (however small) even when not actively in use.
So FastCGI tends to be used less. When I ran PHP I used it, usually as you suggest one pool per vhost, but I was only hosting my own projects and some bits for friends & family, so I didn't need to justify the setup effort against any "bottom line".
More people are starting to realize that FastCGI is one of the keys to good and consistent performance (as much as PHP will allow, anyway) and are recognizing mod_php for the disaster that it is.
So now more and more popular environments support it - cPanel gained native PHP-FPM support, and commercial shared web hosts are mostly using LiteSpeed (a drop-in replacement for Apache httpd that is evented and invented its own FCGI-like protocol called "LSAPI").
Not sure if I totally agree with mod_php being more efficient for a greedy host either - attaching the PHP runtime to the threaded/forked httpd request handler is very expensive.
You can set the per-pool minimum worker count to 0 with FCGI, with a short keepalive, which keeps the average per-tenant memory footprint low while still giving better performance when many requests arrive at once. That's also basically what LiteSpeed does.
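In PHP-FPM terms that maps roughly onto the ondemand process manager -- per pool it's something like this (the numbers are just illustrative):

    ; no resident workers while the tenant is idle
    pm = ondemand
    ; cap how many workers a burst of requests can spawn
    pm.max_children = 10
    ; the short "keepalive": idle workers exit after this long
    pm.process_idle_timeout = 10s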
"one of the keys to good and consistent performance (as much as PHP will allow, anyway)"
I've found that caching any results of logic that's repeated per request makes PHP pretty damn fast. I use APCu; it saves state in an mmap-backed cache.
Ugh, seriously. I mean, who wants a stable http daemon that's well supported, well documented, and well understood by a giant community of users? Sounds boring to me...
It's also copiously documented and extremely well understood. There will always be bugs in software, but you can usually fix problems quickly, and there are few surprises or gotchas. I trust it.
Rubbish? You mean like keeping the web working and making sure URLs never die? Not everything needs to be upgraded to the latest hot thing. HTTPD is proven & stable; this is a rare bug.
For example, with RHEL you have Apache in the base repository, but for nginx you must either add the unsupported EPEL repo or nginx's own repositories. If you want commercial support from RHEL but don't want to buy separate commercial support for nginx, you have to use Apache httpd.