Arch Linux already has the 1.0.14 release available in the community repo[1].
There is still no new patch-level version of 0.7.67 available for Debian Squeeze[2], nor for Ubuntu 10.04 LTS's 0.7.65[3]. EPEL for RHEL and derivatives also lacks a new upstream version[4].
I may be misinterpreting the patch, but it seems like this is a NULL byte injection vulnerability, all stemming from '\0' termination in C-strings. I wonder if length-buffer pairs are more practical when security is a consideration?
IMO, zero-terminated strings are, and always have been, a bad idea. Discriminating against a certain character instead of just treating the contents as an opaque blob of data will bite you in unexpected ways, and strlen() taking O(N) is another drawback.
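For illustration, a minimal sketch of the length-buffer idea in C (the lstring type and helper names here are hypothetical, not from nginx or any particular library):

#include <stddef.h>
#include <string.h>

/* A counted string: the length travels with the bytes, so '\0' is just
   another byte, and asking for the length is O(1) instead of O(N). */
typedef struct {
    size_t len;
    const unsigned char *data;
} lstring;

static size_t lstring_len(lstring s) {
    return s.len;                       /* O(1), regardless of contents */
}

static int lstring_eq(lstring a, lstring b) {
    /* Compares every byte up to len, embedded '\0' included. */
    return a.len == b.len && memcmp(a.data, b.data, a.len) == 0;
}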
Something to bear in mind regarding non-standard string libraries: syscalls (like open[1] on POSIX or CreateFile[2] on Windows) use null-terminated strings, which means that you have to be careful about embedded nulls, no matter what library you use, when a string gets passed to a syscall somewhere in the chain.
It may be interesting to note that Windows syscalls (i.e. to the NT kernel rather than the Win32 layer wrappers like CreateFile) do not actually use null-terminated strings - they use UNICODE_STRING[1], which is a structure containing a 16-bit length, 16-bit buffer length, and pointer to a buffer of 2-byte characters.
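For reference, the structure is declared roughly like this in the Windows SDK headers (Length and MaximumLength are byte counts, not character counts):

typedef struct _UNICODE_STRING {
    USHORT Length;        /* length of the string, in bytes */
    USHORT MaximumLength; /* size of the allocated buffer, in bytes */
    PWSTR  Buffer;        /* pointer to 2-byte (UTF-16) characters */
} UNICODE_STRING, *PUNICODE_STRING;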
NtCreateFile[2] (and the kernel-side implementation of ZwCreateFile[3]) take a file name in the form of OBJECT_ATTRIBUTES, whose ObjectName field is of type PUNICODE_STRING. CreateFile is implemented in terms of NtCreateFile; CreateFile enforces Win32 semantics like case insensitivity that NtCreateFile does not; POSIX semantics can be implemented on top of NtCreateFile, but not easily with CreateFile.
Yes, you are right. In general, you have to be really careful when passing user-provided input to syscalls. Embedded nulls are only one of the many pitfalls there (another one off the top of my head is Unicode handling). Especially if you're writing a web server.
I'm not sure I follow you here. You can't have NULL characters in filenames on POSIX systems, so there's not an issue here (that I know of). What is the risk you're worried about?
The risk only arises if the component of a system that accepts and validates user input does not use (or account for) null-terminated strings. That validator will see a different string than the syscall will; this is called null-character injection, and while it is difficult to craft effectively, it can lead to accessing resources that you thought you had protected by validating the string.
You are quite correct that nulls are not legal characters in POSIX filenames; however, that is irrelevant. The nulls are only an issue in the processing; once they reach a syscall, the first one is treated as a terminator.
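A contrived illustration of that mismatch (the filename and the idea of a suffix check are made up for the example):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* A length-aware validator might accept this 22-byte name because it
       appears to end in ".html"; open() stops at the first '\0' and
       operates on "/etc/passwd" instead. */
    const char name[] = "/etc/passwd\0index.html";
    size_t validated_len = sizeof(name) - 1;   /* 22: what the validator saw */
    size_t syscall_len   = strlen(name);       /* 11: what open() will use  */

    printf("validator saw %zu bytes, open() sees %zu\n",
           validated_len, syscall_len);

    int fd = open(name, O_RDONLY);             /* opens "/etc/passwd" */
    if (fd != -1) {
        /* ... */
    }
    return 0;
}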
Not exactly. A function expecting a C string could actually read too short a value. What happens is that an attacker sends a message using a channel/format that specifies a length and a buffer, but the buffer contains a "\0". For example, (10, "wtf\0extra") is passed as a 10-character string, but a C function might only see it as (3, "wtf"). The (usually) simple solution is to fail validation when a string of N bytes contains a NULL character before byte N+1.
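A sketch of that check, assuming the declared length is available to the validator (the function name is made up):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Fail validation if a buffer that claims to be n bytes long contains a
   '\0' before byte n+1: C-string code downstream would silently truncate. */
static bool has_embedded_null(const char *buf, size_t n) {
    return memchr(buf, '\0', n) != NULL;
}

/* e.g. has_embedded_null("wtf\0extra", 9) is true, so such a message fails
   validation before any C-string function ever sees it. */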
Before people panic too much: while people should definitely upgrade ASAP, this exploit does require a hacked or otherwise broken backend. People cannot use this exploit to hack nginx remotely without already being able to manipulate an upstream.
Applications are broadly vulnerable to this problem. It's true that your app server and your web server will share the blame, but that's going to be cold comfort.
(This comment sounds more disagreeable than I mean it to; sorry, it's tricky for me to comment about this stuff).
In Soviet Russia, your SSL private key becomes public key. (The private key is held in memory by the nginx process. If you can dump memory, you can dump the key.)
(you may also want to generate a new key and get a new certificate if you use nginx, concurrent with patching this...)
You can't dump arbitrary memory with that bug. If you're very lucky and the backend server is already hacked or has holes, you can get some random part of a previously sent response.
You can safely trust me on this. Also, if you're concerned, the patch is very straightforward, minimal, safe, and fairly unintrusive. It's not going to break anything.
Don't you think there should be a term for the phenomenon where someone makes it more or less clear that they have said all they're going to say about something on a message board, and still people come out of the woodwork to write comments cajoling them into saying more?
I'm not complaining. It doesn't happen to me that often. Way more often, it's someone complaining about some anonymous employer or service provider and 20 people writing comments about how it's irresponsible for them not to say who it was. But it's the same kind of annoying every time.
Maybe the term ought to be in German. German works great for concepts like this.
Why did I leave the comment? Because it's hard to patch server software, and people will often wait on patches until maintenance windows (I advise you not to do that this time) or take some time to figure out if they're affected. Especially with Apache, where oftentimes you aren't affected because the bug is in some random module most people don't use.
> Don't you think there should be a term for the phenomenon where someone makes it more or less clear that they have said all they're going to say about something on a message board, and still people come out of the woodwork to write comments cajoling them into saying more?
Don't take this the wrong way, but I think it was the tone of the response.
I.e.
"What are the implications of this?"
"It's a bad bug, patch it ASAP"
"....."
It's the kind of non-response one would expect from a management type to a low level engineer. Somewhat odious to the average hacker, in other words.
(I could be COMPLETELY off the mark here, and if so, please disregard this entire message)
Given how much Thomas contributes to this community, it seems fair to give him the benefit of the doubt and assume he has good reasons to say no more than he has. Also, this comment alone probably counts for thousands of dollars' worth of value to people running businesses on nginx. Thanks Thomas.
As to the average hacker, yes we want to know everything, but there are valid reasons not to be told everything. In this case, the information given is useful and sufficient, and the implications of what he said and how he said it are very clear indeed.
I think everyone has given him the benefit of the doubt, but this particular thread could have been much shorter if tptacek had just explicitly stated, "It's important and I won't help attackers by elaborating on the details." If that's not made clear then it's only natural for someone to ask for the specifics.
What if you had written that as the response to the second question? "I understand you are curious, but that is all I will say about it" would have made it clearer and been far less pompous than "no really, trust me."
The severity of the problem was pretty clearly and concisely conveyed: "This is a very bad bug, and you should fix it ASAP. Don't wait."
As for how many people it practically affects, saying so could well hurt. Saying anything more than "Applications are broadly vulnerable to this problem," as he did elsewhere in this thread, could very well point out specific, detectable vulnerable instances. That's a bad thing. Just wait and more info will be out, but heed his advice!
RPMs for 1.0.14 are available in koji at those second two links, or you can grab it via "yum --enablerepo=updates-testing update nginx" once the mirrors all pick it up.
Nah, I don't feel like it. I'd have to download the nginx source, apply the patch, and compile it, and all that seems like a lot of work for not much benefit.
This is particularly interesting because in a lot of deployments, nginx sits out in front of a lot of other stuff as a load balancer, where it is nicely exposed.
You REALLY should be using multiple boxes if you're running load balancers (especially software load balancers), with some kind of heartbeat failover. That way you can upgrade single boxes easily, and you're OK in case one of them dies. With a bug of this severity, you won't have time to test the patch, so it's probably best to upgrade one at a time in production.
Remember, even if you're running Apache or something else for your actual web server, you can easily have something like nginx sitting in front as a proxy/load balancer. Often in front of your security monitoring devices... and you may have forgotten about it.
This only matters if your backend is going to set a header that contains a null byte. Since some people echo back user data in headers (ugh), this could cause an issue. Rails is more than happy to let you put NULLs in response headers, btw. Of course, all of the ASCII CTL characters (octets 0..31, plus DEL at 127) are forbidden by the spec: http://www.ietf.org/rfc/rfc2068.txt
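For what it's worth, a sketch of the kind of check a backend or proxy could apply to outgoing header values before they hit a C-string code path (the helper function is hypothetical):

#include <stdbool.h>
#include <stddef.h>

/* Reject header values containing CTL octets (0..31) or DEL (127), which
   covers embedded NULs as well; a stricter implementation might still
   permit HT (9) inside folded values. */
static bool header_value_ok(const unsigned char *val, size_t len) {
    for (size_t i = 0; i < len; i++) {
        if (val[i] < 32 || val[i] == 127)
            return false;
    }
    return true;
}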
But if you're going to learn a DSL for building debs, why not just learn how to build debs? Your distro already provides the necessary build formula, along with well-integrated startup scripts, etc.
To start out, just build as given:
apt-get build-dep nginx
apt-get source nginx
cd nginx-$VERSION
dpkg-buildpackage
Then to customize, go into the source tree and update whatever you want to update, and rebuild. You'll see there's already a debian/patches directory where you can drop patches to apply automatically.
I tend to just keep the "debian" directory in source control, so I can take a fresh upstream tarball, check out my debian rules into it, and kick off the build.
I've done custom Debian packages from scratch and always found it a pain. I've also done customized source builds and still prefer this approach, because it's simple, slimmer, easily version controlled, and wrapped up in a single command.
But there are many ways to fry an egg. I prefer this way, but overall having a package is a big plus for commonality between systems and convenience vs simply building from source.
If you don't care about uptime just replace the old binary with the new binary and then restart the server.
This is from a book called "Nginx HTTP server":
1. Replace the old Nginx binary (by default, /usr/local/nginx/sbin/nginx) with the new one.
2. Find the pid of the Nginx master process, for example, with ps x | grep nginx | grep master or by looking at the value found in the pid file.
3. Send a USR2 (12) signal to the master process: kill -USR2 <pid>, replacing <pid> with the pid found in step 2. This will initiate the upgrade by renaming the old .pid file and running the new binary.
4. Send a WINCH (28) signal to the old master process: kill -WINCH <pid>. This will engage a graceful shutdown of the old worker processes.
5. Make sure that all the old worker processes are terminated, and then send a QUIT signal to the old master process: kill -QUIT <pid>.
The makefile in the nginx source shows you how to do a hitless upgrade. This is what I do from the shell after correctly installing, essentially translated from the makefile.
This is poor advice that could be damaging. Instead of manually building and installing from upstream source code, you should bump the package version locally to ensure your system isn't littered with orphaned files later on. You'll get the benefit of portage's sandbox and other package management features too. If Gentoo's ebuild had Gentoo-specific patches, you'll get these compiled into the latest version as well. Sometimes these patches are very important.
A better approach to follow is (note this is only a rough guide from memory):
I agree that it's a bad way to install new software, but I was merely suggesting a way to upgrade an nginx that was installed this way in the first place; I assumed that's what the OP meant by asking for a "stable way to update nginx installations from source". The fact that I did it on Gentoo is irrelevant here.
I love portage, and I use it whenever I can. In this case, I think I did it like that because something was messed up with Passenger support in the port. It's the only package installed from source (bypassing portage) on my system, and it's not orphaned in the sense that /opt was dedicated purely to such scenarios. I can see all such packages by listing /opt, assuming I keep the install-prefix convention.
In any case, thanks for pointing out that this is the wrong way to install packages; someone might benefit from this indeed.
Indeed this is what I was looking for (however I will be updating the software on CentOS) sprinkled with a few snippets of wisdom from the above posters.
[1]: http://projects.archlinux.org/svntogit/community.git/commit/...
[2]: http://packages.debian.org/changelogs/pool/main/n/nginx/?C=M...
[3]: http://changelogs.ubuntu.com/changelogs/pool/universe/n/ngin...
[4]: http://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/nginx...