
> But if you do that every image looks wrong and it's vanishingly unlikely to get into a release.

The code doesn't have to be wrong for every input. It may be wrong just for pathological cases that don't occur in the field unless specifically crafted.
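A classic shape for that kind of bug is an integer overflow in the size calculation. A minimal sketch in C, hypothetical and not from any real decoder:

    /* Correct for every ordinary image, wrong only for crafted
       dimensions: the 32-bit product wraps once width * height * 4
       exceeds 4 GiB, e.g. a 65536x65536 image wraps to 0. */
    #include <stdint.h>
    #include <stdlib.h>

    uint8_t *alloc_rgba(uint32_t width, uint32_t height) {
        uint32_t size = width * height * 4;  /* wraps for crafted inputs */
        return malloc(size);                 /* undersized buffer */
    }

Nothing in normal use comes anywhere near dimensions like that, so the bug can sit in a release indefinitely.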

> Since it's a one-shot write into the buffer, if your intent is using it as an exploit step then you might as well encode an actual image with your exploit-assisting bytes.

The assumption was that the code tries to clean up the buffer immediately after use.


> The code doesn't have to be wrong for every input. It may be wrong just for pathological cases that don't occur in the field unless specifically crafted.

I could argue this more, but it doesn't matter; that was just a little tangent. Getting the size wrong will not let anything out of the sandbox.

> The assumption was that the code tries to clean up the buffer immediately after use.

Cleaning up would be removing the mmap. How are you going to exploit that? Your scenario is not very clear.

I think you're going for a situation where the sandboxed process can write to data in the host process outside the buffer? In a general sense I can imagine ways for that scenario to occur, but I can't figure out how you could get there by mmapping a buffer badly. A buffer mmap won't overlap anything else. If the mmap is too small, then either process could read past the end, but it would only see its own data (or hit a page fault).
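For concreteness, here is roughly the setup I'm picturing, sketched for Linux with memfd_create; names and error handling are illustrative:

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Host side: create a shared decode buffer. The fd would be handed
       to the sandboxed decoder (e.g. over a unix socket), which maps it
       writable and decodes into it. */
    int make_decode_buffer(size_t size, void **out) {
        int fd = memfd_create("decode-buf", 0);
        if (fd < 0 || ftruncate(fd, (off_t)size) < 0)
            return -1;
        /* The kernel picks a fresh address range for this mapping, so
           it cannot overlap anything else in the host process. */
        *out = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
        return (*out == MAP_FAILED) ? -1 : fd;
    }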


> Cleaning up would be removing the mmap. How are you going to exploit that? Your scenario is not very clear.

If a buffer is going to be reused across calls, then cleaning after use is not the same thing as unmapping. One example for cleaning up a buffer after use would be zeroing.

If there's a bug in the calculation of how much to zero, then leftover attacker-controlled data can bleed back from the sandboxed process into the unsandboxed one and survive beyond the current transaction (because the code failed to zero the buffer correctly after use).
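A minimal sketch of the pattern I mean, with hypothetical names and an assumed 4-bytes-per-pixel RGBA layout:

    #include <stdint.h>
    #include <string.h>

    static uint8_t *shared_buf;  /* long-lived mapping, reused across decodes */

    /* "Cleanup" after each decode. BUG: zeroes 3 bytes per pixel instead
       of the 4 that RGBA occupies, so the trailing quarter of the
       attacker-written data survives into the next transaction. */
    void scrub_decode_buffer(uint32_t width, uint32_t height) {
        memset(shared_buf, 0, (size_t)width * height * 3);
    }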

In other words, the attacker can now write arbitrary data into the unsandboxed process's memory at a semi-known location (a known page offset) inside the mapped buffer. That data may not be very useful on its own, because it's still confined to the mmapped buffer. But it's now relatively well protected from being overwritten (until the next decoding task arrives).

That's plenty of time to do shenanigans. For example, you can combine it with an (unrelated) stack buffer overflow that may exist in the unsandboxed process, harmless on its own but more powerful if combined with an attacker-controlled gadget in a known location.


It's hard to see why the buffer wouldn't be per image. There's no reason to reuse that.

> In other words, the attacker can now write arbitrary data into the unsandboxed process's memory at a semi-known location (known page offset) inside the mapped buffer.

But what is the arbitrary data going to be?

1. If it's gadgets with known lower bits, then you could put those into a plain old image file, no decoder exploits needed. Also, this requires a second dumb mistake: the coder going out of their way to mark the buffer as executable.

2. If it's data you want to exfiltrate, you could just gather that after you trigger your unrelated exploit. This is only useful if everything aligns to drop the private data you want in that specific section of memory, and then the buffer is reused, and then the private data is removed from everywhere else, and then you run an unrelated exploit to actually give you control. This is exceptionally niche.

> harmless on its own

Ha.


> It's hard to see why the buffer wouldn't be per image. There's no reason to reuse that.

Premature optimization is a thing. Most software developers are prone to it in one way or another. They may just assume a performance gain, design accordingly, and move on. They may be working under a deadline tight enough that they never even consider checking their assumptions.

Or maybe the developer has actually run the experiment and found that reusing the buffer does yield a few percent of extra performance.

> But what is the arbitrary data going to be?

An internal struct whose purpose is to control the behavior of some unrelated part of the unsandboxed process. The struct contains a couple of pointers and, if attacker-controlled, ends up giving the attacker an arbitrary read/write primitive over process memory.
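For concreteness, a hypothetical shape for such a struct; this is the classic pattern behind ArrayBuffer-style read/write primitives:

    #include <stddef.h>
    #include <stdint.h>

    /* If an attacker can overwrite `data` and `len`, every subsequent
       bounds-checked access through the struct becomes an arbitrary
       read (or write) anywhere in the process's address space. */
    struct byte_view {
        uint8_t *data;
        size_t   len;
    };

    static inline uint8_t view_read(const struct byte_view *v, size_t i) {
        return i < v->len ? v->data[i] : 0;  /* the check is now moot */
    }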


> An internal struct[...]

It sounds like you picked option 1 then, which means you don't need to take control of the sandbox. "Create an image that puts arbitrary bytes into the buffer that stores its decoded form" simplifies to just "Create an image." There is no vulnerability here. This is just image display happening in the normal way. It's something to keep an eye on, but not important in itself. You have to add a vulnerability to get a vulnerability.

The original problem of preventing image decoding exploits has been solved in this hypothetical.


> you don't need to take control of the sandbox

Your original request was: “If you've seen an exploit caused by a big pre-allocated array of untrusted RGBA data, please explain how.”

> It's something to keep an eye on but not important itself. You have to add a vulnerability to get a vulnerability.

Which is exactly how exploit chains work.

A single vulnerability usually doesn't achieve anything dangerous on its own. But remove it from the chain and you lose your exploit.


> Your original request was: “If you've seen an exploit caused by a big pre-allocated array of untrusted RGBA data, please explain how.”

I asked that in the context of whether you can contain vulnerabilities in a sandbox. If something doesn't even require a vulnerability, then it doesn't fit.

Also please note the words "caused by". A few helper bytes sitting somewhere are not the cause.

> Which is exactly how exploit chains work.

> A single vulnerability usually doesn’t achieve something dangerous on its own. But remove it from the chain and you lose your exploit.

Being part of an exploit chain doesn't by itself make something qualify as a vulnerability. (Consider arbitrary gadgets already in the program. You can't remove all bytes.) And I've never seen "you can send it bytes" described as a vulnerability before. Not even if you know the bytes will be stored at the start of a page!



