How to unc0ver a 0-day in 4 hours or less (googleprojectzero.blogspot.com)
346 points by GuardLlama on July 9, 2020 | 116 comments



> So, to summarize: the LightSpeed bug was fixed in iOS 12 with a patch that didn't address the root cause and instead just turned the race condition double-free into a memory leak. Then, in iOS 13, this memory leak was identified as a bug and "fixed" by reintroducing the original bug, again without addressing the root cause of the issue. And this security regression could have been found trivially by running the original POC from the blog post.

Yikes. Especially looking at the diff of the original problematic fix, it seems like they slapped a quick patch on there and called it a day, instead of investigating to find the underlying architectural issue. Doesn't really inspire a lot of confidence that the resolution for unc0ver is any more thought-through. I wonder if they've identified the root cause? That'd be the really interesting piece to me.


What’s wrong with Apple? Why is modern iOS so buggy?


A friend at Apple told me that the testing story for iOS is complete shit, and they actually rely on hundreds of humans to test their software to make up for poor automated testing.

Apple takes the approach of throwing humans instead of automation at a problem quite frequently [1]:

> The press release mentions RMSI, an India-based, geospatial data firm that creates vegetation and 3D building datasets. And the office’s large headcount (now near 5,000) [used to create Apple Maps]

The lack of automated testing is something Apple is working on fixing, but they're a ways away from having anything substantial. The terrible iOS 13 release quite significantly bumped up the internal priority of stability and testing. iOS 14 is likely to be far less buggy than iOS 13 because of this culture change.

[1]: https://www.justinobeirne.com/new-apple-maps


Don't worry, the exploits are being used only against oppressed minorities[1].

[1] https://arstechnica.com/information-technology/2019/09/apple...


Isn't the root cause that two entities can free the given memory and have no high level coordination of it? It basically states this in the article.


That's the category of bug (use after free), but that's not the root cause. The root cause would be found from an analysis of the kernel design to understand why it was possible to get into this scenario in the first place. Uncoordinated mechanisms accessing the same data structure (like you mention) might be the root cause, but it didn't feel like this article explored it (not that they need to, since P0 is focused on the exploit itself - I'm just really curious what 'went wrong' with respect to the architecture here).


(I’m trying to recall the Lightspeed bug from memory so I may have some of it wrong.)

It all boils down to poor state management in a single algorithm.

The algorithm allocates a kernel object, then sends off a subroutine to do some work. (The subroutine happens to run in another thread but that’s not really relevant to the bug.) As part of its job duty, the subroutine is supposed to free the object after its work is done, but only if condition A is true.

If A is false, the subroutine won’t free the memory, and it’s implied that the main routine is supposed to free the memory instead.

Now the issue is that there's no common code that checks condition A. Instead, the main routine and the subroutine have slightly different ideas about whether condition A is true or not. The condition's logic is pretty simple, so it's understandable that the kernel developer decided to write the same condition in two different places and in two different forms (instead of, e.g., factoring it out into a macro). Still, they managed to get it wrong.

The result is that in one particular case, the subroutine thinks A is true. So it frees the object. When the main routine gets back control, it thinks A is false (due to the duplicated, slightly wrong logic), and frees the object, too.

There’s only a small time window between those two frees. But the window is large enough that a userspace thread, if it tries often enough, can force its own object into the place where the kernel object used to be, just in time before the double free happens.
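
To make the shape of that concrete, here's a deliberately buggy C sketch of the pattern as I understand it (all names are hypothetical; this is nothing like the actual XNU code):

    #include <stdlib.h>

    /* Hypothetical stand-ins -- nothing here is the real XNU code. */
    struct kobj { int flags; char payload[64]; };

    #define FLAG_ASYNC 0x1
    #define FLAG_RETRY 0x2

    /* Worker's idea of "condition A": free when the request is async. */
    static void subroutine(struct kobj *obj) {
        if (obj->flags & FLAG_ASYNC)
            free(obj);                      /* first free */
    }

    int main(void) {
        struct kobj *obj = calloc(1, sizeof *obj);
        obj->flags = FLAG_ASYNC | FLAG_RETRY;

        subroutine(obj);                    /* worker: A is true -> frees obj */

        /* The main routine re-derives "condition A" with subtly different
           logic: it also requires RETRY to be clear. Here that yields
           "the worker kept the object", so it frees it again. (The read
           of obj->flags below is itself already a use-after-free.) */
        int worker_freed = (obj->flags & FLAG_ASYNC)
                        && !(obj->flags & FLAG_RETRY);
        if (!worker_freed)
            free(obj);                      /* second free: double free */
        return 0;
    }

In the real bug the two paths run on different threads, which is what opens the reallocation window described above.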


That's a bit like saying the root cause of an automobile accident is that two people were driving without a physical barrier between them.

The statement is true, but it is not a root cause. It's a high-level description of what happened.


> By 1 AM, I had sent Apple a POC and my analysis.

> Still, I'm very happy that Apple patched this issue in a timely manner once the exploit became public.

Sh- should we be happy Apple fixed this so quickly? unc0ver allows consumers to get more out of their Apple devices, and Apple's fix isn't really optional (unless you disable auto-updates and tap "Later" on every update notification). Is this exploit even an issue? Apple's probably not going to let an app exploiting this zero-day into its App Store, and sideloading is difficult; it's very unlikely someone malicious is going to trick people into installing malware that uses this exploit. It sounds to me like Apple is purposefully limiting consumer freedom by actively trying to prevent jailbreaking.


It's always a snake-eating-its-tail scenario with jailbreaking. Apple takes popular tweaks and integrates them into the next iOS. Side-loading isn't that bad, but the method keeps changing... usually for the better.

Jailbreaking cuts into their profit a small amount because the community is small.

https://www.reddit.com/r/jailbreak

The benefits are very much worth it though. Most have had iOS 13 features since iOS 11/12. They have iOS 14 features now. Then there are other features that may not be released ever but people find them invaluable.

Ex: Per-app, per-website firewall. (Block tracking/ads)

-Disable apps ability to spy on your clipboard.

-Disable apps from accessing things you do not want them to but still launch.

-Themes, so many options: remove your status bar or put new things there.

-Custom widgets

-Detailed wifi, phone information

-Download old versions of apps because the company broke something.

-Detailed phone/memory/cpu info

-Terminal access.

Those are just a few off the top of my head.


This raises the question, who will come up with Apple product features when all the 0days have been patched?


Yeah, I swear. They're a hardware company more than a software one... it shows in their utter lack of imagination for their software. Even their UI/UX design is really suffering compared to their competitors.

Disclaimer: I continue to use an iPhone primarily because I'm happy and fine with the walled garden.


With a codebase as large as iOS, it's highly unlikely that all 0days will ever be fully patched. Apple is constantly iterating on their code and introducing new features (and bugs), and security engineers are always coming up with new methods to exploit code.


Also, given the age of XNU, kernel developers come and go, and chances are that typical mistakes (for example, improperly using the MIG) get repeated.


Or you could just buy an Android and not worry about it.

Not even to fanboy, but half of those are things that Android did from the get-go, and the rest have been added or are generally easy to do.


But then we miss out on Apple's hardware quality, industry-crushing A-Series processors, and (for the most part) rock solid and extremely efficient OS.


>But then we miss out on Apple's hardware quality, industry-crushing A-Series processors,

To what end? My 5-year-old midrange phone (Snapdragon 801) still loads everything instantly. Is there actually any benefit to 'top of the line' mobile CPUs except for mobile gamers?


I was an Android fan for years, but as a kid I'd always wanted an iPhone 4 so I got one recently. At the time there were some weird blocks on everything (I still don't really know what was happening but everything works now) so eventually I just got an iPhone SE (the good one, with the headphone jack). It blew my mind. The animations actually run at high framerates, the keyboard has very little latency, and it just works. I've never tried out a high-end Android that could hold up to my SE based on keyboard latency alone. It makes a really big difference.


You better hope they don't ever bring their premium-priced laptop "hardware quality" to their phones -- failing GPUs, failing monitor ribbon cables, failing keyboards...


iPad Pros are already there: they're so thin they can arrive bent right out of the box.

iPads and iPhones are amazing hardware in every other respect, but Apple really needs to chill with the ultra-thin fetish.

(same goes for their PC hardware, but the hardware is not particularly amazing there.)


And worry about something else, like it exploding, or not getting security updates after the first year, or boot loop, etc.


> Like it exploding

That was an issue with one phone, once, and it was a problem with the battery, not Android

> Not getting security updates after the first year

That's actually solved by rooting since you can update from any source instead of just signed packages

> Boot loop

I guess that could be a problem but I've never had that issue and I've rooted all but one phone that I've had


That's NOT solved by rooting - I wish people would stop saying this.

There are two parts to Android patching. Part one is the kernel/Android OS itself - yes, this gets patched with a rooted/custom ROM.

Part two is the hardware drivers - the modem driver, etc. None of that is patched with a custom ROM. You still have to use the same binary blobs/drivers that came with the latest official phone firmware.

When security updates stop for your phone, all the updates for the underlying hardware drivers stop too. So yeah, you can slap some bandaids on it, but you're not really up to date even though you might be on a later/more recent version of Android.


Your definition of “solved” is pretty much identical to an OpenBSD user recommending to “just patch your BIOS”. Demographically this is not a solution, as 99.9999% of the potential audience will never be able to even understand the requirements of that skillset.


> That was an issue with one phone, once, and it was a problem with the battery, not Android

If my house burns down, I personally won't care if it's android in general, the model, or the battery in the android phone which destroyed everything I own.

> That's actually solved by rooting since you can update from any source instead of just signed packages

Installing software downloaded from xda-developers is what I like to call malware as a service (tm).


That's like saying that all Apple products suck because they bend, the antenna doesn't work, keys stop working, and screens start to stain.

Like the person said, it was literally one Samsung model that had issues, and it could just as well have been an iOS device. Looking at how many recalls and repair programs they have for design flaws, I don't think anyone should paint Apple as the better side here.


Can you link me to a custom ROM on XDA that was shown to have malware?


That's a poor argument. That's like saying you didn't read this page's article, because the malware being discussed wasn't already known to you. Absence of evidence is not evidence of absence. Formally, this is called an argument from ignorance.


This seems like a pretty weak argument. Are you sure you want to trust a bunch of hobbyist devs with the security of your (probably) most valuable computer?

Just because something doesn't have any known issues doesn't mean it's not wise to avoid it because it flat out smells.


I trust hobbyists more than I trust Google or Apple to handle my security.


That really depends on your threat model. Are Apple and Google going to use marketing data against you? Probably.

Are they going to plant malware which steals your banking details? Probably less likely than a random binary package you found on some forum.


You need to do a bit more research because of the sheer number of options, but there are absolutely manufacturers that have none of those issues (the battery exploding was one Samsung model, and I haven't heard of boot loop issues. Poor security update lifetime is absolutely a common issue though, although fixable by rooting as pointed out in another comment). See: Android One^.

^https://en.wikipedia.org/wiki/Android_One


Also, iPhone lifetime is a bit of a joke. I mean, Apple got caught literally slowing their older models down on purpose in order to have people switch to newer models.

I haven't gotten as much mileage out of my Android phones as compared to my 4S, but the 4S cost about 3 times as much as the Android phones I usually buy, and 3 lower-end-ish Android phones easily serve me for 10 years with no issues.


> "Apple got caught literally slowing their older models down on purpose in order to have people switch into newer models."

"Apple denied wrongdoing and settled the nationwide case to avoid the burdens and costs of litigation, court papers show." - https://www.reuters.com/article/us-apple-iphones-settlement-...

They haven't been "caught" doing that; they have been accused of that. Why is it the stupid conspiracy theory which wins the popularity contest, instead of the much more annoying true story: Apple cheaped out on batteries which couldn't provide enough current to run the phone as they aged? Or, just as factual, Apple slowed down the phone to keep it working longer on the same hardware, thus making people avoid having to buy new phones.


> They haven't been "caught" doing that

They settled out of court. Must have thought the case against them was pretty strong. If that's not an admission of guilt...


> 3 lower end-ish Android phones serves me easily for 10 years with no issues

Are there any lower-end phones that get security updates for 3 years?


> buy an android

> not worry about it.

I’m sorry, but this feels like a lie. I’m not trying to troll here, but most Android IT-level users I know have already flashed a custom ROM and actually buy their devices based on whether or not LineageOS supports them.

Almost all Android devices are non-Google devices, which makes their OEM state bloated with custom sync crap nobody even wants to use but has no choice about.

Xiaomi, Motorola, Lenovo, Huawei, HTC ... all force their own shitty half-working synchronization platforms onto their users’ phones. And I bet that this happens with major GDPR violations.

As an Android “user” I do not recommend using Android for people that do not want to “worry about it”. And product wise I think that’s a huge quality issue.


Look at the attacks on various human rights activists: those are using the same exploits that jailbreaks use.

Fixing bugs used to attack people means fixing bugs used for jailbreaks. There isn’t some magical mechanism by which a jailbreak exploit isn’t exploitable by anyone else.


In other words:

The current model fails to protect people anyway while providing an extremely strong incentive for the community to publish software that undermines the “security” of the device.


Of course, removing the need to jailbreak for such control would mean that this dichotomy would not have to exist…


If you remove the need for a jailbreak in order to allow arbitrary code to run on any device, you're allowing arbitrary malware to run on any device.


The problem is not running arbitrary code, but running arbitrary code without informed consent. Malware runs without consent. Apple's solution for iOS is removing the ability to run anything completely, bypassing the need to figure out how to obtain consent.


> bypassing the need to figure out how to obtain consent

How do you propose getting “informed” consent from an audience who doesn’t care and willingly expose everything about themselves and everyone they know to find out which Star Wars character or 80s pop song they are most like? Genuine question, as this doesn’t seem the least bit a solved problem anywhere.


It's not an easy problem! But Apple should work on solving it–they have already put in some effort in this direction on macOS, although they have their hands tied behind their back there because they're going from unrestrictive → restrictive and such changes usually break things and make people angry. On iOS they pretty much have a "clean slate" to start with.

Usually, solutions in this area generally have a couple of characteristics: the first is that the "secure" case is generally useful for 95+% of people, to the point that they might not even know that there are other "modes" that are more permissive. The second is putting surmountable but significant barriers in place to prevent disabling these features, in an attempt to prevent casual/unintentional deactivation. Strange key combinations that lead to scary text and wiping the device seem to be fairly effective in keeping out people who cannot give informed consent. And a third is allowing a user-specified root of trust: for example, one can imagine an iPhone that is every bit as secure as any other iPhone today because I have enabled all the security features, but it's using my keys to attest against instead of Apple's. There's a lot of interesting work being done in this area: one I personally like is Chromebooks, which have the dual purpose of being secure, "locked-down" devices for general consumer use, but also for being useful for development work. And there we're seeing interesting solutions such as using KVM to run an isolated Linux, developer mode, write protect screws, …


I don’t understand why people think that because some people are clumsy the rest of us have to live in a straitjacket.


> why people think that because some people are clumsy

Nobody said anything about clumsy people. As one of many examples, look at all the guides that tell folks to disable SIP, don’t explain the risk, and often don’t even need SIP disabled; the app should just be fixed properly.

There are exceptions of course and good reasons to disable it, so I’m glad Apple has the option, but I’d venture to say 85% of the time it’s done by a person who isn’t really giving “informed consent”.


To be more specific, the problem is that malware is a separate category from useful and harmless application code that people want to run but that Apple doesn't want to allow for a variety of reasons, but Apple forbids both types of software.

(Focusing on user consent obscures the actual problem; people often consent to running malware. What matters is whether the software to be run is useful and harmless.)


> people often consent to running malware

Ah, but not in an informed way–users don't typically run software they know to be harmful/useless :P (And no, telling them that it is harmful isn't apparently enough to inform them…) But I agree with the first part.


Bootloader unlock is the way that works on Android. It forces the user to erase data when unlocking it, and the unlocked status is shown on the boot screen.


> Look at the attacks on various human rights activists

I believe all these begin with browser or existing-app based exploits.

None of them seem to rely on tricking the user into installing a new app. That would be too suspicious for the user, and would entail the attacker uploading their exploit code to Apple, and giving Apple a full list of the users they exploited...


Not sure if this call is allowed through the new syscall filter in WebKit, but before that filter existed this was one JavaScriptCore bug away from achieving the same thing.


> Apple's probably not going to let an app exploiting this zeroday into its App Store...

Are you sure about this?

I'm far removed from the app store development world, but a cursory glance at the description and the original lightspeed bug seems to indicate this is a problem within the kernel interface, and as such I assume it's callable by any application?

Sorry, I could be missing something, just curious why this couldn't occur in the app store.


I think the implication was that App Store review would catch such things. Personally, I'm not so sure, considering that Snapchat currently ships a binary with syscall instructions embedded in it.


It is likely this is a heuristic; Apple would lose out by disallowing major companies, so rules are sometimes shifted, if not explicitly for all. Snapchat being pulled would cause a minor exodus, I imagine, especially if it was made clear that Apple was responsible for the removal.


That is likely true, but slipping in a call to lio_listio would be really easy to do. Even if they had a check for the syscall instruction in this case, you could just ROP your way to libc…


It would probably kill Snapchat and force many groups to other platforms.


It’s possible for both to happen. People might switch chat apps in the short term, but reconsider buying an iPhone in the next cycle (esp. if Apple makes a habit of killing off popular apps).


The second paragraph of the article covers this:

> I wanted to find the vulnerability used in unc0ver and report it to Apple quickly in order to demonstrate that obfuscating an exploit does little to prevent the bug from winding up in the hands of bad actors.


Maybe this should read "that obfuscating an exploit does little to prevent the bug from winding up in the hands of a talented full time security researcher".

Of course if he was this talented, surely he would routinely diff new kernel versions and realize the old bug had been reintroduced before having to rediscover it in a jailbreak?


I'd make sure I had a solid foundational basis before calling anyone on the PZ team untalented ;)

He abused the jailbreak to cause a crash. A talented researcher would try that before diffing kernel binaries.


> talented full time security researcher

If a single security researcher can de-obfuscate it in under a day, then a nation state with huge funding can too. Maybe not in a day, but eventually.


The existence of jailbreaking fundamentally breaks the security of the device. It means any malicious app that manages to get on your device can turn into a full system compromise. It means any RCE can as well.


It always depends on which side of "security" we're talking about. You could argue that not having access to security tweaks & not being able to see what's going on because the OS is so locked down is a security issue in itself which can be solved by jailbreaking.

Currently, sharing security issues with Apple is a guarantee that the tooling you are using to get access to your device won't work anymore; there's definitely a bad incentive not to report security flaws at the moment.


That's right, which is why we called it "jailbreaking" in the 1990s when someone got mad at you on IRC and jailbroke your machine and stole your mail spool. I mean, jailbroke your mail spool.

I take your point, which is that jailbreaking is good if what you want is to run random unapproved code on your machine. But you didn't seriously engage with the comment you rebutted, because it is also true that jailbreak prevention prevents persistent kernel compromise --- is in fact a predicate for preventing persistent kernel compromise --- which is a thing that really does happen; in fact, it's far more relevant to the overwhelming majority of Apple users than running unapproved code is.


I don't really have the same opinion on this; I consider the obscurity of the platform a security issue by itself. At the end of the day, remote jailbreak exploits are pretty rare nowadays, so you need to have real access to the machine.

To have an idea if an app is sharing your data you need to be jailbroken, to have an idea of what is being sent from your device you need to be jailbroken, to force stricter control on apps you also need to be jailbroken. I mean, you get the point. Any action you could take regarding security requires you to be jailbroken first.


We're discussing this on a story about an untethered jailbreak --- a kernel RCE.


Yes, that's true indeed; I was talking in general. Maybe having a more open device would help get security fixes faster? One of the main reasons this exploit was heavily obfuscated was to keep Apple from patching it.


I would doubt that. More likely, it was to keep the script kiddies away. (There’s currently drama going on in the community about stolen code…not that this is anything new :/)


This is neither untethered nor RCE.


Intentionally jailbreaking your phone isn't untethered or RCE. But this particular jailbreak could be combined with an RCE in any application running on the device in order to compromise the system.


No disagreement that a separate, unrelated RCE would indeed be an RCE. This is still a tethered LPE.


Tethered, not untethered. There hasn’t been an untethered jailbreak in quite a while.


There are lots of ways apps can run arbitrary code, so stopping them via the app store is not feasible. Myself, I'm happy my phone is safer.


> unless you disable auto-updates and tap "Later" on every update notification

Some of us do that for this exact reason. I wish there was a way for me to just pick software to give root to, though; this is way less secure.


This is how it works on Android. Generally on a rooted phone you have a 'manager' like Magisk or Superuser. The first time an application tries to use root, the manager shows a popup and allows you to grant permission temporarily or forever.


I 100% agree but then again, people choose to buy those products so..


Fantastic write-up. It's great to see this level of information sharing, complete with a walkthrough of the author's thought process and strategy for confirming the exploit. It's also interesting that this was a regression of a previously-fixed bug rather than a new exploit.

As a side note, it's disappointing to see so much unfounded criticism here in the comments. Apple was going to find and fix this bug quickly, regardless of the author's efforts. In this case we get a peek into the inner workings of the exploit discovery process that would otherwise remain secret. The author and Apple both clearly noted that unc0ver was the source of the exploit, and the author made no attempts to hide that fact. Calling the author of this blog post "lazy" or an "informant" is out of touch and uncalled for.


TL;DR: reverse engineer a jailbreak exploit.

> By 7 PM, I had identified the vulnerability and informed Apple

I don't know why this rubbed me the wrong way. Like, it feels "lazy" (for lack of a better word) to disassemble an exploit and run off to tell the vendor. If anything, the exploit writer should get the credit. I don't know.


> If anything, the exploit writer should get the credit.

They did: https://support.apple.com/en-us/HT211214


But they didn't report it did they? Playing devil's advocate a bit here, but they could have reported it for a bug bounty but they instead chose to use it to create a jailbreak.


Maybe reworded: the value accrues when the issue is explained to the code's writer. Therefore, this dude did something valuable. Ecosystem works. Perhaps.


They did; the article points out that this was caused by a regression. Fixing a memory leak reenabled an old bug.


The exploit was public, but obfuscated to make it harder for bad actors to make use of. Apple likely didn't need help to identify the vulnerability, but I'm sure they welcomed it.


To me, seems like informant territory.


All this has taught me is that if I find an exploit to unlock <insert DRM'd device> I need to obfuscate the heck out of it to make it as onerous as possible for low-effort bug bounty do-gooders to scoop up a reward from it.


Project Zero researchers don’t take bounties, to my knowledge.


Nor have they been ever offered one, to my knowledge: https://twitter.com/i41nbeer/status/1027339893335154688. I'm actually not sure Apple has ever paid a bounty for anything that wasn't a web issue…


If memory serves, they've been offered but the bounties have always been given to charity.

I'm guessing that's a policy/requirement of Project Zero as, presumably, the P0 folks are making "enough" already.


I don't think anyone is getting a bug bounty, especially from this one.


Checkra1n, another iOS exploit (although it's more impressively a bootrom exploit), is mentioned. You can see slides on it from 2019 here: https://iokit.racing/oneweirdtrick.pdf (The One Weird Trick SecureROM Hates)


Which just goes to show how useful it is to have these kinds of exploits. Imagine if there were a way to fix Checkra1n and it had been fixed a while back. Then figuring out the details of this exploit would have taken much longer.


Interesting; from those slides, it sounds like I should always null my variables after I'm finished with them.


If they're globals, then yes you should. Having dangling pointers anywhere, even in supposedly unused areas, tends to come back and bite you.

For locals, why bother? The optimizer will probably discard the writes, and worrying about stack addresses being reused is a waste of mental space and clutters the code.
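
For the global case, a minimal sketch of what nulling after free buys you (hypothetical names):

    #include <stdlib.h>

    static char *g_buf;     /* global: reachable from anywhere, at any time */

    void teardown(void) {
        free(g_buf);
        g_buf = NULL;       /* later code now sees "no buffer" instead of a
                               dangling pointer, and since free(NULL) is a
                               no-op, calling teardown() twice is harmless
                               rather than a double free */
    }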


Since this always comes up, here's an overview I made several weeks ago about where Project Zero focuses their efforts:

All counts are rough numbers. Project Zero posts:

Google: 24

Apple: 28

Microsoft: 36

I was curious, so I poked around the Project Zero bug tracker to try to find ground truth about their bug reporting: https://bugs.chromium.org/p/project-zero/issues/list For all issues, including closed:

product=Android returns 81 results

product=iOS returns 58

vendor=Apple returns 380

vendor=Google returns 145 (bugs in Samsung's Android kernel, etc. are tracked separately)

vendor=Linux returns 54

To be fair, a huge number of things make this not an even comparison, including the underlying bug rate, different products and downstream Android vendors being tracked separately. Also, # bugs found != which ones they choose to write about.


Project Zero has uncovered 2033 issues... The majority of those could be used alone to ruin your life. The rest might require 2 (e.g. one for the sandbox, one for the kernel).

That's a team of ~10 security researchers over many years...

Considering how many are being discovered each day/month/year, chances are that there are at least hundreds undiscovered...

If it only takes one to ruin your life, and a good security researcher can find one in a few weeks, or months at most, the barrier to someone evil is really really low...


> good security researcher can find one in a few weeks

s/good/extremely good/

This doesn't change the fact that someone evil will still probably find one.


Most experts have expertise in only one or two different OSes or bits of software.

The found issues will strongly depend on who happens to be on the Google Project Zero team at the moment.


I generally agree, although I think from what I’ve seen, their researchers are pretty flexible.

My post was to counter folks thinking P0 is a Google hit job, which seems to come up frequently on HN.


Even if Project Zero exclusively focused on competitors, they'd still be providing a valuable service. Maybe Microsoft and Apple should have the same sort of project. If they're all competing at who can break each other's code the worst, that'll end up with better products from all of them.


I have nothing to add but the author of this was my best friend in elementary school. Interests included robots, crazy science experiments, dinosaurs, general mischief, and Perl programming.


TL;DR background for this one: there existed a zero-day bug in iOS 11 related to how the kernel processed the lio_listio call. Apple fixed it then, but the fix introduced a memory leak. In iOS 13 Apple fixed the memory leak but reintroduced the vulnerability. The regression was found and packaged in an obfuscated jailbreaking tool (unc0ver); this post explains how the tool was deobfuscated. This resulted in an "emergency" iOS 13.5.1 update to fix the issue. Interestingly this fix still does not fully fix the memory leak: https://www.synacktiv.com/posts/exploit/the-fix-for-cve-2020-...
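
For anyone unfamiliar, lio_listio() is the POSIX AIO batch-submission call. A minimal sketch of the interface (just the API shape, not the exploit; the race lived in the kernel's bookkeeping for these batches):

    #include <aio.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        static char buf[128];
        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes     = open("/dev/zero", O_RDONLY);
        cb.aio_buf        = buf;
        cb.aio_nbytes     = sizeof buf;
        cb.aio_lio_opcode = LIO_READ;       /* this entry is a read request */

        struct aiocb *list[1] = { &cb };
        /* LIO_NOWAIT returns immediately; the kernel tracks completion
           state for the whole batch -- the object whose lifetime the
           LightSpeed bug mismanaged. */
        if (lio_listio(LIO_NOWAIT, list, 1, NULL) != 0)
            perror("lio_listio");
        return 0;
    }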


Why is he doing that work? Doesn't Apple fix every jailbreak exploit themselves?


In this case, it looks like there is a point to it:

> My goal in trying to identify the bug used by unc0ver was to demonstrate that obfuscation does not block attackers from quickly weaponizing the exploited vulnerability.


We all know obfuscation isn't some magic "no one knows how this works now" trick: the goal is to buy time while people are forced to work through your defenses and to slow down the proliferation. Now, the "problem" with this is that some people are just really good at pulling things apart, and so one person can spend four hours attacking it and then tell the world how it worked. But then it is more a matter of incentives, and it still isn't the case that there is much universal incentive for it to both be reverse engineered and then documented for others so quickly (even in the world of piracy; the incentives there are fascinating, but still selfish).

And in fact, I will argue that this looks like it worked great: yes, someone--and of course, likely many people working in shadowy areas of organized crime, arms dealers, and government contractors--figured it out in hours, and they could have been malicious and used it to attack others. But the real question is then how many such attackers you enable and what their goals are. If you publish an exploit as open source code along with the tool (which some people have done in the past :/), you allow almost any idiot "end" developer to become an attacker: millions of people at low effort instead of thousands or hopefully even only hundreds (when combined with incentives, not just ability).

If you publish a closed source binary with obfuscation--one which is restricted to a limited usage profile (like if nothing else it isn't in the right UI form to "trick" someone into triggering it, or where what it ostensibly "does" is too blatantly noticeable)--you limit the number of people who both have the time and incentives to work out the vulnerability and then rebuild a stable exploit for it (which is hard) down to a small number of people, almost none of whom (including the attackers) are then incentivized to publish a blog post (or certainly code) until at least months after it gets fixed (as was the case here).

And so, as someone who had been sitting in the core of this community--where everyone is wearing a grey hat, the vendors are the "bad guys", and "responsible disclosure" is being complicit in a dystopia--and dealing with these ethical challenges for a decade, my personal opinion is "please never ever drop a zero day on the world without it being a closed source obfuscated binary" unless you want to drop the barrier to entry so low that you have creepy software engineers quickly using the exploit against their ex-spouse as opposed to "merely" advanced attackers using the vulnerability for corporate or government espionage.


> And so, as someone who had been sitting in the core of this community--where everyone is wearing a grey hat, the vendors are the "bad guys", and "responsible disclosure" is being complicit in a dystopia--and dealing with these ethical challenges for a decade, my personal opinion is "please never ever drop a zero day on the world without it being a closed source obfuscated binary" unless you want to drop the barrier to entry so low that you have creepy software engineers quickly using the exploit against their ex-spouse as opposed to "merely" advanced attackers using the vulnerability for corporate or government espionage.

Obviously you have a better understanding of the iOS jailbreak scene than I ever will, but I still have to say I disagree with this ethical viewpoint. Personally, I'd rather run an open source exploit chain than obfuscated binaries from parties I do not know that are difficult to be sure are safe. Thankfully in the case of unc0ver that is not an issue anymore, but in the past it has been an issue for longer time periods. OTOH, if there is really a moral dilemma in releasing 0days as open source specifically because of the small time abusers and not nation state adversaries, I don't understand how this moral quandary doesn't mean you can never ethically release an iBoot/more generally any bootrom exploit, for example.

I'm genuinely curious how many abusive people are motivated enough to come up with a creepy use for a tethered jailbreak. I know it's possible, but short of rolling your own stalkerware, it really doesn't seem too straightforward?


Project Zero makes it their job to find things even if manufacturers don't (I'm sure they were in this case, though). In this case I would assume it was just curiosity.


If it only took 4 hours, it might well have been "just because I could".

Google doesn't micromanage employees by the half hour like many companies...


There is nothing to brag about. I want to own my device. I want to install on it whatever I like.


Working for the wrong side, snitchy snitch.


FTA: "...the LightSpeed bug was fixed in iOS 12 with a patch that didn't address the root cause and instead just turned the race condition double-free into a memory leak. Then, in iOS 13, this memory leak was identified as a bug and "fixed" by reintroducing the original bug, again without addressing the root cause of the issue..."

Ooof. Talk about running in circles. Either this was someone who is swamped with work and spaced out, or a new programmer who wasn't familiar with the original. Oddly, I feel bad for both of them!


Regardless of how bad the original fix was, this is why testing is important. The original person should've added tests to make sure that specific issue doesn't come up again, and they would've caught the regression.

> Thus, this is another case of a reintroduced bug that could have been identified by simple regression tests.
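
And the test doesn't have to be clever; the shape is basically "re-run the original POC on every build and see whether the device survives". A hypothetical sketch of that shape (the real regression test would be the actual POC):

    #include <aio.h>
    #include <pthread.h>
    #include <string.h>

    /* Hammer lio_listio's error path from several threads. On a
       vulnerable build, a loop roughly like this is what turns the
       race into a panic; surviving to the end is the pass condition. */
    static void *hammer(void *arg) {
        (void)arg;
        char scratch[16];
        for (int i = 0; i < 100000; i++) {
            struct aiocb cb;
            memset(&cb, 0, sizeof cb);
            cb.aio_fildes     = -1;         /* invalid fd: force the error path */
            cb.aio_buf        = scratch;
            cb.aio_nbytes     = sizeof scratch;
            cb.aio_lio_opcode = LIO_READ;
            struct aiocb *list[1] = { &cb };
            lio_listio(LIO_NOWAIT, list, 1, NULL);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, hammer, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        return 0;   /* still alive: the kernel didn't panic */
    }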


> Talk about running in circles.

So maybe writing and reading useful commit logs is not such a bad idea after all :)

Reminds me of this quote: "Those who don't know their history are doomed to repeat it."


Or comments in code. They are very useful when pointing out gotchas.


...or the more interesting and perhaps less plausible explanation: someone who doesn't toe the line, someone who was trying to "take down the system from within"...

I often wonder what goes through the minds of those whose work helps companies exert more control over their customers. Maybe some of them are not so "obedient" after all...


Hanlon's Razor suggests otherwise. People said that the Windows Metafile bug in 2005 was a backdoor, which was obviously wrong.


So proud to have reverse engineered an 0day. Ok, move on. Nothing to see.



