Why are people out to crucify Apple for a story that's still being resolved? The article clearly says:
"...Due to a processing issue, your credit will be included on the security advisories in an upcoming update. We apologize for the inconvenience," Apple told him when asked why the list of fixed iOS security bugs didn't include his zero-day..."
"...We saw your blog post regarding this issue and your other reports. We apologize for the delay in responding to you," Apple told Tokarev 24 hours after publishing the zero-days and the exploit code on his blog...
"...We want to let you know that we are still investigating these issues and how we can address them to protect customers. Thank you again for taking the time to report these issues to us, we appreciate your assistance..."
The company hasn't denied the bounty, they're just incompetent / slow on this process.
Feels like everyone is out to paint <x> company with pure confirmation bias using whatever half-baked story is available. Even I feel for company leaders in this kind of shitty journalism environment. And the rest of the comments here are just piling onto the echo fest on autopilot.
You're framing this as if it's all about the bounty and Apple just hasn't gotten around to it yet. That's only a small fraction of the story and could easily be forgiven. If I wanted to make Apple look good, I'd focus on that part, but that would be rather biased to ignore the whole picture...
Tokarev discovered 4 iOS 0-days, then reported them all to Apple back in May. After months of Apple's continued refusal to fix or even publicly acknowledge all four of the issues, Tokarev made all of them public on GitHub.
Weeks passed, and now it's today. Apple has yet to fix or publicly acknowledge two of the four security vulnerabilities. That should make Apple look bad, because it reflects fundamentally irresponsible security practices.
Yes, I'm biased. I'm human, not a computer, and it's stuff like this that makes me biased towards Apple. They should receive negative publicity for this, then they should change how they do things. At the very least, app developers and users should be warned about the two issues that have yet to be fixed.
I don't know anything about Apple's bug bounty program except that there's a prevailing attitude that it is not the well-oiled machine that Google's bug bounty program is perceived to be, and I'm not super interested in making a case for Apple here. But because this is a recurring theme in every discussion about every bug bounty run by anyone:
* There are valid reasons that bugs can take longer to fix than you'd expect; the most notable of them is when the bug you found is actually systemic, or has a deep root cause, and the real fix for the vulnerability is more complicated than the surface bug. Without a hard timeline, some shops will work to get the root cause fixed on some bugs even at the cost of an increased timeline, because the patch for the surface bug reveals the pattern and amplifies risk to customers.
* As a reporter, you can take some measure of control over the process back by providing a fixed timeline (like the P0 90 days). There's no negotiation needed; you give the vendor time to fix and they either do or don't, but either way you're going public. That is a valid way to go about things, but may cost you the bounty.
* These things are bug-dependent, and the process that runs for a zero-interaction RCE won't be the same as the process that runs for a bug that requires a malicious app store app and only gives access to the contact database.
* Message boards tend to expect that big vendors can just shell out for the bounty as a show of good faith. It's easy to see why they believe that. It makes sense. But it also creates broken incentives. The limiting reagent on bugs isn't bounty dollars (these are indeed barely even rounding errors to major vendors), but rather programmer time. If you pay out for weak, stuck-in-process bugs, you create incentives that redirect programmer time to those weak bugs and away from more significant bugs; as angry as you can reasonably be about a malicious app being able to snarf your contacts, if you're rational, you're a lot more concerned about memory corruption flaws, which is what you really want people spending their time on.
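The fixed-timeline approach in the second bullet is mechanical enough to sketch. Here's an illustrative Python version assuming a P0-style 90-day window plus a short grace period when a patch is imminent; the exact numbers and extension rule are assumptions for illustration, not any vendor's actual policy.

```python
from datetime import date, timedelta

def disclosure_date(reported: date, window_days: int = 90,
                    patched_before_deadline: bool = False,
                    grace_days: int = 14) -> date:
    """Compute the public-disclosure date under a fixed-timeline policy.

    The 90-day window and 14-day grace period mirror the commonly cited
    Project Zero numbers, but the policy here is illustrative only.
    """
    deadline = reported + timedelta(days=window_days)
    # Some policies extend the deadline slightly if a patch ships close to it.
    if patched_before_deadline:
        deadline += timedelta(days=grace_days)
    return deadline

print(disclosure_date(date(2021, 5, 1)))  # 2021-07-30
```

The point of the mechanism is that there is nothing to negotiate: the vendor knows the date from day one, and publication happens whether or not the fix lands.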
> If you pay out for weak, stuck-in-process bugs, you create incentives that redirect programmer time to those weak bugs and away from more significant bugs; as angry as you can reasonably be about a malicious app being able to snarf your contacts, if you're rational, you're a lot more concerned about memory corruption flaws, which is what you really want people spending their time on.
I don't understand how this isn't already reflected in bug bounty pricing tiers? Like, if I access your contacts, I get $x, but if I can access all your photos, I get $xx. As a user, I couldn't care less for how my data got accessed, whether it's a logic bug or memory corruption…
Right, and the responder is saying that in a competition for resources, you should pay less for non-memory-corruption bugs, which should also mean they aren't as hard to verify.
Is your point that all bugs soak up time to fix? If so, pay the bounty and add it to the backlog. Or is it too much time to verify? That seems like something you can kick back to the reporter.
Apologetics for a bug bounty program dragging its feet on a payout because "it's not a mem corruption bug" is, on its own, unconvincing.
Well, that circles back around to my point: if Apple wants people to submit more complete device takeovers, they should pay more for them (and they do, or at least they claim they do).
Well, yes, that's why Apple won't pay a million dollars for it. But there are tiers lower than that ("Unauthorized access to sensitive data from a user-installed app") which do cover the kind of bug described here.
You've lost me. What does this have to do with the point I made about which bugs we want security developers at Apple spending their time on? You seem to be making my point for me.
The whole point of the bug bounty program is that anything that qualifies for it is something that Apple considers to be important enough to pay out money for, because they (ostensibly) want people to report these to them. I understand if they are overloaded with "this function leaks the country of the device" bug reports but the ones that they actually include in the bug bounty program are those that have the potential to significantly damage their stated goals around security, so it's kind of the point that they review all of them.
If Apple wants to nudge people towards submitting more serious bugs, they can pay more so people are incentivized to work towards those rather than mucking around in gamed. But, they don't really get to say "we didn't really have time to get around to this Contacts bug, sorry": people expect them to have it fixed. That's the whole reason Apple offers a bounty at all: so that researchers tell them about it early so they can fix it.
> * These things are bug-dependent, and the process that runs for a zero-interaction RCE won't be the same as the process that runs for a bug that requires a malicious app store app and only gives access to the contact database.
It would be interesting to understand at what point this becomes a GDPR issue, and whether the GDPR legislation can be used to pressure companies into expediting this process.
There's a "be careful what you wish for" argument here, because my understanding is that the FAANG vendors are snapping up security people just as fast as they possibly can, and, again, ceteris paribus you'd rather have those people working on the actual most serious vulnerabilities rather than the ones causing the noisiest bounty drama. But you could reasonably go either way on this I guess.
The degree of damage done by a particular vulnerability differs from person to person. For one individual, losing their contact database to some unidentified third party might be a 'meh' event; for another it could be a disaster.
Devil's advocate here: I've worked the other side of managing bug bounties.
It is entirely possible the researcher found something but didn't realize how deep the problem went. Apple may have released an incremental patch and is working on fixing a larger issue they found when digging into it.
When this has happened in the past, from the researcher's perspective things seem quiet/delayed, because we obviously can't share details of a larger vulnerability with them. All we can really do is ask for more time. In the end it all works out and they get paid out/credited for the original + follow-on bug.
Why wouldn't the company communicate to the researcher "we found a larger issue related to this; your bounty will be upgraded to X. Please restart the clock for public disclosure" or something along those lines? Seems like better communication would create a win-win situation.
My first thought would be that the team within Apple worries the researcher might resell the vulnerability to an exploits site.
Not part of the security industry so not sure what is common or not, but I would understand Apple being worried about sharing too much with a researcher they may not be familiar with.
I would also understand the researcher's point of view that this fell through the cracks or that Apple is not willing to fix it, which is likely what happened.
Apple should be paying enough money that that issue is not a consideration. If I’m Apple (or anyone else for that matter) I’m paying absolute top dollar times two to resolve these issues. And I’m not even thinking twice about it.
That is, unfortunately, not at all how the bug-bounty market works. Apple (or any other tech company) can't outbid three-letter agencies, certainly not on a regular basis. Open-market value is at least 10x higher than what companies will pay directly.
Apple will pay a million bucks? Fine, NSA TAO will pay $10M. Apple can't pay $10M or $100M a bug on a regular basis; for the customers to whom this matters, the check is basically blank, as much as it takes.
>One person who will share those sales numbers is a South African hacker who goes by the name “the Grugq” and lives in Bangkok. For just over a year the Grugq has been supplementing his salary as a security researcher by acting as a broker for high-end exploits, connecting his hacker friends with buyers among his government contacts. He says he takes a 15% commission on sales and is on track to earn more than $1 million from the deals this year. “I refuse to deal with anything below mid-five-figures these days,” he says. In December of last year alone he earned $250,000 from his government buyers. “The end-of-year budget burnout was awesome.”
For those who figure this is a great way to monetize their security skills and actually have the chops to do it:
It should probably be pointed out that once you do this, you’re in the weapons industry. Your work will likely be used, directly or indirectly, to put a bomb through someone’s roof or put them in prison for a very long time. Make sure you’re okay with the ethics of it.
> It should probably be pointed out that once you do this, you’re in the weapons industry. Your work will likely be used, directly or indirectly, to put a bomb through someone’s roof or put them in prison for a very long time. Make sure you’re okay with the ethics of it.
By this logic, Americans should stop using cars at all, because all that oil is coming from the Middle East and Saudi Arabia.
What? This does not follow at all. You’re implying that any degree of involvement in activities that have negative consequences is equivalent. That’s incorrect!
One doesn't, I don't think; I think one sells to one of several grey-market brokers who in turn sell to DOD. But I think it's more productive to substitute "the IC" for "NSA TAO", because there are several countries (on the "sort of legitimate" side of this market) buying. All of them can pull any plausible amount of cash for a vulnerability out of their couch cushions (then again, so can small countries).
They can’t be that worried - the bug that allows you to input behind the lock screen on OS X has now been a thing for… six years, over three or four versions of macOS? I’ve reported it, but each time they said it was a feature, not a bug.
Damn strange feature that allows me to compromise any screen-locked mac.
Surprisingly straightforwardly - when you open the lid, focus is initially still on the desktop, not on the lock screen. You may have noticed that if you’re too quick with your password, the first few characters don’t appear. They don’t go nowhere, though - they go to the desktop.
If you get a certain key combination in before focus switches, it stays on the desktop, and you can continue to input - fire up a terminal, do whatever you fancy. It’s all blind, but still perfectly dangerous.
Certain full screen apps, if they have focus when you lock, retain focus indefinitely. Paradox interactive games, for instance - stellaris exhibits the behaviour nicely. This even includes mouse focus, and if you Apple-tab, then focus goes to whatever you Apple-tab to. You can only restore focus to the Lock Screen by clicking in the password field.
Both times I reported this I got a pedantic “locking the screen does not terminate applications, which may continue to run in the background” response.
I would assume that trying to get a bounty means accepting that you're not selling the vulnerability elsewhere. Now, maybe the shadiness inherent in the field means no one trusts that agreement anyway, but I suspect there's generally some amount of trust for researchers submitting vulnerabilities, particularly if they have some kind of history of good faith.
Also, I'm not sure that saying "we have discovered a deeper problem here beyond what you reported" really delivers much information beyond perhaps telling the researcher to keep investigating (although if they're already getting the bounty, the additional investigation wouldn't really be useful).
> I would assume that trying to get a bounty means accepting that you're not selling the vulnerability elsewhere
Having an e-mail from the company confirming the bug is serious and systemic massively raises its market value. Security is necessarily trustless. These game dynamics are unavoidable.
None of us are thinking of any of this before the brokers already have; this work has been going on for many, many years. You can resell vulnerabilities, but the contract terms you get from brokers are tranched, and they stop paying when the vulnerability is burned. Your incentive not to spread your bug around is that you'd be cutting off your nose to spite your face, not to mention that you're probably making the terms you'll get for future bugs worse.
> I would assume that trying to get a bounty means accepting that you're not selling the vulnerability elsewhere.
That is a very mistaken assumption. Even NDAs backed by threats from nation state intelligence agencies aren’t sufficient to keep exploits from being resold multiple times.
> In the end it all works out and they get paid out/credited for the original+follow on bug.
I've worked on the company end of bug bounties too, and it does happen that a report just falls through the cracks. Seemingly-inactive reports do need a certain amount of maintenance; you don't want to just trust that everything will work out in the end. (That said, as long as you get responses when you ping the company, things are working in the background.)
(edit to followup: in about 18 months of this, I encountered one report that had fallen through the cracks. Obviously, there might have been others that never came to my attention at all, but the companies are tracking things much more carefully than researchers often assume.)
I have also worked on managing bug bounties and that is why you keep lines of communication open with the researchers. Not to throw stones in glass houses, but there are a number of ways Apple could improve on their approach to how they do their bug bounty program.
I have heard of many researchers having extremely long delays, poor communication and simple things like not acknowledging the bug submissions.
After filing multiple bug reports with Apple against WKWebView, I can say that extremely long delays and poor communication are not isolated to the bug bounty team :)
Security bugs are much more important than other bugs. A security bug that's affecting no one currently could turn into a disaster tomorrow. A functionality bug that's affecting few people today will almost certainly affect few people tomorrow.
"Since then, Apple published multiple security advisories (iOS 14.7.1, iOS 14.8, iOS 15.0, and iOS 15.0.1) addressing iOS vulnerabilities but, each time, they failed to credit his analyticsd bug report."
"Two days ago, after iOS 15.0.2 was released, Tokarev emailed again about the lack of credit for the gamed and analyticsd flaws in the security advisories."
They didn't give him credit in the last 5 advisories. Really no excuse for that imho. If Apple keeps this up then why would anyone report bugs to them when you can just post it online and get credit for it right away? Or sell it on some 0-day site.
If credit is what you care about, it's straightforward to ensure you get credit without working with Apple's bounty program. You can do what P0 does and provide a fixed timeline after which you're publishing, and nobody credible is going to hold that against you (in part because P0 has established this norm).
Obviously credit is important to them as proof of competence. If the company doesn't give them credit, how can they build their business and portfolio? Anyone could just say "I was the one who discovered this."
Every field works in a certain way, and when it comes to bounties you want to make a name for yourself. You can't just pull up and claim you're the one.
But yeah, maybe they should just sell it to third parties.
>Plus Apple has a history of being incompetent and slow on this
Have they, really? Just because you can find the odd instance of such a story where they were doesn't mean they have a history of being incompetent and slow on this (the same way someone who hit 99% of their three-pointers doesn't have a history of being an awful shooter).
Apple's bug bounty is notoriously slow to respond, to the point that it has posed legitimate security concerns in the past: particularly their rhetoric around Thunderspy amused me.
Is this not a potential Sharpshooter fallacy? Are you sure that we aren't mainly hearing about bounties when Apple doesn't handle them smoothly and not so much when they are handled smoothly?
Or to put it another way, when you poll people who complain Apple is notoriously slow to respond, don't be surprised if your conclusion is that Apple is notoriously slow to respond.
How can we reasonably say Apple is moving slowly to fix bugs when we don’t know how much work is going on behind the scenes? A slow response can just be a slow response.
Probably because the 0days have been fixed and the public have received the benefit of the security researcher's report to Apple, but he hasn't received any recognition or compensation through the bounty program via which he reported the vulnerabilities?
And iOS users should be grateful that researchers report those bugs to Apple to get paid. They could also sell them to some spying companies, and maybe those pay well and very fast.
Pay very well? Often, assuming they actually pay, sometimes you can get stiffed there too. Very fast? Nope. Easy to work with? Nope. Communicate with you any better than Apple through the process? Not usually.
> They could also sell it to some spying companies
The above type of organization is what I was referring to. So if you consider the grey market buyers[1] of exploits “terrorist orgs”, then yes. If you use the normal definition of “terrorist orgs”, then hell no.
[1] three-letter government agencies, defense contractors, and those who sell to them
Why? What would be valuable about that? Specifically, what would be valuable about getting someone's GameCenter contacts? These aren't business or family contacts. These are people the person games with. It's a privacy violating bug but not a show-stopping bug. Important but not valuable.
You're misunderstanding the vulnerability. The bug is in gamed, the Game Center daemon, but it allows access to the entire CoreDuet database, which does on-device intelligence stuff. Duet essentially logs everything you do on your phone, which means that if you look at the database it'll contain logs for all your interactions, not just those with Game Center contacts.
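For a sense of why that matters: CoreDuet's store (knowledgeC.db on-device) is an ordinary SQLite database of timestamped event streams. Here is a minimal sketch against a toy in-memory stand-in; the ZOBJECT/ZSTREAMNAME/ZVALUESTRING names follow public forensics write-ups of knowledgeC.db, but treat the exact schema as an assumption rather than something verified against this bug.

```python
import sqlite3

# Toy stand-in for CoreDuet's knowledgeC.db. The table/column names here
# follow public forensics write-ups; the real schema is assumed, not verified.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE ZOBJECT (ZSTREAMNAME TEXT, ZVALUESTRING TEXT, ZSTARTDATE REAL)"
)
db.executemany(
    "INSERT INTO ZOBJECT VALUES (?, ?, ?)",
    [
        ("/app/inFocus",       "com.apple.mobilesafari", 1633000000.0),
        ("/app/inFocus",       "com.example.bank",       1633000300.0),
        ("/display/isBacklit", "1",                      1633000600.0),
    ],
)

# A reader with access to the whole database sees every stream -- a timeline
# of device usage -- not just Game Center contacts.
rows = db.execute(
    "SELECT ZSTREAMNAME, ZVALUESTRING FROM ZOBJECT ORDER BY ZSTARTDATE"
).fetchall()
for stream, value in rows:
    print(stream, value)
```

The point is that the severity comes from what the database aggregates, not from the daemon the bug happens to live in.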
You are correct, I did misunderstand that portion of it. But that doesn't change the fact that Apple is willing to pay $100k for a bounty like that when no one else will, and Apple's actions don't suggest at all that they won't pay out the bounty. If you've ever been part of a program like this on the developer side, you know it's possible that they discovered a larger bug that this was a part of, or that someone else had already reported this bug. Tracking down the origin of this and the tickets involved takes time, and fixing the bug is always the priority. They haven't acknowledged that he's the originator of the report, so his insistence that he be credited immediately is a bit immature and premature.
There's nothing here that suggests Zerodium, or someone similar, would pay the same amount Apple is offering, and there's nothing that suggests Apple doesn't intend to pay this out or credit him. That's all pure conjecture.
I have no idea if Zerodium would pay out that much for this bug (probably not) or anyone else (maybe?) but Apple in general has a poor track record of paying out bug bounties. They do sometimes, but it seems like it is far rarer than their website says they should.
>Apple in general has a poor track record of paying out bug bounties
What is this based on? My understanding is that Apple pays out 99% of the reported bug bounties and that's only because they include multiple submissions in the totals but not in the payouts (they only payout the first discovery or root discovery).
Those are my favorite recent examples, but Apple specifically has huge issues with turnaround time. They also don't communicate with or assist the researchers who found these exploits, which makes things particularly frustrating for researchers who, like Apple, ultimately want to secure Apple's systems. Their overt hostility, history of poor communication, and frankly pathetic bug bounties all contribute to how people perceive Apple's relationship with security experts.
> The company hasn't denied the bounty, they're just incompetent / slow on this process.
With 0-day security vulnerabilities, slowness equals incompetence. Companies with unbounded resources like Apple have absolutely no excuse for not being able to move as quickly as a small startup on issues like this, unless their message to shareholders is "yes invest in us so you can see our performance literally decrease with every dollar invested".
Some security people love to hand-wave and issue prophecies of doom for attribution and attention. It’s great chum for writers — easier to run with some guy's grievance than research a more substantive story.
There's a spectrum of quality to these stories, and one sign that you're tending towards an end of that spectrum is the use of the term "zero-day" without qualification. These are bug bounties; all of these bugs are zero days, no matter how severe (or not) they are. It's literally the least important detail in the story about how a bounty is being handled.
Are there lots of articles that describe zero-days but don't include the actual term "zero-day"? I may have been underestimating the state of journalism...
Because those emails are probably lies, they're just delaying and delaying and hoping it will just go away, and writing fake "We're sorry"'s when they're forced to.
What a slap in the face. This guy is owed a boatload of cash, and typical Apple just kicks the can down the road. Next time I hope he sells his next vuln to the highest bidder.
Who's the next highest bidder after Apple for a bug in `gamed` that allows you to access GameCenter and download contacts? It's a significant vulnerability, but there's e.g. no price list entry on Zerodium (you can take Zerodium more or less seriously, this is just a data point) for anything but code execution, which this vulnerability isn't.
Who? Speculate as to who they might be. The six figure numbers you're familiar with are for code execution bugs. This is obviously not that. So they're not anybody that quotes prices for bugs, or anyone directly comparable to them.
Governments can already pay prices comparable to the supposed bounty valuation of this bug for code execution. They're probably not shelling out six figures in gold bars for a bug that exfiltrates contact lists from apps that have to be installed from the app store.
The non-bounty market clearing price for a lot of scary sounding vulnerabilities is $0.
Cellebrite and all the surveillance-as-a-service shops might be interested in information-disclosure bugs. You may not get the $100K Apple promised, but maybe you can sell it four times for $30K or something like that if the bug is still "good enough" for certain uses.
RCEs in Windows or iOS go for a lot more than a measly $100K if you can manage to get in contact with the right people. Think 10-20 times that.
Full chain RCEs in iOS go for 1MM from the Apple bounty program, so you'd imagine they'd have to go for more than that from a tranched grey market contract.
This is a bug that allows you to read contacts from a malicious app installed from the app store. It's not drive-by contact exfiltration; it's intensively interactive. I'm surprised the Apple bounty terms are so generous-sounding about bugs like these, but I read them, and I'm not contesting the $100k the article claims this is worth.
Apple can't really outbid the grey market on RCEs, but they have clearly outbid it on this bug.
Yes, I agree with this assessment of this particular bug. From the description, this particular one is probably not very valuable to anybody. You have to get an app into the app store and then trick people into installing it, for not that much information you can exfiltrate. If it were a bug that allowed attackers to exfiltrate contacts and email addresses and such just by having the victim visit a website or open an email, that would be another matter, and still quite valuable even though it wouldn't be RCE.
But who knows, it still might be worth a few bucks to you because you've already figured out the delivery. And your relationship with somebody who has the skills to find interesting bugs and the willingness to sell to you might be worth even more, so you might pay money not just for the bug but for the relationship.
Your comments are the only ones here which aren’t divorced from reality. It’s weird. Who are these supposed guys paying six figures for this sort of thing? It’s just not a valuable thing.
I don't know that it's not valuable! It's a good bug, from what I understand of it. The issue here is subtle.
It's not that bugs of all stripes don't have plausible value. It's that there isn't a market for most of them. Bugs are small parts of the enterprises that exploit them. To purchase a bug for significant amounts of money, it has to slot into some kind of business process that will profitably† take advantage of it.
What people are subtextually observing with these $250k vulnerabilities is that there are a bunch of well-scripted playbooks for profiting from RCE vulnerabilities on widely distributed, unevenly patched devices. There may be no meaningful cap to how much a phone RCE is worth, since the IC's alternative to RCEs is human intelligence work that will dwarf any RCE cost just in health and benefits overhead for personnel.
But there just aren't a lot of business processes that profitably exploit stolen contact lists from malicious application installs. You can come up with lots of stories about those processes, but the key thing is that for a bug to be worth a bunch of money, that process already has to exist and be working; the cost of building all the business process stuff around the bug will rival the cost of the bug itself. Bugs have finite lifespans, and "snarf contacts from malicious app" bugs are idiosyncratic, and tend not to be pin compatible with a steady stream of similar bugs that would justify keeping that exploitative process up and running.
This is for what it's worth all my own personal weird theory of the situation, and I don't sell or buy bugs. But I have repeated the theory to many people who do either or both, and nobody has told me I'm totally wrong about it.
† For some possibly non-economic definition of "profitably"
I'm with you. I think these cheap, Apple-bashing blogs want to make this into a bigger deal but there are a few things that don't add up to make this the huge issue they think it is:
1. The bug lets you download contacts. Nothing else. As you've said, no one's going to pay six figures for a bug that lets you get someone's contacts from GameCenter. If this was even slightly more abstract and didn't specifically deal with just GameCenter, I could see it being valuable for companies that do phone-to-phone transfers, for example, because you could download someone's contacts from a locked device. This isn't that, though.
2. Nowhere in the article, or even the original tweet, does Apple acknowledge or state that he was the person who initially found the bug. They just confirmed that the bug exists and asked him to keep it confidential. It's entirely possible that the reason for the delay is that someone discovered a lower-level bug that rolled up to this one, and it's a bit less clear-cut who is owed the credit for the fix that they released. They're not just going to go around and pay everyone who claims to have found a bug. They have to verify it and make sure it's not something they've already discovered through another report or on their own.
3. Sometimes this stuff just takes time. When you're dealing with codebases as large as this, changing a small thing to fix a bug can unintentionally break a bunch of other things. It's not always a simple matter of "this small piece is broken and is completely independent of everything else". We have no idea whether this is one of those cases because we don't really know much about it (and for good reason).
> 1. The bug lets you download contacts. Nothing else. As you've said, no one's going to pay six figures for a bug that lets you get someone's contacts from GameCenter. If this was even slightly more abstract and didn't specifically deal with just GameCenter, I could see it being valuable for companies that do phone-to-phone transfers, for example, because you could download someone's contacts from a locked device. This isn't that, though.
I, the evil overlord, would very much like to know who is ratting out my secret initiatives to those nosy journalists. Maybe some of them don't keep good information hygiene and will download my simple but quite addictive game? At least I get to learn some names and addresses.
It's not just that it's contact data; I agree, exposing contact databases is a big deal, and that this (as depicted) is a significant bug. It's that to accomplish that with this bug, you have to install a malicious app from the app store. That is a very high hurdle towards operationalizing the bug, especially since, as I said elsewhere, Apple does API-level surveillance of apps in the app store.
But isn't that the reason malicious actors buy legit apps made by small shops: to insert what at best is adware/spyware into something useful that has already made a name for itself?
Or make useless copycats, as Kosta Eleftheriou proved already is a way of choice in the iOS App Store.
I'm not saying you can't do anything with this bug, just that it's much harder to do something with it than with a drive by vulnerability. It is a little surprising to me that Apple's bounty sort of implies they'll pay so much for a bug like this.
The reason attackers buy or compromise existing apps is exactly because they want to sidestep the hurdle of tricking people into downloading their malicious app. Using an existing app gives you access to the existing users.
The GDPR indeed has provisions to fine companies for "avoidable" data leaks due to lacking security practices. The regulators will not pay you a bounty for reporting companies, and there is a big difference between a normal "bug" and "bad practices".
E.g. one of the first GDPR fines here in Germany was issued against a company that had their customer DB dumped[0], specifically for still storing some user passwords in plaintext.
Shady advertising SDK? (Also, don't people use code execution to exfiltrate contacts and messages? This is basically what this bug does, albeit with "several clicks" involved.)
As a first-order effect, sure, but Apple is not immune to the damage that this causes either. More importantly, their failure to pay or honor their commitments would be the root cause of this in the future.
They opened this "bug bounty" door on their own; they are solely responsible for its success or failure.
Sure, there are secondary effects. However, they are comparatively mild.
The two options:
- someone fully discloses a 0-day. Apple is embarrassed, users can take mitigating action until it's patched. Apple is probably forced to patch. End result: really embarrassing for Apple. Small risk to users that's pretty ephemeral.
- sell to the highest bidder. Black market or at best grey hat. The exploit is used against users. Nobody really knows it's happening. Maybe that eventually comes back to give Apple a bad reputation, but it's not likely to happen in the short term.
One of these courses of action disproportionately hurts users a lot and Apple not very much. The other hurts basically only Apple and users very little. Even if you argue that the black market might eventually hurt Apple a little bit, it's still a very small hurt.
If your goal is to piss off Apple, it seems clear that full disclosure is the thing to do here. If your goal is just to clear your conscience about the morality of selling exploits to bad people who intend to use them to do bad things, I'm sure you'd find a way to justify that no matter what Apple did. The human mind is good at self-justification.
> More importantly, their failure to pay or honor their commitments
What commitment? A bug bounty program isn't a commitment to do anything. It's not a contract or a work agreement. At best it's sort of like a contest.
But even disregarding that, I'm not sure this bug even falls into any of the categories they list. What they say is: "iOS user-installed app can access sensitive data, including Contacts, Mail, Messages, Notes, Photos, or real-time or historical precise location data." I'm not sure this fits.
Is Apple being a dick? Yes. Are they breaking commitments they made? Not super clear.
> Next time I hope he sells his next vuln to the highest bidder
And thereby accomplish what, exactly? There is still merit, albeit not from a material-wealth standpoint, in doing the right thing for the right reasons.
I agree somewhat; users (myself included) get caught in the crossfire if someone releases a zero-day out in the wild. But I also think there needs to be negative feedback for these tech giants that expect work for free.
But in the grand scheme of things, does it even punish the tech giants? They have so many claws in a user's life, and in the case of Apple, your only other choice is Google or a bunch of shady OEMs.
At the end of the day, the only people who pay for it are users themselves; their data is compromised and irreversibly out there.
Unfortunately exercising consumer choice works a lot better in a healthy free market than it does in one controlled by an oligopoly. I totally agree that people should try to avoid buying from companies that do immoral things, but it can be quite hard in a consolidated market.
You can rely on the vast majority of people to do the right thing when it's in their best interests.
You can likely rely on a good majority of people to do the right thing when it isn't for or against their interests in any substantial way.
I'm not sure how many people will do the right thing when it's against their best interests by some small but noticeable amount.
I'm also not sure how many people will do the right thing when it's vastly against their best interests, but it's bound to be far fewer than the prior group, and I suspect it's way below a majority.
The point isn't that these people aren't doing the "right thing". It's that these programs are designed to align doing the right thing with the best interests of the researchers. So noting that we might get more outcomes that are bad for society at large, or for the company in question, if companies don't hold up what they agreed to isn't just a valid observation; it's the likely outcome, assuming these programs exist for a reason.
To put this in perspective, say you find a suitcase with a million dollars in it. You can turn it in, or you can keep it for yourself. If there's no real expectation you'll get anything if you turn it in, how does the reasoning go in your head? What if you know you'll get 10% for finding it and turning it in? What if you live in abject poverty? What if you have $60k worth of medical bills for a family member to pay off?
What suggests that any of this is "free work for a trillion dollar corporation"? Apple hasn't acknowledged that this person discovered this bug yet. They've only acknowledged that it existed and that they were going to patch it in the future. Crediting someone for a bug bounty isn't as easy as you all are making it out to be.
Robbing a bank is immoral, selling information about how a piece of software works, in my humble opinion, is not. Or if it is, then it's not even close to the level of "wrong" that is robbing a bank.
Also the software to subvert security on mobile devices, build firewalls on a national level, and inspect every internet packet for disloyal statements. All that is packaged and sold to our authoritarian allies with the full approval of the government the majority of people support.
Selling a zero day would probably fall under a weapons clause.
If you sell a knife, and it's used for a stabbing, are you culpable? Thousands of people buy knives every day, and most of them don't stab anyone. So unless there was good reason to suspect something, we would say no.
Most zero-days are probably not bought by China to spy on dissidents; they are more like knives. On the contrary, when we sell bombs to the Saudis, we can be 95% sure they will be used in Yemen.
Zero-days, unlike knives, are not dual-use instruments. The only people buying those zero-days are incorporating them into surveillance and monitoring systems. Literally, how else could an exploit be monetized? Stealing crypto-wallets?
You're also either a victim or an oppressor. No wrong can be done by anyone in the former group, and no good by the latter. Such ideas of victimization do nothing but justify the use of power and engender intergroup conflict.
Except that as this thread demonstrates, there is no realistic possibility of this researcher actually making more $$$ in real life by trying to find another bidder.
Not for this exploit. They pay more for code-execution exploits. No one is going to pay the sums you guys are thinking they will over a bug that gives you someone's GameCenter contacts. It's not a trivial bug but it's also not a very valuable one.
No, he's not. Those are low-priority bugs and the only thing that made them stand out was the fact that he dropped them online without a patch. RCEs get priority in patching, and his priv esc issues were not as important.
Zerodium (https://zerodium.com/program.html) pays out $2 million for an iOS “full chain with persistence” exploit. $500k for an iMessage RCE. Up to $100k for an iOS “information disclosure” exploit (likely what this would have fallen under). Paid for via bank wire or Bitcoin/Monero/Zcash in 1 week or less. And legal.
Next time someone finds one of these, I wonder where they will report it to….
Zerodium doesn't list "information disclosure" for smartphones. "Information disclosure" from an email server means exfiltrating the emails. Zerodium will almost certainly not outbid Apple for the `gamed` vulnerability here (maybe for publicity).
Is it moral though? What do they do with these exploits? If it is to help advance the agendas of countries like Israel and Saudi Arabia how would you feel submitting exploits to them?
You can choose between a rich murderous dictator and an arrogant IT company that does not give you proper credit. Either way you are screwed as a security researcher :(
Uh, if your consideration is purely what happens to you, sure. If you have any thought in your mind about what will happen to other people due to your work, then it's nowhere near the same.
There is a really simple solution to this: just don't put your time into analyzing software from Apple. There is other software you could analyze. Go for a walk or count your toes. Everything makes more sense than searching for security bugs in Apple software if you care about morals.
So far as I know, essentially all grey market vulnerability sales are tranched, which is an important consideration when comparing bounty payouts to the grey market.
The report to the manufacturer, with the remark that there is an existing weaponized exploit, will lead to a much faster fix.
And why are you so sure that there was no weaponized exploit out there before?
So you are okay with submitting the exploit on a silver platter to people who murder dissidents because “you can’t be so sure that there wasn’t an existing weaponized exploit”?
Complete file system read access to the Core Duet database (contains a list of contacts from Mail, SMS, iMessage, and 3rd-party messaging apps, metadata about all the user's interactions with these contacts (including timestamps and statistics), and also some attachments (like URLs and texts))
I thought it included SMS and iMessage contents, which does very much sound like what Zerodium is looking for. But reading it again it’s not actually all text messages.
If you're ever in Amsterdam and feel up for drinking a beer [0] with someone interested in netsec [1], feel free to email me.
[0] Or coffee, tea, your beverage of choice.
[1] I did a couple of fairly good security courses and about 300 hours of hackthebox.eu. So while I'm not a professional, at least I've scripted with IDA Python and defeated fun boxes like PlayerTwo.
>iOS “information disclosure” exploit (likely what this would have fallen under)
Based on what? There's nothing to suggest that this falls into the category for an "information disclosure" according to Zerodium's eligibility guidelines.
Seems that no credit, no bounty, nothing, has become the way that Apple deals with the iBugs Hunters.
And all it takes is one of those unsung heroes giving up on reporting to Apple and, instead, reporting to some 0-day company, and some ransomware goes brrrr.
I'm surprised it hasn't happened yet. It seems Apple is possibly openly hostile and uses "red tape" as an excuse to obfuscate communication and frustrate those trying to do right by them instead of just posting it to GitHub and calling them out on Twitter.
It's one of a few reasons I have made some inroads into moving off the platform.
They demonstrate a clear lack of knowledge of actual security–like unzipping a firmware update and calling it "decompiling" or connecting a device to a computer and using the file transfer functionality as proof of "RCE".
>Seems that no credit, no bounty, nothing, has become the way that Apple deals with the iBugs Hunters.
Based on what? You can't possibly know anything about this bug or what stage of the reporting process it's in and Apple regularly pays and credits people for reporting vulnerabilities and other exploits. This specific bug just isn't a high enough priority to elevate it to the level you're describing.
Could it be that maybe they try to uphold the reputation of Apple by doing so?
But it doesn't make sense either way, because they could have just paid those guys and then everyone would be happy.
That's just some ass-backwards logic.
I am an Apple fan and I'm not going to defend this.
It's just plain bad behavior on their side.
It seems like the last person you would want to dick around would be someone who seems to be extremely good at finding extremely valuable vulnerabilities.
Here's a fun conspiracy theory proposed entirely in jest:
Apple doesn't want to patch zero-days used by US authorities in order to alleviate pressure on its encryption practices.
So they really only want to fix zero-days that are known broadly or get media attention. And they don't want to give too much incentive to researchers to report zero-days to Apple instead of selling them to the highest bidder (which may ultimately be the largest governments with the greatest ability to regulate Apple).
This is indeed silly, because the zero-day vulnerabilities that the IC exploits provide full kernel-level access to devices, and this just lets you read contacts from an app you install from the app store (which does local API-level surveillance already) on the device.
Apple is not so different from the status quo on end-to-end encryption as to necessitate a conspiracy (probably even in jest). They have no iCloud encryption, no photos encryption, no device backup encryption, etc.
One small correction. They do have backup encryption. As a matter of fact, your various account passwords are only backed up if you keep the encrypt option turned on.
As far as the original article, I agree completely that not paying and crediting these folks in a timely manner is just stupid and will reduce Apple security long term.
Yes, they do have some way of using your iCloud account credentials to get to the backup key. Given the level of customer support needed for forgotten backup keys, they have probably chosen this as the lesser of two evils.
If you don't like that "feature," don't do iCloud backups. I do direct backups as described in the support link. Apple doesn't have those keys.
I do wonder how much longer local, machine-based backups will continue to be supported. Could easily see a future model dropping the cable entirely, dropping local backup and modestly upping the free iCloud storage.
That automatically syncs when both your laptop and phone are plugged in and on the same WiFi network. (I think you need power nap enabled if you're using an Intel Mac.)
It doesn't matter if you don't do iCloud backups. All of your iMessages will be backed up to Apple in ways that Apple/the FBI can read, because iCloud Backup (not e2e) is enabled on every device held by the people you are iMessaging with.
Depending on how cheap your friends are, they might not have effective backups, because they won't spend money on enough iCloud space for their backups in the first place.
It would indeed be a next-level conspiracy theory to suggest the NSA planted an employee at Apple to introduce a GameCenter bug that lets you read a cache of contacts, rather than, you know, just taking the whole device over, which is what "zero day" usually implies.
That's true, but the implication of "zero day" in a news story is code execution, because those are the only zero-days that make the news. You can just read this thread and see that several people commenting here clearly believe these bugs were comparable to code execution. The article, uh, does not go out of its way to clear up the ambiguity.
The first day that anything is available is technically “day 0”, but if I told you I had a PS5 zero day you’d assume I’d discovered a major compromise of the PS5, not that I’d managed to purchase a console the day they went on sale.
Believe me when I say that all major tech companies have spooks in their ranks. They don't really need to insert backdoors or ignore bugs. They have human assets in the teams.
Ask anyone who works in one of these organisations and they will attest to how many former IC people fill the ranks in certain areas.
If Apple was complicit in US authorities breaking into their devices they wouldn't be doing so via publicly exploitable vulnerabilities. Bugs making their way into the "wrong" hands decreases trust in their ecosystem. It makes much more sense to just add a backdoor.
You've got the wrong idea. The government isn't using the same exploits as you and me; their backdoors are hidden much better and offer far more comprehensive control than just a silly GameCenter vuln.
Yes! Another option -- however, Purism are pursuing two challenges at once, developing both the software and the hardware. Pinephone just focuses on hardware.
Apple the corporation religiously and sometimes aggressively portrays itself as virtuous, safe, and honest, but this behaviour suggests those qualities are marketing tools.
My friend has just looked on Find My Mac and he can remotely format and track the whereabouts of the Macbook Pro of a guy called “Jason”. He’s never sold or had this MacBook model on his account.
The people in the Apple store provided no fixes but suggested formatting the machine which I guess would be illegal in most places.
It makes me worry that I could be Jason and someone could remotely format my computer… it’s scary that something like this is possible.
The advisory credit and bug bounty fiasco aside, when I reported a security vulnerability to Apple in 2010, they wrote, "Because of the potentially sensitive nature of security vulnerabilities, we ask that this information remain between you and Apple while we investigate it further." It seems to just be a standard inclusion in their correspondence, and not unique to this exchange.
I wonder what would happen if everyone that knows about this just starts pounding Apple's online presence (twitter, fb, etc) in protest of their actions. Would they relent and pay up?
There's an incentives problem here: If whitehat researchers are disincentivised from working on iOS, the only people researching iOS vulnerabilities will be the bad guys.
I wish Apple would do a feature freeze for iOS and macOS for a couple of years, then focus on fixing bugs, improving security and optimizing performance instead.
Because not everyone has updated yet, not everything related to the vulnerability is fixed yet, the vulnerability is kind of bullshit, people are still figuring out if there are lots of adjacent or similar vulnerabilities to handle, etc. Not sure which of these is the case here.
You can, of course, disclose vulnerabilities whenever you like. But don't expect to get a bounty if you piss off the vendor in the process.
Big companies move slow and given the pervasiveness of these devices, it's fundamentally irresponsible to allow the google-apple duopoly in mobile OSes to keep their platforms closed, forming a bottleneck on all security fixes. Trust-busting in this space is long overdue
iOS 15 from its release date has had the most bugs I've ever experienced with any former version of iOS. One thing I noticed frequently: Siri would randomly revert back to Dragon-style text-to-speech.
A former Apple person reminded me that the new major iOS version must ship on the same day as the new hardware, as the new hardware (this year, the new iPad Mini and the iPhones 13) will not run iOS 14. They are developed in lockstep with the next OS and will ship with it, and cannot ship without it.
The software is going out the door alongside the latest version of their biggest money-making product.
When it comes to phones I feel stuck between a rock and a hard place. Choose iPhone, with poor Linux integration and threats to passively scan files on my phone and forward them to LEO? Sure, they have a decent record with security, but these bug bounty reports haven't been great.
Or choose Android, with its poor privacy record, a result of being built by an ad company that's already scanning my phone and mining it for data?
edited to add - While I'd love to see a true competitor in this space (i.e. not based on Android - those projects don't seem to work out well as a result of being half-in/half-out of the ecosystem) I don't see how it's possible without the support of the large tech players - Facebook, Instagram, Snapchat, WhatsApp, Twitter, and Spotify at minimum, to say nothing of the long tail.
> I don't see how it's possible without the support of the large tech players - Facebook, Instagram, Snapchat, WhatsApp, Twitter, and Spotify at minimum, to say nothing of the long tail.
This wouldn't be much of a problem if it wasn't for Google's SafetyNet that prevents Android apps from running on hardware and software platforms that Google doesn't approve of. You wouldn't need support from large companies if you were able to run the apps they already release for Android.
Compatibility layers like Anbox or Waydroid that allow you to run Android apps on Linux can't run SafetyNet-enabled apps, despite having no problem running other Android apps.
SafetyNet prevents compatibility layers like WSL 1 & 2, Proton or WINE with Android support from coming to Windows or other platforms, as well.
I have the same problems with SafetyNet as I do with Widevine.
> What's SafetyNet have to do with anything?
I thought I made that pretty clear in my post. I was addressing a post about competitors to Android facing challenges because of lack of app support from major companies.
SafetyNet helps Google maintain their mobile OS duopoly with Apple by preventing other mobile operating systems from running Android apps, despite there not being any technical reason why the apps can't run on other operating systems.
Steam was able to bring Windows games over to Linux via Proton because of projects like WINE. SafetyNet precludes running Android apps on devices and operating systems Google doesn't approve of.
Microsoft was able to bring Linux apps over to Windows via WSL 1 & 2. SafetyNet precludes running Android apps on Windows unless Google gives Windows a pass in their DRM.
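For anyone unfamiliar with why compatibility layers fail these checks: a SafetyNet attestation verdict includes two boolean fields, `basicIntegrity` and `ctsProfileMatch`, and a gated app's backend typically refuses service unless both are true. Here's a minimal sketch of that server-side gate; the payload dicts are hand-written examples, not real API responses:

```python
def passes_safetynet(verdict: dict) -> bool:
    # basicIntegrity: the device isn't obviously tampered with. This already
    # fails on emulators, rooted phones, and layers like Anbox/Waydroid.
    # ctsProfileMatch: the stricter check that the device matches a
    # Google-certified profile, which fails on anything Google hasn't approved.
    return bool(verdict.get("basicIntegrity")) and bool(verdict.get("ctsProfileMatch"))

# A Google-certified phone passes; a compatibility layer fails both flags,
# and an unlocked-bootloader phone typically fails only ctsProfileMatch.
certified = {"basicIntegrity": True, "ctsProfileMatch": True}
waydroid = {"basicIntegrity": False, "ctsProfileMatch": False}
unlocked = {"basicIntegrity": True, "ctsProfileMatch": False}
print(passes_safetynet(certified), passes_safetynet(waydroid), passes_safetynet(unlocked))
```

Note the point of the design: because the real verdict is signed by Google's servers (the sketch skips signature verification), a compatibility layer can't just fake the flags, which is exactly how Google keeps apps off unapproved platforms.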
Likewise. I'm in the market for a new phone. I want to get something top of the line and then keep it for at least 5 years, so good updates etc. But I have serious issues with both Google and Apple at this point.
For me it isn't really the tech companies that need to buy in to make an alternative phone OS viable, but things like banks. Online mobile banking is one of the main things I use my phone for after web browsing and messaging. The probability that it won't work on some third OS is what puts me off trying some of the alternatives.
Great point. I recall some banking apps placing restrictions on logging in without SafetyNet, e.g. which also puts some of the other Android-based OS's out of the running.
I get that a lot of the Android-based projects don't pan out, but LineageOS has been going for quite a while now, CalyxOS is relatively new, and GrapheneOS (previously CopperheadOS) has successfully established itself as the de facto hardened Android platform.
You can download the source, modify it, and build them all freely. Hopefully more people can get involved and move the needle instead of only lamenting how they don't succeed while not actively trying to help them succeed. I mean this in a respectful way.
> Hopefully more people can get involved and move the needle instead of only lamenting how they don't succeed while not actively trying to help them succeed.
That's totally fair. I don't really have the time or Java/Kotlin/mobile familiarity to jump in here, and these aren't skills I can easily apply elsewhere in my career, personally.
> LineageOS has been going for quite a while now, CalyxOS is relatively new, and GrapheneOS (previously CopperheadOS)
My impression of these OSes is that they still rely on Google Play Services - or if not, micro-G which has many shortcomings. When most of the ecosystem doesn't work until you invite Google back in, it doesn't seem like a true alternative IMO.
Admittedly I haven't personally tried running any of them. Which one would you recommend trying if I were aiming to rid myself of Google's omnipresence?
You can run one of those OSes without any Google services. Paid apps are almost certain not to run, along with a bunch of other apps that rely on the services. But almost all the apps I use work great.
I have used CalyxOS and it's a great OS. I did use microG, and 98% of the apps I use worked flawlessly. If you have a supported phone, I'd definitely give it a shot.
You can install Lineage without Play, and only use Lineage.
Graphene does not require Play Services, but it only runs on a small subset of Android phones that are ~$500+ here (heavily discouraged to use anything older than 4a 5G)
Android doesn't scan your phone and mine it for data. Apps on Android scan your phone and mine it for data. Apps on iOS also scan your phone and mine it for data. The major difference between the two is that Android lets you choose which apps to put on your phone.
Google Play Services and bundled apps don't have to be enabled and sends less data to Google than the equivalent services on iOS, which must be enabled.
To install any apps on an iPhone at all, you must give Apple your billing information. Even if you install apps from the Google Play store, you do not need to give up that much privacy.
How about this: to get your location on iOS, your location must also be sent to Apple. You can get your location on Android without sending that location to anybody.
None of the apps on Amazon App Store or F-Droid use Google Play Services.
Are these comments real? They are surprisingly close-minded for Hacker News.
If you can't see the value apple offers, that's fine, but to be blind to what they offer others seems odd.
I've yet to be scammed by Apple's App Store. I.e., I can cancel my subscriptions easily, and you can even get a refund on bad apps if you're prompt, etc.
I have been repeatedly screwed by websites run by developers outside of apple. These websites have been LOADED with trackers, they have impossible to cancel subscriptions, they do all sorts of dirty tricks (I'm tired of the intercom type follow-up emails - sorry I missed you, give me one last chance etc).
I get it, the dog eat dog crapfest is appealing to some, but Apple offers an alternative, and for some people that has value. And yes, I get it, the folks making these eye blinding slow websites have lots to say about apple, but my weather app opens promptly on apple, whereas the ad littered weather pages online bog my machine (with 100x the memory) down.
I have a feeling that HN recently had an influx of users from other sites. It seems like the exodus from Reddit, for example, has resulted in a significantly lower signal-to-noise ratio in the comments. There's a lot more impassioned nonsense that's based on article headlines rather than detailed discussion of technology, and either it's just more pronounced because of the pandemic or it's actually new users that are diluting the comments.
It's especially pronounced in Apple-related threads (one of the few topics that I browse HN for as there are a lot of topics I have no experience or expertise in) where all the nuance seems to have been zapped away as of late. It's either you're a complete Apple hater or a complete Apple fanboi and there's no in-between anymore. I have lots of criticisms of Apple but it seems like there's nowhere to discuss them anymore because they're immediately taken over by "Apple wants to scan your phone" and "Apple is suing mom and pop repair shops" or other hot takes that completely misunderstand their situations.
The "scan your phone" and "repair shops should be able to do battery replacements or swap Face ID sensors without controls" type stuff is also part of this, along with claims that there's no reason for Apple to make certain decisions (despite obvious reasons), etc.
It's gotten to just BLIND reaction - i.e., "Apple is done" - when the reality is that Apple remains far more trusted, brand-wise, than almost any other company (or government).
There is no better alternative (at least for some people).
Just some examples:
- Integration between their devices cannot be matched by others
- Apple Watch has the largest app collection, great integration between iOS and Watch apps, smooth animations/UX, the most accurate GPS of smart watches
- Handover of AirPods between Apple devices is a lot better than with other Bluetooth headphones
- (subjective) iOS has a lot better UX/animations than Android
Apple definitely needs to improve its processes in order to ensure he and others get credit.
But he is overreacting about the confidentiality line. When I worked at Apple years ago, I added a similar line when dealing with external people. And in every email I have sent whilst working for telcos, banks, etc. over the last decade, a similar line has been included automatically in the footer. It's more a boilerplate polite request than a demand.
I didn't mean for my comment to come across as disagreeing with your overall point - just to highlight that there is (in my mind) a fairly big difference between the auto-generated footers attached to outgoing emails that you referenced (re banks, telcos) and explicitly written instructions at the beginning of an email.