iPhone Bugs Are Too Valuable to Report to Apple (vice.com)
201 points by cinquemb on July 10, 2017 | 137 comments



But...this is true of every software vendor. Does anyone think Microsoft is paying market value for a remote code execution exploit in Edge? They're not; they'll give you $15k for it[1].

I find this particularly interesting:

> [the security researchers] asked Apple's security team for special iPhones that don't have certain restrictions so it's easier to hack them [...] these devices would have some security features, such as sandboxing, disabled in order to allow the researchers to continue doing their work.

If I go to Google or Facebook and ask them to, say, turn off some key security features on their site so I can find more bugs they're going to tell me to go take a hike. It's unclear to me why a security researcher thinks Apple would give them access to a device with the sandbox bypassed. Why would they possibly trust them?

[1]: https://technet.microsoft.com/en-us/library/dn425036.aspx


Your analogy to me falls extremely flat given that in this situation you own the iPhone in question and are just asking for the right to put arbitrary software... on your own phone.

(My comment is now at -1. I challenge anyone considering downvoting this response to actually answer how asking for the right to modify the software on the one phone in your possession--the one whose security features, if you really want to insist that this is about a security feature, only affects you--is even remotely comparable in an honest setting to "go[ing] to Google or Facebook and ask[ing] them to, say, turn off some key security features on their site".)


This has been my biggest complaint about the iPhone from the very beginning of the app store and remains my primary reservation about committing to Apple devices to this day.

Thankfully the UX on Android has caught up to the point where it's as good as iOS, or in some respects even better. At least for my purposes, anyway.


I didn't down-vote, and don't know enough of this field to be sure, but I think there's an edge case that can affect other users: opening any door a bit may make it easier to retrieve keys that allow hackers to break into other phones.

For example, if they open the secure enclave so that security researchers can see in full detail how it works, that may expose some master key.

A countermeasure would be to have separate keys, ideally one for each such phone, but having them for only this class of phones might be sufficient too, if they also restrict access to this class of phones. I think that's technically easy, but may be expensive, process-wise.
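To make that concrete, here is a minimal sketch of what per-device key derivation could look like (purely illustrative Python; the names and values are made up and this is not Apple's actual scheme): each phone's key is derived from a class-level secret plus the device's unique ID, so exposing one research device never exposes the keys of other phones.

    import hashlib
    import hmac

    def hkdf_sha256(secret: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
        # HKDF (RFC 5869): extract a pseudorandom key, then expand it per "info".
        prk = hmac.new(salt, secret, hashlib.sha256).digest()
        okm, block, counter = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
            okm += block
            counter += 1
        return okm[:length]

    # Hypothetical values, for illustration only.
    class_master_secret = b"research-device-class-secret"  # shared by the research class
    device_id = b"ECID:0000001234567890"                   # unique to a single handset

    # Leaking this derived key tells you nothing about any other device's key.
    per_device_key = hkdf_sha256(class_master_secret, salt=b"research-fleet-v1", info=device_id)
    print(per_device_key.hex())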


A system designed like that is flawed and insecure, and someone else with more resources--probably someone much more evil, sadly--is going to have figured out a way to get to that key... we should want to air that to the light of day and get that fixed.

I mean, at some level your comment turns into "we should try to hide the security flaws we have as much as possible, even from the people who are actively trying to help the world be more secure"... if you aren't serious about finding bugs, why bother with security?


I don't understand that logic. Apple built a safe. If I want to check that it is secure, it would be helpful if they built one from glass, so that I can see how the mechanism works, and thus verify the design. My argument is that that may expose a secret stored in that safe that then can be used to break into 'real' safes.

Your argument seems (to me) to be that there should be no secrets on that phone that would make attackers any wiser (might well be possible, given the existence of asymmetric encryption). If so, why did Apple go to the effort to add the secure enclave?


The goal of the secure enclave is to protect your information (the signature of your fingerprint, the encryption key of your flash memory, etc.) from people who want to steal your device and access that information without your passcode by decapping the chip and reading its content. It is not there to protect some kind of global security key from Apple (we live in a world with public/private key cryptography... there's almost no reason why some key would exist on the device that is somehow critical to its security, and if it did exist it would mean a complete security failure).


Saurik is correct about security through obscurity not really being helpful, but for what it is worth, this:

> Countermeasure would be to have separate keys, ideally for each such phone, but having them for only this class of phones might be sufficient, too, if they also restrict access to this class of phones. I think that's technically easy, but may be expensive, process wise.

...is already something Apple does with the special development devices. Different keys and certificates are used, probably to allow easier access for engineers while tightly monitoring and auditing production infrastructure.


You own an iPhone designed to work in a specific way. You don't need Apple's permission to put arbitrary software on your own phone.

You can just do it! If you can figure out how and don't mind maybe breaking your iPhone.

Apple doesn't sell infinitely customizable devices. Anyone who buys an iPhone knows this going in or is being intentionally obtuse. The idea that "it's my phone, so I should be able to do anything I want with it" is completely antithetical to the brand Apple has built over the last 40 years.


I am not certain what conversation you think you are joining, but this is one about "people invited by Apple to take part in a bug bounty program--one with an in-person onboarding process and which involved a meeting with Apple's security team, all expenses paid, in Cupertino--with the stated goal of working together to improve the security of the device have asked to be given the ability to modify the software on a limited number of test devices to aid in their security research". Your response is trying to tackle the general argument for arbitrary customers. I still think you are wrong, but that isn't the argument today.


Here's the concern I have: if you're able to "unlock" a single device, how do you do it in a way that's useful to the researcher, but can't be done to an unsuspecting 3rd party's phone?

The only way I can think of that doesn't have an endless array of problematic edge cases is for Apple to have a "researcher edition" of the phone, but even that isn't problem free.

Thoughts?


Apple already solves this problem by way of their TSS update server: they can whitelist specific ECIDs to be able to sign and install developer-customized firmwares. The argument was simply: "if we sign on to your program, can you add us to your whitelist?".
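Conceptually it is nothing more than an allow list consulted at signing time; a rough sketch (illustrative only, not Apple's actual TSS implementation, and the ECIDs are made up):

    # Illustrative sketch of a per-device signing whitelist; NOT Apple's actual
    # TSS implementation. The signing server refuses to sign a development
    # (restrictions-relaxed) firmware unless the requesting ECID is allow-listed.
    WHITELISTED_ECIDS = {0x1A2B3C4D5E6F}  # hypothetical researcher devices

    def should_sign(ecid: int, is_dev_build: bool) -> bool:
        """Production builds are signed for everyone; dev builds only for whitelisted ECIDs."""
        if not is_dev_build:
            return True
        return ecid in WHITELISTED_ECIDS

    # A random customer's device asking for a dev build is refused:
    assert should_sign(ecid=0xDEADBEEF, is_dev_build=True) is False
    assert should_sign(ecid=0x1A2B3C4D5E6F, is_dev_build=True) is True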

The only issue I can come up with would be "what if a researcher who was registered with and known to Apple gave their phone to someone else as a fake gift to spy on them". If you really want to go there we could argue it, but it seems a little far fetched in terms of "amount of damage that can be done via this route that couldn't be done via other ones".


> The only issue I can come up with would be ....

If it's locked to specific hardware, then no, I definitely wouldn't argue the edge case scenario you mentioned.


Apple has DEV-fused devices which use separate development certificates and keys.

The bootloaders and kernels for production devices still retain the code for the "special" functionality, that is how researchers are aware of it, but it simply will not work without an actual dev-fused device.


I agree with you here: you can get to other parts of the OS with this ability and will find different flaws. I would say that this is comparable to an external penetration test vs more formal internal practices and code review. Both are acceptable approaches that find different results.


Many of the phone’s security features don’t only benefit the user - they also make widespread piracy virtually impossible, and they ensure that nobody can set up a rival software ecosystem on Apple’s hardware.


Your comment agrees with my world view (and why I even pointed out that "if you really want to insist..."), but comes across as an attempt to answer my question or refute my point, so I will then respond by saying "and giving a handful of security researchers the ability to run arbitrary software doesn't lead to rampant piracy" (but you know what does? the "free developer" tier added by the Swift team ;P).


> If I go to Google or Facebook and ask them to, say, turn off some key security features on their site so I can find more bugs they're going to tell me to go take a hike.

It isn't that uncommon in bug bounty programs to get specific restrictions (say rate limiting on web apps) turned off.

When I do security audits I'll often ask for shortcuts (give me access to X machine as if I phished it, etc.). These shortcuts will shave a month of low-level grunt work off so I can spend my limited time working on the core vulnerability.


Yes but you are presumably doing these audits on request, not cold-calling them, so to speak.


> Does anyone think Microsoft is paying market value for a remote code execution exploit in Edge?

No bug bounty program has to pay market rate. They just have to pay enough so that $bounty > $market_rate - $fear_of_getting_arrested

Edit: there are actually other non-monetary benefits to being a white hat. Some of them get high-paying contracts or fantastic jobs. Some just like the prestige of finding flaws. So it's a more complicated equation than I presented above.


I did a surface comparison of the Apple, Microsoft and Google programs and while the top end is the same everywhere (150k~200k), Apple still looks pretty unrewarding when you go below that: Google can pay up to 150k for a critical Android Kernel compromise (not including Trusted Environment compromise) reproducible on a Pixel, Pixel XL or Pixel C.

If my reading is correct, at Apple that tops out at 50k; you have to compromise the Secure Enclave to even reach 100k.

If I'm not mistaken, for MS it's covered under the Mitigation Bypass and Bounty for Defense Program, which is up to 100k + 100k (100k for mitigation bypass, 100k for bounty for defense).


You're right, but it's still puzzling to me: at a certain point, wouldn't it still be cheaper for MS (or whoever) to pay a more serious amount for critical bugs?

I don't quite understand the demand side of this: is it just complacency among these companies' customers, not demanding more secure systems?


Wouldn't the black market always be willing to pay more?


The government markets will always be willing to pay more because money is free to them. In general, the financially motivated criminal black markets would probably be willing to pay less (if hackers use an exploit to steal $250m from Apple users, that probably hurts Apple by more than $250m). However, a $250m bug bounty is hard for Apple to pay, because the harm is hard to quantify, so they generally don't, leaving the criminals to pay $1m for it.


Figures like that IMHO indicate that the black market doesn't sell the exploit to teams doing financial fraud (it doesn't pay back that much) but that instead they act as resellers / auction houses for nation states stockpiling cyberweapons.


Wouldn't there be a limit at some point? If Apple starts paying 1 million, how many shady companies can afford paying 10 million? Also, if you can make 5 million with a single exploit through bug bounties, would you even bother selling to the black market if you can easily live off of that money for the rest of your life?


I guess that's my question. We're talking about companies with hundreds of millions of customers here, which would make me think that any minor security issue's impact is magnified, which I would think would mean MS/Amz/Goog/Apple would be willing to pay more.


If you noticed your friend and neighbor left their car unlocked with the keys inside, would you steal their car, or would you tell them? I bet you would tell them.

Let's say instead that there were some shady people who liked to hang out outside of the local gas station, and you have heard that they will give you a cut of any heist they pull off based on "tips". Do you call them over to come steal the car? I bet you don't do that either.

Let's say that it is some random person's car. I still doubt you steal the car, and I still doubt you tip off someone else to steal the car. However, I also doubt you go far out of your way to find and tell the owner of the car something is wrong.

What if, though, you knew that you could get a reward; something sizable enough to be at least worth your time, but nowhere near the value of either the car itself or what a band of thieves would be willing to give you if you called them and tipped them off?

That's all this bug bounty program is: it is designed to provide a reason for people who come across bugs to even bother coming to Apple at all rather than just putting it in a "pile of fun bugs".

Only, instead of the moral issue being "someone's car might get stolen", it is more like "you found a bug in the Tesla's computer locks, which makes it trivial to walk up to any Tesla anywhere and just drive off (or even tell the Tesla to steal itself!)".

The companies that offer large sums of cash for key bugs, such as Zerodium, tend to be pretty "black hat"... their clients are doing stuff like corporate and governmental espionage; they might even have mafia-like organizations as clients for all you know.

https://www.wired.com/2016/09/top-shelf-iphone-hack-now-goes...

So that's the real ethical question involved here: do you go to Apple and get your $50-200,000, knowing that Apple will give you credit for the bug, let you talk about it at the next conference, and seems to care enough to try to fix these things quickly...

...or do you sell your bug to a group that resells it to some government which then uses it to try to spy on people like Ahmed Mansoor, "an internationally recognized human rights defender, based in the United Arab Emirates (UAE), and recipient of the Martin Ennals Award (sometimes referred to as a “Nobel Prize for human rights”)".

https://citizenlab.org/2016/08/million-dollar-dissident-ipho...

FWIW, I have severe moral issues with this bug bounty program: I am a strong advocate of simultaneous disclosure, and while Apple does tend to fix bugs quickly, they have made it clear that they are not prepared to commit to timelines even while keeping users in the dark about what they need to do to protect themselves.

However, this article makes it sound like the entire concept of the bug bounty program is incompetent or something, as it is failing to pay as much money as the black market... while I have met a few people in the field who are more than happy to sell a bug to literally anyone with cash, the vast majority of people (even the ones whom I have sometimes called "mercenaries" for being willing to "switch sides"), have a pretty serious distaste for the idea of selling a bug to the highest bidder.

The real reasons you don't hear much about people selling their bugs to Apple are that they are like Luca (who started doing this at the age of 17--he's now either 19 or 20?--which is context that I think is really important for this evaluation) and are sitting on bugs because they are personally valuable to them (as without at least one bug, you don't even own your own phone enough to look for others; so there's a really big incentive to not disclose your last bug: this is the thing that Apple should really care to fix), that they are intending to release a public weaponized exploit in the form of a jailbreak (which, given the demand from legitimate users due to Apple's insistence on locking down their devices for reasons that are more about business models than security, can be a ticket to world-wide fame that money just can't buy, and which will also net you at least some donations on the side), or simply that they actually have been but they haven't told anyone (a situation that seems so likely that it seems weird that this article discounts it).


Article quote-ee here. You're not describing the same thing the people in the article were describing.

The bugs that Apple is looking for take time and resources to find, yet you can't depend on the $200k of revenue to feed yourself. E.g., even after you spend months of real work you sometimes fail and get no payout. That's an acceptable risk for a $3k bug bounty. It's not at $200k.

Every other issue you cited is secondary to the economic one.


I am well aware of how finding these bugs works (I recommend looking at who I am, if you don't know, as your response kind of tells me you don't ;P). I believe the article is not actually focusing on the same problem as Apple's bug bounty program.


I know precisely who you are, and I think you're jumping to conclusions in this case. People love to read morals, ethics, and politics into analyzing bug bounty submissions, but none of that is the primary factor here.

In general, it's not possible to "stumble upon" a $200k bug like the kind Apple is looking for; you have to set out with the intention to find one and work for it. But you're not on a contract and you're not getting paid the whole time. You only get paid on delivery and there's a high chance you won't have anything to deliver. It's just too much risk for an individual or company to commit months of time at potentially $0/hour.

You alluded to it, and it bears mentioning that there is a balancing act with future revenue streams too. If you set up the process to find one $200k bug then presumably you want to use all that know-how to find a second one. But reporting the first one to Apple will, generally, kill your entire revenue stream as Apple knows all your secrets now.

In general, working on $200k bug bounties is a bad business proposition. Period.


We don't disagree on your conclusion: where we disagree is on the goal of these bug bounty programs and (thereby) how we can judge if they work. The value of a bug to the black market can be insanely high: it just isn't possible for Apple to outbid the negative ramifications a bug can have on the outside world. You are always going to get more money by going to the black market than by selling it to Apple.

So, I am going to say it again: I believe the article is not actually focusing on the same problem as Apple's bug bounty program. The issue you were quoted for in the article is "how do you get someone already doing this to give the bug to Apple instead of selling it at market rates". Well, if someone is willing to sell it at market rates, then Apple is kind of screwed as that person likely does not have much in the way of morals and Apple is never going to outbid the market rate.

Now, you are focused on a different issue: "how do you incentivize people who otherwise wouldn't work on this stuff at all to work on this stuff". This still isn't something Apple is focused on. If you want to do this but just need a steady salary, I bet Apple would love to hire you.

What a bug bounty program does is address the situation that does sometimes come up where you stumble across an issue while working on something else, or you just got really curious about a thing and find yourself two months later having "wasted your life" on scavenging a bug, or you have some other motivation (consulting or academic work) to be panning for gold, and now you don't know what to do next: rather than doing nothing, you give it to Apple for some reasonable payout.

If you were morally OK with selling your bug on the black market, Apple can't be going around trying to outbid evil, so that can't be a goal of this program and you can't judge it for not solving it.

And if you wouldn't have spent your time on this at all, and would have instead gone off to work on some other project, Apple isn't trying to replace "jobs": they have a massive security staff of some extremely talented people (many of whom at this point used to play for the other side).


> You are always going to get more money by going to the black market than by selling it to Apple. [...] , then Apple is kind of screwed as that person likely does not have much in the way of _morals_ [...] If you were morally OK with selling your bug on the black market, Apple can't be going around trying to outbid _evil_,

(_emphasis_ was mine)

I agree with you that dguido has framed Apple's Bug Bounty program incorrectly. The money amounts are deliberately not structured as an incentive to build a remote "red team"[1] to do speculative work. (I would go further and say it would be folly to attempt to create such a salaried red team to uncover security bugs, since history has shown that many types of security exploits that come to light had leaps of creativity that exceeded the imaginations of the systems' designers.)

However, I'd add some nuance to your moral scenarios by stating that a sophisticated buyer of Apple bugs would not present it to the hackers as "selling to evil entities". Instead they're actually selling to the "good guys" -- it's just a different set of good guys from Apple. (E.g. Zerodium may in some instances act as a front to buy the bug on behalf of NSA/FBI/CIA).

The article talks about hackers that are willing to go to dinner with Apple and meet-&-greet Craig Federighi. Since these "security researchers" are willing to show their faces to Apple, we'd probably classify them as "white hat" or "grey hat" hackers instead of evil "black hat" hackers. Therefore, one way to persuade these researchers to sell the secret to the black market is to convince them they are still serving "the greater good"... it's just the black market group is paying $500k instead of Apple's $200k.

Summary of my interpretation of Apple's Bug Bounty:

1) Apple did not set it up to incentivize speculative work. This looks like an economic design flaw but it actually isn't.

2) Apple is not trying to persuade immoral black hat hackers to sell to Apple. Apple already knows the amounts are too small to compete there. The audience they are trying to appeal to is the white/grey hats.

[1] https://en.wikipedia.org/wiki/Red_team


Apple could easily set aside 50+ million per year for bug bounties. They don't do this for many reasons, such as reputation hits etc., but cost is not the most important one.


>People love to read morals, ethics, and politics into analyzing bug bounty submissions, but none of that is the primary factor here.

Just disregarding these things and justifying it with "economics" or "it's bad business" doesn't make ethics disappear. I'm guessing you wouldn't sell exploits to ISIS even if they were the highest bidder..? What about North Korea? China? Russia? Syria? The US? The US with Trump in charge?

I personally like tptacek's comment on this from last year:

>None of them are adequate compensation for the full-time work of someone who can find those kinds of bugs. Nor are they meant to be. If you can, for instance, find a bug that allows you to violate the integrity of the SEP, you have a market value as a consultant significantly higher than that $100k bug bounty --- which will become apparent pretty quickly after Apple publicly thanks you for submitting the bug, as they've promised to do.

https://news.ycombinator.com/item?id=12230677


> which will become apparent pretty quickly after Apple publicly thanks you for submitting the bug, as they've promised to do

This is the same twisted logic that people try to use to hire photographers or graphic designers to work for them for free, because they will get "exposure". If someone does something valuable for you, you should pay, no matter how famous your brand is. And Apple has enough cash to spare.


It's not though, since the alternative to disclosing the bug to Apple is to either hoard it for yourself or sell it to someone, both of which keep the attack vector open and millions of users at risk. That's where the ethical discussion comes in, and there's not really a parallel to the case with the photographer/graphic artist.

(And just to be clear, I do think that fair compensation is a part of that ethics discussion, but it doesn't trump other concerns.)

(edit: typo)


I was not addressing the ethical question involved - just the logic behind "Apple thanks you, therefore you should be happy".

If Apple is not providing reasonable money compensation to white hat security researchers then they are willingly leaving this space open for black hats.


I read that more as "Apple thanks you, at which point you realize you were smart enough to have made more money doing something else", not "therefore you should be happy".


This is similarly distorted though - You make enough money anyway, so you should do this for Apple at a discount, because "exposure".

No, Apple is making money off their walled garden; they should be the ones to pay the bills for wall maintenance.


You are trying to read the comment as positive of Apple's thank you and exposure, when it sounded to me much more like a reality check: I would go so far as to say Thomas's point might have been "when Apple thanks you you will come to regret wasting your time--which you now know was always valuable--on them". That isn't "you should thank Apple for making you realize that"...


> hoard it for yourself or sell it to someone, both of which keep the attack vector open and millions of users at risk.

I think this logic is inherently flawed.

If there was no monetary incentive and the only ROI was a thanks from Apple, maybe the bug in question would not have been found in the first place. Becoming aware of a bug does not suddenly put people at any more risk than they were previously in, prior to bug discovery.


>Becoming aware of a bug does not suddenly put people at any more risk than they were previously in, prior to bug discovery.

I agree, which is why I said "keep[s] [...] millions of users at risk" not "puts millions of users at risk". An unfound bug is still a potential zero-day. With something as valuable as an iPhone exploit, we know multiple entities are desperately looking for it, so I wouldn't err on the side of assuming that any exploit would not be found. (Or put more succinctly: if you've found it, someone else might have too.)


What do you think the payouts should be?


There's a variety of models that others have experimented with, and they all tend to fail at bugs this deep. For instance, Google tried a thing where you open a bug ticket once you think you've found an issue and you stream out everything you try along the way. I think there were ongoing or partial rewards and it was Android-centric. I'm pretty sure approx. 0 people took them up on it.

Microsoft also had/has mitigation bounties for around the same dollar amounts and it turned out nearly the same: lower-than-expected interest given the price point. Most of the interested parties tend to be academics, for fairly obvious reasons when you think about the economics of it all.

I think that if Apple wants to find bugs this deep in specialized, hard-to-audit surfaces within iOS, they ought to hire experts at expert rates and provide them the tools they need to look. In my perfect world, I would hire an expert consulting firm at their market base rates and then offer bonuses for findings on top of it. I would make the engagement low intensity and long in length, to build competency and familiarity with the codebase over time.


> they ought to hire experts at expert rates and provide them the tools they need to look.

I'm really curious where this idea comes from that the existence of a bug bounty program means they must have no in-house bug hunting team. A competent, deep-pocketed corporation (which I think we all agree Apple is) would certainly do both.

Want to hunt Apple bugs but not work for them? Do that. Want to hunt bugs with a steady salary? Do that.

Edit: now with link to 197 job postings! https://jobs.apple.com/us/search#&ss=Sr.%20Security%20Engine...


> I think that if Apple wants to find bugs this deep in specialized, hard-to-audit surfaces within iOS, they ought to hire experts at expert rates and provide them the tools they need to look.

...Apple does this. The people I complain about and call "mercenaries" are the people from the grey hat hacking community who decide to go work for Apple. If I wanted to go work for Apple on this I could probably get myself hired there in a matter of days, if it weren't for the issue that I am morally opposed to it (and they know that at this point, so they would rightfully be highly skeptical of me finally "coming around" ;P). The bug bounty program is not focusing on this issue.


You are saying that you do not agree with security experts helping Apple secure iOS? Or am I misunderstanding? (Pretty surprised to hear this).


When you work to help Apple lock down their operating system, you are helping an oligopoly (Apple along with Google and Microsoft) to control the future of all software development. The mitigations that are put into the operating system serve two purposes: sure, they lock out invaders... but they also lock out the user and rightful owner of the hardware.

The moral costs of the latter do not pay for the benefits of the former. I look at people who work for Apple on security as similar to people who work for the TSA: yes, they are probably in some way contributing to the safety of people... but they are actively eroding the liberties of people in such a frightful way that the benefits are not worth the cost.

So when I see people working for either the TSA or for Apple, I ask myself "did they really really need this job? or could they have gotten work elsewhere"; and if the answer is that they didn't absolutely need to work for Apple, I model them as someone who has left the Resistance to go work for the Empire because they thought laser guns were cool and the Empire happened to have a larger laser gun for them to work on, and hell: the Empire is managing to maintain order on a large number of planets, right? :/

In case you think that this is just some future concern, it is a war which is already playing out today in countries like China, where the government knows that Apple has centralized control of what can be distributed on that platform and uses that knowledge to lean on Apple with threats of firewalls and import bans to get software and books they dislike redacted.

https://www.nytimes.com/2017/01/04/business/media/new-york-t...

> Apple, complying with what it said was a request from Chinese authorities, removed news apps created by The New York Times from its app store in China late last month.

> The move limits access to one of the few remaining channels for readers in mainland China to read The Times without resorting to special software. The government began blocking The Times’s websites in 2012, after a series of articles on the wealth amassed by the family of Wen Jiabao, who was then prime minister, but it had struggled in recent months to prevent readers from using the Chinese-language app.

Having a centralized point of control at Apple is not helping the lives of these Chinese citizens, at least according to the morals that I have (and I would have guessed you have, but maybe you are more apologetic to these regimes than I; we have never really spent that much time talking as opposed to just kind of "sitting next to each other" ;P).

So, yes: I absolutely am against security experts "helping Apple secure iOS" when what we know is actually happening is that they are "helping Apple enforce censorship by regimes such as the Chinese government on their citizens". There are tons of places in the world you can go work for where your work on security will actually be used for good: go work for one of those companies, not the oligopoly.


Thank you. This sounds like a viewpoint I strongly disagree with, as I believe potential misuse of vulnerabilities is a concern which outweighs device freedom in this context (example: https://citizenlab.ca/2017/07/mexico-disappearances-nso/), but nonetheless I really appreciate the detailed explanation.


FWIW, I don't disagree with you that the security issues here are real. My issue is that Apple has managed to tie together the security of the user with the maintenance of their centralized control structure on computation. I think that people should lean on Apple to provide a device which is both open and secure. Part of this is what amounts to a bunch of people refusing to work for their company, particularly on the parts of their products which are essentially weapons (so the same argument about refusing to work for governments). Hell: even if they didn't seem to go out of their way to be actively evil about some of this stuff, it would be better :(.

I mean, here's a really simple one: they already have "free developer" account profiles. However, you can't install an app that uses the VPN API using a free developer profile. So if I build a VPN service with a protocol designed to be used by people in China to help them bypass the Great Firewall, then--as VPN services are illegal in China, China has complete control over Apple's app distribution, and Apple not only polices the use of their enterprise certificates but has in the last year or so started playing whack-a-mole on services which use shared paid developer certificates--users in China are not going to be able to install it on their iOS devices.

Why did Apple go out of their way to block access to the VPN API from free developer accounts? I can't come up with any reasons for this that make me feel warm and fuzzy :/. So yes: the US military does a lot of good protecting people on foreign soil, as does the FBI here at home, and I'll even grant that the TSA probably does something good ;P. You can show me a ton of reports of active terrorism in the world, and say "look, this stuff is important, peoples' lives are on the line"... but as long as working for those groups is tied to mass surveillance, installing puppet regimes, and maintaining resource imbalances, the moral issues remain :(.

(I'm also going to note that I find it a lot less weird if someone is consistent and always worked for Apple, whether directly as an employee or indirectly by handing them information and bugs than if they "switch sides" and go from simultaneous disclosure to "responsible" disclosure or even forced disclosure by being an employee. That's why this thread was spawned from me noting that I have at times used the term "mercenary". It makes some sense to me that there are people who work for the Empire because they believe in the goals of the Empire; it just irks me, though, that there are people who once worked for the Resistance who get a job offer from the Empire and are like "wow, that sounds great!" and go work for them without seemingly believing that anything has changed about what they are fighting for... it tells me that, at the end of the day, they really just thought "working with lasers is fun!" and the moral issues of which side they were on never mattered.)


I get where you are coming from. It is worth keeping in mind that not as many folks see it in a political manner, so if their viewpoint is a choice between "working hard to release free research/tools and getting people angry/complaining in response" versus "working hard and getting a decent salary to do what they love" then it makes some sense as to why people would go that direction.

I'm in agreement with you regarding Developer ID. I have no idea why Apple would want to limit that; I know they recently relaxed restrictions on NetworkExtension, though (except for Wi-Fi helpers). Are you referring to the old "e-mail here to apply for the entitlement" process they had in place? Or do they still not allow the entitlement for free developer IDs now?


> Are you referring to the old "e-mail here to apply for the entitlement" process they had in place? Or do they still not allow the entitlement for free developer IDs now?

I really do mean the latter: they still block this entitlement from use by free developer IDs. If you try to activate it you get an error #9999 with the following message.

> The 'Network Extensions' feature is only available to users enrolled in Apple Developer Program. Please visit https://developer.apple.com/programs/ to enroll.


It seems to me the actual solution is to hire experts as employees. That solves the "can't put food on the table" problem. It's probably a lot more efficient and effective use of their cash. As far as I know Apple has already done this over the years.


They absolutely do and have continued to do so vigorously. The bug bounty program is in addition to this.


Doesn't a high bounty create a perverse incentive for employees to introduce or leave bugs they spot so that they/some accomplice can claim the bounty?

I know Silicon Valley employees' honesty is above all suspicion and rogue employees are a monopoly of the financial industry, but one has to consider the risk.


Do you believe this is possible? Introducing or leaving such bugs would need more than one person, probably from different departments, and even if it gets through code review, there is still a risk someone notices the connection between the one who introduced the bug and the one who found it.


The thing is it often only takes one line of code running with the right privilege to make the whole system insecure. And a dishonest employee wouldn't even need to write it. Having access to source code, all it would take is to spot it but not fix it.


Developers' unimpeachable morals aside, I think the risk of getting caught would outweigh the benefit. Most developers at Apple hardly need a secondary source of income.


Is everyone in this thread being sarcastic about developers' morals? No one here really thinks that developers as a whole are more moral than any random subset of people, right?


> Most developers at Apple hardly need a secondary source of income.

I would have said the same of bankers and management consultants, but the fact is things like insider dealing happen.


In this case, the reward doesn't outweigh the risk.

Insider trading can earn very large amounts of money, and the higher your current position, the larger the scale of insider trading that becomes available.

Now, for Apple iOS core developers - does it make sense to risk a $200k/year job and possible jailtime to get a $50k bounty (the limit for kernel exploits) that you'd have to share with whoever helps you to launder it?


No, it doesn't... but the context of this subthread was "Apple should pay a lot more for this bounty" which turned into "but if they do that then it will create a perverse incentive to do this evil thing". The people you are arguing against thereby agree with you: $50k is not enough of an incentive; but they feel that at some point as that number gets higher the incentive will make sense.


Yeah, okay, with a factor of 10 increase, that would start to make sense, and there could certainly be people willing to go for it.

Many high profile bugs seem to have plausible deniability where they can reasonably be errors but might have been deliberately inserted. Anybody can make mistakes similar to Heartbleed, especially if they want to.


Logicians on HN: Is there a special name for Appeal to Authority when applied to oneself?


It would still be an appeal to authority. However an appeal to authority, while sometimes fallacious, is not always fallacious.

[0] https://en.wikipedia.org/wiki/Argument_from_authority#Valid_...


I don't think you know what "appeal to authority" means ;P.


Enlighten me.


You know, I didn't think I wanted to dignify this with this level of response, but maybe someone else is going to get confused and I have terrible insomnia: you responded to someone (me) telling someone else "the information you think is relevant to changing my mind is information you should know I already have, which means there is some framing problem in our conversation that needs to be addressed" with a (slightly indirect, and thereby to me incredibly snippy sounding and extra insulting) comment claiming that is an "appeal to authority", when that statement wasn't even an argument and so wasn't an "appeal". You can't read that comment from me as "I am right because of who I am": it is simply "you are trying to convince me of your point using information I do not disagree with (and which you should know I have, and so should probably know I don't disagree with...)", and was nothing more than a lead-in to my actual point, which was "given that I know that and that we don't even disagree there, you should go back and notice the following thing about what I said: we are talking about different points", which was the actual statement in that comment: I believe the article is not actually focusing on the same problem as Apple's bug bounty program.


You framed the problem as a moral one. dguido framed it as an economic one.

You responded to his rebuttal with "I know how these things work, don't you know who I am?" and reiterated your position without adding any further points or addressing the opposing party's points.

How is that not an appeal to authority?

I don't know who you are (and maybe that's why I'm the only one calling you out). If you are as influential in this space as you appear to be, it's important to take the initiative to present strong arguments rather than rely on your name.

Sorry to hear about the insomnia - I'm in the same boat. Hope it gets better.


> So that's the real ethical question involved here

> as without at least one bug, you don't even own your own phone enough to look for others

Indeed, the fact that these bugs can lead to freedom, in the form of jailbreaking, certainly complicates the decision.

The physical analogy doesn't help either; I mean if you were somehow kidnapped and physically kept hostage in a "walled garden", would you tell those watching over you that you found a way out, if they told you that telling them would result in some reward --- but that you'd remain inside?

I have, a handful of times, discovered exploits that would break some DRM or allow myself more freedom (e.g. bypassing a registration-wall on a website or other software, finding detailed intended-to-be-confidential service and design information for equipment I own, etc.). In all those cases I never considered notifying the author(s) about it. Granted, this was a time when bug bounties were nearly nonexistent, but now that I think about it, even if I was offered some other reward, I wouldn't.


But... your analogy is even worse, I think. Apple doesn't "kidnap" people. You willingly opt-in to the "walled garden" ecosystem. It isn't forced on you in any way at all.

It's more like contracting a security firm to keep your stuff safe. They promise they'll do their best, and in exchange you give them some money and acknowledge that they don't have to explain to you how it works. If you find a flaw in their security system, you can either (a) sell it to somebody, thus compromising all of the firm's clients (including yourself), (b) report it to the company so they can fix it, or (c) keep it secret for some reason. I would always choose (b). I find (c) to be in poor taste, but not appalling by any means. (a) is, in my opinion, never an acceptable option.


> which, given the demand from legitimate users due to Apple's insistence on locking down their devices for reasons that are more about business models than security

Well.. except that's not really true. iOS is de facto the most secure consumer OS out there, precisely because it doesn't allow arbitrary code to be run, and because of the restrictions placed on the process of getting software onto them. The value Apple gets from users not worrying about viruses, malware, or other security threats far exceeds the piddling revenue from the App Store, because that peace of mind and brand reputation drives device sales where that value is captured.

Meanwhile, the more permissive consumer OS' of Windows and Android seem to have continued issues with security, and the average - even informed - consumer never really knows what the software on them is really doing.

I don't disagree with the rest of your comment though!


The issues with Android are not due to the token ways in which it is slightly more permissive for users (while on the whole still being extremely locked down...); they are due to the ecosystem of manufacturers and how devices end up being deployed. Also, the reality is that for well over the first five years of iOS existing, it was de facto open: jailbreaks were commonly available and easy to use; the very existence of an ability to control your own device can thereby be shown, by simple demonstration, to not be a serious concern. And to be very very clear: of course the issue is not "the piddling revenue" (they break even on the App Store at best)... the business reasons are that they want a strong DRM story in order to be able to pitch content producers and developers on why their platform should get to deploy and vend their media.


> The issues with Android are not due to the token ways in which it is slightly more permissive for users (while on the whole still being extremely locked down...); they are due to the ecosystem of manufacturers and how devices end up being deployed

I meant something slightly different wrt 'permissive', namely that the overall market execution of Android is very permissive. To users buying the product, the specific reasons why Android is less secure don't matter. The only thing that matters is that they cannot fully trust software on the device or acquired through stores. Whether it's the vendors that don't update the OS or the telecom company that installs crapware or the poorly policed third-party app store, malware has ample ways to get on the average Android device, due to the permissiveness of the full chain of processes and people involved in how it is deployed and managed.

> the reality is that for well over the first five years of iOS existing, it was de facto open: jailbreaks were commonly available and easy to use

Agree. The first few years were a whirlwind of rapid evolution and Apple had its hands full, and Apple just wasn't developer-focused. Jailbreaks were likely tolerated because developers were hungry to try things and Apple knew the libraries on iOS weren't that great at the time. Plus, jailbreak developers were like an outsourced R&D group for Apple to pluck ideas from. However, as iOS matured, and our culture rapidly adapted to smartphones, people began trusting them with more and more personal information. The cost of not addressing security became higher and higher. Developer tools and libraries also matured in this time, so overall the negatives of jailbreaks (security) began to far outweigh the positives.

> the very existence of an ability to control your own device can thereby be shown, by simple demonstration, to not be a serious concern.

Concerns for security exist on a spectrum (and are weighed against other concerns in development), and concerns can change as the world changes and the product evolves.

> the business reasons are that they want a strong DRM story in order to be able to pitch content producers and developers on why their platform should get to deploy and vend their media

I've never once heard the argument that DRM is the reason iOS is locked down. It also doesn't follow given that Macs are not locked down and can get the very same media, as can other platforms.

Given Apple has consistently focused on making computers for regular people for over four decades, I think it's pretty obvious the reason iOS is locked down is simply because there's an enormous amount of bullshit that comes with having users download software from unknown sources, and that a tightly controlled platform provides a much better user experience for regular users. Any other benefits are incidental.

EDIT: After re-reading your comments, I think we're disagreeing on whether locked down = more secure. In the case of iOS, I think it was a conscious decision made to create a better user experience, and an important subset of user experience is security. As you point out with the history of jailbreaks on iOS, you're saying Apple's permissiveness then suggests they don't have that security concern and they want to lock it down for other reasons (correct me if I'm wrong in characterizing your argument). I disagree and think security was one of many concerns being balanced at the time. Also I believe the types of vulnerabilities used by jailbreaks were probably not the types typical users would encounter in daily use of a device, such as browsing the web or downloading apps from the app store. Correct me if I'm wrong on that.


> Jailbreaks were likely tolerated because developers were hungry to try things and Apple knew the libraries on iOS weren't that great at the time.

> As you point out with the history of jailbreaks on iOS, you're saying Apple's permissiveness then suggests they don't have that security concern and they want to lock it down for other reasons (correct me if I'm wrong in characterizing your argument).

No: you completely misunderstand. I am not saying Apple let jailbreaking happen; I am saying Apple's device was so poorly secured (in an absolute, not relative, sense), that in practice you can model the device as being "open to anyone who wants to make their own device open". It was more open than Android for long periods of time as the jailbreaks were more consistent in their ability to give you full control of the device (Android jailbreaks tend to be "hit or miss", really).

The argument then becomes simple: if iOS is a platform you consider secure ("iOS is de facto the most secure consumer OS out there"), in practice this is a device which anyone who wanted to could get complete control over their own device, and so we know that property is unrelated to security.

The only thing it really does is make the device look better from a DRM perspective (particularly the AppleTV). I run into tons of developers constantly who are obsessed about piracy well past the point where it makes sense, and most of them have this weird mental model of Apple's security features that make them think weird stuff like "if I embed a key in my compiled binary no one can read it".

Regardless, I am just going to say it, as me having this information and trying to avoid saying it is probably just causing more problems than it benefits anyone: I have been challenged multiple times by people working at Apple (years ago, and I don't even think these people still work there, which makes me feel at least slightly more comfortable saying this) that if I want them to have a more open device in the ways that I want, I need to tell them how to do it without opening the flood door for piracy.


The argument to me reads as "if the security isn't perfect, it means they are not even trying, which makes it obvious that security is not the goal", but the not-perfect -> not-trying jump doesn't seem to be fair.


I don't think goals or intents matter. Apple's goal might be to prevent the Ancient God Mulk'nar from returning to our dimension to begin the second stage of The Reckoning, and commenters might point out "Mulk'nar hasn't showed up yet, so this must be working"; but if either Apple or the commenters seriously think that having a closed device is what is preventing Mulk'nar from returning, they are clearly deceiving themselves as in practice the device has been de facto open (by ludicrous means that should not be required) for many years. The argument here is essentially "you are asking Apple to give up the only thing preventing Mulk'nar from returning" and I am showing how that makes no sense given the reality of history. People are saying the iPhone is very secure. The iPhone is, factually, a device which for many years was one of the more open platforms for people to tinker with their own device, due to the jailbreak scene. So the idea that allowing even all people, much less just security researchers, to tinker with their own device somehow would undo the security of the system doesn't make sense.


I think it's important to remember that Apple has a history of leaning toward the decision which is most financially beneficial to Apple, wrt opening up the flood door to piracy. See iTunes/iPod policy changes over its lifetime.

Interesting paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2244111


The article I read said Apple had iPhones that are easy to hack? Can you explain what that means? Quote: “for special iPhones that don't have certain restrictions so it's easier to hack them, according to multiple people who attended the meeting.”


Wasn't there one jailbreak that worked through just Safari? Or am I remembering wrong?


Correct. One via a TIFF exploit, then later via a PDF exploit.


Yep, I believe that was the very first one.


I believe you're referring to JailbreakMe. Yes, it was Safari-based, but no, it wasn't the first jailbreak.


Except cases like this: https://medium.com/@johnnylin/how-to-make-80-000-per-month-o... show that it's just security theater for users rather than actual working protection.

User lock-in is all that Apple really cares about - the entire app approval dance just adds a nice toolset to find arbitrary reasons for blocking apps that might be interfering with Apple's interests (https://www.recode.net/2016/6/30/12067578/spotify-apple-app-...).


This is scammy and awful developer behavior, but it's probably best to keep in mind there was no overtly malicious content actually in the app from a security standpoint; it instead preyed on user ignorance.

FYI: you're not wrong on this; I have found multiple apps which try to grab sensitive info and phone home with it. I'm only pointing out that your link may not be the best example to use.


I want option C. A more cohesive research organization that aims to keep iOS open as long as Apple wants to keep it closed. Right now such an organization is basically spread across a history of individual contributors, such as yourself, and others, like Luca, Comex, Geohot, and dozens of others that come and go (many do end up at Apple). We already know iOS bugs are getting more valuable because the span between jailbreaks is high and Apple's response is much, much faster now. As always with technologists, if you can compensate someone fairly and let them keep doing work they love in an environment with no, or at least few, moral problems, they will gladly take that option over a higher paying, but shady, option. Apple currently wins that battle easily.


FWIW, p0sixninja wanted to create such a thing; it has a lot of complexity, it isn't clear how to fund such a thing, and I will argue that, in the end, it isn't about raw research: while it is very difficult to make software without bugs, it isn't impossible and only a small amount of code really needs to be part of the attack surface. Eventually there just won't be any bugs, even if it takes another ten years. The idea of relying on failure to maintain open platforms is a short term solution: we need legislation.


You are right. It is at best a stop gap, and with a lot of complexity. I think as technologists it is easy to reach for the obvious technological solution. Forming a lobbying group with a mandate is very alien to me, but I know some sort of political and legal course is going to matter the most in the long run. It is one of the reasons we (my company) donates to EFF :)


> while I have met a few people in the field who are more than happy to sell a bug to literally anyone with cash, the vast majority of people (even the ones whom I have sometimes called "mercenaries" for being willing to "switch sides"), have a pretty serious distaste for the idea of selling a bug to the highest bidder.

How many people would openly admit to being willing to sell bugs to the highest bidder? I certainly wouldn't.

If anything, selling on the black market guarantees that you get what you think is a fair deal. You demo, you reach an agreement, you get your money (or bitcoins or whatever), and you move on. When disclosing a bug to a company, you have no idea how much payout you're going to get, if any.


Even if you don't "openly admit" that, do your friends know? How about the people you work with? Would they guess based on other stuff they see you do? I am not saying "I took a poll" or "I asked people", I am saying "over the past decade of being surrounded by people in the field of security, and having gotten to know a number of these people very well, this is the reality of the involved ethics".


You're neglecting the effort needed to find the keys. Suppose you spend all day every day checking every car for miles around searching for keys left in them. That's your occupation.

Either:

A) You're already a criminal doing it for the money or

B) You're just trying to help people and not doing it for the money.

Person B probably doesn't exist. So you're probably already A: a criminal, and you don't care if people think you're immoral.


> Person B probably doesn't exist

I don't? Weird. There are lots of motivations for hunting bugs, many more powerful than money to people who have money already.


Or, C) You love the challenge of opening locks, and spend some of your time doing that.


There is also the possibility of stumbling upon a bug accidentally.


The analogy is misused. You're comparing handing over information that costs you nothing in time or effort, while chasing bugs is anything but a free hobby (for some it may be, but it shouldn't be at the professional level).

Bug bounties are: "do your job almost for free and get paid sometimes".


In reality, people actually do stumble across many pretty bad bugs; for the ones that require "panning for gold", in addition to noting that there are tons of reasons why people might have been wandering past cars looking for keys (some good, some bad ;P)--including "academics"--I agree that this bug bounty program is not trying to solve that problem. There is too much attention here on the $200k tier instead of on the lower tiers.


I don't see the problem here. If the payment is not good enough, then don't search for bugs and do something else.


> So that's the real ethical question involved here: do you go to Apple (...) or do you sell your bug to a group that resells it to some government which then uses it to try to spy on people (...)

Can't you do both? (First sell to the evil group, and a little later to Apple).


Well, the group paying the big bucks will get extremely mad at you for doing that... the value of the bug to them was that the bug worked.


You can disclose to Apple anonymously. In your defense, bugs can be found independently by different people.


A mitigation usually used for this is to split up payments into installments, ceasing if the bug is fixed. Not perfect of course, but generally deemed acceptable.
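
To make that mechanism concrete, here is a toy sketch of such an installment arrangement; the amounts, the 12-month schedule, and the "is it still unpatched" check are all invented for illustration, not details of any real deal:

    # Toy model of installment-based exploit payouts that stop once the
    # bug is patched. All numbers and the patch check are illustrative.
    def payout_schedule(total, installments, still_unpatched):
        # Yield (month, payment) pairs until the schedule ends or the bug is fixed.
        per_month = total / installments
        for month in range(1, installments + 1):
            if not still_unpatched(month):
                break  # bug got patched: remaining payments stop
            yield month, per_month

    # Example: a $600k deal paid over 12 months, with the bug patched after month 4.
    paid = sum(p for _, p in payout_schedule(600_000, 12, lambda m: m <= 4))
    print(paid)  # 200000.0 -- the seller collects only a third of the agreed price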


> So that's the real ethical question involved here: do you go to Apple and get your $50-200,000, knowing that Apple will give you credit for the bug, let you talk about it at the next conference, and seems to care enough to try to fix these things quickly...

> ...or do you sell your bug to a group that resells it to some government which then uses it to try to spy on people like Ahmed Mansoor, "an internationally recognized human rights defender, based in the United Arab Emirates (UAE), and recipient of the Martin Ennals Award (sometimes referred to as a “Nobel Prize for human rights”)".

Selling the bug to Apple or some black hat group is rather a choice between the devil and the deep blue sea. If you sell it to some exploit vendor, it will have the ugly consequences that you outlined. On the other hand if you use Apple's bug bounty program, Apple will use this information to lock the golden cage even tighter on its users - a consequence that I would also strongly avoid.

So I can understand that if it is a choice between the devil and the deep blue sea, you will sell it to the best-paying buyer. On the other hand, if I could be sure that Apple would never ever use the provided information to lock the golden cage on its users, I would accept your reasoning. As it stands now, I won't.


Well, it's invite only, and some fairly critical exploit categories seem to be way below market value. This is especially silly because these are maximum payouts.

Apple is not lacking cash on hand, and while I know that's not a reason to spend it, I figure they could come further toward the going rate. Especially infuriating is the lack of other moral incentives besides "doing the right thing", like when you get a bug bounty for an open source component, and know that the public has free access to it. Even "doing the right thing" when it comes to Apple is morally unrewarding, since they typically treat developers and partners like dirt, and are so isolated from the rest of society.

Because of the lack of true community around Apple and their products, owing largely to how proprietary all of their products and programmes are, I don't see why anyone would do white hat security research on their platforms unless they were paid substantially and directly by Apple or an Apple customer.

From my perspective, it's enough of a smack in the face to use their products. I doubt many want to be fed what amounts to (relatively speaking) table scraps for elite security research on a platform that you don't own at the end of the day even as a customer. To have to be invited to do this just makes it completely not worth starting if your goal is to participate in the white hat market.


So, the short of it: Apple is being stingy with the rewards paid for finding these issues, and the complaint is that those rewards aren't competitive?


It's more like the exploit buyers will always pay more than what Apple would, so if Apple raises its bounties the exploit buyers will raise their amount too. Zerodium could just offer 2x as much as Apple's bounty, and Apple can never win on price. It's a losing game for Apple to play.

What Apple can compete on are the non-monetary incentives, such as prestige, rewards, access, etc.


>Zerodium could just offer 2x as much as Apple's bounty, and Apple can never win on price.

I'm a die-hard Android fan, but even I'm forced to admit that iPhone has much better security. And security is a big selling point.

Raising the bug bounty to $10 million would still be pocket change to Apple, but it would result in:

1. Free advertisement

2. Good PR

3. Outbid the black market for iOS exploits

4. Encourage more white hats to look into iOS security


> 3. Outbid the black market for iOS exploits

That will never work. Zerodium and others are just middlemen that sell to the US/CA/UK/AU/NZ governments. They aren't selling these exploits to random companies or underground hackers.

The NSA will pay an unlimited amount of money to have iOS remote 0days sitting on the shelf because they might have a small window where, say, an ISIS leader's kid left his iPhone in the room during an important meeting.


Why do you think that organized crime and governments can't afford $10 million per zero day?

There are amounts that they can't afford, but those amounts would stretch Apple's wallet as well.


Governments certainly can afford such amounts. If you're developing a cyberweapon with an important purpose (e.g. Stuxnet) then putting in 3-5 zerodays at $10m each is within your budget, it's comparable to the cost of physical military hardware that they'd gladly buy and use (and destroy) for goals of similar importance.


> It's more like the exploit buyers will always pay more than what Apple would, so if Apple raises its bounties the exploit buyers will raise their amount too.

That is not how supply and demand work.

(Furthermore, it's also not as simple as exploit buyers paying more, because there are reasons for researchers to sell their exploits to Apple even if other buyers might pay more - the problem described (badly) in the article is that the disparity is currently too high).


The demand in this particular marketplace is, at times, extremely high.


http://www.cnbc.com/2017/05/02/apples-cash-hoard-swells-to-r...

I don't know for certain, but I'm fairly sure that Apple _could_ outbid the other exploit buyers if it chose to.


For a company with $250B in cash sitting on its books, this seems like a battle it could win against virtually any actor, even most nation-states :)


+1

Apple (and the others) control how much they invest in writing secure code, and how they manage the trade-offs with, for example, investment in shiny new functionality.

So when they make a security slip-up and ship code that contains an exploitable bug, it's a good thing that the market punishes them.


As far as I know, Apple's bug bounties are the highest in the industry. They're not competitive with the black market, but I don't think they should try to be.


1. Sell the bug on the open market for whatever price the market will bear. Advantage: you're paid commensurate with your talents.

2. Wait n days. n could be zero; I don't have a theory for n.

3. Slip the bug to Apple anonymously.


I imagine that whoever paid you a ton of money might not be happy afterwards. I also imagine they're not the type of people you want to make unhappy.


Well, your payment could be part of a contract where they would only give you the money if Apple doesn't patch the bug in the next n days.


Contracts only matter if they are enforceable by law. If a contract involves illegal activity, it is invalid.


There are many kinds of contracts. The contract implied above could be "cross us and find you and your family disappear", not "we sue you in court".


The article mostly proposes what Apple should do: increase bug bounty payouts. When proposing what the discoverer should do, does it matter WWJD?

He's dead so he can't weigh in, but I think it's fair to suppose that early in his life/career he might have sold it to the highest bidder and used the money to invest in a Pixar-type venture, or sold it to both parties if possible. Later in his life/career he might have used it to negatively impact a competitor (weighing the legal risk to his company, of course). How about WW{Apple/Cook}D? I think we know what Uber would do ;)


Apple has $58 billion, and a quarterly gross profit of about $20 billion. Based on the article's numbers, it could outbid the black market by multiplying its payouts by ten. I doubt there are so many bugs that they couldn't do this with an insignificant impact on their bottom line, even assuming a bulletproof reputation for security doesn't help sales.
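
For a rough sense of scale, a minimal sketch using the $200k top-tier payout mentioned elsewhere in the thread and the ~$20bn quarterly gross profit figure above; the number of paid-out bugs per quarter is a made-up assumption:

    # Back-of-the-envelope: what a 10x bounty increase might cost Apple.
    # $200k top tier and ~$20bn quarterly gross profit are from the thread;
    # 50 paid-out bugs per quarter is an invented assumption.
    top_tier_payout = 200_000
    multiplier = 10
    bugs_per_quarter = 50
    quarterly_gross_profit = 20_000_000_000

    cost = top_tier_payout * multiplier * bugs_per_quarter
    print(f"Quarterly bounty cost: ${cost / 1e6:.0f}M")                   # $100M
    print(f"Share of gross profit: {cost / quarterly_gross_profit:.2%}")  # 0.50%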


I agree.

I don't think there's a need to multiply payouts by ten though. When you give regular people -- as opposed to people familiar with selling exploits on the dark web -- the ability to sell exploits legally to Apple, you motivate a much larger group of people. The honest hackers far outnumber exploit vendors if someone is willing to pay them. That someone could be the company that depends on the software for their business (Apple with iOS), or a software insurance company.

The very reason companies like Google, Microsoft, and Apple are so profitable is that they have many users, which in turn makes exploits against their software very valuable. So, unless a company's profit per user is very small, this should always scale (more users = both higher income from users and higher prices for exploits). If they were willing to purchase exploits at market price they would, in effect, be functioning as their own insurer -- in my opinion, this is the only sensible thing an insurance company could do to insure software of reasonable complexity.

Microsoft paying one million USD per exploit, for 100 high-value exploits per year (0.1bn USD), would decrease their yearly profit (2016: 16.8bn USD) by only 0.6%, but make a huge difference in the security of their software (100 high-value exploits per year is a lot). Stockholders would be foolish to vote no if this were proposed.
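
A quick check of that arithmetic, using only the figures quoted in the comment above (the per-exploit price and exploit count are the commenter's hypotheticals, not real program numbers):

    # Sanity check of the bounty-cost-vs-profit claim above.
    bounty_per_exploit = 1_000_000         # USD, the hypothetical $1M payout
    exploits_per_year = 100                # assumed number of high-value reports
    annual_profit = 16_800_000_000         # Microsoft profit, 2016, per the comment

    program_cost = bounty_per_exploit * exploits_per_year
    print(f"Program cost: ${program_cost / 1e9:.1f}bn")            # $0.1bn
    print(f"Share of profit: {program_cost / annual_profit:.1%}")  # 0.6%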


> would decrease their yearly profit (2016: 16.8bn USD) by only 0.6%

> Stockholders would be foolish to vote no if this were proposed.

I doubt that an improvement to security of the type you describe here would significantly improve MS's sales or reputation. So shareholders would just see profits go down; who would vote for that?


This article is fucking weird. It's like saying, with a straight face, that the military needs to be competitive with professional-assassin pay scales if they want to hire the best killers.


I'd assume the military would have to pay more than going-rate to hire the professional assassin. It's definitely the wrong time to cheap-out.


Yes, the sub-system the military creates through long-term binding contracts is usually not factored into free-market thinking at all.


If there's a shortage, or some other performance problem, they do, and they will.


> It isn't forced on you in any way at all.

Uhm, what universe do you live in exactly? Because in the universe where I live, my choices are Apple or Android. Saying that Apple's bullshit isn't forced on millions of users is like saying the Republican party's bullshit isn't pushed on millions of citizens. Argument aside, let's not do Apple's job for them and paint the world as a capitalist utopia where consumers have all the power and make all the decisions. Consumers do not make 99% of the decisions related to their phone, including the decision to have one.


Please don't take discussions into flamewar here. We ban accounts that do that repeatedly.

We detached this subthread from https://news.ycombinator.com/item?id=14733375 and marked it off-topic.


I don't see how being born into a family of a certain political opinion and being indoctrinated for years can be comparable to choosing a phone brand.

I think you are analyzing this question from a very US-centric perspective and failing to realize that in other markets (like Europe and South America, the ones I'm familiar with), Apple has a much smaller footprint, and does not possess the "reach" of a political party at all.

Edit: typo


I think the idea is that if you want a high quality phone--and you essentially need a high quality phone to be competitive in the workforce and even to engage in many social activities and functions--you are going to end up buying a device from one of a small handful of companies (Apple, Samsung, Microsoft, Sony, HTC), all of which are closed down and locked experiences. This is a problem that can likely only be solved by legislation (which the EU is thankfully looking into, as the EU actually cares: I <3 the EU).


> all of which are closed down and locked experiences

That's not true. Many Android devices have an unlockable bootloader with explicit support for building the Android Open Source Project (AOSP) for the device. Nexus and Pixel devices are directly supported by AOSP without modification. It's the same codebase used to build the stock OS for those devices. The stock OS on those devices only adds Google Play apps to the source tree, some of which replace AOSP apps. It doesn't contain any secret sauce changes to AOSP. Android engineers use the same Nexus / Pixel devices that are shipped to consumers as their development devices. You enable OEM unlocking within the OS from the owner account and can then unlock the bootloader via physical access using fastboot over USB, allowing images to be flashed via fastboot. Serial debugging can be toggled on and done via an open source cable design through the headphone port.

Other companies like Sony have emulated this by releasing official sources for building AOSP for their unlockable devices rather than only making the bootloader unlockable and leaving it up to the community to hack together support. However, I think it's only Nexus / Pixel devices where you get support for full verified boot with a third party OS (i.e. you can lock the bootloader again, and have it verify the OS using a third party key) along with the ability to toggle on serial debugging.

It's why the Android security research community is so active. You get the same sources / build system, development devices (Nexus / Pixel), debugging tools, etc. as an Android engineer working at Google. The only major thing you don't get is access to their internal bug tracker. Hopefully they'll move towards the Chromium model where most of that is public once embargoes are over.
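
For anyone who hasn't done this, a minimal sketch of the unlock-and-flash sequence described above, wrapped in Python purely for illustration. It assumes adb/fastboot are installed, that "OEM unlocking" was already enabled in Developer options, and that the image filenames are placeholders for your own AOSP build output; older devices use "fastboot oem unlock" instead of "fastboot flashing unlock".

    # Illustrative outline of the bootloader-unlock / flash flow described above.
    # Paths and the exact unlock subcommand vary by device and tool version.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("adb", "reboot", "bootloader")              # reboot into the bootloader
    run("fastboot", "flashing", "unlock")           # wipes the device; confirm on-screen
    run("fastboot", "flash", "boot", "boot.img")    # flash images built from AOSP
    run("fastboot", "flash", "system", "system.img")
    run("fastboot", "reboot")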


Putting aside for the moment your comments about AOSP (the timeline on the slow closing down of the source branches is a great one, particularly as you now watch more of the code move into Google Play services and the AOSP core applications be slowly obsoleted), as the issue here isn't really about the source code (and I don't think that's the point you are making anyway), I will concentrate on looking at the status of Android as an open hardware platform.

I cover the Nexus devices when I give talks. While I haven't looked into the Pixel yet (and I know that I need to, as the arguments I am about to make for quality likely will have begun to change), I can tell you that effectively no one buys the Nexus devices (the market share for them is ~1% with a 1% margin of error), and they are not seen as high quality devices.

The reality of the Android market is that Samsung makes 98% of the profit, and the vast majority of flagship devices are being made by the handful of companies that put the most effort into locking down their devices. If you want the "high-quality phone"--the one with the good screen and the good camera and the fast CPU that can run all of the apps that you increasingly need in this day and age--you are not buying one of the random open devices.

Again, though: I admit that Google's attempt to retake the flagship market and compete with their hardware manufacturer partners with the Pixel (a device which specifically looked at having stuff like a super high quality camera and screen and such) might change things, but this is an incredibly new development in the grand scheme of these things.


> The reality of the Android market is that Samsung makes 98% of the profit, and the vast majority of flagship devices are being made by the handful of companies that put the most effort into locking down their devices.

You know you can buy an international Galaxy S8 and unlock the bootloader without any exploits, right? That part works the same way as on Nexus / Pixel devices for the S8 variants that can be unlocked (i.e. not US carrier versions, etc.). The difference is that Samsung doesn't give you 100% of their OS sources, especially on the day that they release each update, and they don't support verified boot for a third-party operating system. They also revoke the warranty if you do it, but they permit it. They explicitly implemented a standard unlocking procedure for their consumer devices and it's not in any way forbidden by the terms of use other than voiding the warranty, which is sad but not exactly unfair. Their attempt to void the warranty is not valid everywhere anyway. They still often need to honor standard warranty requirements unless it's demonstrated that the user is at fault for what went wrong.


> Putting aside for the moment your comments about AOSP (the timeline on the slow closing down of the source branches is a great one, particularly as you now watch more of the code move into Google Play services and the AOSP core applications be slowly obsoleted)

I don't know what you mean about AOSP source branches being closed. It's not true. They still release the entirety of every stable branch they ship on the same day that it ships. There isn't any substantial delay and they haven't started doing incomplete releases of the AOSP sources. They don't make things as easy as they should be but things haven't really gotten any better or worse overall for AOSP. It takes more work to assemble the proprietary Qualcomm code from their factory images (see https://github.com/anestisb/android-prepare-vendor) than it did before, but most other things are better now.

It's also not true that any of the core OS in AOSP has been or is being obsoleted. They've stopped maintaining some of the user-facing apps like Calendar, Email, Music and QuickSearchBox and a few providing app-layer services like text-to-speech which all have other implementations available. There's nothing they have stopped maintaining that's critical enough to really matter. There's no shortage of music apps and there are other Android text-to-speech apps, so the AOSP PicoTTS being unmaintained beyond them keeping it building / running as it did before doesn't really matter to anyone. They haven't stopped maintaining any core OS components. Many apps / components that are updated via Google Play and/or have proprietary Google service extensions are still properly maintained in AOSP without those extensions, like the Contacts, Dialer and Launcher apps. Below the application layer though, it's the same. Google builds the stock OS from the same source tree released in AOSP stable branches with their Google Play additions.

There's also the claim that code is moving into Google Play Services, but for the most part that isn't the case. Play Services has expanded but very little has been lost in AOSP. There isn't a movement of stuff to Google Play Services but rather they split out components into apps / components that they can update via Play (which doesn't hurt AOSP as they're still updated there) or they introduce new clients to proprietary server-based services. That's still what defines Play Services: clients to APIs provided by Google servers and out-of-band updates to components that are still maintained / updated in AOSP too. There are very few cases where anything has actually been dropped in favour of Play Services. I can think of a single example: voice to text. That's quite clear to anyone that has actually worked with it or used it.

You're speaking far outside your area of expertise based on anecdotes you've heard, rather than tangible facts.

> and they are not seen as high quality devices

Nexus 6 was as premium as Pixel devices, and the Nexus 6P was a quality device. Nexus 5X and 6P were the generation where Google started shipping good cameras, their own fingerprint reader setup, etc. not Pixels. Nexus 6 was a quality flagship device a year before then. There are plenty of non-Google devices that are 'open' in the same sense though.

> the vast majority of flagship devices are being made by the handful of companies that put the most effort into locking down their devices.

Not true. Vendors like Samsung sell plenty of unlocked / unlockable devices.

> If you want the "high-quality phone"--the one with the good screen and the good camera and the fast CPU that can run all of the apps that you increasingly need in this day and age--you are not buying one of the random open devices.

Not true for the reasons above. Your statements aren't based in reality.

> Again, though: I admit that Google's attempt to retake the flagship market and compete with their hardware manufacturer partners with the Pixel (a device which specifically looked at having stuff like a super high quality camera and screen and such) might change things, but this is an incredibly new development in the grand scheme of these things.

Pixels aren't a substantial departure from the Nexus 6 and Nexus 6P. Compared to the Nexus 6P, the SoC, image sensor, screen, etc. were all just moved ahead a generation. The build quality is comparable and in some ways the Nexus 6P was a nicer device: it definitely had nicer speakers, and



