Making a separate, 'researcher-friendly' device that is easier to own "without having to bypass its security features" on the surface seems so incredibly in line with Apple's general ethic yet so counter-productive in this particular context...
You're right. The main political opposition in India was affected by Pegasus, and India is part of this program. I think this has more to do with the level of Apple's presence in a country/jurisdiction and their ability to retrieve the device in case of misuse.
I find it funny that most of the true cyber powers are left off the list but India is considered a “safe choice”. India may be poor, but it is hardly lacking in engineering prowess. Apple’s own employees are a testament to that.
Right, but we aren't talking about private citizens but nation state actors with an interest in cyberwar capabilities. USA is probably near the top of that list.
It's kind of a lost cause when the government could just directly compromise Apple itself though? Maybe they are also fine with helping their own country's cyberwar abilities.
If you want a potential alternative to Apple's security research device, look into Corellium[1], a startup that emulates iOS on ARM64 server hardware. Apple sued them over copyright claims, and they survived.
It is a paid product (e.g. $x/hr cloud billing), but supposedly easier to obtain than the security research device.
I think two HN submissions got merged; the submission I commented on led directly to that page. At least that's my excuse in case someone asks.
I remember when the Apple support tech remoted into my phone. I knew it had to be technically possible, but it's reserved for Apple. Probably a good idea that it's not generally accessible; all my relatives whom I recommend use their iPads for banking would be in trouble.
I’ve seen this done for support. Apple seems to have an internal remote access tool similar to common VNC or RDP based tools that allows them to view a video stream of your display and send touch inputs to it as if they were physically using your device.
I'd need to see some evidence they have this capability.
They could publish a firmware that abuses your device, but they would have to ship it to everyone on that particular channel (e.g. general releases, public beta, developer beta).
They host the servers. They know your current IP and current login session to the app store at all times. Of course they could ship a signed update to only you with a court order. How do you think beta channels work?
Maybe judges do not realize they have that power yet, but they will eventually.
The CCP certainly understands this well, which is why they seized the HSMs governing encryption and binary signing for Chinese citizens.
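Mechanically, the targeted-update scenario described above needs nothing exotic. A hypothetical server-side sketch (all names, the channel map, and the HMAC stand-in for a real asymmetric signing key are invented for illustration):

```python
import hashlib
import hmac

SIGNING_KEY = b"vendor-private-key"  # hypothetical; real systems use asymmetric keys

# Per-device channel map: one device quietly gets a special build.
CHANNELS = {"device-123": "release", "device-456": "targeted-build"}
BUILDS = {"release": b"normal firmware", "targeted-build": b"firmware with extras"}

def serve_update(device_id: str) -> tuple[bytes, bytes]:
    """Return (firmware, signature) for this specific device.

    The signature proves the build came from the vendor -- it proves
    nothing about whether other devices received the same build.
    """
    build = BUILDS[CHANNELS.get(device_id, "release")]
    sig = hmac.new(SIGNING_KEY, build, hashlib.sha256).digest()
    return build, sig

fw, sig = serve_update("device-456")
# A device verifying `sig` has no way to detect it was singled out.
assert fw == b"firmware with extras"
assert serve_update("device-123")[0] == b"normal firmware"
```

The point of the sketch: signature verification on the client authenticates the sender, not the distribution policy, which is why per-device beta channels and per-device targeting are the same machinery.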
Not even close to true. If Apple were to exfiltrate your files, or monitor your screen intentionally without you opting in, they’d be taken to pieces in court.
Or maybe you visit a different country, like China, where Apple has ceded all power over their servers and encryption keys to the CCP.
Or even in the US, the courts you think will save you could be the very ones ordering your data because you are suspected of a crime because you typed a certain trigger phrase that activates an automated warrant like those Google has.
The 4th amendment only protects you when data is stored on your property. When it is in control of a corporation, the corporation gets the warrant, not you.
Or maybe Apple simply changes their terms of service to resemble that of TikTok or Facebook giving them more or less complete legal freedom over your data.
Or maybe it is just a rogue Apple system administrator who does not care about the rules.
You cannot truly own an Apple device; the data on it is at the mercy of choices made by remote humans with incentives very different from yours.
Power that exists will always be abused. It is how humans work.
I hold my own data on devices that I own on property that I own.
No one has legal power to change their terms of service governing my data, or issue secret warrants, or extralegal power to outright take my digital property.
Would those situations ever have seriously impacted me in the first place? Probably not. Thing is though, they can and do affect a lot of people.
The more people that learn to take back control of their digital property, the safer we all are.
I was just making the point that Apple hardware and the data on it is not in your control. If you fully trust Apple and your politicians to never mistakenly target you or anyone you recommend the same practices to, then you are all good.
The comments you are making on HN right this very moment are contrary to what you preach. HN owns the data on your account, not you.
I cannot assume your jurisdiction, but if one were to believe what you have stated, these assumptions would need to be true:
1) you’ve never flown anywhere before
2) you do not belong to any single country (you need to be a nomad, as all modern countries have identification systems in place for their citizens)
3) you’re using the internet at a cafe, because if you are paying any bills to an ISP, that’s game over
-
I find it highly improbable you speak the truth. I’m sorry to say, but it really seems like the type of comment made by an “ignorance is bliss” type of individual who has recently watched an episode of Mr Robot.
I am easy to look up, as well as what I do for a living, if you wish to do so.
The data I post on HN is public. When I post data to a public place it is no longer mine. I am not a luddite, I simply have separation of concerns between public and personal life. Data that is not public, is mine.
When I fly, I accept this is a public event. All parties are transparent about this. You will however be hard pressed to buy a record of the local locations I frequent or what I purchase at a pharmacy without an expensive private investigator because I do not carry a cell phone and I pay cash.
The data that I consider mine, such as personal family photos, detailed location history, my IoT product usage, etc, lives in servers I physically own on property I own. Not unlike a box of photos in someones attic. If you want to rifle through it, get a warrant. No surveillance capitalism companies will have a chance at buying that data regardless.
I do sometimes use third-party services to distribute private data, such as matrix.org. Private data such as DMs cannot be decrypted or controlled by the server operators; only my keys, on devices I control all software on, can access it. That data is mine too.
You do not need to trust your ISP to have a useful level of digital sovereignty.
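The property being claimed about end-to-end encrypted DMs can be shown with a toy sketch. This uses a throwaway XOR keystream, not Matrix's actual Olm/Megolm protocol; what it illustrates is only the structural point that the server relays bytes it cannot read, because the key never leaves the endpoints:

```python
import hashlib
import os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher (NOT secure -- real clients use Olm/Megolm).

    Expands (key, nonce) into a keystream and XORs it with the data;
    applying it twice with the same inputs recovers the plaintext.
    """
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

shared_key = os.urandom(32)  # negotiated between the two endpoints only
nonce = os.urandom(16)

ciphertext = keystream_xor(shared_key, nonce, b"meet at noon")
# The homeserver stores/relays only (nonce, ciphertext): no key, no plaintext.
assert keystream_xor(shared_key, nonce, ciphertext) == b"meet at noon"
```

A server operator, or anyone with a warrant against the server, holds only the ciphertext; without the endpoint keys there is nothing useful to hand over.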
> No one has legal power to change their terms of service governing my data, or issue secret warrants, or extralegal power to outright take my digital property.
I don’t see how this is even close to true. Governments seize property all the time.
Sure, but if the data lives on my property then they need to get a judge to sign off on probable cause and show me the warrant. Then at least I am aware of what is happening and can challenge it in court, or go public, if it is unjustified.
When the data lives with Apple or Google, they can just secretly take it, even in bulk. The NSA wire taps on Google and others were a real thing that actually happened. No pesky constitutional protections in the way for the data you freely give to corporations.
> Sure, but if the data lives on my property then they need to get a judge to sign off on probable cause and show me the warrant. Then at least I am aware of what is happening and can challenge it in court, or go public, if it is unjustified.
In principle, but not in practice. All they need is a witness for probable cause, and once they have your stuff, you aren’t going to get it back without a bankrupting fight, if at all.
If you think the government plays fair in the real world, you might want to have a discussion with a Mr Assange, currently a guest of King Charles III.
> When the data lives with Apple or Google, they can just secretly take it, even in bulk. The NSA wire taps on Google and others were a real thing that actually happened.
The NSA wire taps were ruled illegal. How satisfying.
They may be able to confiscate my stuff, but I will at least make them put in the work to target me individually and make them find that judge and that witness and that warrant. Doing that for -everyone- is infeasible and expensive which was the exact point of the constitution. It is designed to clip the wings of hopeful authoritarian elements of our government.
Also, because I control my technology, and my encryption keys, the constitution gives me one other major protection. The right to not self-incriminate, or, the right to not give them additional rope to hang me with.
No one knows my decryption passwords but me, and no one can compel me to reveal them legally.
It is exactly because the government does not always play fair, as you point out, that every citizen owes it to themselves and each other to limit their power.
Meanwhile you give up all rights and control of your data when you agree to the terms of service of Apple or Google. They will hand over your plain text data without your knowledge or consent if you merely say the wrong trigger phrase.
Did you have to accept a prompt? There's a preinstalled app to do that in macOS, called Screen Sharing. You can remotely access other people's macs using only their Apple ID, and their approval of course.
Do they actually pay out, though? I keep hearing about security researchers having difficulty getting these bounties. It seems like a great business strategy: outsource your security audits, offer massive payouts to look good, then don't pay out and let the pot grow larger to look even better.
If someone comes forward with legitimate good security vulnerabilities and you don’t pay out, you’re massively encouraging them to go to shady brokers next time.
> Device attack via physical access: $5,000: Limited extraction of sensitive data from the locked device after first unlock. As an example, you demonstrated the ability to extract some contact information from a user’s locked device after the first unlock.
Uhhh I must be missing something here… I can trivially share a contact via email after my iPhone is unlocked?
An iPhone requests the user’s password upon restart, this would be referred to as “first unlock”. The reward is for an exploit that takes place against a _locked device_ but only after it has been unlocked once first. As in, an exploit that applies to the Lock Screen when the device was previously unlocked at least once. It is likely easier to trick a locked system into unlocking after it has already been unlocked the first time, due to password storage, credentialed background processes, and so on.
I believe by “first unlock” they mean a login/unlock right after a reboot. So - turn on device, do first unlock, then lock again.
Might be wrong, but afaik the very first unlock after a reboot is a bit different than subsequent unlocks (I guess cached memory, etc.)
Yup, it's exactly this. After first unlock, data is decrypted and loaded into memory. You shouldn't be able to extract it, though, without unlocking the device.
“Locked device after first unlock” => the device is locked but was unlocked at least once after boot. I guess this loads some keys from the TPM into RAM. Using Face ID, for example, requires an initial unlock via the user’s PIN.
First unlock means the user entered the PIN to go through the second level of encryption (after Secure Enclave device-level protections of flash).
Without the first PIN entry, most functions don't work because the writable flash areas storing third-party apps and user data are still encrypted.
This is also why you have to enter your PIN on reset rather than a biometric; it is far more established to derive a symmetric key from a password than from biometric data.
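The "derive a symmetric key from a password" step described above is standard PBKDF2-style key stretching. A minimal sketch (the salt, iteration count, and function name here are illustrative, not Apple's actual parameters, and real devices additionally entangle the passcode with a per-device key inside the Secure Enclave):

```python
import hashlib
import os

def derive_unlock_key(pin: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Stretch a short PIN into a 256-bit symmetric key via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)
key = derive_unlock_key("123456", salt)

# The same PIN + salt always yields the same key, which is why a PIN works
# as key-derivation input while fuzzy, never-identical biometric readings do not.
assert len(key) == 32
assert key == derive_unlock_key("123456", salt)
assert key != derive_unlock_key("000000", salt)
```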
Under the detailed criteria listed here[0], it seems plain that he qualified for the $50k award when it comes to the Mac vulnerability, and the $25k award for the iOS one. Since they're fundamentally the same flaw, he'd likely only qualify for $50k, not $75k, but Apple still stiffed him of $43k. For shame.
Yeah, I love the idea of bug bounties, but there's an issue when the provider can't offer the most competitive price for them. It's no secret that nation states will pay more than Apple will for vulnerabilities.
There's no reason Apple couldn't pay more than nation states for bug bounties; they have a ridiculous amount of money, after all.
In crypto there exists a service called ImmuneFi; it's essentially an arbitrator between hackers and services offering bug bounties that provides an impartial third-party ruling on the payout.
They recently paid out a $10 million bug bounty. That really needs to move into Web2.
> Immunefi has saved over $25 billion in users’ funds and has paid out $60 million in total bounties. The platform now supports 300 projects across multiple crypto sectors, and collectively offers $135 million in bounties to whitehat hackers. Immunefi has also facilitated the largest bug bounty payments in the history of software, including $10 million for a vulnerability discovered in Wormhole, a generic cross-chain messaging protocol, and $6 million for a vulnerability discovered in Aurora, a bridge and a scaling solution for Ethereum.
That would just create a bidding war and wouldn't stop the arms race at all.
Also, once the rewards increase past a certain point, security researchers would have no reason to work for Apple, since one big hit would mean being set for life.
Security researchers on ImmuneFi can already be set for life by submitting one large bug bounty today. But that's extremely rare, and you don't see them lining up to leave Apple.
It’s anything but lazy. You may not like the style, but it’s a very well-developed vocabulary and they’re extraordinarily diligent about making sure that they speak with one voice.
Imagine how operationally efficient their marketing group has to be as a point on the critical path of everything Apple does publicly.
Except the head of industrial design has moved on, and the recent batch of hardware releases doesn't hang together as a collection in a high-fashion sense. The latest iPad was described by one reviewer as "weird," and the pencil accessory is finicky about which version and connector it requires.
> we’ve grown our team and worked hard to be able to complete an initial evaluation of nearly every report we receive within two weeks, and most within six days.
At other big tech companies, an initial evaluation of a security report will be done in 15 minutes... And if it's important, people will be woken up and a workaround will probably be deployed in a matter of hours...
For example, the Google security bug form[1] says "This option might really get someone out of bed."
Not 'verifies'. Simply read the report and decide on the priority.
Filter out the reports saying "The padlock is missing on my gmail" from those that say "If you type TRUE into the gmail login password box, it will let you log in as any user, and 4chan has discovered it".
I think there's a difference between verifying a report and simply triaging it to the right team. Apple is doing the former while companies that respond in 15 mins are often doing the latter.
There are clearly 2 different levels of "evaluation" at play here.
Being able to "evaluate" every security bug submitted to you in 15 minutes implies relatively insignificant bugs, or it implies that you are not "evaluating" the bugs you claim you are "evaluating".
The first time a report came in on meltdown/spectre/heartbleed whatever, there is no way any serious security researcher could have fully evaluated that report in 15 minutes. Never having seen or heard tell of it previously. Heck, just pulling together the requisite hardware and getting the requisite software on it might take more than 15 minutes. I don't buy that it could be "evaluated" in 15 minutes.
How do you feel about the word "triaged"? Some reports are obviously worth an immediate response and some aren't. And some will slip through the cracks, in either direction, because the queue is being watched by a human and not a robot. If the report contains a screenshot of your private admin panel, it's getting escalated.
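The kind of triage being described here is often backed by nothing more than a crude keyword pass before a person ever reads the full report. A hypothetical sketch (the marker lists and routing labels are invented for illustration):

```python
URGENT_MARKERS = ("log in as any user", "auth bypass", "remote code execution", "admin panel")
NOISE_MARKERS = ("padlock is missing", "clickjacking on logout")

def triage(report: str) -> str:
    """Route a report: escalate obvious fires, park obvious noise,
    and queue everything else for a human within the SLA."""
    text = report.lower()
    if any(m in text for m in URGENT_MARKERS):
        return "page-oncall"
    if any(m in text for m in NOISE_MARKERS):
        return "low-priority-queue"
    return "human-review"

assert triage("Typing TRUE into the password box lets you log in as any user") == "page-oncall"
assert triage("The padlock is missing on my gmail") == "low-priority-queue"
assert triage("Possible race condition in session handling") == "human-review"
```

The point is that a 15-minute "evaluation" and a two-week "evaluation" can both be honest descriptions: one is this routing step, the other is actually reproducing the bug.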
Anyone serious got advanced notice of meltdown/spectre/heartbleed and had longer than 15 minutes to decide a course of action. Whether that's a good or bad thing about infosec as an industry, I can't decide.
I have had replies on bug bounty reports in under 10 minutes before. It can and does happen.
Edit: To clarify, especially in cloud environments (which is most stuff these days) it's really not hard for someone to verify something if it's well written.
I might be a bit pessimistic here, but I'm betting that's not an experienced, trained individual responding to the ticket. It's likely a level-one techie who's basically just moving the ticket from one queue to another and potentially waking someone up if it looks scary enough.
It's certainly possible. I've handled a bunch of bug bounty programs and sometimes submissions come in at just the right time and attract just the right attention. It's not a reasonable expectation for the average submission.
It depends on your targets, IME. Huge companies? Yep, you'll get a "thanks for telling us" from a bigger bug bounty program and then not hear anything for weeks to months.
For small- and mid-sized companies that do bug bounties (of which there seem to be fewer and fewer these days as a percentage) you can definitely wind up submitting directly to the right people and get really quick response times.
This statement did seem strange. However, I sent in a report to Apple Product Security a week ago, and I received a personal response within 48 hours saying that they reviewed my report.
Read more carefully, it says “an initial evaluation of nearly every report we receive within two weeks, and most within six days.”
Emphasis mine. What it means is that even reports that fail the initial “oh shit” gut check (which Apple definitely does BTW) will still get worked within 1-2 weeks. That’s pretty good.