Regarding the headline, it is absurdly unrealistic to expect that any consumer-grade software can defend itself from state-funded hackers. Not even the entire American intelligence apparatus can defend itself against state hackers. Why would we expect lowly iOS (or any other consumer software) to do better?
If you have reason to believe you're targeted by state-backed intelligence agencies, you really oughta be working under the assumption that they can see everything you're doing.
The Intel Management Engine (ME), also known as the Intel Manageability Engine, is an autonomous subsystem that has been incorporated in virtually all of Intel's processor chipsets since 2008. It is located in the Platform Controller Hub of modern Intel motherboards.
The Intel Management Engine always runs as long as the motherboard is receiving power, even when the computer is turned off. This can be mitigated by deploying a hardware device that disconnects mains power.
The Intel ME is an attractive target for hackers, since it has top-level access to all devices and completely bypasses the operating system. The Electronic Frontier Foundation has voiced concern about Intel ME, and some security researchers have voiced concern that it is a backdoor.
i'll echo what the gp said. again, none of that means it is at all realistic to believe you can defeat determined, funded state actors.
it is the height of ridiculousness to ever think you can use anything with the attack surface of a modern phone and be safe against a state. ridiculous. i'll say it again. ridiculous.
threat models exist for a reason. if anyone ever implied that an apple phone or an android, or pc were safe against this model, this person is a lightweight and you should run fast and far from their advice.
if you’ve ever told people they were safe against a state actor with a smart phone, you need to immediately reevaluate what you do and what you do not know. immediately.
no serious person—including apple themselves—ever made the claim you were safe against determined state actors with an off-the-shelf device you can grab from best buy on your way home from work.
Yeah, the international tech standards are woeful. Look at Caller ID in the telecoms world: it uses a dialup modem protocol. So what can you do with a bit of hardware that can handle that? I'm thinking of the hardware that moves landlines onto computer-based phone systems. Reprogram some firmware, perhaps? Then maybe you have a dialup modem instead of some PSTN-to-ethernet device.
It's fun looking at the hardware, wondering what else a component can be made to do besides its obvious legit function.
What’s so good about states? Just an employer who can’t pay enough or hire people who smoke weed, but you seem to be worshipping them like they’re automatically gods.
States have crazy amounts of resources and they are not worried about their return on investment the same way other actors are.
If a hacker wants to scam people out of their savings, they have a limit on how much money they can potentially earn. I don't know what the actual numbers are, but let's say an attacker can hope to extract 100k USD from one in every 10k prospects. If those are the numbers, then the attacker can't spend more than 10 dollars on any individual target on average, or they would start losing money. I suspect the real numbers are even worse.
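To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python using the hypothetical figures above (not real data):

    payout = 100_000        # USD extracted from one successful victim
    hit_rate = 1 / 10_000   # fraction of prospects that ever pay out

    # Expected revenue per prospect targeted:
    ev_per_target = payout * hit_rate
    print(ev_per_target)    # 10.0 USD

    # So the scammer's average cost per target must stay under ~$10,
    # which rules out anything hands-on or bespoke.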
On the other hand, if you work for a state and you burn a few billion dollars to research some zero-day vulnerabilities, which you then use to delay the weapons development of your adversaries by a few months, you get a medal. Or maybe you don't, but one thing is for sure: the state won't collapse just because of this.
What does this mean in practice? Imagine a scenario where you left your smartphone at the side of your bed for the duration of a bathroom trip. You return and you look at your phone. Can you trust that the phone at the side of your bed is the same one you had earlier?
It is absolutely, 100% sure that the would-be savings scammer doesn't have the resources to camp outside your bedroom, monitor your comings and goings, sneak in when you leave the room, swap your phone, and leave unnoticed. An operation like that costs serious money.
But if a state got it into their collective head that their national interest is best served by your phone being swapped, then the above can and will happen. They have the resources to do it, and they don't care about "ROI" in a traditional sense.
Now, will a state actor sneak into your bedroom? Probably not, because there is lower-hanging fruit that achieves the same thing more cheaply. For example, paying a battalion of nerds to find zero-day exploits in complicated software.
The thing about states is that they are highly uneven. One can have a slow and inefficient bureaucracy, crumbling infrastructure, and failing healthcare, yet still pursue some other goal with dogged persistence. And the truth is, if someone decides to spend a billion dollars to hurt you, you will be hurt; if they decide to spend a billion dollars to know your secrets, your secrets will be known. States can be both bumbling idiots and godlike entities at the same time, maybe even in the same building.
Absolutely agree. We can and should expect good things from Apple on the security front. And we should call out where their security falls short of their marketing promises.
My comment was only about explaining what makes a state level adversary formidable.
Apple has insane profits and multiple secretive projects only part of the company knows about, much less their shareholders. They can literally spend tens of billions a year on practically anything without any kind of outside scrutiny.
Software security can be thought of as operating on a cost/benefit model. To produce secure software means making attackers spend more resources on penetrating defences than the value they get from a successful attack. And it works well for the mass market, because the vast majority of attacks are non-targeted attempts to phish for financial credentials, take over devices for use in a botnet, etc.
For a state attacker the model breaks. The value of a target can be very high. And the available resources - financial, technical, and otherwise - are there to fill the budget.
I can exchange pgp keys with someone and send encrypted messages back and forth. No amount of funding or resources will ever be able to expose those messages. This is math. You sound like you are defending Apple for exposing journalists and activists when Apple, in fact, explicitly allowed remote code execution by untrusted actors.
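For concreteness, the workflow being described looks roughly like this. A minimal sketch using the third-party python-gnupg wrapper; it assumes GnuPG is installed locally, and the address and passphrase are made up:

    import gnupg  # third-party wrapper around a local GnuPG installation

    gpg = gnupg.GPG(gnupghome="/tmp/pgp-demo")  # throwaway keyring for the demo

    # In practice each party generates their own keypair and shares only
    # the public half; here both sides run locally for illustration.
    key = gpg.gen_key(gpg.gen_key_input(name_email="bob@example.org",
                                        passphrase="demo-passphrase"))

    encrypted = gpg.encrypt("meet at noon", key.fingerprint)
    assert encrypted.ok  # ciphertext only the private key can open

    decrypted = gpg.decrypt(str(encrypted), passphrase="demo-passphrase")
    print(str(decrypted))  # "meet at noon"

The math there is as solid as claimed; the catch, as the replies below note, is everything around it - the keys, the device, and the client this code runs on.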
The way you put it, these people fired up a quantum computer and broke the user's password hash. What they actually did is the equivalent of an ActiveX control owning a machine in a drive-by attack.
What you describe is an ideal Alice/Bob situation, positing just a single protected surface. In the real world it gets far more complex: what OS is your PGP client running on? On what hardware? Is anything else running on the same device that may break out of its sandbox and inject code into another process? And so on.
> when Apple, in fact, explicitly allowed remote code execution by untrusted actors
Can you please elaborate on what you mean by that?
>I can exchange pgp keys with someone and send encrypted messages back and forth. No amount of funding or resources will ever be able to expose those messages
You have no goddamn clue if this is correct or not. Your keys may be compromised, your computer might have spyware on it, your messaging client might be backdoored.
Their incentives aren't that strong (the best you get is a medal you can't show or tell anyone about), and unlimited funding can mean they start an unlimited number of underfunded projects, rather than funding one endlessly.
The state has a monopoly on violence. The state has no legal or fiduciary obligation to play by any set of rules which other actors abide by. The state can, effectively, do anything it wills without real consequences. Individual actors don't have that power.
I dunno, if you look at the incredible hoops the iMessage exploit had to jump through in order to work, then there must be a point where the effort involved in finding new exploits far outweighs the benefits ... assuming Apple is actively finding and closing current exploits and not introducing new software that just increases the attack surface area.
Control doesn't guarantee security, it just assigns accountability. Also, control doesn't mean Apple writes all the software. They use outside code like everyone else.
But they don’t have to. If anyone had the resources to hire top tier software engineers to write or audit everything in house it would be Apple. That they don’t do it is almost… greedy and shows a lack of foresight imo.
> If anyone had the resources to hire top tier software engineers to write or audit everything in house it would be Apple. That they don’t do it is almost… greedy and shows a lack of foresight imo.
Maybe it shows that even Apple doesn't have the resources to do it, and therefore nobody does.
Not only that, but there is more going on than just auditing. If Apple spent the time necessary to constantly do deep security dives, and spent the money on top talent to do it, it still wouldn't get done, because it is constantly a moving target unless they froze all new development. Otherwise, by the time they have done a complete walkthrough of macOS 12.1, 12.3 has been released and they haven't even started on a full walkthrough of 12.2. These are time-intensive tasks, and meanwhile customers and shareholders still want the latest, greatest, and shiniest.
Then at the end of the day, customers are still installing 3rd-party applications, so does Apple have to walk through all of their code too?
As long as fallible humans are involved in creating software and hardware we will have bugs, and some of those bugs will be exploitable.
The best we can hope for is using the best tools and practices we have available to make it as hard as possible to make mistakes and therefore as hard as possible for attackers to find exploitable bugs.
In reality, though, that's not really what happens. We still use C (and C++) and other footgun languages, and/or run billions of lines of badly audited "legacy" code (that wasn't written with the threats we face today in mind) in a lot of places, for various reasons[0]. Then we try to use all kinds of tools as workarounds to detect past and new mistakes, or at least to mitigate the severity of the outcomes when mistakes happen and are uncovered and exploited by adversaries[1].
And that's not even yet considering supply chain attacks, be it software or hardware ones.
That isn't to say that Apple couldn't still do a lot better...
"Beware of bugs in the above code; I have only proved it correct, not tried it."
[0] Such as maintaining legacy code bases, "performance", interoperability, development speed, developer availability, business ("cost") considerations, etc.
[1] Starting with code reviews, code coverage, unit tests, static analysis, fuzz testing, etc, and then later at runtime ASLR, canaries, retpoline, sandboxes, etc.
it's 100% unrealistic because the discussion and the headline can be summed up as "how come this thing is not unhackable as the company promised?"
When people say "state-level hackers", they appeal to authority, as if "the state" is something one simply can't win against, and those employed by the state are super-level über-hackers. But in reality what it means is:
A state is able to waste billions on mistakes and incompetence. The state gets away with most projects failing.
In reality, forensics and post-mortems show the adversary's tradecraft, opsec and technical understanding of the space were so bad that it was _not_ skill but luck that they won.
There is very little "advanced" about an APT other than unlimited pockets. If you have unlimited pockets, you're likely able to brute-force your way to winning.
"State-level" is such an opaque concept because of the huge number of actors in it that never get credited. People like systems thinking and the idea of everything being planned and orchestrated like in a Hollywood movie. And then we forget that in many "bAd coUntRiEs" it's a collaboration between criminal enterprise and state enterprise. It includes thousands of "criminals" who can be leaned on or are willing to lean on others. There is also the private-public partnership in some of the more functional countries[1] that adds thousands to this group of people.
the term "state level" is almost useless. It is the language of PR and propaganda. If taken serious as a concept it just means an entire different country.
And this is why I summed it up as "unhackable": when one expects a security guarantee to defend against such a large group of diverse actors (while not even being able to identify the adversary in the first place), then to fulfill that promise it will have to be "unhackable".
[1] Australia has a gag order under which you can't even warn your employer when the government forces you to implement a backdoor in their products.
Apple can't do this on its servers either, considering it has no server product.
Google has this stack control in Android, Google Home, Chromebooks, etc. trivially if it chooses, if it doesn't have it already.
Microsoft can do this on its Cortana smart speakers and the Surface Pro X running their (Qualcomm's) ARM CPU.
Also I imagine bulk buyers can get Intel ME disabled if they don't want it on their server. AMD PSP can be disabled in the BIOS I believe.
Comparing a phone to a server doesn't make too much sense. A phone is a consumer product like Android or Home and a server is a business product. Apple runs its cloud on Intel or AMD like everyone else.
I'm skeptical that this is what they're populating their datacenters with. These are niche products for smaller businesses. I meant they are not powering their cloud with these products.
Apple's iCloud is partly their own datacenters running a mix of systems/hardware, and partly rented Amazon and Google cloud units, which are Intel/AMD.
Two hours ago you didn't even know they made servers, so your statement about how niche you think they are isn't super relevant.
Google populates its datacenters with their own servers. So does Facebook. So does Amazon. Netflix builds their own CDN appliances. Apple absolutely could if they wanted to.
Define "own servers" because this discussion is about controlling the entire stack, and eliminating things like ME or PSP. FANG runs on AMD/Intel, so they don't own their own stack. They have to contend with the AMD/Intel product.
Buying a beige box to put an Intel chip in is not "your own" servers in the context of this discussion. I'm not sure why you're so rude and confrontational here. We're discussing how Apple has top-to-bottom control in its consumer devices, not some weird discussion about datacenter packaging. Servers run on AMD/Intel and as such are stuck with the elements of those platforms AMD and Intel dictate. That's not having the entire stack to themselves.
I think you seem to be confusing what is available in the public cloud and on Newegg for what companies use internally. Custom silicon is extremely prevalent and it doesn't all run on AMD/Intel.
These things are servers in name only. Apple used to build true servers (the Xserve line) with true server operating systems, but they stopped building both around the time Steve Jobs died.
Using today's macOS as a server is an exercise in pain, largely because there are a number of routine server administration tasks that fundamentally require the GUI in macOS.
>Because Apple controls the entire stack. From processor, to boards, to OS, to the walled garden of apps.
That's not really relevant. The American intelligence apparatus controls their entire stack too, ordering and auditing custom versions of processors and such.
Surprisingly enough, there are other consumer-grade devices where one company controls the entire stack and delivers similar security -- game consoles. Xbox OS and iOS have different designs but make similar security promises.
You can't buy these. All Pinebooks are out of stock due to the chip shortage, or what stock there is is reserved. So no, that suggestion isn't good for basically everybody.
Apple does all of it quite well, but software inherently has bugs for as long as we use type-unsafe languages. Maybe Apple could move entirely to Rust instead of C to prevent things like checkm8 but even then the logic of a program is written by imperfect humans.
> software inherently has bugs for as long as we use type-unsafe languages
This is untrue. Textbooks on type theory (Pierce's TaPL comes immediately to mind) will state early on that well-typed terms cannot always eliminate all undesirable states.
In fact, there is a tension in encoding all your constraints into the type system (at which point you become a mathematician proving theorems) and producing programs that are actually useful to other people.
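A tiny illustration of that point, assuming Python with type hints as the setting: the function below satisfies the type checker completely and still admits an undesirable state.

    def average(values: list[float]) -> float:
        # Well-typed: every term has exactly the type the checker expects.
        return sum(values) / len(values)

    average([])  # ...and still crashes with ZeroDivisionError.

    # The "non-empty list" constraint lives outside the type system unless
    # you encode it -- at which point you are proving theorems.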
"Consumer-grade" iOS probably has a larger R&D budget than all mil-spec competitors combined. iOS also has a larger attack surface but it's not obvious to me that it's a lost cause.
It doesn't mean they spend all that budget on hardening against attacks. The vast majority is for user experience, flashy+useful end user features, etc.
To make it even worse, the "flashy+useful end user features" are the very thing that makes the OS hard to secure. Nearly all iMessage exploits stem from parts related to non-text handling. FORCEDENTRY was caused by code related to image handling, for instance. I'm sure if you're willing to cut the scope of the messaging app to only text, it'd be impossible to hack.
You’d probably have to limit it to ASCII text, and even then use fixed-width fonts just to be sure — at least one iOS vulnerability was related to word-wrapping of Zalgo-style Unicode in lock screen notifications.
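To make the text-only idea concrete, a hypothetical paranoid receive path might be as small as this (a sketch, not any real messenger's code):

    def accept_message(raw: bytes, max_len: int = 4096) -> str:
        # Reject anything that is not short, printable ASCII text:
        # no images, no rich text, no Unicode tricks. The entire
        # parsing attack surface is this one function.
        if len(raw) > max_len:
            raise ValueError("message too long")
        text = raw.decode("ascii")  # raises on any non-ASCII byte
        if not all(ch.isprintable() or ch in "\n\t" for ch in text):
            raise ValueError("non-printable characters rejected")
        return text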
There is a reason smartphones won, and that Discord trumps IRC and XMPP.
I am not sure how secure a Nokia 3210 really is, but I expect I could still phone and text with it even today. The users are rare, and that is not accidental.
If being hacked makes someone feel bad, consumer products are in an unenviable position. If making money makes someone feel good, consumer products are in an enviable position.
There are only a finite number of engineering hours available. Employing more engineers doesn’t work.
Rewriting the stack in a safe language would take years (25+ years of code to rewrite).
Saying the vast majority goes to UX and flash is a super common mistake: you don't see any of the work that is not in the UI portion of an app, but that doesn't mean it's not getting huge amounts of work.
That is what we pay them for, to a material degree. iOS wins in a lot of use-cases because it's perceived as the platform that better protects privacy.
For me, it's fine if they lose some battles, but they should budget to win the war.
That is mostly true, but I hate to see the narrative that it's insecure due to lack of resources. It's easy to miss the ridiculous scale of these big tech companies.
The problem is not money. There is only so much any one engineer can do, and there are only so many engineers you can have working on anything. No one can "solve" security simply by buying more engineers; what is needed is time.
NSO can only pay so much for a vulnerability, and that amount is bounded by the budgets of its clients. Apple could choose to outspend NSO for those vulnerabilities.
Government-backed entities can also make promises that Apple's money can't buy. Who knows what kind of back room deals are made that involve citizenship, safe passage, dropped criminal charges, etc.
Big governments aren't selling their espionage services, so they're arguably less important to handle. Moreover, there is likely duplication in exploits between these groups such that buying private sector exploits and patching them will likely take a few government exploits out with them.
The problem is time rather than money - there are a finite number of engineering hours in a year, the only way to increase that is to add more engineers, but adding more engineers very rapidly starts costing time rather than helping.
I think the strategy suggested here was “why doesn’t Apple just buy all of NSO’s exploits”, to which I responded that doing this for all threat actors is infeasible. I think you’re talking about actually solving the security issues, which is blocked by things that aren’t just money, I agree.
Resources get surprisingly thin if you spread them across every possible attack interface (if you even know them all). At Apple's scale, there are quite a few. Attackers need to find only one buggy interface.
Thought experiment: Apple could hire nine NSO-equivalent teams to do what NSO does, then fix the bugs they find. This is no guarantee there are no bugs, but if finding them is crucial to fixing them, NSO is now only 1/10 as likely to be the team that finds them, and 10 times less likely to maintain unique access to the secret.
Scale numbers to make better financial sense.
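A rough toy model of that experiment, assuming ten equally capable teams that each independently find a given bug with the same (arbitrary) probability:

    import random

    def trial(p_find: float = 0.5, defenders: int = 9) -> bool:
        # True when NSO finds the bug AND no defender team does,
        # i.e. NSO keeps unique access to the secret.
        nso = random.random() < p_find
        defense = any(random.random() < p_find for _ in range(defenders))
        return nso and not defense

    runs = 100_000
    rate = sum(trial() for _ in range(runs)) / runs
    print(f"NSO keeps unique access: {rate:.3%}")
    # Analytically: 0.5 * 0.5**9 = ~0.1%, versus 50% with zero defenders.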
Now, one would think there are already more non-NSO security researchers than NSO has, who earn money from bounties and report issues to Apple. Yet NSO has a business model. How?
> Apple could hire nine NSO equivalent teams to do what NSO does.
No, they can't. If Apple could, they would deploy their war-chest billions to do so. People often make the mistake of thinking some objective is money-constrained, as if a place like Microsoft could have just thrown the most money at mobile in 2008 to build the best mobile OS.
The Israelis in these national-security-level positions work like their lives depend on it, because they literally believe they do. Good luck replicating that with a gazillion teams in cushy big-tech Silicon Valley positions.
Another thing I realized, but only after I could no longer edit my comment :D
NSO makes money reselling the same set of productized exploits. Their business model depends on no one else (Apple or other researchers) finding them. Selling an exploit outright gets you a single payday; selling a service lets them both charge market rate for exploits and keep doing so.
Honestly I wouldn't be surprised if they had a monthly subscription fee.
You're assuming Apple doesn't already have that. What we see in the news, etc. is an example of survivorship bias: you don't see the bugs that Apple finds in anything they haven't released.
For reference, Apple has four red team positions currently open[0], and I imagine these are rolling openings (i.e. recreated every 6 months or so for listing refinement).
It isn't just about NSO-level entities. Their products reach almost the entire human population. Every security researcher, and most of the good engineers, are looking at them or their customers as targets in some way. They're always outnumbered. Money and headcount get you only so far. Global reach = global attack surface with global adversaries, and not only from within the industry.
This is not how security works. Reasonable security can be achieved by compartmentalization, so you don't have to make all code flawless. See: https://qubes-os.org
macOS and iOS are heavily compartmentalized - but attackers are still able to get through.
Extreme compartmentalization through virtualisation is also insufficient, as it just becomes a matter of "is there a bug in the hypervisor", to which the answer is yes: every major VM system has had multiple escapes, and as another commenter pointed out, even your example of Qubes OS has had them.
I know how hard security is. I know how hard writing bug free code is.
Brushing that aside and saying “just do X and bugs don’t matter”, assumes that somehow the people implementing support for X are immune to the same problems faced by other developers.
We're working with a security definition that means "control of the system". That means escaping the compartment: for iMessage, that means escaping the sandbox. The most recent one on the PZ blog is a multi-part series, and I don't believe they've described how NSO gets from code execution to actually escaping the sandbox. If being able to escape the sandboxing means "not compartmentalized", then if anyone escapes the VM in Qubes, Qubes is also not compartmentalized. The difference is how extreme the compartmentalization is. macOS and iOS use sandboxing, which is much lighter weight than a full VM but not as strong a boundary. However, saying that they're not a form of compartmentalization is plainly false.
Every other virtualisation system also uses hardware virtualisation. They've been popped. I don't think it's unreasonable to suspect that Qubes has not had the same degree of offensive interest as, say, VMware.
Have a look at my link. This is the operating system which does it right: even if you are compromised, the compromised VM* is isolated from the rest of the system and is not trusted to begin with. You don't have to make zero mistakes.
*Yes, you do everything in VMs, with a transparent interface.
You do though. You can exploit flaws in the hypervisor to escape the VM and then you've bypassed the Qubes security model.
Here's one such example:
https://www.qubes-os.org/news/2017/08/15/qsb-32/
Don't forget how recently this was obvious to hardly anyone, though.
Probably only 10 years ago, I knew a lot of people who thought careful and conscientious use of encryption and security features could protect sensitive info from governments. That's not a very long time ago.
It's taken a while to sink in -- and to become widely known to its full extent: that nearly any government can probably trivially get access to anything, using commodity off-the-shelf surveillance software.
I fully agree there. Beyond software expectations, state-backed intelligence doesn't require any software (or hardware) security flaw to surveil you. If you are a target of interest to them, there isn't much you can do.
As you say, the only thing you can do in such a case is to do everything assuming you're being spied upon.
Lucky for the Mossad they are not building a competitor to the iPhone/iMac/MacBookPro/iPad etc...
They don't have to compete with Apple; they just have to find an interesting mistake in their code. On phones with decades of legacy firmware/modem/protocol code, that's very hard to reliably stop.
(NSO is not Mossad and Mossad doesn't use NSO afaik)
> With public key cryptography, there’s a horrible, fundamental challenge of finding somebody, anybody, to establish and maintain the infrastructure. For example, you could enlist a well-known technology company to do it, but this would offend the refined aesthetics of the vaguely Marxist but comfortably bourgeoisie hacker community who wants everything to be decentralized and who non-ironically believes that Tor is used for things besides drug deals and kidnapping plots.
The defense is to not implement slapdash features that careen into a memory ditch. I will repeat: leaving out functionality that you have not secured is the defense. Calling software "consumer grade" and not expecting it to be secure is uninformed hand-waving.
It should be possible to protect oneself. It is extremely difficult only because there are basically only two options if you want to have a smartphone: either iOS or Android. The immense power of nation-state adversaries is focused on just two targets.
And there are no commercial off-the-shelf security solutions that can "protect" you. You have to do your own security. This is why Snowden had to painstakingly teach the journalists how to use GPG to receive his cache. No other way would be trustworthy enough.
Name some non-consumer-grade software that's up to the task?
I think you're making excuses. Apple, owning the entire stack, has the least excuse of anyone. Given their market share they also have a lot of responsibility to be better.
Apple has tons of money, a crazy amount of money; they have a net income of dozens of billions every quarter. They have more money than most state-funded agencies. They just don't want to.
Do you even know what you are asking? You are asking for perfect code and hardware: not a single mistake on any line, anywhere in the process, taking into account every possible scenario and side scenario you can and can't think of.
A lot of commenters point to the heaps of money. But what do you buy with it? How do you build it up at this scale? Security isn't a metric you can reliably work by, like clicks, op/s, and sales. And beyond a very narrow frame it won't help their case as a producer of consumer products; rather, it makes it very difficult to work with state actors.
Raising bug bounties by an order of magnitude goes a long way. Apple's program is pretty good relative to industry norms, but all of the programs are well below black/gray-market prices for exploits, and most are below the cost to find, document, and submit an exploit. Google pays less than a month's salary for their most serious Chrome exploits, for example:
I think all this is good until an adversary decides to brick every iDevice, Android, Windows, MacOS, and Linux machine in the US. And by "brick", I mean overwrite firmwares, disable fans, and run at maximum load.
Sure, for the US, Russia, and China this is probably just true without nuance. For someplace like Israel (small but highly motivated and strong in both espionage and tech) it's apparently also true.
But should I not expect a FAANG-level company to defend itself from state-sponsored hacking if the state is Uzbekistan or Cambodia?
And if every state can get access to top-tier hacking by having an alliance with $MAJOR_PLAYER, why should we expect that doesn't extend to non-state actors? At which point, are we just giving up on security?
I think Apple absolutely can defend itself against state-funded hackers. Whether they choose to do so is a question of priorities I guess.
I agree, in the same way that Raytheon is "not America," but my interpretation of the parent comment was that the cosy relationship with the government made it effectively the same thing from the point of view of companies defending themselves.
Whether or not that's actually the case is an interesting discussion.
Granted, a state actor can (depending how important the state is to your business) compel you to not counter their attacks.
But that isn’t really a question of technology, and the parent comment was, I think, suggesting it’s an unrealistic expectation on the tech level for companies to beat back state actors.
That sounds narrow and simplistic to me. Apple is just consumer-grade hardware? How many politicians and CEOs of key American businesses use iPhones? Probably all of them. Is there some secret iPhone model for Donald Trump, hand-patched and reflow soldered by the NSA to remove any NSO vulnerabilities? Does the American government not know about NSO? Are they powerless to stop key American businesses from being hacked by a foreign state? I'm sure there's a lot more to the story that we'll only find out about with a well-placed FOIA in the year 2070 or so.
Not technically, but an Israeli cybersecurity company like this isn't exactly the two-guys-in-a-garage bootstrappy capitalist success story you're making it out to be. It is most likely filled with spooks, has deep connections to the Israeli intelligence services, and is full of ex-military. In authoritarian or semi-authoritarian regimes, spinning off hacking to a private company pays dividends, because it is the private company that gets blacklisted, which doesn't greatly affect diplomatic norms. It also allows Israel to sell exploits to regimes that might be hard to sell to via government agencies. After all, it was "private" business that sold them, not the government, wink wink. A bit like how so many "private" cybergangs in Russia have impunity to attack the USA/EU and sell and deploy ransomware there because, again, it's "not the government, it's criminals" - when in reality they're de facto arms of the governments there, or at least have strong ties to the government, pay them protection money, and often get marching orders from them.
Israel has neither an authoritarian nor a semi-authoritarian regime. From your analogy I think you are misunderstanding NSO and its relation to the Israeli government. NSO is not a civilian cover for government operations. It’s an Israeli arms manufacturer. Its foreign sales require approval by the Israeli Ministry of Defense, and as such can be facilitated by the government based on diplomatic considerations. This is not unlike other arms manufacturers in the US or in western countries in general.
Also be aware that the conscription law applies only to Jews & Druze. Arab citizens are not required to serve (for "reasons"), so if your car repair shop or shawarma joint is run by Arabs, then there are no veterans there.
They can volunteer and have their application approved or denied for "reasons." Note I'm discussing the mandatory conscription law only where what I said is entirely correct.
Well of course. Apple has only certified iOS to provide resistance against attackers with a basic attack potential. Why should it be any wonder that their security is inadequate against moderately or even highly skilled attackers?
Here we can see their certifications under the Common Criteria:
“The evaluator shall conduct penetration testing, based on the identified potential vulnerabilities, to determine that the TOE is resistant to attacks performed by an attacker possessing Basic attack potential.”
Which is consistent with the text of the Common Criteria standard on page 170:
Which by page 31 of the same document corresponds to EAL1 under the old EAL model.
To reach "resistant to attacks performed by an attacker possessing Moderate attack potential" would require conformance to AVA_VAN.4, 3 entire levels higher than their certification (corresponding to EAL5 under the old EAL model) and 1 level higher than any Apple, Google, Microsoft, or Linux system ever created and which has been deemed economically infeasible for them to ever retrofit onto their existing products as stated on page 38 of the same document.
I’m not sure Common Criteria has much to do with it. Companies only get CC certification so their devices or applications can be used by certain government organizations.
And so no company is going to target a cert level higher than the minimum they need to meet whatever business requirements are driving them to get CC certified.
And CC certainly isn’t a good reference for good security and cryptography engineering practices. It’s not bad, but it misses a lot.
To add: while Obama was told he couldn't have an iPhone[0], Trump apparently had two NSA-secured iPhones in 2018[1], so the basic OS is secure (and likely loaded with MDM, maybe even jailbroken to disable the Safari JIT or disable Safari entirely).
Similarly, rooting an Android phone simultaneously increases its attack surface and helps you tie down components that you otherwise couldn't without recompiling the OS.
Though, in the long run, I wonder if Androids can be more secure than iPhones.
None of what you posted has anything at all to do with the popular perception that iphone is especially secure, which is obviously what's referred to by "despite the hype".
I have no doubt Apple is perfectly capable of securing iOS and macOS against these relatively simple attacks.
I also have no doubt it has no intention of doing so. It honestly seems very naive to believe that a huge multinational corp would not allow state-level access.
At the top level, the vulnerability is in a parser for a compressed image format. The parser isn’t scriptable or programmable, but the code is subverted, exposing some very primitive logic operators that can be applied to image data. Specifically AND, OR, XOR and XNOR.
The attack then uses these fundamental Boolean operators to construct virtual circuits to emulate a primitive custom CPU architecture using only raw bitwise logic, and uses that to run a virtual machine implementing a bespoke Turing complete bytecode interpreter. This then runs the rest of the payload as a program that implements the spyware functionality.
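The construction sounds exotic, but the underlying trick is ordinary digital logic. As a toy illustration (in Python, and emphatically not NSO's actual code), here is a ripple-carry adder built purely from those gate primitives:

    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    def full_adder(a, b, carry_in):
        # One bit of addition, wired entirely from the gates above.
        partial = XOR(a, b)
        total = XOR(partial, carry_in)
        carry_out = OR(AND(a, b), AND(partial, carry_in))
        return total, carry_out

    def add(x, y, width=8):
        # Chain full adders bit by bit, like a hardware ripple-carry adder.
        result, carry = 0, 0
        for i in range(width):
            bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= bit << i
        return result

    assert add(77, 45) == 122

From adders you can build comparators, registers, and a program counter, and from there the bespoke CPU and bytecode interpreter described above.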
Security is very hard in companies driven by marketing. Even if "security" is a feature being marketed. Having security by design means moving much slower, having code reviews done by paranoiacs who care not one bit about features, and constantly thinking about reducing attack surfaces.
Paranoiacs will start with "do we need to handle any user messages at all? that's untrusted data!", much less "do we need to support ALL SORTS of image formats, including some handled by obscure, obsolete libraries that no one truly knows how they work?".
The fact that the messaging application allowed untrusted data to escape the sandbox via obscure data-processing libraries they didn't write themselves is proof of a lacking internal software risk assessment process. I hope they learned their lesson and code is now reviewed by people not incentivised to rubber-stamp everything the moment it lands on their desks.
> I hope they learned their lesson and code is now reviewed by people not incentivised to rubber-stamp everything the moment it lands on their desks.
Almost certainly they did not learn any lesson. Like you say, they're marketing security. If they had made investments we would have heard of it (big hires, security ramp-up, etc).
Anecdotal, but it seems like Apple's DNA is still very hardware-focused. People in my friend group have all interviewed there, but none of them accepted offers because the compensation would never match other Big Tech.
"Anecdotal, but It seems like Apple’s DNA is still very hardware focused. People in my friends group have all interviewed there but none of them accepted offers because they would never match other Big Tech."
It seems most of the real magic, long-term R&D ROI, and difficult hires now rest in the switch from OEM to in-house-made chips. I would be interested in others' perceptions of this.
A modern phone is built on tech descended from chips like the Z80, but the number of chemical engineers, electrical engineers, process engineers, and so on involved has changed dramatically.
We have large teams of people becoming so specialized that the software at the top of the stack is many times more abstracted than it was, year after year.
Security is hard. Just like quality, it takes a back seat more and more often, since both generally only hurt ROI.
Maybe security could be enabled by a toggle where, like airplane mode, you restrict your phone to a smaller set of features but with better security. Of course, the difficulty is that once you've said that, the feature set will be much reduced, because you've exposed your responsibility. But then security is an additional feature.
Disabled code is still an attack surface, sadly - and the false sense of security might make it a net negative. A different OS build would be better.
Sadly, the biggest surface, aside from the users themselves, is likely still the web browser. With browsers being nothing short of mini OSes, I am not sure one can make a secure modern phone. If you have secrets, design your opsec around the assumption that your phone is easily hackable.
They have moved image parsing to a separate restricted process. That's way overdue. Wouldn't hurt to have these parsers rewritten with more attention to memory safety, though.
I think the key question is why Apple considers it okay to make moves like dropping the headphone jack for merely cosmetic reasons, whereas dropping obscure 30-year-old formats that do all kinds of psychotic stuff (like run code) for security reasons is a much tougher sell.
Apple was going to sell hundreds of millions of AirPods with or without a headphone jack. We may never really know what the final straw was, but I’d be willing to bet that Jony Ive & Co. were looking for a suitable excuse to drop it for years.
It’s one more component that limits how thin the device can be, it’s an ingress point for water (Apple was never going to add a flap), and if you’re the kind of person who digs symmetry then a giant hole on one side looks ugly.
Luckily for them, the market came their way. Bluetooth headphones were massively outselling wired headphones by that point, and it’s one of those areas Apple loves to work in - a space where they can add some proprietary magic (H-series chips, instant pairing, auto source switching) that makes what would otherwise be a relatively ordinary product into a remarkable one.
I don't know whether wireless earphones would have sold like they do now if Apple hadn't removed the 3.5mm jack. There were many phones with a 3.5mm jack that were waterproof without a flap.
I keep seeing comments about how NSO is Israeli state funded. I think that's not entirely incorrect (I am Israeli, and know people who work at NSO and people at 8200) - but the relationship is quite sinister and not something the Israeli public was aware of.
As in: NSO's ties to the Israeli government were filled with corruption, and in exchange NSO's own technology was used on Israeli citizens.
There is (quite a big) public inquiry and people from NSO will likely (finally) go to jail.
Also note the US government has been using companies like NSO to push its own political agenda, since it lets them sell this sort of technology to allies (like Saudi Arabia) without scrutiny.
This is following two other scandals relating to NSO in Israel that are getting quite a lot of press here.
For example, the inquiry suggests that police members used Pegasus to spy on opposition leaders who opposed Netanyahu, à la Watergate.
To be clear, this _is_ a Watergate-level scandal here, and you'd be hard-pressed to find a news website in Israel not mentioning NSO on its homepage.
I don't understand why they can't compare Android vs iOS. Can't people use Signal instead of iMessage? If they are criticizing iMessage, then shouldn't the default messenger on Android be used as a fair comparison?
My $0.02 on this topic: prevention is nice, but not good enough against a persistent or unknown threat. Mobile devices lack EDR coverage; even enterprises install an MDR and forget about it. I want every child process, network connection, and interprocess call that falls outside the baseline logged. Compromises happen; discovering them months later is the big issue at hand. If you're a journalist or dissident, then you or your benefactors should be able to purchase a mobile endpoint security monitoring service that will actively monitor for exploitation and prevent known threats based on good intel. Crowdstrike is the only company that comes close to supporting this.
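As a very rough sketch of the baselining idea, desktop-only, using the third-party psutil library as a stand-in for real EDR telemetry (mobile OSes don't expose these hooks to third-party apps, which is exactly the gap):

    import time
    import psutil  # third-party; stand-in for real EDR telemetry

    # Record what "normal" looks like, then log anything outside it.
    known_procs = {p.info["name"] for p in psutil.process_iter(["name"])}
    known_remotes = {c.raddr.ip for c in psutil.net_connections(kind="inet")
                     if c.raddr}

    while True:
        for p in psutil.process_iter(["name", "ppid"]):
            if p.info["name"] not in known_procs:
                print(f"new process: {p.info['name']} (parent {p.info['ppid']})")
                known_procs.add(p.info["name"])
        for c in psutil.net_connections(kind="inet"):
            if c.raddr and c.raddr.ip not in known_remotes:
                print(f"new remote endpoint: {c.raddr.ip}:{c.raddr.port}")
                known_remotes.add(c.raddr.ip)
        time.sleep(5)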
Even if you install Signal, you'll still have iMessage installed and processing messages sent to your Apple ID.
iMessage on iOS is part of the OS, and does some message handling in kernelspace. Yes, it has BlastDoor, but even since BlastDoor sandboxing was introduced, there have been multiple iMessage CVEs which led to a full device compromise. See this Project Zero post for more about iMessage [0].
What I'm trying to say here is that on an iPhone, no matter what messenger you use, some iMessage exploits can pwn that messenger anyway, since iMessage cannot be disabled and has privileged OS-level access.
On the other hand, Android does have "google messenger" or whatever the OEM installed, and it may not be uninstallable, but it's sandboxed away from the OS in the same way every other app is. There have been exploits based on kernel-space processing of media attachments on Android too, but those actually did depend on the app being used, so if you switched SMS apps to one which rejected such an attachment, or otherwise processed it differently, you actually could be secure. Said another way, on Android you actually can turn off the default messenger and use Signal for SMS. On iOS, that's not so realistic.
Well, you can't necessarily, because who's to say what various methods they've discovered. However, taking the most famous recent example, which relied on an unknown sender transmitting a GIF containing the compromise: Signal does have some additional protection in that it doesn't receive messages from unknown senders by default. Rather, the recipient must accept the message explicitly.
Yeah, if you can inject into the libraries on the system, the app isn't too important: just log whatever is sent to UITextField or whatever, plus some info about the process; or be lazy and just take screenshots.
Yes. That would be a comparison I am interested in. Are there vulnerable and exploitable system libs and components on iOS? How does that fare compared to Android?
I don't, but the article was saying the iPhone is less secure and that Signal has better restrictions than iMessage on who can text you. So use Signal on the iPhone? How does iMessage reflect badly on the whole OS, especially when compared to a non-default app like Signal?
Well, the article said that in their sample, fewer Android phones showed evidence of attack, but they also said that Android does less event logging and they could not know if there were other attacks that they did not catch. Not really a definitive answer.
> If you're a journalist or dissident then you or your benefactors should be able to purchase a mobile endpoint security monitoring service that will actively monitor for exploitation and prevent known threats based on good intel.
I am actively building one such open-source app (for Android). We are halfway there. One of my friends was editor-in-chief at a big media company, and his frustrations made it clear to me that they were sitting ducks. Securing folks in high-risk professions is not easy, at all, but one ought to start somewhere. And it is clear the eventual solution needs to be external to the phones themselves.
Easier said than done because of the attack surface, but I do think the industry will respond, given Androids are the most widely deployed, always-connected computers in use out there.
I don't think just securing the hardware is enough (though even that isn't fully done); the software itself remains hard to tame. Add to that the fact that folks (including attackers) can "install" arbitrary things on their Androids, and the problem gets only worse.
Then there are the social and legislative aspects as well. For example, no one can raid your home without a search warrant; that's illegal. A similar thing needs to exist in the digital space. This would at least thwart illegal spyware, if not the law enforcement themselves, from snooping on people's digital lives.
Okay so the real issue here is that people want convenience, which requires "N" number of features, which results in an attack surface of "too large to make it secure".
> I want every child process, network connection, and interprocess call that falls outside the baseline logged.
This would be a nice thing to have, yeah
> If you're a journalist or dissident then you or your benefactors should be able to purchase a mobile endpoint security monitoring service that will actively monitor for exploitation and prevent known threats based on good intel. Crowdstrike is the only company that comes close to supporting this.
lol no. Commercial EDR software sucks whether it is Crowdstrike or Carbon Black or Cylance.
> lol no. Commercial EDR software sucks whether it is Crowdstrike or Carbon Black or Cylance.
That's like saying commercial firewalls suck. Do you mean the UI, perhaps? They all allow you to specify detection rules, like with an IDS or firewall, and the out-of-the-box stuff is good enough to catch APTs, though obviously not perfect. Although of those, I can only recommend Crowdstrike Falcon; Defender ATP is also decent. Red teamers have bypasses for some; it's a cat-and-mouse game. But can you find one instance where an actual threat actor intentionally evaded an EDR?
I would quote myself from another thread where I said that Apple is only about design. Since we're talking security, I always see Apple as exploiting its very low desktop market share and nil server market share, but in terms of practical security I see posts like this: https://www.forbes.com/sites/adriankingsleyhughes/2012/04/25...
But I also like to share these kinds of posts when I want to make an example of the absolute lack of security knowledge at Apple. Having the guts to release to the public a system where the password/shadow and sudoers files are world-writable just shows that they really have no idea what they're doing.
And I wouldn't consider myself a fanboy, or dismiss posts like mine with "HN is notoriously against Apple". I would more likely say that the HN audience is a bit into tech, and if you are into tech and understand how systems work, then you can't just ignore these kinds of things when you see them, so you forcefully end up trying to dodge a system like Apple's.
I wouldn't exclude it; it worked well. Let's not forget that they eventually still unlocked the phone with a similar tool anyway. It's possible these tools simply weren't as readily available in 2015.
I was only following the mainstream news headlines back then... not paying super close attention... but I failed to read that part. I assumed they had never been able to access it, to this very day.
Along with 99% of the rest of North America who can only read so much about computer security I am sure.
> On March 28, the government announced that the FBI had unlocked the iPhone and withdrew its request.
> The unidentified method used to unlock Farook's phone - costing more than $1 million to obtain - quit working once Apple updated their operating system.
Nothing is perfectly secure, it's just a question of how many resources are required. Clearly, if the FBI wanted to do it, they could spend enough resources to break into any consumer phone.
If Apple were adding back doors willy-nilly, why would they execute a parallel marketing campaign centered on privacy at the same time? The more likely reality is that they are extremely upset this is happening and being publicized. My personal take is that it would be impossible to add back doors to Apple products without someone blowing the whistle. Why does everything have to be a conspiracy these days?
We are now in an age where people are talking about building their own chips and phones because they don't trust the commercial products to be secure. I have a feeling the Apple marketing org is less than thrilled about recent news. Also, Google is no doubt looking for ways to capitalize on this. They already have the reputation of having security geniuses on par with the NSO Group on their payroll.
Also, a lot of these exploits are buffer overflows. That's a terrible back door, because anyone could find it. If there were back doors, they would be things like sharing the seeds to the RNG that generates private keys, or just giving server-side access. But they aren't doing that, because it's dumb. And if the government already had back doors, why would they purchase external tools? But as ever, the best argument against the conspiracy is that it would be impossible to keep secret.
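To illustrate why a leaked RNG seed is such an effective back door compared to a buffer overflow (a toy sketch; real key generation uses a CSPRNG, not random):

    import random

    def keygen(seed: int) -> int:
        # Deterministic given the seed -- that is the whole back door.
        rng = random.Random(seed)
        return rng.getrandbits(256)  # stand-in for a private key

    device_key = keygen(seed=123456789)
    attacker_key = keygen(seed=123456789)  # attacker was handed the seed
    assert device_key == attacker_key      # identical "secret" keys

    # Nobody fuzzing the binary ever finds this; it leaves no crash and
    # no memory corruption -- unlike a buffer overflow.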
This is a failure of the security model underlying the operating system design.
Operating Systems exist to securely multiplex the resources of a system and make them fairly and reliably available to the user of that system. In order to do so, the first order of business is that the system should observe the principle of least privilege. That is, it should grant no privileges by default, and only grant access to the resources required for a task to complete.
iOS, Android, Linux, Unix, Windows, pretty much any modern operating system makes zero provisions for enforcing the principle of least privilege.
Analogy: You owe someone $5 for an ice cream cone. Do you hand over your wallet and all of your 2fa credentials to that person, and hope they only take $5? No, you reach into your wallet, and extract a $5 bill. That is a capability that gives the holder $5 of spending power.
If you don't have a $5, you could give them $20, but your maximum loss would be that token. You don't have to worry about losing your title to your car during the transaction.
There is no equivalent mechanism for the average user to securely tell the OS to run this program with these resources, and trust that nothing else will happen and no hidden surprises will persist.
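The wallet analogy translates to code almost directly. A minimal sketch (all names are illustrative, not any real OS API):

    class InsufficientFunds(Exception):
        pass

    class SpendCapability:
        # Grants the holder at most `amount` -- and nothing else. The
        # holder never sees the Wallet, so the worst case is losing this
        # token, never the car title.
        def __init__(self, amount: int):
            self.remaining = amount

        def spend(self, amount: int) -> int:
            if amount > self.remaining:
                raise InsufficientFunds(amount)
            self.remaining -= amount
            return amount

    class Wallet:
        def __init__(self, balance: int):
            self._balance = balance

        def issue(self, amount: int) -> SpendCapability:
            if amount > self._balance:
                raise InsufficientFunds(amount)
            self._balance -= amount
            return SpendCapability(amount)

    wallet = Wallet(balance=500)
    five = wallet.issue(5)   # hand *this* to the ice cream vendor
    five.spend(5)            # fine; five.spend(20) would raise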
---
Note that the crude "grant access to X" flags present in some phone OSs are NOT capabilities, though they might have that same overloaded name.
---
Note that wallets aren't seen as impractical, and people have been using them for a very long time. If you're tempted to say that an OS can't reflect that simplicity and ease of use, you just need to give people a few iterations to get it right.
---
We need capability based security. We'll keep having these stories until we get it deployed widely.
> There is no equivalent mechanism for the average user to securely tell the OS to run this program with these resources, and trust that nothing else will happen and no hidden surprises will persist.
You're just not familiar with them. iOS has pervasive sandboxing allowing for fine-grained access control to system resources. Most sandbox profiles start by denying everything by default and then add in things deemed necessary for the program. The issue is that 1. sometimes the things added in are too broad, or 2. the sandboxing mechanism itself is broken via an exploit.
Attempting to run untrusted code on trusted hardware is a fool's errand. Sandboxes do not work. Hardware privilege separation does not work. It is a lost cause. We need to start separating devices, and stop normalizing js.
I find it extremely plausible that there is no sandbox which can prevent programs from escaping, and that the same is not true of physical separation. (Obviously even physically separated machines are prone to problems too; but I believe it is possible to resolve them.)
Regarding 'don't be nihilistic', I really don't think that's what I'm doing; I'm suggesting a path forward, a different course of action. It doesn't seem any more nihilistic to say 'let's move away from shared hardware' than to say 'let's move away from md5'.
> I'm suggesting a path forward, a different course of action.
It's not a practical path for most people. It's expensive and very cumbersome. You're effectively suggesting we give up. Good sandboxing is much, much better than physical separation, because it's actually doable. I run Qubes as my daily driver. I can't imagine managing a bunch of machines.
Also, Intel ME is disabled and neutralized on my machine, Spectre is patched (and hyper-threading is disabled on Qubes). Such things are also extremely rare.
I don't really buy the arguments given there. All but the first of the 'cons' listed regarding physical separation apply equally well to virtual machines. There is a link to a longer paper; I will read it later; perhaps it has more compelling arguments.
> expensive
Most people are using phones and computers that cost hundreds of dollars. The cheapest Raspberry Pi is, what, $5? Getting a few of those would not be prohibitive for many people.
> very cumbersome
That is true, but we can do something about it. How cumbersome was it to habitually run software in VMs before Qubes?
> The cheapest raspberry pi is, what, $5? Getting a few of those would not be prohibitive for many people
I have been informed that Raspberry Pis are hard to come by and are now retailing for close to $50. They have historically been cheap, and many other electronics are currently also quite expensive due to extenuating global factors; it seems likely that they will return to historical prices within a few years.
Most of it isn’t, at least officially. Sandboxing was never really exposed as a public API for third parties to use, so Apple doesn’t really discuss it in much detail.
> Analogy: You owe someone $5 for an ice cream cone. Do you hand over your wallet and all of your 2fa credentials to that person, and hope they only take $5?
We do this all the time with online transactions. Just because the website says $5, in principle when they charge your bank they could ask for as much as they like.
In many cases (I guess it depends on the payment gateway), I get a push notification which opens a confirmation dialog in my bank's mobile app, where I see the merchant and the amount and have to confirm it.
I think this is most common with European payment gateways.
Sure, nowadays there are alerts that will help you spot any discrepancy more quickly, but it doesn’t stop the discrepancy occurring and we’ve been running blind on this for a very long time.
Which points to a design problem with credit card transactions. They should be able to generate a one-time-use number for giving X money in a single transaction. I've heard of some experiments in that direction, which is hopeful.
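A sketch of how such a one-time number could work: a token cryptographically bound to one merchant and one amount, verifiable by the issuing bank and useless for anything else. Purely illustrative; real schemes like network tokenization are more involved:

    import hmac, hashlib, secrets

    ISSUER_KEY = secrets.token_bytes(32)  # held only by the bank
    SPENT = set()                         # replay protection

    def issue_token(merchant: str, amount_cents: int) -> str:
        nonce = secrets.token_hex(8)
        msg = f"{merchant}|{amount_cents}|{nonce}".encode()
        mac = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()[:16]
        return f"{merchant}|{amount_cents}|{nonce}|{mac}"

    def charge(token: str, merchant: str, amount_cents: int) -> bool:
        m, amt, nonce, mac = token.split("|")
        expected = hmac.new(ISSUER_KEY, f"{m}|{amt}|{nonce}".encode(),
                            hashlib.sha256).hexdigest()[:16]
        ok = (hmac.compare_digest(mac, expected)  # token is genuine
              and m == merchant                   # bound to this merchant
              and int(amt) == amount_cents        # bound to this amount
              and token not in SPENT)             # single use only
        if ok:
            SPENT.add(token)
        return ok

    t = issue_token("icecream-stand", 500)
    assert charge(t, "icecream-stand", 500)        # first use: accepted
    assert not charge(t, "icecream-stand", 500)    # replay: rejected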
The infamous "SSL Added and removed here! :^)" slide[0] should have been proof to everyone that no one is safe from $THREE_LETTER_AGENCY or their international equivalents. The ubiquity of standardized smartphones at the highest levels of government means that unfathomable budgets are dedicated to cracking them with the urgency of national security as their motivation. The world's security blocks are incentivized to share their cracking tools within their spheres of influence, and the trivial margin of reproduction for software means that these tools escape from their security apparatus and proliferate like a virus on the outside. They're simply too useful to remain in the dark. If cracking your phone is possible and a cyber power wants to know what's on it, then your phone is as good as unlocked.
Not sure if that's the best example, because by my understanding Google made immense efforts to close that hole, and now always treats the network as if there are hostile actors on the line. It's pretty much a success story of a civilian company against the NSA.
(I make no comment about whether the NSA has found other ways into Google, or whether Google has willingly let them in; I don't know the answers to those questions.)
That is an extremely naive viewpoint. When the NSA develops a capability they don't just sit around spinning in their office chairs until it stops working. There is constant development effort to ensure continued access to the information they need.
It's not immediately obvious to me that diversity is better. A chain is only as strong as its weakest link. If a foreign state has access to two phones with a secret on them, each running a different OS with different vulnerabilities, that just gives them a bigger attack surface.
Similarly, if there are 5 sophisticated attackers after your secrets, some may have cracked one phone OS but not the others. The more different phone OSes you use, the higher the chance that more attackers have exploits for at least some of your phones (a toy calculation below makes the point).
Also, using more phone types complicates training and increases the work you must do checking for vulnerabilities, diluting your efforts.
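Here's that toy calculation, with completely made-up numbers (the 30% per-OS exploit probability is purely illustrative), showing why spreading one secret across more OSes can hurt:

    # Assume each attacker independently has a 30% chance of holding a
    # working exploit for any given phone OS. Invented number, for shape only.
    p_exploit = 0.30

    def p_at_least_one_cracked(num_oses):
        # Chance a single attacker can crack at least one of your phones.
        return 1 - (1 - p_exploit) ** num_oses

    for n in (1, 2, 3):
        print(f"{n} OS(es): {p_at_least_one_cracked(n):.0%}")
    # 1 OS(es): 30%
    # 2 OS(es): 51%
    # 3 OS(es): 66%

If the same secret lives on every phone, each extra OS is another independent way in.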
You are implying that phone models for top officials are chosen at random.
Better diversity would also mean more choices, which means some devices could be tailored to the needs of certain types of consumers, like business and government users. They'd get fewer GIFs to send, but their software would be developed by paranoid people for whom "do not leak any secrets ever" and "no RCE" are the first and only sprint goals.
So basically, right now the choice is between a bad alternative and an even worse one.
Asking for more choices is an important proposition, I'd say
It gets different when you're talking about denial-of-service attacks. Imagine everyone drove cars from the same brand, and one day a hacker discovers a way to make them all stop. Then society shuts down. If 20% of cars stopped working, it's still a major catastrophe, but you can work around it somehow.
If you have robot armies or something, it gets even more dangerous to put all your eggs in one vendor's basket.
Could you elaborate? I myself can only guess at what you mean by "diversity to security".
I can think of:
1. Develop software in the open.
2. Teach security and cryptography widely using openly-developed syllabus.
3. Encourage parallel initiatives.
I don't think you are referring to racial or gender or cultural diversity.
Evolution has the benefit that it can develop in parallel. We only have limited manpower for our creations, so it's entirely possible that focusing our efforts produces better results.
Besides, tens of thousands of separate software companies can easily develop in parallel.
There's only a bottleneck when you assume that this stuff has to come from a BigTech giant. The same mistake people made with PCs and Windows in the 1990s.
Focussing on this article (from 2021-07) now doesn't make sense. This topic (Pegasus) is getting attention now because the NYTimes just published a fairly major investigation of NSO, Pegasus, and related politics. (Maybe WaPo wants to piggyback on that?)
The Battle for the World’s Most Powerful Cyberweapon (2022-01-28)
A Times investigation reveals how Israel reaped diplomatic gains around the world from NSO’s Pegasus spyware — a tool America itself purchased but is now trying to ban.
Why can't Apple build a security team like NSO's (in fact a much larger one) to find these vulnerabilities?
If others can, so can Apple.
If it's based on resources, Apple is far ahead (even large governments don't put that many resources into one application, let alone small companies).
In fact, Apple's valuation is incomparably larger than NSO's (and Mossad's, for that matter).
And why don’t US politicians protect US citizens?
If there is something else at play, it would be good to highlight it at some point. But I doubt it will be acknowledged publicly until decades later.
If the cost of developing an exploit is lower than the market value of selling that exploit (either raw or productized), then the platform will be broken.
All companies can do is increase that cost, ideally to infinity (i.e. unbreakable). There are numerous ways to do that, and at this point I think it's reasonable to believe that the limiting cost is time rather than money.
There's only a finite amount of engineering time available, and deciding how to spread it is hard. You could say "everyone should be working on security", but that doesn't work, because then people don't upgrade (I recall multiple articles saying that people were updating just to get something as trivial as new emojis, but obviously other big features matter as well).
You could say “rewrite the unsafe code in a safe language”, which is not something that could be done in any reasonable amount of time, especially while retaining ABI compatibility.
So Google, Apple, MS, etc. try to make the right trade-offs and improve the overall security posture of their platforms, but as long as there's a market of organisations with functionally unlimited budgets, the best they can do in the near future is increase the cost.
Security of a platform is FAR more complex than “a numbers game”
State actors are willing to spend near limitless money for a vuln with zero return on investment.
That's a common misconception: a government may technically have infinite money, but they're still weighing the costs of things. At some point, sending people to plant bugs in someone's house (for example) becomes cheaper, or 24-hour surveillance becomes cheaper.
The goal of the defender is to push the cost of attacking the software above the cost of doing something else.
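A toy version of that comparison, with all costs invented purely for illustration (no real figures implied):

    # The attacker picks the cheapest option that gets the data.
    options = {
        "zero-day exploit": 2_000_000,  # develop or buy an exploit chain
        "physical bugging": 150_000,    # send people to plant bugs
        "24h surveillance": 300_000,
    }

    cheapest_alternative = min(
        cost for name, cost in options.items() if name != "zero-day exploit"
    )
    # The defender has "won" on the software front once hardening pushes
    # the exploit cost above the cheapest non-software alternative.
    print("software attack still attractive:",
          options["zero-day exploit"] < cheapest_alternative)  # False

Once the exploit route costs more than the alternatives, the defender hasn't made the target safe, but they have pushed the attack off their platform.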
There are two kinds of security. Security against scammers, and security against governments.
I just wish there were industry standards for basic security; I don't care about protecting myself against governments (yet).
There are very few people who can be targeted by governments, so I will always be suspicious of anyone who can evade government surveillance, unless they're a journalist, of course.
I think the major driving force is machismo and competition. People I've met who are into this sort of thing have an image of themselves as Matrix-style hackers. It's easy for business types to exploit that.
Be the best hacker working with the best hackers in the world.
I mean, there are people who work for ideological reasons instead (hackers working for the FBI or MI5 despite low pay, for instance), and I'm sure there are people in Israel who take this job to serve their country and such, not just for the salary.
IANAEG (...Not An Evil Genius), but it seems safe to assume that a lot of folks in that line of work don't share much of your worldview or value system, so they don't need to do any "rationalizing" before doing such things.
(Actually, you might want to read a few works of fiction which feature well-written Evil Geniuses. Or some "I was a CIA agent..." biographies.)
".. Developing technology to prevent and investigate terror and crime"
I guess that their initial intentions were good, but a lot of clients (governments) abused it. It's still their fault for not acknowledging this or putting the brakes on these kinds of actions.