That right there tells me that we as "the tech community" are way too okay with this sort of application of the tech. The tech we're all so convinced will "make the world a better place." /s
It looks like NSO is backed by the Israeli government. They say their software is only sold to governments that have been previously vetted, but the reality is that most of the time they sell to authoritarian states which monitor and persecute people opposing the regime.
The way this works is that in addition to the more colorful clients, you absolutely need to make sure that you have a sufficient number of clients among law enforcement and security services in countries with a decent(-ish) track record regarding human rights. This way, your products and services are not obviously illegal. You can even tell your employees that your products and services are saving lives because it's actually true.
This strategy mostly works because the major operating system suppliers refuse to implement requested lawful intercept solutions for their consumer products. Instead, we end up with companies that try to fill the gaps, making a business of exploiting security flaws. It's possible for the OS vendors to completely dry this swamp, by offering competing services to law enforcement using the interfaces they already have (automated software updates, for example). The reputable clients would migrate rather quickly. These companies would be left with just the shady clients, making it much more difficult to justify their continued existence.
The OS vendors refuse to implement lawful intercept capability because there is no such thing as a lawful intercept capability. There is only intercept capability for any purpose because ROM bootloaders and secure enclaves cannot vet the lawfulness of a request to subvert their owners. You can make a phone relatively secure against people trying to break into it, but only if it has unique access keys for the owner. If you give any government a second key for intercept capabilities, that key will be a single point of failure for the entire system. Eventually it will leak and your phone password will be effectively useless.
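To make that failure mode concrete, here is a toy CryptoKit sketch (illustrative names only, not any real vendor design): each message is sealed once for its owner and once for a hypothetical shared escrow key, so a single leak of the escrow key opens every message at once.

```swift
import CryptoKit
import Foundation

// Toy model of the "second key" problem; names are illustrative.
let ownerKey = SymmetricKey(size: .bits256)   // unique per device owner
let escrowKey = SymmetricKey(size: .bits256)  // one hypothetical "lawful intercept" key for everyone

let message = Data("my private message".utf8)
let forOwner = try! AES.GCM.seal(message, using: ownerKey).combined!
let forEscrow = try! AES.GCM.seal(message, using: escrowKey).combined!

// Each ownerKey protects exactly one device. The single escrowKey protects
// every device at once, so one leak of it breaks the whole system:
let recovered = try! AES.GCM.open(try! AES.GCM.SealedBox(combined: forEscrow),
                                  using: escrowKey)
print(String(data: recovered, encoding: .utf8)!) // "my private message"
```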
I don't even need to invent a scenario for this: you can buy the TSA master keys off Amazon right now. The only reason why it's not a huge problem is that TSA locks are a special thing you buy and use solely for airline luggage that is already in TSA custody anyway. If you use TSA locks on anything else, however, you're just asking for it to be stolen because the locks don't actually provide any security.
The shady clients will get their hands on any intercept key provided to law enforcement, because it's legally untenable for Apple or Google to provide intercept capability to only some of the countries they operate in. E.g., if you give the US and UK a decryption key, you also have to give it to Saudi Arabia[0]. Hell, in some countries the shady and legit clients are part of the same government - e.g. you can't give the key to just the FBI but not the NSA or CIA.
[0] The Saudis have one very big lever they can use to force the West to do what they want: gas prices.
The US is the largest oil producer; second is Saudi Arabia, then Russia. However, the US is also the largest consumer, and its production doesn't meet its demand, so it has to import from other countries like Canada and Saudi Arabia.
So Saudi Arabia most definitely does have a lever, and so does Russia since the rest of the world including US allies like Japan, South Korea, Australia, NATO countries depend on their lovely black gold to have functioning economies.
I want to add that even if the US produced more oil, we currently don't have enough industrial refining capacity for the type of crude that we produce to meet our demand, so we would still need to rely on foreign imports.
> [...] so does Russia since the rest of the world including US allies like Japan, South Korea, Australia, NATO countries depend on their lovely black gold to have functioning economies.
Have you been following the news for the past two years? Russia's sanctioned up the wazoo. No NATO country is buying Russian oil. India is now their number one customer.
There is truth in that Europe isn't buying directly from Russia. However, plenty are buying refined oil products from India (and possibly others) where the source is Russian crude oil.
If the US were like Saudi Arabia, exporting half of its oil and able to supply most of the world at competitive prices, Russia would have really felt the sanctions.
But right now Russia doesn't feel the sanctions. They're more isolated, and Putin's propaganda has somewhat worked at making the general population anti-West and supportive of the Ukraine invasion.
That is a slippery slope though, because the OS vendors could offer Law Enforcement everything today, and there will be a special request made for a little something extra tomorrow.
The ties to government are a red herring. Hacking into people’s private phones and computer systems is generally immoral and illegal.
It generally continues to be immoral and illegal when governments do it. Except it also becomes more outrageous, because governments are supposed to protect us from this sort of thing.
I don't see why the government doing it would make it more outrageous. If democratically elected leaders pass a law outlining when and how the cops should be able to access private devices, a judge looks over a specific case and signs a warrant, the cops use a hacking tool to catch a terrorist and the evidence is presented in court, this seems like the most excusable use of hacking tools that I can think of.
The government is given power over people in order to protect us from other people, and this is one tool to do it. They have cops with guns and soldiers with tanks, they can break in, search and seize, they can lock people in prison. All of these things are tools, and it's the way they're used that decides what's immoral or outrageous.
The bigger problem here is that a private company has these tools and can use and sell them with no oversight.
It does if we grant the two the same assumptions. If we assume that serious, unjustified harm would occur by failing to act, and they are in a reasonable position to act… then I’d say a private company is equally justified in doing the same thing. However, you’re assuming the government is justified merely because it’s the government.
Private companies aren't, but in certain circumstances private citizens working for those companies are. In the US (except perhaps Georgia?) if a crazy guy comes into your workplace waving a knife around, you're allowed to disarm him and pin him down on the ground.
Depending on the circumstances, absolutely. Assuming that serious unjustified injury or death would occur if they failed to act, there should be some legal window in which they’re allowed to prevent the harm. Private companies (and individuals) should not be required to stand by helplessly while people are hurt.
Indeed, legally, private individuals and companies are allowed to act in emergencies. For example, I generally should not break into my neighbor’s home. However, I am legally allowed (and morally obligated) to forcibly enter their residence if their house is on fire, or they’re being attacked by a burglar, etc. and I am able to prevent some of the harm.
Of course, if we assume we’re talking about situations where the government needs a warrant, the legality becomes more complicated. At what point does something become an emergency? I would say it’s not an emergency if there is time to inform the government and to let the government prevent the injury. If we assume the government is unwilling or unable to act, then the window for action should expand by some measure.
Exactly. Indeed, in Phoenix v State, 455 So.2d 1024, the Florida Supreme Court implies that a private citizen could request and receive a warrant to arrest a felon. They say the citizen could be excused for failing to obtain a warrant by proving the person arrested was actually guilty.
This is obviously a bad idea. Private companies or individuals having the power to arrest people because they want to? Look at the last few years of history in the US, where multiple experienced and distinguished (at least by resume) members of the US government, senators and reps, tried to subvert an election. Dozens of lawyers told them it was illegal; we have their emails and texts saying so. That group still took many illegal actions, lied about it, and tried to cover it up. And they still deny any problem with their behavior and choices.
Private companies having arrest rights is just a nonstarter of an idea (putting it kindly).
Maybe it depends on the country, but private companies can't generally get warrants to infringe on people's rights, afaik. If justified is interpreted as 'legally justified', then it would make sense that only government agents could be justified in acting this way. Of course, government agents are known to operate outside the law as well.
I wouldn’t assume that private companies and individuals cannot get warrants.
However, they look very different. The major distinction is that when a private party requests an injunction allowing them to e.g. trespass on their neighbor’s land, the court will require notice and a hearing for the defendant. So, if a chemical plant needs to do earth works on a neighbor’s land to prevent a collapse, etc. the judiciary may well issue an order requiring the neighbor to let the company enter.
Frankly, notice and hearing should probably be required for some criminal warrants too. I can think of a few indictments and arrest warrants that have recently been issued where there is a genuine question as to probable cause and the alleged illegality of the conduct. It’s not fair for people who are not a flight risk to be arrested (and often imprisoned) with no opportunity to defend themselves.
Devil's advocate: we have a reasonable expectation that governments using due process to obtain warrants for criminal investigations have a right to break and enter into digital property or wiretap to catch and prosecute malefactors.
How far do you really expect any tech outfit to vet the legitimacy of the warrants issued?
How about the legitimacy of the government? Most of the abuses are governments which have a long history of abusing their power and it wouldn’t be unreasonable to say that entire countries should not be trusted with sales.
Legitimacy based on what? Recognition by the UN? Lots of governments, even predating the UN, have long been accused of rights abuses. How many people have to be affected, and proven so on what basis, before the infractions mean a government forfeits the moral right to be trusted by NSO? I'm asking people to really grapple with this.
My point being that there’s precedent for restrictions - we don’t sell nukes to anyone, and the companies which make advanced weapons systems have to get things like ITAR approvals. What would be especially powerful would be revocation: if a country is found abusing their access to this tool, they are blocked from purchases of any sort for a decade. Unfortunately, given Israel’s current politics it’s extremely unlikely that anything would happen since there’s no way to write a policy which would continue to allow their own usage.
The US has their own version, called the NSA. Available to hire via really simple framing. Guaranteed whoever is caught will be in prison for years just to get a trial to prove they're innocent.
Oh, but you see, NSO targets only "terrorists and criminals", so if you're a law-abiding citizen with nothing to hide, there's nothing to be concerned about. Right? It's not like there's any regimes out there where, say, casual investigative journalism or opposition politics would ever land you with criminal or terrorist charges, no sirree.
In Hungary, for example, which is an EU country and democracy (i.e. there are elections), investigative journalists have been targeted with Pegasus by the government.
To compare US elections to Russian or Syrian elections is both incredibly naive and dangerous. In one country, you have a leading political opponent who stole classified docs being treated with kid gloves; in the other, you have political opponents poisoned and literally blown out of the sky.
Sounds like you don't like the outcome and dismiss the possibility that it's the result of the system. From what I understand Orban has a decent approval rating in his country. For Putin it's hard to say since Russia is so foreign and separated, just like China - the only source of info is random crap by your preferred biased media source.
But for Hungary - I've been there multiple times in the last few years, know a few people, and I have no trouble believing he won democratic elections.
Selling malware/software weapons to US entities is generally legal for other US entities*, with the main caveat that if it ends up ITAR regulated then you can only sell it to the US government and other ITAR-cleared suppliers unless it's open source (in which case you'd be selling the platform).
NSO Group is bad because they have been caught selling to oppressive regimes and allegedly actively supported (and potentially continue to support) the deployment of their software for oppressive regimes to harm innocent civilians. They should be (and iirc are) sanctioned for their bad behaviors, bad intentions, and mishandling of their responsibilities.
* There are plenty of caveats (e.g. the seller & buyer need to have good intentions and only plan to use the malware in accordance with the law). I am not a lawyer and this is not legal advice.
Much of this stuff is classified as a weapon, and thus really sold by the Israeli government, not by the company. It's no different from a MANPADS that sometimes is used to destroy a Ka-52 over Ukraine, and sometimes is used to shoot down a civilian airliner - that is to say it's directed by the foreign policy (and foreign policy errors) of the manufacturing country.
There's no reason to expect the world to disarm any time soon, so the best approach is to be aware and democratically influence policy, rooting out bad ideas and bad actors.
Israel is constantly trying to woo Saudi Arabia so that they can be allies during a potential war with Iran. Israel will definitely sacrifice some human rights activists just for the ability to cross the Saudi airspace. But it has not been going well for Israel lately.
There's multiple responses echoing this idea that it's a defense company like any other and thus an evil we'll have to accept exists.
That may be true, but these companies (NSO Group is by no means worse than the rest of them, just more notorious) have been caught over and over again selling these "weapons" to dictators, companies, etc., who in turn use them to spy on journalists and activists, not terrorists or anything of the sort. And that doesn't even go into keeping 0-days for the benefit of the few, keeping literally everyone on the planet less safe, which is arguably as big an issue, if a more systemic one.
These companies may exist in some form or another because the nation state & private surveillance systems that form their client base want them to exist.
But my point is that the individuals working at this company should be ashamed of themselves. I'm not appealing to their sense of morals, I'm talking purely about "us the tech community" making it abundantly clear that having one of these companies on your CV will make it very hard to find any decent job afterwards. It needs to be socially expensive to work there. To borrow from Max Goldt's opinion on the BILD newspaper [1]:
> NSO Group and the like are an organ of infamy. It is wrong to use their products. Someone who contributes to these products is absolutely socially unacceptable. It would be remiss to be friendly or even polite to any of their developers or managers. One must be as unkind to them as the law will just allow. They are bad people who do wrong.
Yeah, I think the appropriate comparison is if a weapons manufacturer made "selling to dictatorships for suppressing dissidents" the core of its business strategy.
This is true, but then you have to also socially shame a large part of the US military for invading Iraq. At least those that didn't resign as soon as it became clear that there were no WMDs there, and that a large number of Iraqis were killed pretty much for nothing.
In short - you have a point, but it's not quite that simple.
Yeah, every single US soldier who voluntarily stepped on Iraqi soil should be sent to the ICC and tried for war crimes. Some of them would be exonerated for being too stupid/brainwashed to understand that they were committing criminal acts. Others wouldn't.
However, those who develop the NSO spyware are middle class Israeli citizens who easily could get a well-paying job at a less repugnant company. There are no extenuating circumstances, no "had to put food on my table", no claims of being fooled/brainwashed. They 100% deserve to be punished and they 100% deserve our disgust.
Why the soldiers? They aren't the ones that made the decision. Sure, if they committed war crimes themselves, but for anything else you have to address the people that actually were responsible. Prosecuting soldiers would be futile and certainly no justice.
There was a lot of media propaganda to make the war popular. It wasn't at first but it didn't even take half a year until people ate it up. Liberal, conservative, didn't even matter. It was scary to see how quickly people were manipulated. It had large support in the population. People should stop and reflect what made them support the war, which messages and by whom. That is the responsibility they can take here and it would be much more constructive than putting the blame on soldiers...
Israeli citizens might have a better excuse to develop weapons than those of most other countries, so I don't see the point. Not an excuse, but at least an explanation.
Because no American was forced to commit war crimes in Iraq. Yes, participating in an attack on a state that is of no imminent threat to your own is a war crime. Brainwashing may have been an extenuating circumstance, but a lot of Americans were staunchly against the war so how come they let themselves be brainwashed? If I join a gang can I claim to have been brainwashed when most people in fact do not join gangs? People are responsible for their own actions.
I don't know - I imagine deserting your brothers in arms (which may include your literal brother or sister) would be akin to deserting your family in a deadly situation. Regardless of how stupid the causes, once you're in the shit and people you care about are at risk, the reasons you're there probably aren't your biggest concern. The people that should be held accountable are the ones who orchestrated and perpetuated the whole thing, not the soldiers (unless they commit war crimes, obviously).
I mean, to follow that analogy, yeah people absolutely should abandon their families if the families are out there actively murdering innocents.
The person saying that they're only staying to murder with their families because they care about them is not a redeeming quality, and they should definitely be held accountable and not excused for their crimes.
For the record, I consider that any armed person outside their home country should be treated as a terrorist or militia member. There is no reason someone from country A should be carrying a weapon in country B and attacking people there. This is 100 times more valid when country B has not authorized it.
I agree, but the world just isn't this simple. It's not about murdering with your family - it's about protecting your family. Kids I knew that went to Iraq were the protective types, not murderous. People can enlist in the military with the intention of protecting their country, only to be ordered overseas, caught up in some bullshit war. Historically, drafts were the main reason. And no man is an island, so whatever situation pulls one person in is bound to ripple through other people's lives and pull others in as well.
> I consider any armed person outside their home country should be considered as a terrorist and a militia
I mean, there are situations like hostage crises where foreign countries send in soldiers that I think are completely justified. But I agree, in general. Our foreign policy has been fucked since the CIA started after WWII. I'm just grateful I never had to fight a war - chances are I would've, being born in the last couple hundred years.
So then, by this logic, once you've worked for NSO Group or the like, there's no way back for you. How then, can someone reform or "see the light"? Is someone once tainted, always tainted? Or do they have to do 10 years in the NFP space before we see them as worthy?
The problem is that by walling off developers who participate in these activities, we essentially force them to continue these activities. I'm not sure that's net positive.
It’s not like we don’t accept that people change, but stigma is useful for both discouraging starting there or staying. If your first job out of college or the military is a defense contractor, oil company, Palantir, etc. a lot of people will sympathize with needing to make rent. If you’re still there a decade later, they’ll assume you’re okay with what they do.
That’s a personal ethics shirk. For example, policy makers haven’t outright banned tobacco companies but a large number of people would not spend their time trying to make such companies successful.
They should be ashamed of themselves, but you are still barking up the wrong tree in the long run. You should demand that your own government outlaw this type of surveillance.
I can do both, actually. Just because something is technically legal doesn't mean it has to be socially acceptable. The two systems are often complementary, and often even contradictory.
Regulation at all is hard enough; expecting it to work by social norm is just impossible. Even more so in a country where all women do 22 months of mandatory military service and all men do 36.
> have been caught over and over again, selling these "weapons" to dictators, companies, etc
Meanwhile, a nice silent worm propagates among their network... I have zero faith that the version they have sold to bad actors is clean, when they are probably begging you to take their software into your internal network.
From being in MI, collateral damage is a thing... decisions like "if we act on this information, 100 people will be saved but they'll know we know and 1,000s could die. If we let 100 people die, we can save 1,000s" are more common than we'd like.
1. There's no compelling reason to think that this applies in NSO's case, since a lot of the bad actors are geopolitically aligned with "good" governments.
2. One needs only study the cold war briefly to see how the group-think in these unaccountable environments can become completely detached from reality.
Pardon me if I'm skeptical of unaccountable officials making those decisions, and orders of magnitude more skeptical of random people on the internet alluding to such actions as if we can all just assume abuses are justified by unspecified good ends.
They don't just make a gun, sell it and then it's up to whoever uses it. It is well established they run the C&C servers and tailor operations - they are combatants.
> Israel is constantly trying to woo Saudi Arabia so that they can be allies during a potential war with Iran.
You have that reversed. Recent Iranian regimes have been especially hostile toward Israel, but that's nowhere near as longstanding an enmity as Saudi/Iran.
AFAIK Israeli government audits NSO and stuff, but they are separate... And Intellexa (authors of Predator I think?) doesn't even get audited because it's "not israeli" on paper
It's more like they leverage it for diplomacy. The auditing means nothing really; it's given to authoritarian governments like Saudi Arabia as long as they are OK with Israel existing. The bar to get access to NSO tools is too low...
They don't really leverage it for diplomacy. Israeli arms exports policy consistently prioritizes getting better R&D economies of scale over actually affecting foreign states' behavior.
It's definitely part of Netanyahu's diplomacy efforts with the despots of the world. NSO's CEO travelled with him to Saudi Arabia, among other places.
It's software; there are economies of scale by default.
Economies of scale only apply if you have scale, ie lots of (paying) users. If you're making this fancy thing for only the Israeli security services you won't be able to pay your developers.
Economy of scale is actually inverted with 0-days. The more you use one, the higher the risk it's detected and fixed, so value and scale are inversely proportional.
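A back-of-envelope model of that inversion, with invented numbers: if each deployment carries an independent chance of burning the exploit, the expected number of successful uses is fixed no matter how large the customer base is.

```swift
import Foundation

// Toy model with invented numbers: every deployment of a 0-day carries an
// independent chance of getting it detected and patched ("burned").
let pBurnPerUse = 0.02   // assumed 2% detection risk per deployment

// Probability the exploit still works after n deployments:
func alive(after n: Int) -> Double { pow(1.0 - pBurnPerUse, Double(n)) }

// Expected deployments before it is burned (mean of a geometric distribution):
let expectedUses = 1.0 / pBurnPerUse

print(alive(after: 10))   // ~0.82
print(alive(after: 100))  // ~0.13
print(expectedUses)       // 50.0, regardless of how many clients you sign
```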
If they are dumb and get caught during audit for selling to Sudan or something then sure, Israeli government will probably tell them they're bad. (And what, shut them down? Lol.)
It gives me the impression that you find "the tech community" to be a cohesive collective that has the organization to switch gears in a given direction.
I wonder why you expect it to be like that.
In reality, "the tech community" is extremely diverse and not cohesive at all.
For one example, a large proportion of developers are barely making enough money to pay their most basic bills. They don't have enough mental space to even know what NSO is...
> In reality, "the tech community" is extremely diverse and not cohesive at all.
I really dislike this phrasing "the X community" which seems to be so popular nowadays. Lumping together many millions of people worldwide who have a single thing in common–how did people end up using the word "community" to describe that?
I’m in the Tech Community. I’m fine with the NSO group. (Which, btw, is owned by U.K. Novalpina Capital, and managed by a firm out of Luxembourg. But for some reason you all here aren’t obsessed with those countries.)
That is the problem: we are not cohesive enough in shaming these people. They are criminals, and criminals don't post openly on LinkedIn, proud of their work.
I just got finished listening to the most recent episode of Darknet Diaries this morning on the way into the office! It was about similar companies to the NSO group: https://darknetdiaries.com/episode/137/
I listened to the first half this morning. Was thinking about going back and watching the NSO group episode he mentioned again. Then I get to work, and the first thing I see is this link.
Actual headline: mentions NSO group and nothing about Apple.
Top comment (+50 comments): Why do we talk about Apple so much and so little about NSO group.
The absurdly pro-Apple PR on HN is tough to bear. I have to say it's so overt it made me more hostile to Apple (NSO is obviously a worthy topic, but we do discuss it).
Plenty of these comments are about NSO. And that's fine! But trying to catch every blackhat won't solve our security problems. Ultimately, the only solution to these security holes is more secure software, and the only way to get that is to pressure Apple to invest more resources. The main question should be how come 'secure' Apple software keeps having 0-click exploits.
Pressuring Microsoft led them to adopt a much more secure culture compared to previously. Apple shouldn't be exempt from the same pressure.
To be fair, when I made my post I had to go through 3 pages of threads to verify there was indeed, at that moment, very little discussion about NSO, and mostly discussion about remedies and how badly Apple is handling iMessage. But I'm glad that enough other people seem to care. :)
NSO is less of an issue to me than the fact that they are finding exploits Apple isn't (assuming Apple truly isn't aware of these and/or building them in on request), and that Apple has more than enough budget for this. To me, NSO (as evil as they are) is like a regulator who cuts through a company's "self-regulation" claims and proves that the company is either intentionally making its own platform insecure or is, at best, negligent in mitigating and proactively addressing obvious issues.
Apple could pay all these people and companies way more than they could ever hope to earn on the free world market to simply fuck off. They are notoriously stingy with bug bounties and constantly disillusion those who are helping to ostensibly make their platforms more secure. I view NSO in a similar light to Corellium, whom Apple has tried to shut down (unsuccessfully).
It's like trying to blame a whistleblower rather than prosecuting the misconduct that comes to light. The energy and blame are misplaced, and this lawsuit only distracts from the fact that iMessage is basically the skeleton key to access anything and everything on a modern iPhone, after all this time.
The NSO group is the easiest to spot. The other parties involved in their operations are not so easily traced, such as Team Jorge, AIMS, Legion, Xaknet etc.
For one, I am not okay with what they are doing, and I've started to fight them actively.
If NSO did not exist the vulnerabilities they discover would still be there. So I guess the complaint should not be that they exist, so much as their motivations and applications being questionable. It's an argument for something similarly funded to exist, but with an aim to responsibly report the bugs and get them fixed.
I don't think it's the "tech community" being okay with this application of the tech as much as it is fear of standing up to Israelis in any way.
Imagine you own an infosec company and an applicant with excellent skills applies. You look at the CV in detail before the interview and you see that they proudly declare their NSO background. Tell me, what will you do? Cancel the interview? How comfortable would you be denying the applicant a job for that reason alone?
I would wager the majority would consciously hire them out of fear of blowback and most of the remainder would unconsciously suppress their opinions on the NSO.
Nah. Joining NSO or such displays a moral "flexibility" and/or a lack of judgement that pretty much precludes the individual from taking on significant responsibility.
Imagine having a business handling privacy relevant data and when someone asks about your stance on data security you have to admit you hired people who are okay with keeping the whole planet insecure for their own benefit.
I know, I know, in my more cynical moments I see it your way as well. But that doesn't make it right.
I don't see why companies that facilitate criminal acts are not swiftly brought to legal justice. We should not be tolerating companies like NSO group in any sense. If the Israeli government wants to look the other way, we should designate NSO group a terrorist organization and start sanctioning any country that won't bring them to justice.
If Snowden and Assange can be extradited to the US and tried for crimes, executives of NSO group absolutely should as well. Lock 'em up!
> That right there tells me that we as "the tech community" are way too okay with this sort of application of the tech. The tech we're all so convinced will "make the world a better place."
This calls for a larger discussion of individual choices of every one of us. It would not be an easy discussion, because things are far from simple, and yet every one of us should actively think, instead of falling into the whataboutism trap and doing nothing.
For example, there are probably thousands of tech people in Russia right now either breaking into Ukrainian systems or writing software for missiles, drones, targeting systems, etc. These systems do not write themselves. Each of those people should ask themselves if this is really what they should be doing. I am certainly asking myself if I want to ever work with people who were complicit in these crimes (and how will I know?).
I know some people who pledged to never work on any military systems. I was close to that point of view, until Russia started dropping bombs on my Ukrainian friends. Now I don't see it quite in the same light anymore.
Similarly, the NSO group is not an amorphous entity, PEOPLE work there and write these exploits. In each case, it is a conscious decision.
My point is that we can't abstract tech from moral choices. There is always right and wrong, there is always the right thing to do. It might not be universally applicable, and there will always be endless discussions on HN ("but what about..."), but each of us can and should think about how our work is applied.
No big political leaders have come out of the tech world yet, so "the tech community" doesn't have anyone to rally around. And this is more a political problem than a technical one.
This is a problem of legislation. It would be trivially easy to stop this behavior, but governments in the western sphere tend to like surveillance as well.
It is sadly true that tech could make the world a better place (in some ways) - but not because of the infinite goodness and wisdom of the first movers, who happened to entrench themselves at the right moment in time.
The same is true today. E.g., LLMs have huge potential. What worries me are the sociopaths who draw the same conclusion.
It's super interesting to me how much it's emphasized that you shouldn't use Lockdown Mode unless you are a journalist or otherwise in direct, palpable danger. They really do try to talk you out of it. It's curious, because there's very little difference in functionality (as experienced by the user) other than disabling a lot of Apple nonsense from running in the background, expanding your attack surface.
And everybody parrots the nonsense caveat that everyone shouldn't use it, only those special enough, as if it were a zero-sum game or a scarce resource. Everyone should use it, because it disables a lot of nonsense that doesn't serve you and probably even saves battery power. Also, the more people use it, the less it can be used to fingerprint specific users.
It does make iOS slightly more inconvenient, such as when adding each other on iMessage. And it severely reduces JavaScript performance in Safari. I think Apple wants to avoid making iOS feel slower or clunkier than Android. And zero-day spyware is usually targeted towards important individuals, not used for mass surveillance, so it indeed is a smaller risk to individual people.
I'd prefer a third mode that compromises between the two, perhaps letting you lower your security for a few minutes when you need the extra functionality. For example, Safari could detect when JavaScript is being slow and pop up an offer to re-enable JIT.
I would argue that iMessage is way too problematic to be used safely, at all. By anyone. Full stop. It also seems to be the primary attack vector of NSO-related zero-days, and it's become known that phone country/area codes had relevance to the chance of success in past exploits, which suggests a phone/messaging-type attack vector.
The fact that Apple blended iMessage, SMS text messages, and email into an extremely confusing mess may also be the reason for so many security issues related to iMessage. Perhaps not directly responsible for this particular NSO exploit, but I find iMessage's logic and behavior bewildering at times.
For example: if you stop using WhatsApp, nothing bad happens if you try to send messages another way. But if you stop using iMessage, then you can no longer send a normal SMS to someone with whom you've previously communicated using iMessage. The Messages app will tell you, "You must enable iMessage to send this message", even if it's an SMS text message to a normal phone number! Why shouldn't that work?
To be able to send SMS text messages again to someone you used to talk with, you must disable iMessage (of course), then sign out of FaceTime (who could imagine that as a necessary step?), sign out of iCloud, reboot the iPhone, and wait some minutes to hours to days until you are "deregistered" from iMessage. I'm talking about the same phone with the same SIM card. The problem can become much worse if you've switched phones or SIM cards.
The source code for iMessage must be a nightmare, having integrated SMS, email, and a new messaging system all together.
There is no email (the protocol) in iMessage (the app). You can use somebody's email address as the recipient for an iMessage (the protocol). No email is ever sent.
You can type in a contact with an email address by just their name and send an email from iMessage. I have done it to contacts accidentally many times.
I think sending SMS to emails and receiving SMS from emails is a functionality of the mobile network. You should be able to do that in any app that can send/receive SMS.
The point is that those other apps don’t use email addresses as the handle to contact someone. If someone iMessages you, the iMessage might (appear to) come from their phone number, or it could (appear to) come from their email. If you have an iMessage contact that’s just an email and you iMessage them, it works fine. If you try to then add Android users to your group chat, everyone gets SMS and the iMessage user with an email handle gets an empty body email from AT&T with an attachment containing the SMS as a plaintext file. And then this user gets another empty email for every reply to that group text.
I'm fairly certain that "text to email" is a feature of MMS - I've used it a few times years before iPhones were around.
I don't remember if MMS is enabled by default in iOS, but there's a toggle to disable it, and realistically there's very minimal real-world use for MMS these days.
Yes, it’s an MMS feature. But iMessage makes it way too easy to “text to email” inadvertently when you start a group chat with some non-iPhone users. MMS sucks but it’s the universal way for iPhone and Android users to communicate without needing everyone to be on the same third-party messaging platform like WhatsApp. In my US-centric personal experience, there is no universally accepted messaging app you can be certain that everyone is on.
Oh, I have never come across that because I have avoided MMS like the plague ever since WhatsApp/Signal/any other cross platform messaging option with media capability became available.
On Apple's end, iMessage also supports email addresses as a user identifier (and it's the only one you get if you don't have an iPhone with an assigned phone number).
It's still not sending emails, though. The iPhone Messages app sends SMS, MMS, and iMessage; email is the responsibility of the Mail app.
The point is that iMessage lets you send to any contact, and it's not clear whether it will send to their iMessage, which uses email as an identifier, or to their actual email inbox through MMS.
It is clear in a non group conversation, since the contact will show up in a blue color in the “to” field.
In an MMS, it could be unclear, but only if you choose to put an email address in the “to” field. If you know it is an MMS, and you only use phone numbers, then it will not be an email.
> For example: if you stop using WhatsApp, nothing bad happens if you try to send messages another way. But if you stop using iMessage, then you can no longer send a normal SMS to someone with whom you've previously communicated using iMessage. The Messages app will tell you, "You must enable iMessage to send this message", even if it's an SMS text message to a normal phone number! Why shouldn't that work?
> To be able to send SMS text messages again to someone you used to talk with, you must disable iMessage (of course), then sign out of FaceTime (who could imagine that as a necessary step?), sign out of iCloud, reboot the iPhone, and wait some minutes to hours to days until you are "deregistered" from iMessage. I'm talking about the same phone with the same SIM card. The problem can become much worse if you've switched phones or SIM cards.
That's simply not true. I just turned off iMessage, immediately switched to the Messages app, and sent an SMS to someone I have an iMessage chat with, and it worked without any problems.
Don’t use SMS instead of iMessage though. Then all your texts will be sent across the network without any kind of decent encryption. And WhatsApp is almost unusable unless you consent to uploading all your contacts to Facebook. (IIRC this was the red line that got crossed that caused the WhatsApp founder to quit FB post-acquisition.)
Signal is a good recommendation, but you won't be able to convince 100% of the people you need to interact with over text to use Signal. You might convince friends and family, but not acquaintances or random people you might need to text (like your electrician, etc.).
Given the tradeoffs, iMessage is pretty good for day-to-day messaging.
It's a tradeoff. Do you want messages from strangers to run through a bunch of parsers that have historically had problems, or do you want to take advantage of your peer group using iMessage?
I'm outside the US, so I don't even need to consider. Nobody here uses iMessage, even the people with iPhones.
It works but it shows phone numbers rather than contact names and you can’t assign a name to a number without giving access to your entire contacts … it ticks me off.
That is because iMessage has the same function as the night man in the Eagles song "Hotel California":
"Relax," said the night man, "We are programmed to receive.
You can check out any time you like, but you can never leave."
Somewhat fittingly, that song is about the excesses of American culture, and also about the uneasy balance between art and commerce [1] according to one of its authors, Don Henley. It has also been interpreted as being all about American decadence and burnout - too much money, corruption, drugs, and arrogance; too little humility and heart - and as a metaphor for hedonism, self-destruction, and greed.
> I would argue that iMessage is way too problematic to be used safely, at all.
Maybe I'm missing something, but every single time, the only part of iMessage (actually Messages.app) that is insecure is the bit that automatically unfurls attachments, and the payload is exploiting a vulnerability elsewhere. So any other app unfurling the attachment, thus triggering the payload, would be equally vulnerable.
Imagine ping had a privilege escalation vulnerability and someone ran "ssh foomachine ping <payload>" to get root; it'd be a bit weird to call out ssh as being unsafe because it can execute commands, one of which can privesc.
Disabling ssh would be a mitigation, and I do wish Messages would disallow unfurling for senders not in the recipient's contact list.
imagent runs as root and processes incoming messages. WhatsApp or Signal or whatever cannot ship an unsandboxed, always-on daemon like imagent.
Signal/WhatsApp/etc. have to parse incoming messages inside the app sandbox. iMessage doesn't.
(I'm saying this all very confidently because the quickest way to get the right answer is to be confident about the wrong one and get corrected by a techbro)
Why would they give that specific process (imagent) that much privilege? Can nefarious motives be inferred from such a choice? It seems pretty damning to me that a glorified GIF-processing helper is given root access to the entire system. It just doesn't add up that this is all accidental.
What are the odds that something like NSO just happens to luck into being able to remotely initiate and sustain the building of an entire Turing-complete, unauthorized internal computer that also happens to be able to override all hardened protections to the contrary? It just seems so unlikely that there was not a hand facilitating this internally at Apple. That's what happened with the GreyKey guy...
IIUC (from a cursory look) according to the diagram it delegates all message processing to MessageBlastDoorService/IM{Transfer,Transcoder,Persistence}Agent, relying only on locally computed boolean-ish metadata replies from these services, and merely transparently forwarding actual data between those.
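If that reading is right, the shape of the design looks something like the sketch below (made-up types; the real services talk over XPC, not a Swift protocol): the privileged process never decodes attacker-controlled bytes itself, it only branches on tiny verdicts computed in a sandbox.

```swift
import Foundation

// Illustrative types only; not Apple's actual interfaces.
struct Verdict { let isRenderable: Bool; let looksMalformed: Bool }

protocol SandboxedParser {
    // Would run in a tightly sandboxed process; risky decoding lives here.
    func inspect(_ untrusted: Data) -> Verdict
}

struct StubParser: SandboxedParser {
    func inspect(_ untrusted: Data) -> Verdict {
        Verdict(isRenderable: !untrusted.isEmpty, looksMalformed: false)
    }
}

// The privileged daemon forwards the opaque blob and branches only on
// boolean-ish metadata; it never parses the bytes itself.
func handleIncoming(_ blob: Data, via parser: SandboxedParser) {
    let verdict = parser.inspect(blob)
    guard verdict.isRenderable, !verdict.looksMalformed else { return }
    // hand the still-opaque blob to the next sandboxed stage for rendering
}

handleIncoming(Data([0xFF]), via: StubParser())
```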
I'm no security pro, but last night I iMessaged a friend a TikTok video and according to him, the link initiated an App Clip. Perhaps it's totally safe and I'm just naive but it just seems like the risks of a link initiating code like that outweigh any rewards. Even if it's totally safe and all involved can be trusted, that experience is enough to creep me out.
There’s been a long history of them, and an entire industry doing things like filtering attachments or rendering HTML emails in sandboxes.
I think the original poster made an attribution error: iMessage gets attacked because it’s popular. If it didn’t allow you to receive rich messages from anyone, people would switch to other apps which do and there’s a long history of those being exploitable, too. What makes iMessage special is that you can assume an iPhone user has it enabled without having to check whether they use WhatsApp, Telegram, Facebook Messenger, etc.
There have been! One was in Microsoft Outlook parsing email subject lines, so all you had to do was receive an email in order to get hacked. And there was another one, around when Heartbleed was a thing, that had to do with parsing the DNS lookup and response for who the email was coming from.
> It also seems to be the primary attack vector of NSO-related zero-days, and it's become known that phone country/area codes had relevance to the chance of success in past exploits, which suggests a phone/messaging-type attack vector.
If you are using a phone, you have a phone number. Targeting the phone and SMS handling apps will always be the go-to vector for these sorts of attacks, because you don't want to tell your customer that they can only spy on targets that have Evernote installed and configured.
I agree, and there really need to be controls on it. I understand they want the "iMessage network" to have predictable functionality, but I care about security more, and iMessage has been demonstrably unsafe for a long time.
I would really prefer to keep it text-only, and am fine with the goofy symbols. If they want to make photo exchange safe, they have the hardware to securely sign images taken on-device and only allow those.[1] (Although that would probably piss off regulators even more.)
[1] With some work, this could be a new feature, used to demonstrate images haven't been altered. With some lockdown of the clock, it could have secure timestamps. (Location could still be spoofed with a GPS hijack.)
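A minimal CryptoKit sketch of that footnoted idea, assuming Secure Enclave hardware (this is a hypothetical feature, not anything iMessage actually does): sign the captured bytes on-device, and let a recipient verify them against the device's public key.

```swift
import CryptoKit
import Foundation

// Hypothetical on-device image signing; requires Secure Enclave hardware.
let deviceKey = try! SecureEnclave.P256.Signing.PrivateKey() // private key never leaves the enclave

func signCapture(_ imageBytes: Data) throws -> P256.Signing.ECDSASignature {
    try deviceKey.signature(for: imageBytes)   // ECDSA over SHA-256 of the bytes
}

func isUnaltered(_ imageBytes: Data, signature: P256.Signing.ECDSASignature,
                 signer: P256.Signing.PublicKey) -> Bool {
    signer.isValidSignature(signature, for: imageBytes)
}

let photo = Data("raw sensor bytes".utf8)
let sig = try! signCapture(photo)
print(isUnaltered(photo, signature: sig, signer: deviceKey.publicKey))             // true
print(isUnaltered(photo + Data([0]), signature: sig, signer: deviceKey.publicKey)) // false: altered
```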
It's also insecure. The sync keys for iMessage are backed up in the non-e2ee iCloud Backup, which means that iCloud serves as a key escrow for iMessage's e2ee, rendering it useless (as Apple, which is definitively not an endpoint, has a private key of the participant and can read all the messages in real-time).
This is less true now, with the option to enable “advanced data protection”. Turning this setting on disables Apple’s access to your iMessage keys along with a bunch of other stuff, though of course if you get locked out, Apple can’t help you
Yeah, and this is the sort of thing that I think drives Apple's care in recommending the most secure modes; they don't want people casually turning it on and discovering that they've buggered themselves up.
I agree with you; Amazon servers receiving 80,000 or 800,000 or 8,000,000 requests per second is a different ballgame from 800 (or 8,000, or 80,000) actual families around the world getting their telephones totally buggered up before work on any given morning - just because somebody trustworthy advised them to play it super safe without making equally sure the listeners understood the UX difficulties that would ensue in recovering their smartphone's functionality in certain mundane use cases. That's a lot of panic to deal with. Apple user-help forum volunteers would be helpless to reach all the affected, frustrated people.
I don't believe this is true. You can change your iCloud password at any time, which means they definitely are not encrypting your iCloud data based on that key or a derivative. If I had to guess, they generate a key and encrypt that key with your password so the password can be changed, but they also aren't able to produce the key on request.
The drawback here is that the encryption key for your data never changes, even if you change your password (the private key is just re-encrypted with the new password).
If they’ve implemented it well then this is mostly academic but it does mean they must be escrowing encrypted keys for every account, and those with ADP enabled are just encrypted against their password rather than the Apple key. It also means if they’ve suffered an undetected breach in the past then changing your password doesn’t help protect your data going forward necessarily. That being said, if an attacker had ongoing access to iCloud data then it probably doesn’t matter (although the presumably-more-secure key vault wouldn’t need to be breached again).
I have no insight into Apple’s practices and this is all speculation, this is just the trade-off I would make to keep it usable.
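For what it's worth, the scheme speculated about here is easy to sketch with CryptoKit (a guess at the shape of the design, not Apple's actual code): a fixed random master key seals the data, and the password only protects a small wrapped copy of that key, so a password change re-wraps one blob instead of re-encrypting everything.

```swift
import CryptoKit
import Foundation

// Speculative sketch; a real system would use a slow KDF (PBKDF2/scrypt)
// instead of HKDF, which is used here only to keep the example short.
let masterKey = SymmetricKey(size: .bits256) // stays fixed for the account

func wrappingKey(password: String, salt: Data) -> SymmetricKey {
    HKDF<SHA256>.deriveKey(inputKeyMaterial: SymmetricKey(data: Data(password.utf8)),
                           salt: salt, info: Data("kek".utf8), outputByteCount: 32)
}

let salt = Data((0..<16).map { _ in UInt8.random(in: .min ... .max) })
let rawMaster = masterKey.withUnsafeBytes { Data($0) }

// Wrap the master key under the old password...
let oldBlob = try! AES.GCM.seal(rawMaster,
                                using: wrappingKey(password: "old-pw", salt: salt)).combined!

// ...then rotate the password: unwrap and re-wrap only this small blob.
let unwrapped = try! AES.GCM.open(try! AES.GCM.SealedBox(combined: oldBlob),
                                  using: wrappingKey(password: "old-pw", salt: salt))
let newBlob = try! AES.GCM.seal(unwrapped,
                                using: wrappingKey(password: "new-pw", salt: salt)).combined!
// The bulk data, all sealed under masterKey, is untouched by the rotation.
```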
The keys in advanced protection are derived from your device passcodes, your macOS user password and a recovery key. You'll notice you have to approve from one of your devices to use iCloud web or add a new device.
The derivation function takes a while to run and depends on the Secure Enclave, but you still probably want to avoid 4-digit passcodes.
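Some rough arithmetic on that advice, assuming (hypothetically) an offline attacker paying 100 ms per derivation attempt with no Secure Enclave throttling in the way:

```swift
import Foundation

// Back-of-envelope cost to exhaust numeric passcodes at 100 ms per guess.
let secondsPerGuess = 0.1   // assumed slow-KDF cost, offline attack
for digits in [4, 6, 10] {
    let combos = pow(10.0, Double(digits))
    print("\(digits) digits: \(combos * secondsPerGuess / 3600) hours to exhaust")
}
// 4 digits: ~0.28 hours; 6 digits: ~28 hours; 10 digits: ~278,000 hours (~32 years)
```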
They are, but they also must be encrypted n separate times, where n is the number of signed-in devices:
- Mac
- iPad
- iPhone
- Recovery Key
Each of the above would have a separate, uniquely encrypted copy of the backup key as a result of the derivation function. I can change the password on any of those (or regenerate the recovery key) without a full iCloud re-encryption or duplication of my iCloud data - therefore Apple must be holding a key in escrow that is the actual decryption key. One would assume it's that key that is encrypted against the derivation function, as then it could still credibly be argued to be end-to-end, but that's just an assumption I'm making.
I'm not sure why you're doing all this speculation, when wrapping keys is a pretty standard technique (e.g. LUKS key slots) and Apple provides the details themselves[1]. Yes, they're doing a handshake with Secure Enclave keys and transfer the master key to your devices. Turning on Advanced Data Protection will re-encrypt all the data in iCloud in the background, whereas turning it off will submit the master key to Apple so they can presumably place it on an HSM. Apple already did this before Advanced Data Protection with your Keychain.
Unless BOTH ends of a conversation are using it, it's pointless.
This means that turning it on does nothing in terms of privacy, in practice, today. All of the iMessages you send and receive will be readable using the escrowed keys from the other users you are messaging with.
Perhaps at some point Apple will prompt or nudge people to migrate, but that's unlikely given the risks of data loss for people who forget their credentials (and have "nothing to hide").
I probably spent 100+ hours doing everything possible to regain access to an iCloud account with advanced data protection.
I lost the password and the recovery key (with no second Apple device that was logged in). The only outcome in that scenario is losing your iCloud account completely.
Lesson: enable advanced security, but save your recovery key!
It's on by default, which means everyone you iMessage with is escrowing the keys that allow Apple to decrypt all of the messages. Turning it off on only one end of the conversation has no meaningful effect.
Was going to mention this. iMessage seems to be that golden-key thing the FBeye asked them for back in 2015 in San Bernardino (insofar as iCloud itself isn't already a/the key).
Another idea for Apple would be simply quarantining attachments from unknown contacts: e.g., display that an attachment exists, but don't download it to the device until the user accepts an "attachment from unknown sender" warning box.
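A sketch of what that policy could look like (invented types, not any real Messages API): hold the attachment remotely until the user confirms, and auto-download only for known contacts.

```swift
import Foundation

// Hypothetical quarantine policy; types are illustrative.
struct IncomingMessage {
    let senderID: String
    let attachmentURL: URL?   // remote attachment, not yet downloaded
}

struct QuarantinePolicy {
    let knownContacts: Set<String>

    enum Decision { case autoDownload, holdForConfirmation }

    func decide(_ msg: IncomingMessage) -> Decision {
        // Text-only messages have no attachment to parse.
        guard msg.attachmentURL != nil else { return .autoDownload }
        return knownContacts.contains(msg.senderID) ? .autoDownload : .holdForConfirmation
    }
}

let policy = QuarantinePolicy(knownContacts: ["+15551234567"])
let msg = IncomingMessage(senderID: "+15550000000",
                          attachmentURL: URL(string: "https://example.com/cat.heic"))
print(policy.decide(msg)) // holdForConfirmation: show the "unknown sender" warning box
```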
AFAIK all iMessage attachments (since iOS 14) are quarantined via BlastDoor, so any full system takeover must include at least two escapes: one from BlastDoor, and another from the application sandbox. The attackers also need to cope with ASLR. It's pretty heavy-duty even in the most basic default configuration.
I think attackers would then just try to make the system offer to disable security whenever possible. Anything as easy as clicking an option already offered by the OS itself will be used often enough to negate most of the security benefits of that mode, IMO. So you'd deal with it being slower by default while probably not being as secure as you think, because people will often opt out for convenience - the worst of both worlds.
As I understand it this was a real problem with earlier versions of Windows where it kept asking for admin privileges all the time for simple things, and people got conditioned to just authorize it. They made a concerted effort to provide APIs that didn't require it for most actions to combat this.
What do you mean "per-app in Safari"? I'd like to turn it on globally, with a single exception: I want to be able to continue using shared photo albums with my two best friends.
I don't care enough about JS performance or, more generally, the mobile web, to want to disable it in Safari, or even parts of it.
You can disable lockdown mode in web views for specific apps. You do it in settings because those apps don’t have the usual Safari UI for configuring that.
> It's curious, because there's very little difference in functionality (as experienced by the user) other than disabling a lot of Apple nonsense from running in the background, expanding your attack surface.
If they didn't want people to have all of the background stuff running, they wouldn't put it on there in the first place. It's not super surprising that they want people to use the features (whether "nonsense" or not) that they purposely put there.
Capitalist view: If they didn't emphasize it, some first-time Apple customers might be convinced by concerned friends and family to enable Lockdown Mode by default, and then might complain to Apple / return their device because it "doesn't do the things it was advertised to do" (because those features don't work in Lockdown Mode.)
Realpolitik view: repressive regimes probably only allow Apple to release devices with this feature available, as long as they don't heavily push it / make it the default. If Lockdown Mode defaulted to "on" in China, and so was used by the majority of users, then Apple would be quickly booted out of China.
Yes, this is the angle I've been trying to capture. It's realpolitik - thank you for helping crystallize that. But I maintain that it extends to the US as well, in terms of cooperation with domestic enforcement bodies.
How about the alternative capitalist view that they don't have to spend as much time on QA in Lockdown Mode? Seems like one of those things that could cause all kinds of unintended consequences across apps.
I use Lockdown Mode on my Mac because I don’t use iMessage, FaceTime, or other apple services on that device. It’s literally just a computer for software dev and maybe YouTube videos.
I haven’t noticed any difference with web content either, but I also use Firefox / Chrome instead of Safari.
What I would really like to see is options. For example on iOS I use shared photo albums, so it would be nice to keep that feature but disable all the other capabilities.
> I haven’t noticed any difference with web content either, but I also use Firefox / Chrome instead of Safari
Lockdown mode only affects Safari. If you use another browser, it doesn't make any difference.
Here are some features that are disabled in Safari when lockdown mode is enabled:
- JIT
- Remote fonts
- WebAssembly
- WebGL
- WebRTC
- PDF Viewer
- MP3 Playback
- Gamepad API
- Web Audio API
- Speech Recognition API
- MathML
- JPEG 2000
- MediaDevices.getUserMedia()
You can configure most of those in Firefox and Chrome, but it has to be done manually, and they cannot be easily disabled on a per-site basis like in Safari.
> For example on iOS I use shared photo albums, so it would be nice to keep that feature but disable all the other capabilities.
I'm in the same boat. I was a bit confused, since I'm pretty sure I read somewhere that you could selectively disable it for some "apps", but I've never found out how to disable it for photos specifically.
My requirements of my phone being otherwise slim, I didn't encounter any other issue with lockdown mode.
Ironically, you need it all the more on the devices you use those services with - even more so than on the devices you don't use them with. The fact of the matter is that Lockdown Mode makes iMessage as safe as is possible (I still wouldn't take the risk, personally, but YOLO). It doesn't hurt to be using Lockdown Mode everywhere.
Lockdown mode? How do I enable it? As someone who owns a MacBook as their only Apple product, I hate seeing or dealing with Apple pushing their services on me.
> Everyone should use it because it disables a lot of nonsense that doesn't serve you and probably even saves battery power.
Lockdown mode acts as a natural ad block which is great (as a reader). But it also disables JIT. I assume this causes wasted CPU cycles and perhaps, on balance, worse battery life?
On balance, I have found the opposite in my experience. Your phone spends more time passively carrying out a multitude of background tasks and analytics stuff than it does with you actively web browsing.
Who? Apple? I can't find any statements from them begging us not to use it. It's also a dumb argument since they can just --not-- release the feature if they don't want us to use it.
Just like disabling JavaScript in the browser by default, or using LTSC versions of Windows: the discouragement is propaganda to keep you on the path they want, and not the path you want, because there are powerful interests pushing in the former direction.
If this was true Apple would have never released the lockdown mode feature. A good conspiracy theorist will drop a theory when there's clear proof they're not up to anything.
Much like LTSC and the ability to disable JS, it remains merely a concession they don't actually want you to use, and in the case of lockdown mode, it serves as a box to tick for their privacy-oriented marketing.
Isn't it time we made first messages from all new contacts plain text only, and all other messages some very restricted subset rather than some crazy extensible system that isn't so different from ActiveX?
And on top of that, maybe the whole app should run in a sandbox.
And on top of that, perhaps it should all be a webview to give one more layer of protection.
There is even precedent for doing this seamlessly: the Apple Mail client will not render media from unknown senders without user confirmation. iMessage should have the exact same behavior for the same reasons. It’s frustrating to watch greedy project managers re-learning the exact same lessons that a previous generation already learned the hard way, especially when they all work in the same building.
This is for a different reason (all useful e-mail clients do the same thing!). If Mail (or thunderbird or whatever) loaded the media, then the sender could know you opened the e-mail (by sending each recipient a unique image), leaking information.
Does the Apple Mail client do this for images included with the message (instead of referencing by URL)? Like another comment mentions, this is done for other reasons, and many clients render embedded images by default.
Mail does this for privacy, not security. Emails can load remote content and this can be used to track you. This is not generally true of images sent in iMessage because they get sent directly.
Yeah, I can't believe we are still seeing this happen over and over again. Whenever you see "zero click" you know it's one of the complex payload types, like images or fonts. The answer shouldn't be "don't render images". We should be able to trust that a component that parses external data, such as an image, simply can't do anything malicious regardless of input.
If that means sandboxing, fine. If it means having to rewrite all the image parsers from the ground up in a safe language or formally prove them correct, fine. Just get on with it. Apple is rich enough to be able to run their own space program ten times over, I think they could write provably correct imaging libs too.
Yes parsers are already sandboxed, and violating the sandbox boundary is where the actual valuable exploit is. Parser vulns are near worthless without the rest of the chain building on it, and the millions of man-years it would take to re-write every last one of them as "provably correct" is better spent hardening sandbox and privilege boundaries.
Which is a completely different problem than simply rewriting things in a safe language.
What is the cause of the sandbox escape in this case? Somewhere (too high in the stack) there is a C-ish program where someone does pointer arithmetic, or an array dereference in C, which amounts to the same thing.
The security model of a sandboxed process is that even full arbitrary code execution cannot do anything the sandbox says the process cannot do, and the process the parsers run in is sandboxed to only be able to communicate to other processes through very limited interfaces that have no access to network or disk.
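As a minimal sketch of that pattern in Rust (nothing like Apple's actual BlastDoor; the helper binary name here is made up): the parser lives in a separate process whose only interface to the world is a pipe.

    // Sketch: run an untrusted decoder in a separate helper process.
    // A real sandbox would also strip OS privileges (seccomp, Apple
    // sandbox profiles, ...); piping alone is not a sandbox.
    use std::io::Write;
    use std::process::{Command, Stdio};

    fn decode_untrusted(bytes: &[u8]) -> std::io::Result<Vec<u8>> {
        // "decoder-helper" is a hypothetical binary that reads
        // compressed bytes on stdin and writes raw pixels to stdout.
        let mut child = Command::new("decoder-helper")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()?;
        child.stdin.take().unwrap().write_all(bytes)?;
        // Even with full code execution inside the helper, the attacker
        // only gets what the sandbox policy lets that process touch.
        let out = child.wait_with_output()?;
        Ok(out.stdout)
    }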
Rewriting would introduce new bugs; it would take a large number of engineering hours away from delivering shiny new things; and a formally correct version would probably be less power-efficient.
It won't happen because these targeted attacks don't affect the bottom line whatsoever. Nobody is switching to Android just because a journalist or NGO employee occasionally gets pwned.
It doesn’t really matter if there are 100 new bugs for every memory unsafety bug fixed. Those new bugs in an image codec would be hangs/crashes or incorrect rendering and that’s it. And that might be serious but it’s not a security vulnerability.
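As a toy illustration (my own sketch, not any real codec), here's what that failure mode looks like in Rust: a header that lies about the image size produces a clean decode failure rather than an out-of-bounds read.

    // Toy decoder: copy width*height pixels out of an untrusted buffer.
    fn decode(width: usize, height: usize, data: &[u8]) -> Option<Vec<u8>> {
        let len = width.checked_mul(height)?; // integer overflow checked
        let pixels = data.get(..len)?;        // bounds checked, no OOB read
        Some(pixels.to_vec())
    }

    fn main() {
        // Header claims 4x4 pixels but only 3 bytes arrived:
        // decode fails cleanly instead of corrupting memory.
        assert!(decode(4, 4, &[1, 2, 3]).is_none());
    }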
Your problem isn't the quality of your own code, it's that Google exists and is unable to stop their employees from doing stupid things like inventing WebP, because now you need to support WebP too which means using their code to do it.
(Worse, WebP is at least two completely different formats - the lossless mode has nothing to do with the lossy mode.)
a) Google should be doing that in a memory safe language, kinda nuts that they haven't started doing that already
b) Apple could definitely write their own? Unless I'm missing something crazy here, it seems like they could burn 8 figures and just have their own implementations that are safe
They already did - WebP added a lossless mode and VP8 was updated to VP9.
Though the same may happen to JPEG; it always had 10-bit and 12-bit modes but most decoders don't support them. (Not sure if they can decode it as 8-bit or not.)
I think Apple should either sandbox or reimplement even the most complex formats. Video formats might be painfully complex to implement, but to avoid zero-click you don't even need to safeguard the whole process. You stop autoplaying and you ensure the safety of the parts that parse the metadata/thumbnails required to show the preview. Then worst case you have at least a 1-click threat when someone plays the video which then calls into some 3rd party code.
This is very different from ActiveX. ActiveX had hundreds of exploits widely available freely on the dark parts of usenet, and exploited by every proverbial scriptkiddie in a basement against a swath of computers across the world.
iMessage has had a handful of exploits which are licensed out for extortionate amounts by people like NSO to a very small number of scummy nation-state threat actors in extremely targeted but very high-threat attacks on very high-profile targets.
True but a useless fact. One way to interpret u/seanhunter's comment is to make the comparison between imessage and activex in the known exploit space and then extrapolate into the unknown space assuming equal proportions. Seems reasonable to me.
I really don't think that's true - the state of appsec and security awareness in general has really improved a lot, and all of the major platforms (windows included) are much much more secure now by default than in the activeX era.
It's definitely not the case that anyone can just throw together an iPhone zeroday, which is why the price of these exploits is so much higher.
It means it runs through iMessage. Whether the "turn off iMessage" feature is enough I don't know. But if turning off iMessage actually stops any messages and SMS/MMS etc from running through iMessage at all, it's a pretty decent bet.
What's not clear to me is given all of the layers/security features Apple has, say you are able to get an iMessage exploit where you can run code... you can't access the file system/cache of other apps (like your banking app to get cookies/tokens), can you?
Again a buffer overflow in image decoding; this sounds similar to the one from 2021 [1]. That one was wild: building a CPU out of primitives offered by an arcane image compression format embedded in PDF, to be able to do enough arithmetic to further escalate to arbitrary code execution!
And a much older bug with TIFF rendering in iOS 4 used by jailbreakme.com back in the day. It was wonderful pressing a button in Safari and suddenly seeing my iPod touch reboot with Cydia installed.
This is the frustrating part: that is cool from a technical perspective but terrifying when you think about this stuff being used to target journalists, activists, etc. Maybe not everyone gets the bone saw, but some will - and from the sounds of it, it's people standing up to abusive people:
> Last week, while checking the device of an individual employed by a Washington DC-based civil society organization with international offices, Citizen Lab found an actively exploited zero-click vulnerability being used to deliver NSO Group’s Pegasus mercenary spyware.
The interesting thing is that, as the article states, Lockdown Mode, which is intended for users with exactly that kind of risk profile, does in fact prevent this attack.
the more interesting thing is why the default state has to be made vulnerable in the first place, instead of just making lockdown the default method of using an Apple device
The even more interesting thing is that all functionality increases the attack surface and therefore makes all devices more vulnerable. The most secure state is not to have the device at all or, failing that, to have it permanently turned off. This is true of every device, not just Apple.
The reason people possess devices is to use functionality, and therefore they have to make some tradeoffs in terms of security. The default state is what Apple currently thinks is the best tradeoff of risk vs functionality for most people. For people with an extremely unusual threat profile, it stands to reason a different tradeoff might be appropriate.
That said, they do give a lot of granular control to the user to turn off individual functions if the user feels differently and wants to change their stance, e.g. iMessage can be disabled with a switch in Settings.
In Safari, yes, losing the JavaScript JIT is hefty but I’d somewhat cynically argue that it’s probably balanced out performance-wise if you install an ad blocker.
Honestly I find it a little reassuring that these are the lengths you have to go to in order to find a reproducible exploit. Granted, the failure mode is not great…
Yeah, it’s been too slow but we have come a long way from when any motivated person could find an exploit in a binary file decoder with a day or two of work.
What is frustrating is the NSO group continues to exist despite all the bad they do. How many people are they responsible for being on the receiving end of a bone saw?
At the risk of being boring: software liability would go a long way towards getting companies to do this work themselves. Even though Apple is the largest company on the planet an entity that has a small fraction of the budget is apparently able to do a better job. I don't see why Apple couldn't make those people an offer they can't refuse. That takes them off the market and has them doing something productive.
> Even though Apple is the largest company on the planet an entity that has a small fraction of the budget is apparently able to do a better job.
NSO Group is Israeli and (most likely) filled to the brim with former Unit 8200 staff. About the best of the best of what the IDF has to offer - they've been said to match the NSA in quality.
> I don't see why Apple couldn't make those people an offer they can't refuse.
For all that can be guessed, they're a semi-private company, deeply connected with the Israeli government [1]. No one can pay these guys enough. If you want them to stop, you'll have to get the Israeli government to agree, and they won't give up any asset that gives them an edge over Iran or Israel's numerous other enemies.
So stop shipping iPhones to Israel until they play ball. If they're that smart they can roll their own phones. These companies do immense damage and endanger lives the world over. Given enough time and budget there is nothing that can't be cracked and it's the very worst actors that have access to this stuff.
As much as I agree with you... I think it's most likely that the US NSA, UK's MI(whatever), Israel's Mossad and a bunch of other secret services all cooperate with each other. No way these guys get taken down, and no way that the sanctions that have been nominally announced actually get enforced at the murky, opaque bottom layer of the secret services.
Someone has to crack open the phones of drug kingpins, terrorists etc. after all.
You are severely underestimating the power of an entity like Apple. HN regularly spouts opinions that if US companies don't like the GDPR they should just stop doing business with the EU. That's a massive block of consumers and I highly doubt any company that likes its bottom line is going to take that approach.
But we're talking about one company here that simply should stop selling their crap to the highest bidder. I'm at some level ok with the Israelis doing what they do, they're no different than any other nation state. But to allow this sort of entity to operate from your soil in a commercial manner, including selling those exploits on the open market where they will inevitably be used against the home country as well, seems 'optional' to me, and there are a lot of Israelis that like their smartphones.
Why would an entity the size of Apple risk their reputation and everything they stand for to avoid a run-in with a relatively tiny company in a relatively small part of the world that is causing an enormous amount of problems?
So tourists (or people visiting for family or work) who own iPhones wouldn't be able to use them in Israel? You can probably see how that's a tough sell.
Yes, that's exactly it: you harbor this sort of company you will not be able to pretend it's business as usual on other fronts.
After the 500,000th Facebook post of tourists linking NSO to 'my holiday in Israel was spoiled and I won't be going back there' I'm pretty sure they'd get the message.
I'm ok with whitehat hackers but this shit has to stop. Mind you, I have an old Nokia so it's not as if I'm affected, the only thing I have to worry about is the baseband processor and my telco. But there are plenty of people who need a smartphone for their work and their opsec is pretty much as good as their phones' security.
That would be an extraordinary act of political activism which is never, ever going to happen. I'd argue it's not a corporation's place to take such an action in the first place. This is, if anything, a diplomatic matter and should be left to the state.
I mean what next, stop selling to the KSA because of their gay rights issues? Iran? Russia? Where does that end? Well, it won't even begin, and rightly so. This is a state matter and they should stay in their lane. They're doing all they can, and should.
BTW, I bet there's more than a few USA organisations who are quietly very annoyed about Apple's relentless bug-fixing. Organisations like NSO are tolerated for a reason.
> So stop shipping iPhones to Israel until they play ball.
For what purpose? They would still procure iPhones through gray channels and hack them because that's what their victims use. Should Apple also stop selling phones in every other country, because that's where many of NSO's exploits are actually used?
What other purpose? Annoy the local population? Create a grey/black market where you're even more likely to be given a "pre-hacked" unit?
Ha! That's some nice fan fiction. Look at how Elon is torpedoing himself even further trying to take on the ADL (let's be frank, they clearly have ties to Israel). It took far right wing people + Elon bringing the issue up to even have a discussion on pushing back against the ADL (and now the ADL can just say that's just clearly anti-Semitic people being anti-Semitic), so the issue is already dead.
Apple being a public company with many institutional portfolios holding their stock would not survive these portfolios dumping their shares due to pressure if they announced this. This could even be enough to force remove Tim Cook from his role. Why would he take such a drastic position?
This rot is at all layers of the western world (UK, Canada, AUS, NZ, France at least). All the way from state governments passing laws saying you cannot boycott Israel or else you'll be barred from contracts (anti-BDS laws) to Congress removing members from their committees if they criticize Israel (e.g. Ilhan Omar) and signing loyalty pledges to Israel. When ANY new resistance appears against Israel, multiple groups in all of these countries move at light speed to enact a response.
The downsides of having these exploits is clearly acceptable to all the people that make the decisions. And it's not like a regular person can use these exploits against members of congress to make them feel the pain. They'll just 'Julian Assange' you.
What you are proposing requires massive reform at ALL levels of government and across the western world, as this is not only a US problem. Good luck with that.
This requires changing fundamental beliefs of the majority of people who vote in these governments. They have a special "bond" with Israel and they won't willingly let go of that. You'll be better off just reverse engineering the complete iOS binary and finding every possible exploit.
> It took far right wing people + Elon bringing the issue up to even have a discussion on pushing back against the ADL (and now the ADL can just say that's just clearly anti-Semitic people being anti-Semitic), so the issue is already dead.
The issue is dead because Elon's grievance is patently absurd. He's accusing the ADL of singlehandedly engineering a 60% drop in Twitter ad sales. It would be genuine comedy were it not for the fact he's handing a megaphone to the worst-of-the-worst groyper kindernazis.
That's my point. Pushing back against the ADL is almost impossible, and when it finally happens it is associated with these knuckleheads. Therefore it is easy to dismiss... but there are serious abuses done by the ADL (just look up their history) and they now get to skate free.
You seem to be implying the issue is the messenger and his dimwitted minions, when really it's the message itself. If these guys are as nefarious as you're implying, surely the richest man on the planet could dig up something that's not prima facie absurd?
Thanks for clarifying. I'm not familiar enough with this organization to stake a position either for or against, but one passing observation based on that wiki page:
> Right-wing groups and pundits, including right-wing Jewish groups, have criticized ADL as having moved too far to the left under Jonathan Greenblatt, labeling it a "Democratic Party auxiliary"
> In August 2020, a coalition of progressive organizations launched the "Drop the ADL" campaign, arguing that "the ADL is not an ally" in social justice work. The campaign consisted of an open letter and a website, which were shared on social media with the hashtag "#DropTheADL". Notable signatories included the Democratic Socialists of America, Movement for Black Lives, Jewish Voice for Peace, Center for Constitutional Rights, and Council on American–Islamic Relations.[179] The open letter stated that the ADL "has a history and ongoing pattern of attacking social justice movements led by communities of color, queer people, immigrants, Muslims, Arabs, and other marginalized groups, while aligning itself with police, right-wing leaders, and perpetrators of state violence."
Always interesting to see entities criticized for being both too far left and too far right.
To me, the ADL doesn't seem right or left within the US. The evident goal of their org today is to silence criticism of US-Israel relations and run PR for Israel in general, which makes sense given its founding org. That's its own thing, in fact it'd be counterproductive to do it in a partisan way.
I’m sure there are a lot of committed patriots there but I doubt it’s the whole company. Tim Cook could drop 1% of their cash on hand and see how many of them would turn down a million or two as a signing bonus, and if that didn’t work he could escalate to 10% or toss in some stock. I find it unlikely that wouldn’t tempt a lot of people, especially since the U.S. is one of Israel’s staunchest allies so it’d be pretty easy to tell yourself that pile of cash isn’t selling out.
The real reason they don’t do that is trust: how could you ever be confident that someone wasn’t passing information back to Unit 8200 or even helping them out?
I understood but am skeptical of that - they'd block sale of the entire company but I think it'd be a surprise if they prevented a bunch of Israeli nationals from accepting prestigious jobs with an American company.
Consider as well that designing long-range weaponry (known to be obsolete at the time, posing no practical threat to Israel) for a relatively benign enemy (Iraq was never Iran or Egypt) is likely far more forgivable than handing a far more powerful foreign power, one with a history of at-times cool relations with Israel, the current high-priority intelligence tooling in which Israel is known to have an unusual, world-class edge right now.
Gerald Bull was annoying. Someone good leaving any of the APT groups in Israel to help Apple get better security or anyone else would be borderline treason.
NSO seems more like a business. If Israel wanted to, they could pay NSO to keep their software internal/private, no?
The more devices that get exploited, the more exploits that get closed. That's how you lose your edge against your enemies.
Unless they're so confident in their stream of exploits that it's worth burning a few. Or these nation states are buying the devices to operate these exploits and operating them in their security labs...hrmmmmm...
It appears that the Israeli government operates the same way as the Russian government with respect to their private black/gray hat companies and groups: hacking other people is OK, just don't hack our nationals or our institutions, and we're cool. And if they hack companies or people seen as hostile, so much the better.
If Apple buys NSO Group and shuts it down, other firms are incentivized to enter the market especially because of the prospect of a nice payday if Apple buys the new firm, too.
Anyone buying them and shutting them down won't even temporarily make the problem better, as NSO has competitors who would immediately hire the best people.
That's true, but I'm not entirely sure why this would be relevant to include in my comment? It's just pointing out that other vendors exist in this space other than NSO Group. I don't even see the hypocrisy if I had posted that while working at one of the places I mentioned? How would you rephrase it?
(It seems like you know me, are you someone I've met before?)
Yes they can? It's totally possible for a small, well-connected group to be writing small pieces of custom code in very critical applications, like core reactor control systems for navy submarines.
...implying the scrutiny Boeing is held to does anything beneficial.
The regulatory capture resulted in a pathological operating model that put 346 people in an early grave because they couldn't be arsed to not cut corners; then on top of it all, there's no substantive finding of liability or wrongdoing.
Laws that are ultimately unenforced due to "too big to fail" might as well not exist at all.
I don't want to defend Boeing's management, but even with the worst failure in, what, half a century? it's still much safer to fly than drive, so I wouldn't be so quick to throw aviation safety culture under the bus.
I think you should really think about what you're saying. Would you cut makers of physical artifacts the same slack, say a small prepared food producer who just can't afford to vet their supply chain or final product to make sure it's not contaminated?
This has already played out. Most consumer software specifies that it cannot be used in medical devices, or for nuclear energy production. So some version of this already exists. But should this apply to video games? Horoscope websites? Random number generators? I'm just pointing out that it isn't a universal argument.
I don’t understand your comment. Are you saying that involving trial lawyers and US juries to collect big settlements from Apple is going to stop the NSO Group? Or is it that the NSO Group should be liable for the actions of their clients?
I don't understand your comment either. You say you don't understand and then you give a choice between two narrow interpretations neither of which seems to cover what I wrote.
To make this a bit more productive:
If Apple were liable for their defective products then they might decide not to ship them at all until they can be sure enough that the risk of the lawsuits putting them out of business is small enough that they can absorb it.
This worked wonders for other industries (notably: automotive, airlines, medicine). It may slow them down a bit, you may have a wait a bit longer for the next iteration of some gadget. But that's a small price to pay in my opinion.
As for the NSO group: I'm suggesting that Apple use their well-filled cash coffers to buy these guys out, and failing that, that they use some of that money to sue them for all of the damages that Apple incurs as a result of their actions, as well as any criminal charges that they might get to stick. See 'Sklyarov'.
It wouldn't be the first time that a US judge finds fault with a foreign company. At a minimum it would slow them and their employees down to the point that they will be in a US jail the next time they visit Disneyland. If it works against illegal gambling operations I see no reason why that sort of mechanism can't be brought to bear against state sponsored hacking groups and their employees.
> If Apple were liable for their defective products then they might decide not to ship them at all until they can be sure enough that the risk of the lawsuits putting them out of business is small enough that they can absorb it.
I think this works best at that level, like if there’s a sliding scale based on your company’s importance to normal people’s security. I think a lot of developers are worried that their two person consulting team is suddenly liable for bugs but it’s totally reasonable to say that Tim Cook should shake the spare change out of his office couch, call Graydon Hoare into his office and say “here’s a billion dollars, who should we hire so I never hear the phrase ‘buffer overflow’ again?”
> If Apple were liable for their defective products then they might decide not to ship them at all until they can be sure enough that the risk of the lawsuits putting them out of business is small enough that they can absorb it.
> This worked wonders for other industries (notably: automotive, airlines, medicine). It may slow them down a bit, you may have a wait a bit longer for the next iteration of some gadget. But that's a small price to pay in my opinion.
That's quite a big price for non life-critical equipment that is a billion times more complex than a pacemaker or the safety-critical parts of an airplane or car.
A billion times more complex than the safety-critical parts of an airplane? I think you lack perspective on avionics packages and the safety measures that are undertaken in that industry. Additionally, I think you're vastly overestimating how complex a smartphone is.
A billion might be hyperbole (although I don't think it's a totally unreasonable guess either), but phone software is many GB large; I could easily believe that there are a million times more MC/DC points in phone software than in the safety-critical part of airplane software.
Pacemakers are one of literally millions of regulated medical devices. If my CPAP fails one night, I don't die, but it's still regulated to ensure it's not gonna fail. You want this to be pacemakers vs Tetris but it's not. It's hearing aids and contact lenses and insulin pumps and wheelchairs and nebulizers and all kinds of devices that will not get you killed if they fail AND YET they are highly regulated and rightly so.
I mean, I assumed from context it was meant regulated in the way life-critical devices are regulated, since the mentioned industries like airlines have elements that must comply with the regulations life-critical software is subject to (e.g. full MC/DC test coverage and whatnot).
If the goalposts are being moved to regulated in any form, phones already meet this criterion, as there exist regulations they are subject to.
So what regulation precisely did you have in mind and would it prevent the issue being discussed?
Maybe that’s true, it probably is, but they should still be sanctioned into oblivion considering they consistently are in the headlines on the wrong end of this being used for deeply questionable purposes.
The US enjoys some fruits of their labor and they're conveniently distanced from any explicitly funded operations to avoid blow back when exploits are publicized. They won't enforce sanctions or, more practically, withhold the massive defense subsidies they give to Israel.
>... as greed leads them into bed with the wrong people.
I'd hope they're at least targeting their own customers as part of state-sanctioned operations. Still, that doesn't justify the dissidents they indirectly facilitate being thrown under the bus. Or on the receiving end of a bone saw, as another commenter put it.
Allowing a US+Israel-approved* company to do this makes higher revenues possible, meaning they can attract higher talent => more hacks. Which would be fine if we prevented them from selling to unwanted customers. With weapons, we control who gets them, regardless of money.
* I was going to say "sanctioned," but that word can mean two entirely opposite things, it's dumb
True, but there’s a real question about how effective they’d be. NSO has the veneer of legitimacy which means they can hire top notch talent by pretending their products are just law enforcement tools – fewer people would be comfortable working for a Russian mercenary group or able to tell their friends and family their work for a Chinese government vendor wasn’t helping oppression. That doesn't mean that everyone in the world is comfortable working for them but think about how it is for Palantir where a significant percentage of top tech talent don't seek employment there due to ethical concerns - NSO has similar problems but they'd be an order of magnitude worse if they weren't in a close ally country.
I wrote a patch to fix it that one of the jailbreaks used. I wasn't in the scene, but wanted to protect my iPod touch. So I figured out a patch and gave it to somebody named "pumpkin" on IRC. It's been a long time, but I remember it was fun to learn ARM assembly and figure out how to rewrite the code to get enough space to insert a test and return.
Your phone would reboot with a pineapple logo and console messages flying across the screen like a 1337 h4cker, starting with the "regents of the University of California, Berkeley" message. Then you'd go install a ton of Cydia hacks.
Which actually makes me more sympathetic to Chrome not (yet) adopting JPEG-XL.
Don't get me wrong, I think JPEG-XL is a great idea, but to everyone saying "how can supporting another image format possibly do any harm", this is the answer.
Why not implement all image codecs in a safer language instead?
That would seem to tackle the problem at its root rather than relying on an implementation's age as a proxy for safety, given that that clearly isn't a good measure.
RLBox is another interesting option that lets you sandbox C/C++ code.
I think the main reason is that security is one of those things that people don't care about until it is too late to change. They get to the point of having a fast PDF library in C++ that has all the features. Then they realise that they should have written it in a safer language but by that point it means a complete rewrite.
The same reason not enough people use Bazel. By the time most people realise they need it, you've already implemented a huge build system using Make or whatever.
Mozilla had a hand in making Rust, so I imagine if any browser can offer a more secure browsing experience, it would be Firefox, by implementing media decoders in Rust.
Almost all people don't want to or aren't capable of implementing image codecs, the safer languages aren't fast enough to do it in, and the people who are capable of it don't want to learn them.
Of course as the blog post says, just because memory safety bugs are overcome doesn't mean vulnerabilities have stopped; people find other kinds of vulnerability now.
Definitely, but GP was specifically using this as an argument for Google not supporting a codec in Chrome. If anybody can spare the effort to do it safely, it’s them.
I don't buy that being able to manually copy data into a memory buffer is critical for performance when implementing image codecs. Nor do I accept that, even if we do want to manually copy data into memory, a bounds check at runtime would degrade performance to a noticeable extent.
"Manually copy data into a memory buffer" is pretty vague… try "writing a DSP function that does qpel motion compensation without having to calculate and bounds check each source memory access from the start of the image because you're on x86-32 and you only have like six GPRs".
Though that one's for video; images are simpler but you also have to deploy the code to a lot more platforms.
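That said, a safe language doesn't have to pay a check per access either. A common pattern (a sketch of mine, not actual ffmpeg or Apple code) is to validate each row once up front, after which the hot inner loop carries no further checks:

    // Sum a w-by-h block at (x, y) in a frame with the given row stride.
    // One checked slice per row replaces a bounds check on every pixel.
    fn block_sum(frame: &[u8], stride: usize,
                 x: usize, y: usize, w: usize, h: usize) -> Option<u64> {
        let mut sum = 0u64;
        for row in 0..h {
            let start = y.checked_add(row)?
                .checked_mul(stride)?
                .checked_add(x)?;
            let line = frame.get(start..start.checked_add(w)?)?;
            // The iterator walks a slice already proven in-bounds,
            // so no per-pixel checks remain in the inner loop.
            sum += line.iter().map(|&p| p as u64).sum::<u64>();
        }
        Some(sum)
    }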
I don't dispute that these optimizations may have been necessary on older hardware, but I think the current generation of Apple CPUs should have plenty of power to not need these micro optimizations (and the hardware video decoder would take care of this anyway).
> Why would an iPhone be running x86-specific code?
The same codebase has to support that (since there's Intel Macs and Intel iOS Simulator), and in this case Apple didn't write the decoder (it's Google's libwebp). I was thinking of an example from ffmpeg in that case.
> and the hardware video decoder would take care of this anyway
…actually, considering that a hardware decoder has to do all the same memory accesses and is written in a combination of C and Verilog, I'm not at all sure it's more secure.
I'd guess it's a combination of labor required to rewrite them and that you'd more or less have to use a safe systems language in order to not have a performance regression
Often it's infeasible to justify rewriting a lot of existing code, but my point is that these days this concern shouldn't really be an obstacle to integrating a new codec.
It should certainly lower the bar of adopting a new codec if the implementation is in a memory-safe language.
Even so, it is more code, and somewhat more risk. Lack of safety elsewhere might end up using code that is otherwise safe in order to build an exploit (by sending it something invalid that breaks an invariant, or building gadgets out of it, etc.).
Adding something in Rust into a browser means you now need to bundle all of the needed crates, and that your browser now also needs rustc to build… at a minimum.
You also potentially need to audit all the crates and keep them up to date and so on… without crates you can't do so much.
I can see that for components heavily interfacing with high surface area things like encryption, hardware interfacing etc., but why would that be true for a relatively “pure” computational problem like an image codec? Bytes in, bytes out.
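Something like this is the entire surface such a codec needs (a sketch; all names made up). Plain Rust doesn't enforce that the body stays pure (any code can still call std::fs), so this is an API-design point; enforcement is what the process sandbox is for.

    // "Bytes in, bytes out": the codec's whole interface. It is never
    // handed a file descriptor, socket, or callback.
    pub struct Image {
        pub width: u32,
        pub height: u32,
        pub rgba: Vec<u8>,
    }

    pub enum DecodeError {
        Truncated,
        Malformed,
    }

    pub fn decode(input: &[u8]) -> Result<Image, DecodeError> {
        // ...parsing would go here; stubbed for the sketch.
        Err(DecodeError::Malformed)
    }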
Again a buffer overflow in image decoding. You would think Apple might just #threatmodel and #fuzz that to death... but you would be wrong. A $2.7T market cap company can't do this...
They do, but some of these bugs are beyond what fuzzing can do. We don’t know that this is a buffer overflow or how complex the exploit chain was - the one linked above was anything but something you’d get by fuzzing.
I agree it is disappointing that this stuff isn’t all Rust or Swift yet but that’s in process. Of particular interest, did you notice how the new Lockdown mode is apparently a countermeasure? I would not be surprised to see some of those motivations expand into the base OS as they have time to improve.
Can't you trigger this by fuzzing? Sure, the JBIG2 VM won't be found that way, but some random fuzzing should easily trigger out-of-bounds reads or writes.
Lockdown mode alters the iMessage user flow to such an extent that I don't see Apple enabling it by default. I don't think Lockdown prevents the RCE exploit, but I do think it simply blocks iMessage interactions from unknown numbers, so that the exploit can't even load.
The older one? Probably but I think the way it combined multiple overflows would have required a fairly advanced fuzzer, especially to look exploitable. The main point I had was that while fuzzing would have found interesting ways to crash ImageIO with PDFs, most people wouldn’t have expected that to be reachable without a click from iMessage. The relevant teams could have been rewriting everything they care about in Rust and this still would have happened because it was an obsolete usage of a format they don’t even use but which could be pulled in by the old GIF preview path.
I agree that most Lockdown mode features won’t be pulled in but looking at that list, note how many stop a NSO zero-click by adding a “have you ever interacted with this person?” filter to iMessage, FaceTime, HomeKit, etc. That makes me wonder whether a more polished UI might be acceptable to normal users where new numbers are basically text-only with warnings.
While not discounting the need to increase investment in this area, I will mention that there are very few things that can be solved by #buzzwords and #hashtags.
Apple has a long history of investing in all kinds of mitigations and security devices that make the App Store model secure and an equally long history of procrastinating on what is again and again and again causing their customers to be exploited.
A while ago I was surprised to learn that MS Internet Explorer had a team of about 10 developers (I expected more) when MS already had more than 50,000 employees total. Now, knowing a bit more about how the sausage is made, I would not be surprised to learn that this particular image decoder was maintained at Apple by a couple of developers. To some extent this can be seen in corporations too: https://xkcd.com/2347/
If that. This weekend I ran into a TIFF decoding issue (a Canon scanner produces TIFFs with embedded JPEG compression with different parameters than the outer TIFF container). This is an issue with libtiff and affects any Mac or iOS app using CoreGraphics, anything using ImageMagick, etc. GIMP, Nikon NX Viewer, and others with their own TIFF implementations are unaffected.
I doubt anyone at Apple cares. If a CVE is filed for libtiff, they’ll rebase, but I doubt they are actively fuzzing it or even have regression tests for it.
Coverage-guided fuzzing is extremely powerful and has proven to be very effective at finding oodles of vulns. But it is not perfect. You'll fail to drive the code to a bug or run into limitations of the sanitizers to actually detect a vuln.
You can stand up fuzz targets at all of the relevant endpoints and throw tons of compute at it and still fail to find lots of things. The problem is unsafe languages. Apple is taking steps to get things moved to swift, but it is slow going.
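For a sense of how cheap standing one up is, a coverage-guided target is only a few lines with cargo-fuzz (my_codec::decode_rgba is a placeholder for whatever entry point you care about):

    // fuzz/fuzz_targets/decode.rs -- run with: cargo fuzz run decode
    #![no_main]
    use libfuzzer_sys::fuzz_target;

    fuzz_target!(|data: &[u8]| {
        // libFuzzer mutates `data` guided by coverage; panics, hangs,
        // and sanitizer-detected OOB accesses are reported with a
        // reproducing input file.
        let _ = my_codec::decode_rgba(data);
    });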
There was no fuzzing for this exploit lmao, they developed a rudimentary assembly language inside the hacked PDF decoder by meticulously choosing the exact 70,000 pixel maps that overwrote the write pointers. And that's after they got the overflow exploit giving them control of the decoder/emulator.
Sure, but “internal bounties have fundamental problems of incentives at any scale” is a different problem than “Apple can't afford internal bounties on an adequate scale to compete with nation-state attackers”.
Unit 8200 is the single largest Israeli military unit; their entire tech industry is filled with 82xx, 81xx and 99xx alumni.
This is what happens when you have universal conscription and the intelligence corps get their pick of the brightest conscripts.
It still doesn't make them a state actor any more than the dozen or so European malware vendors, or the probably far more numerous US ones, and that is before looking into the defense sector proper.
> NSO Group is a subsidiary of the Q Cyber Technologies group of companies.[7] Q Cyber Technologies is the name the NSO Group uses in Israel, but the company goes by OSY Technologies in Luxembourg, and in North America, a subsidiary formerly known as Westbridge. It has operated through various other companies around the world.[18]
Because sandboxing on iOS is terrible. Not that any of the other commercial vendors are any better.
If they could provide good sandboxes, do you think the highest security certifications advertised on their website [1][2] would only certify protection against attackers with "basic attack potential", the lowest possible level? Three whole levels below "moderate attack potential". I mean, seriously, they certify that their security sucks on their own website; is it any wonder their security sucks?
No. From a security perspective a Common Criteria certification to the lowest possible level does not establish meaningful security. That is kind of the point.
The companies that develop easily hacked systems that are repeatedly hacked hundreds of times a year like Apple, Microsoft, Cisco, Amazon, Google, etc. can only achieve certification levels indicating they are easily hacked. They have never once succeeded at certifying meaningful security. The certification is pinpoint accurate, just the trillion dollar commercial IT companies do not like the results.
I agree it is largely not a useful differentiator, but that is because all of the commercial IT vendors are certified incompetent. The Common Criteria will not help you determine which fish in the barrel is hardest to shoot. Its job is to distinguish serious security by professionals.
It takes a while. At Google at least, new systems in android are required to be built in rust and there are major efforts to rewrite significant systems. But it takes time and rewrites are dangerous in other ways. And you need all the tooling to handle everything else an engineer does beyond simply writing code.
From where I sit, it also feels like the industry has really only coalesced around "the only real solution is safer languages" in the last 2-3 years. "Rewrite it in swift/rust" was way more controversial in 2019. So hopefully we'll see significant progress in the next several years.
How long do you think it is reasonable to go from "we are now in agreement that rewriting stuff is the right call" to "all media processing code is written in a memory safe language"?
1) Because that takes work
2) Because that makes things a bit slower, so it’s a stand-off between Apple and Google because neither of them wants to be the “laggy” phone
Image decoding can be ported to Rust; however, most video/image decoding software is rarely ported (for performance reasons and whatnot) - it's used as a library instead.
Java would have similar issues as well. It'd be using compiled C code as an external library in cases like these.
Interestingly, no kernel vulnerability or anything is mentioned.
As far as I know, any parsing code for iMessages should run within the BlastDoor sandbox – is there another vulnerability in the chain that is not reported here?
One CVE is in Wallet and Citizen Lab mention PassKit. My guess is that BlastDoor deserializes the PassKit payload successfully, then sends it to PassKit which subsequently decodes a malicious image outside of BlastDoor.
It may be the case that either the kernel vulnerability hasn't been analyzed or fixed yet, or that they were not able to capture it. Many of these exploits have multiple stages and grabbing the later ones is difficult.
Curious why no fix is out for iOS 15 yet. Is iOS 15 not vulnerable to this attack? Or is there often a delay in backporting security fixes that I'm not aware of? And if so, should I be implementing a workaround if I wanted to protect against these exploits?
They've been supporting iOS 15 with security updates for devices that can't update to iOS 16. Not sure how long they'll continue doing it but I imagine it'll be for a good while.
Please don't suggest that people should be murdered with drone-launched missiles for making software. Making software is a peaceful act, regardless of what purpose that software serves.
That is not a reasonable position; "peacefully" writing software that you know is to be used to murder and silence other people makes you just as complicit in those crimes (definitely an accessory). No better than a getaway driver.
Software is speech (ie protected expression), like writing books.
A getaway driver is involved after the crime is committed. To assign the same level of culpability to a tool-maker would imply that they have the ability to predict the future.
A better example would be a car or firearm manufacturer. How the tool is used is up to the user.
Software doesn't root people's phones, cops and spies do. Someone sent that iMessage, and it wasn't the author of the software.
Speech can be murderous, and more importantly, it can be punishable. OK, what about if the getaway driver drove him there as well, explicitly expressed knowledge of the crime that was to occur, and refused to abandon the effort upon receipt of such knowledge?
Given where the group is headquartered, I doubt it. That unacknowledged aerial platform will remain firmly planted on the ground until the next defenseless target is chosen.
There needs to be a more fine-tuned lockdown mode, for example to disable automations and risks in iMessage and Safari but leave device accessories working. Losing Bluetooth accessories to protect yourself from zero-click iMessage exploits is just bad.
iMessage is the major wide-open attack surface.
It's super interesting to me how much it's emphasized that you shouldn't use this (Lockdown Mode) unless you are a journalist or otherwise in direct danger. They really do try to talk you out of it. It's curious, because there's very little difference in functionality other than disabling a lot of Apple nonsense from running in the background, expanding your attack surface.
Apple doesn't want its users to have an inferior experience. The difference might be subtle, so it makes sense that they want to keep fully uninformed users from enabling it for no real reason. Currently it's not clear that anyone who's not at risk of state-sponsored monitoring should bother enabling it. Thus it's not clear why they should message it any other way.
I don't need to be able to accept iMessage messages from random numbers. I'd be happy to enable "Prevent messages from unknown numbers" for example. Is this possible?
Yeah, it's starting to get weird that they refuse to implement this. It's almost like certain stakeholders need to be able to randomly text you with malware and refuse the notion of being silenced.
I don't want the huge number of inconveniences of lockdown mode. I just want to have apple block messages from unknown senders to iMessage server-side, before they forward any untrusted data to any of my devices.
Lockdown mode also doesn't add server-side filtering for unknown iMessage senders, so that doesn't seem as good for this purpose.
Here are several comments regarding rewriting everything in safer languages like Rust, among others. However, before such a transition can take place, I believe it's more realistic to achieve another important goal: enabling robust logging capabilities for iOS, akin to the Endpoint Security framework on macOS or system event logging on Windows. With such tooling, enterprises could integrate mobile endpoints into their SIEM systems, making it easier to detect attacks of this nature.
I've personally utilized the mvt-ios tool to investigate iPhone backups. Within these backups there is a SQLite file that mvt-ios scans for potentially malicious process names. (I've examined all publicly available STIX2 IOCs, and having tooling that simply reports the names of processes from the mobile phone to a central SIEM would be adequate for identifying these attacks.) Unfortunately, this method cannot be used in real time across all devices. To employ it, one must first create a complete backup of the phone and then scrutinize that backup. If we had a tool similar to the Endpoint Security framework available for mobile devices, we could activate enterprise-level security monitoring systems and potentially establish secure communications in the current era, rather than waiting for everything to be rewritten in Rust (a bit of irony).
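(If you want to poke at this yourself: the database in question is DataUsage.sqlite from the backup, and the check is essentially SELECT ZPROCNAME FROM ZPROCESS, with the resulting process names compared against the published IOC lists. Table and column names are from memory, so verify them against the mvt-ios source.)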
I appreciate that a solution is for people to update immediately. It really makes me wonder if my Android phones over the years have had 1-days exploited, given the sheer incompetence of the ecosystem at updating phones.
Not much confidence when you get an update with security patches from 2-3 months ago.
> they aren't just flying around hitting random devices.
For the moment, but only until other wankers reverse engineer the security flaws based on the updated 16.6.1 firmware from Apple. After that you too are vulnerable if you haven't updated.
I understand that, and I'm partially assessing that in the context of any high targets who might be using the latest Android flagship, that frankly still suffers from the same problem as all Android phones.
Unless proven otherwise by leaked testimonials, I would not fully trust GrapheneOS to be fully safe either. Maybe they have zero-days as well and we just haven't discovered them because of obscurity, while NSO bought them and uses them.
My dad used to say "Known devil is better than unknown angel."
Naive question, does apple have any way of detecting and informing users who are current victims of these types of exploits when security fixes are issued?
> “every machine is compromised and I should never trust anything ever”
This is where I am already at.
However, you can't totally live like this in 2023. You need to take some risks.
I just can't believe how bad iPhone security STILL is. Capitalism prioritizes profits over all; it seems Apple has no problem cutting corners on security while spending on marketing the word "SECURITY" with black text and a white background.
Every app is insecure; they just chose iMessage because every iPhone has it and you can send the payload with just a phone number, making it slightly easier to exploit.
For someone to send me a message on signal, they have to either social engineer me into adding a number I don't know, or they have to steal a device from one of my existing contacts, get it unlocked, and send from it.
There is no way for them to go from knowing my phone number to me receiving and processing an untrusted image without them first somehow becoming a contact.
iMessage has no option to require friending first, before receiving unsolicited messages.
That doesn't seem like a "slightly easier" thing, but a rather significant difference.
> The exploit involved PassKit attachments containing malicious images sent from an attacker iMessage account to the victim.
Man, iMessage is a security disaster for Apple. No matter how much work they do in other areas, it seems like they'll be paying for a while for their decisions around the iMessage architecture.
Some of the problems with iMessage have to do with the fact that it's integrated with the system SMS app. It seems that there are a large number of legacy requirements in the GSM spec that require the Messages app to be privileged in some way, especially with regards to automatic processing of data received. There have been plenty of iMessage or Messages related vulnerabilities.
I do wish there was a way to turn off automatic downloading of attachments like images etc. from at least non contacts. I think many/most other chat applications, by default, sanitize and/or format images and media on upload on the server by transcoding image data to prevent things like this and save bandwidth (e.g. Facebook/Meta appears to recompress images server side). However, there are obviously security and privacy considerations to doing this, and the client is not exactly something you want to trust to do this, so I can understand why they would be reluctant to implement something like this.
Perhaps this is one potential use of Treacherous (trusted) computing/remote attestation - the client runs a remotely attested signed binary code that will read an image file, encode the pixels as a jpg, and output a signed output that will only be accepted by the server if untampered.
Obviously, there would be issues with that approach as well, but it could potentially prevent the use of the iMessage network to send "crafted" media files.
Apple have a service which attempts to do this, BlastDoor. The issue here is feature surface area unrelated to GSM. My guess from the CVEs is that this exploit revolves around sending a valid Wallet/PassKit item attachment which has a malicious image. The payload is safely _deserialized_ by BlastDoor itself, but is then passed off to the PassKit framework which happily detonates it.
IMO Apple should make a middle ground Lockdown mode - something that still allows attachments (which Lockdown mode doesn't, making it difficult for many users to employ), but forces them to be 1-click. This is something I would use personally and would at least protect me from getting 0-clicked by attacks like this; I'd never click a Wallet item from an unknown sender, but I also can't live with the restrictions in Lockdown mode.
It pisses me off, man. If someone sends you a link on iOS, you can't copy it without doing a long press that loads all the spyware on the website in a preview window.
Apple's architectural fix here is to move file parsing and other risky operations into proper sandbox harnesses, and to tighten these harnesses year after year.
This is the path that WebKit has followed, and the sandbox for the WebKit JIT is incredibly hard to break through these days.
One can't even mark all messages as read in iMessage, which seems to me like the most basic functionality. Something is really messed up in how this thing has to run if you can't do that.
They're slowly rewriting the whole thing in Swift, which should eventually eliminate most of the non-architectural attack vectors. Most of them were mitigated in iOS 14, where they made some rather large architectural changes.
When I look at an initiative like BlastDoor, I'm struck by how unlikely it is that every other messaging app makes a similar investment on every platform. Does WhatsApp have a similar architecture? Has the Gmail app rewritten all its image parsers in a similar manner? Has Tinder? And sure, if you compromise WhatsApp you only get access to its internal memory and may not be able to escalate to other apps or OS storage - but compromising someone's WhatsApp messages isn't any less serious, and there's history of escalation being possible: https://techcrunch.com/2019/05/13/whatsapp-exploit-let-attac...
It's a miracle that these kinds of zero-click zero-days don't get announced every single week. Though maybe they do, and we just don't know about them...
At least they’re trying? Meanwhile Google has spent 2 decades refusing to release a messenger that encrypts by default because they think they should be able to mine all your personal conversations.
I take that back: they announced encrypted messaging, then never released it, then probably fired the engineer who said it'd be a feature in Allo (or whatever their last attempt was).
You are confusing security (no exploits) with privacy (encryption). The iMessage system is really private (no third party not even Apple can read your messages) but traditionally full of security holes (messages once decrypted can harm the rest of your device).
> no third party not even Apple can read your messages
This is not something that can be stated as fact unless 3rd party clients for a service exist. Apple can, with complete honesty, claim that messages are encrypted at rest/in transit all they want, but since they publish the only implementation of the client, they can modify it at any time to expose the messages to them, in any number of ways.
I'm not confusing anything. The entire point of the exploits in question are to BREAK the privacy provided by messenger. Google doesn't provide any in the first place, and actively mines your data. Who needs an exploit when it's never encrypted in the first place?
To further this: you realize NSO isn't selling these exploits to Russian kiddies to steal your bank info, right?
These exploits are used by people like the Saudi government to uncover a Jeff Bezos affair. They're after politicians and power brokers, accessing otherwise secure communications for the purpose of stealing state secrets or blackmail.
It absolutely does, you're exactly one subpoena away from that happening. Then you're at the mercy of Google deciding whether they care more about you or their balance sheet.
No that's not true.
Google just fails miserably at anything social, but almost every chat attempt from them was encrypted, and now they are pushing RCS, which is also E2EE.
Google started pushing for carriers to use RCS in 2015, and launched its own app for it in 2019 after that failed to move quickly enough. They didn't start adding E2EE to it until 2020, and it wasn't the default until 2021.
Though it can fall back to SMS if you don't have data, which isn't E2EE. I'm not sure what the UX flow is like in that case, i.e. whether it warns you and asks for permission to send over the less secure channel.
I was under the impression that this is a 'proprietary' extension between Google devices, and that there was no RCS-standard-based E2EE:
> The RCS specification defines several types of messages. Our implementation of E2EE uses varying strategies for encrypting each type of message to maximize user privacy while still adhering to the RCS specification.
How is that related? Encryption doesn’t really protect against malicious payloads being sent. Quite the opposite, actually, as they can’t be scanned / stripped on the server.
iMessage or Android Messages? iMessage is E2E by default, unless one or more parties own multiple Apple devices, in which case Apple stores an encryption key in iCloud and maintains E2EE connections with every connected Apple device. This changes if you turn on Advanced Data Protection: then iCloud no longer has the ability to decrypt messages. Somewhat unrelated, but ADP is off by default as most customers do not want or need this.
Android Messages, the competitor to iMessage. The parent claimed that Google resists encrypting anything so they can mine your data. I was merely trying to ask for accuracy. I don't know the technical aspects of every Google messaging app but as the other responses in the thread confirm, Messages is end to end encrypted by default for non-SMS messages.
I just checked and Google does have a Messages app, different from the Messages app on my phone, probably by Samsung, which deals with SMSes. According to Play it has 1B+ downloads so it's probably preinstalled.
Anyway, the competitor of iMessage and Messages is WhatsApp. Nobody is sending me SMSes except banks, so even iPhone users use WhatsApp to send messages to friends and to groups in my country. If somebody insisted on using only iMessage, they would be out of the loop. And about Messages: well, I don't have that app, I receive no SMSes, and I still communicate with everybody, so I guess that nobody uses Messages either.
iMessage is E2E even without ADP, even with groups and multiple devices. The details are complex, but they are publicly documented here[1]:
The issue (I think) you are referring to is that if you enable iCloud backup[2] or iCloud for Messages[3] (both of which effectively move the storage of the messages to the cloud, either as part of the device backup or as the canonical representation that devices sync from, respectively) then the messages decoded on device will be stored in blobs that iCloud has the keys to, unless you enable Advanced Data Protection.
That would be probably 100% of the iOS users that I know, including my entire family. Everyone's got an iPhone, iPad, Apple Watch, Macbook etc. It's such a nice ecosystem, so it's hard not to get hooked.
iMessage is overall a lot more complicated and integrated than I'd like it to be. Want to switch accounts? You have to log your entire user or device out of iCloud. Logging back in will often create issues. Using old Mac/iPhone OS versions creates issues. Messages and attachments are received separately by different devices, and weird things happen when one device is out of space. Deleting messages or blocking senders is per-device. Different devices might miss some messages or even get them out of order. A device waking from sleep often gets messages delayed by minutes (like email), and you're notified a second time for them. And lastly, the "effective. Power لُلُصّبُلُلصّبُررً ॣ ॣh ॣ ॣ 冗" vulnerabilities.
Compare that to Facebook Messenger, where there's one consistent state on the master server and a neatly sandboxed web or iPhone app calling it. And yeah, I know these are partially by-products of E2EE vs traditional messaging, and nobody else has really done E2EE messaging with multiple devices.
The other problem with logging out and back in is that it resets all of iCloud's insane defaults, which you have to manually comb over to fix. It's 1000% dark-pattern and default nonsense.
In addition to Lockdown Mode, pair it with a VPN and security researcher Jeff Johnson's "StopTheMadness" and "StopTheScript". Both are paid Safari plugins for iOS. StopTheScript is the best way to stop inline JavaScript on iOS; the built-in option to disable JS on iPhone can't do that.
It’s interesting that the latest Safari TP (r178) somehow crashes on macOS Ventura, with or without the patch, when reading HN comments on this specific article.
Is there a honeypot in a comment on this page? /paranoid
I'm confused why they waited to patch this vulnerability until it was found in the wild. Or am I misunderstanding? Is this not the same NSO zero click exploit from like a year ago?
We're all very lucky that CitizenLab exists as they are often the first discovery point of numerous similar exploits. They proactively scan the phones of internationally sensitive people and publish their findings. I'm not aware of any other public service that has had this much success exposing mobile device attacks. Attacks which have completely and utterly compromised the entire device that someone keeps with them all day every day.
I tip my hat to CitizenLab and the good work they do.
Back in the dark ages, a "zero day exploit" was a piece of malware which would lie in wait, doing nothing, counting down the days until it hit day zero, and then it would trigger and do naughty things. Some folks also referred to this as a time bomb, but that was a less `|33+ term. We used to see a lot of these available on sites such as asta... never mind.
Fast forward to the era of "cyber" being hugely popular, and the massive flood of people doing short Kali or "Ethical Hacking" courses and getting into IT security jobs, and I see various formal IT security publications describing a zero day exploit as "ANYTHING which is known and not yet patched". To me, there is absolutely nothing about that description which relates to the "zero" or the "day" or the "zero day". I suspect this new terminology is the result of that influx of people with no background in either computer science or hacking, latching on to a cool sounding term and misunderstanding it completely.
What is your take on this? Do you go with the ye olde terminology, or the currently accepted terminology in fancy publications? Do you believe the meaning changed, and if so, when and how and why?
The more important designation is whether a hack is "zero-click", meaning it requires no user interaction. If that's the case, it cannot truly be defended against: it is purely automatic, and if your phone is on and has the conditions necessary for it to take root, it will happen without fail.
Good to see NSO losing a valuable exploit chain. If this becomes common enough they’ll think twice about (enabling) targeting legitimate civil society organizations, for purely economic reasons: the risk of detection and reporting of vulnerabilities is much higher than when targeting terrorists and criminals.
Does anyone know to what extent this compromises the device? I might have missed it, but I didn’t see it explained in the article. Does the attacker get full access to the device, or do they only compromise a subset of the device’s functionality?
They say at the top that the exploit can install Pegasus, so probably some or all of Pegasus's functionality. That doesn't really narrow it down, but it likely can constantly run in the background, use the sensors, read texts, and send info over the internet at least.
Were you not concerned with unethical and potentially fatal outcomes of your work?
I’m trying to phrase this in a way that doesn’t come off as standoffish - in some way you clearly _weren’t_, since you did the work. But I’m wondering whether this ever entered the picture for you, and how you dealt with that.
For some reason my original comment got flagged. It went:
I was concerned. Just as much as an average Facebook employee when it turned out someone built a psyop weapon on top of their data to manipulate elections’ outcomes.
The number of degrees of separation between an average Facebook engineer's work and "direct harm to a human being" seems like it'd be orders of magnitude higher than when working on exploits for companies with a client list like NSO's.
But the number of affected people and the scale of impact were also on another order of magnitude.
Moreover, that tool depended on private companies that operated without any oversight. It’s a very different situation for exploits (although I agree that they often end up in the wrong hands)
NSO is probably one of the worst offenders when it comes to screening their clients. This raises ethical issues. It was a factor for me and many of my former colleagues.
However, for every abusive operation that gets exposed, there are many legitimate ones conducted by democratic governments. I think we are far from a mass-surveillance scenario, and those exploits are not as widely available as the media might portray.
People who work there are amongst the most skilled hackers in the world.
Security is very hard, and that’s why even solid engineers fail a lot when tackling it. Especially because security is a cost center for vendors, while it’s a profit center for companies like NSO. So they invest relatively more resources.
That said, Apple did an amazing job to improve security in the past years.
I just received a random image of a champagne bottle via iMessage from an unknown number. Any way to tell if this is the attempted exploit? I had patched my phone prior to receiving the image.
You can turn on lockdown mode on your iPhone and then specifically exclude certain apps and websites from being impacted by it -- this seems like a reasonable middle ground for most people.
The price of an Android zero-click is now 500k more than the price of an iOS zero-click at Zerodium. It would appear they have a significant stock of iOS zero-clicks and a lesser amount for Android:
If you flick through the fixes for Android CVEs, you'll notice that there are only a few remote code execution vulns and they're all in C code. The rest are bugs in the Java side but they're all logic bugs and yield exploits like local privilege escalation, or they're privacy issues.
So the Android strategy of using Java a lot definitely seems to have wiped out a lot of memory corruption and RCE bugs. The remainder are a mixed bag and it's hard to imagine any sort of systematic mitigation or fix.
You can't really compare Android and iOS by CVE because iOS isn't open source or distributed to vendors, so Apple fix a lot of security issues without a public CVE ever being created.
Given that all the major tech companies aggressively fuzz everything, maybe, just maybe, you're missing the additional possibility: fuzzing is still random, and extensive fuzzing does not mean you will encounter the same code paths as anyone else.
You need to understand "do fuzzing" is not a magic trick to find all bugs in software.
Similarly: definitionally you will only ever see the bugs that are not found prior to shipping - any bugs that are found prior to software shipping will have been fixed.
Fuzzing is not a magic trick, in the same way as invariants are not, and unit tests are not, and debugging is not.
All these techniques have degrees of mastery, and if applied carefully, and in combination, can save you a lot of grief.
Dumb fuzzing will not get you anywhere, same as dumb unit testing, and dumb debugging.
In this case, iMessage is particularly well suited for some smart fuzzing because all the attack vectors seem to involve smallish malicious attachment files.
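The entry point for that kind of smart fuzzing is tiny, for what it's worth. A minimal libFuzzer harness looks something like this, where decode_attachment() is a hypothetical stand-in for whatever parses the attachment format:

    /* Build with: clang -g -fsanitize=fuzzer,address harness.c */
    #include <stddef.h>
    #include <stdint.h>

    /* Stub for illustration; a real harness would link the actual parser. */
    static void decode_attachment(const uint8_t *data, size_t len) {
        (void)data; (void)len;
    }

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        decode_attachment(data, size);  /* any crash or ASan report is a bug */
        return 0;
    }

The "smart" part is mostly outside the harness: seeding the corpus with real attachment files and teaching the fuzzer the container framing so it doesn't waste cycles on inputs the outer parser rejects.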
You're missing the point: it is possible for multiple distinct groups to all fuzz the same code and find different, non-overlapping bugs.
You are erroneously saying "one group of people found a bug that could be found by fuzzing therefore apple is not fuzzing".
LibJPEG is decades old at this point and is still getting around 10 CVEs a year, despite being one of the projects I believe Google constantly fuzzes.
zlib is getting a few a year despite being a vastly more constrained format than anything else imaginable, and again being a heavily fuzzed library.
If "do lots of fuzzing" caught every bug, then you'd get a big release that fixed all of them, and you'd never see any more.
> In this case, iMessage is particularly well suited for some smart fuzzing because all the attack vectors seem to involve smallish malicious attachment files.
I chose to include libjpeg above specifically to rebut this comment. That there are still CVEs coming in for libjpeg this year, despite years of fuzzing, should be sufficient to show that even small attachments aren't magically invulnerable due to fuzzing.
Fuzzing is a useful tool but pretending that some project or software is going to be secure because it's been fuzzed a lot is nonsense, and pretending that fuzzing will find all the bugs is complete fiction.
Even software written in memory-safe languages benefits from fuzzing. A memory-safe language simply means your code will not continue if doing so would result in a memory-safety violation; for most memory-safe languages that means at best an exception, but in most cases termination - that's what you get in Rust, Swift, or even functional languages like Haskell. And program termination can mean user data loss, or at least a bad user experience, so fuzzing is helpful even if bugs don't cause "security" issues.
Cynical reductionist me thinks Apple gets more ROI spending on marketing than in security.
They also spend a ton of engineering resources to prevent customers from using their products as general computing devices with the pretense of hardening security. It works to an extent and the tradeoffs are debatable, at least among tech-savvy folks in HN.
This was likely in a codebase that has been fuzzed extremely heavily. There are a lot of bugs that fuzzing cannot possibly reach. I'm guessing NSO Group has a lot of talented vulnerability researchers who do code auditing. Companies need to invest in hiring and training these individuals and paying them what they deserve. Throwing fuzzers at things and calling them secure is part of the problem.
What code auditing? Are you claiming NSO has access to iMessage and iOS source code?
NSO seems to be finding more and more bugs by poking at a black box alone, while Apple can't seem to fix them despite having the source code, all the fuzzing and verification tools, and much more $$$ at their disposal.
Sorry I thought it was obvious that I meant reverse engineering the closed source pieces of iMessage and auditing the open source bits. Source code just speeds up the process for vulnerability researchers, so Apple has a leg up in this regard.
"Are you claiming NSO has access to iMessage and iOS source code?"
The last NSO zero-click was in an open-source library reachable from iMessage. This vulnerability is likely no different considering it was in an image decoding library.
NSO group hires many talented security researchers who specialize in reverse engineering and auditing source code. It is hard for people not familiar with security research to understand but there are a lot of very talented code auditors out there who have honed the skill of picking up a new codebase, understanding it better than the developer who wrote it within months, and then finding bugs in it. There are teams of researchers at certain exploit shops who spend their lives focusing on understanding a single target.
Fuzzing is a great tool for finding bugs, but code auditing will always be the best way to find amazing bugs and novel attack surfaces. Researchers who can do both code auditing and fuzzing extremely well (like lokihardt@astr) are even rarer and extremely good because they can both find interesting pieces of code to fuzz through auditing and find amazing bugs while fuzzing.
Apple is and should continue hiring these talented researchers. The point I am making is that they should hire these security researchers even more aggressively and other tech companies should follow. Most of them work at exploit shops like NSO group because they pay a lot better than big tech. One security researcher and one security engineer to every five developers for these critical pieces of code should be the industry standard not 1 security engineer to every 100-1000 devs...
They also probably use simulation software like Corellium, which simulates iOS so eerily well that Apple wanted them shut down. If anything, iPhones would be far more secure if everyone could get eyes on their OS and be able to toy with it experimentally. I suspect they aren't the only actor against such radical transparency at the corporate and governmental levels.
You can audit binary code with tools like Ghidra and IDA Pro.
It takes a different mindset to find these type of bugs than it takes to develop software. I won't quite say they're orthogonal skill sets, but pretty close.
If the people finding these bugs don't want to work for Apple, Google Project Zero, etc. there's not really much Apple can do about it.
I wonder why Apple does not include a hypervisor in iOS, so that "risky" processes such as iMessage or Safari (maybe a Secure Safari version) could be executed in a separate virtual machine. The hardware (CPU + RAM) in the iPhones these days should be able to sustain it. Or would there be serious drawbacks to this?
I find it interesting that most comments here are blaming the victim (Apple’s iMessage and by transitivity its users) rather than the aggressor (NSO and its users).
How come NSO isn’t yet designated as a (cyber-)terrorist group worth hunting down and extinguishing?
Apple makes security claims in a world where these types of attacks are known about and expected. They are responsible for fulfilling their own claims.
If an air bag fails, you fault the manufacturer. They don’t escape responsibility by saying it’s the other driver’s fault.
I’ll also add that Apple is not the victim here, the targeted end users are.
This is a false comparison. Being hacked by NSO isn’t an accident. There’s an agent involved here with clear intent to harm and significant monetary motives.
If in a car accident we knew one party intentionally caused the crash (and were paid for it handsomely), we’d hold them responsible, regardless what claims car companies make regarding safety.
If your air bag fails to deploy after a crash, the manufacturer is responsible for the product defect. It doesn’t matter if the crash was an accident or someone intentionally and specifically crashing into you; the manufacturer is responsible for a defective air bag.
The manufacturer is not responsible for the crash, only the defective product. The other driver is not less responsible for a death or injury resulting from the crash.
Responsibility for the crash and its consequences rests on the driver at fault.
Responsibility for a defective air bag rests on the manufacturer.
They are two separate issues and not zero-sum.
If someone clips your airbag wires before the crash, that is not a product defect and the manufacturer is not liable, but that is not what happened here. There was no prior access or modification to the device or software. Apple claims to have a secure phone yet has a critical zero click vulnerability similar to an earlier vulnerability they previously fixed.
Pointing a finger at NSO could one day lead to some government’s action aimed at influencing another government’s actions toward a private organization. NSO doesn’t care if they’re unpopular online.
Highlighting Apple’s responsibility in this is how we incentivize better security in consumer products.
I don’t think companies should be responsible for every exploit all the time. Nobody is pointing fingers at ViaSat for being hacked by the Russians. But there have been repeated iMessage exploits that could have been prevented with easy-to-implement defaults or simple opt-in settings (do not implicitly trust unknown numbers), which have been asked for after each exploit and ignored.
I think the main distinction is that Apple claims to have a secure phone, but not an unhackable phone. A secure vault is hard to get into, but not impossible.
Should they have done something about this? I believe so, but they are not marketing themselves as secure against state actors. They have released Lockdown Mode, which may or may not have prevented this particular exploit.
It's important to keep the demographic of iPhone users in mind. The average user does not want to be inconvenienced by security measures irrelevant to them. And if a competitor (Android) is providing a better experience, then Apple, from a business point of view, has no choice but to make the most secure system it can while still providing the same UX.
All that said, I do believe that they should implement zero trust on first contact, as a default, with the option to enable explicit trust for every attachment. I just do not believe that this will be any major impact on these actors capabilities.
It doesn't make sense to talk about IT safety if you exclude intentional hacking. To take your example, we do hold car companies responsible for harm from collisions with other vehicles, regardless of which driver was at fault.
While NSO is, of course, not a good group, I think the larger problem is how prevalent these exploits are. If NSO didn't find them, someone else would. I don't consider NSO to be the big problem here.
Is this the standard we apply to other cyber-criminals? Or any crime for that matter?
Do we ask how vulnerable the victims were? And how we should make them less vulnerable and give a free pass to the aggressors?
I'm not saying anything about the vulnerability of the victims of these attacks. I'm saying it's absurd how the trillion dollar corporation fails to protect them. NSO should be stopped, sure. But don't kid yourself — someone will take their place immediately for as long as these vulnerabilities continue to exist.
It is a huge stretch to take a political concept that applies to vulnerable individual humans (victim blaming) and apply that to a dominant, highly profitable corporation. Despite the big lies you have been told by American politicians, corporations are not people in any sense at all.
When Russian or NK linked groups engage in cyberterrorism, there are plenty of calls to hold those countries accountable for giving the groups cover. Why is NSO treated differently?
Well, in security, any open door is as much a problem as the guy opening the door. Nobody's blaming anyone; we're all programmers and we all make mistakes.
Because it's an arms supplier to states, who use them on domestic targets. Selling tear gas rounds to tinpot dictatorships generally doesn't get you treated as a terrorist group either, just economic sanctions (which were already placed on the company by the US DoC, IIRC).
Here we go again... NSO Group has a long history of 0-click, 0-days against iMessage, and just a few months ago Kaspersky caught a different zero day iMessage exploit targeting their staff.
If Apple repeatedly fails at securing their devices from an attack vector that has been demonstrated over, and over, and over... no wonder China is banning government officials from using their devices.
To support your point, I'll quote a comment from another post on HN:
> And the whole reason for the hydrogen burning [by SLS' engines] was to keep the space shuttle contractors jobs. Once again it's not a technical reason but a pork barrel one.
All things are political and technology is a thing.
That doesn't mean you can't analyze if a specific technical decision was made primarily on technical grounds/merit or if it is mostly a political one without a technical basis.
Hard truth for people to hear, here. So many insist that all technology is neutral. It's really satisfying to watch the super "enlightened/rational/logical" crowd react emotionally though.
The two issues are intertwined, and from their perspective the real risk is likely relying on a US company during a time when US/China relations are quite tense. But Apple repeatedly having 0-click 0-day iMessage exploits being used in the wild certainly doesn't help. At a minimum it's a very good justification for them to move to a domestic solution like Huawei and HarmonyOS.
>I think the reasons are far more political than technical.
You may be right but sometimes the local optics track better when a political reason is given and the local authorities might also expect better compliance with political reasoning. Similarly, "We are getting pwned," never tracks well. There are solid reasons, long ago, why China stopped using Nortel.
Sometimes, even if they wanted to ban purely for security reasons, it would be blocked because it could trigger political repercussions or be seen as an offensive move by the opposing country.
iPhones can be provisioned with pretty extensive profiles/MDM so I doubt that part. They probably see all the zero-days and decide game respects game. Just not secure enough for such an incredibly insular power structure as theirs, even without the US connections.
I don't get why they wouldn't already insist on Huawei and HarmonyOS, unless this is more of an Inner Party issue where all the higher-ups use iPhones intentionally because they're more private than Huawei.
I wonder what phone Z has been using all this time...
Android is moving away from C and towards more secure languages. From one of their recent blog posts, the majority of code written for android is now in memory safe languages.
And for this problem specifically, Android can use WUFFS. WUFFS is a special-purpose language for Wrangling Untrusted File Formats Safely. Trading away generality (the ability to write a program which, say, sends email) gets them complete safety and excellent performance.
In Rust we can't trivially write a bounds miss by mistake. But in WUFFS we just can't write a bounds miss at all. Like, that's not a thing in WUFFS, it doesn't compile. You can write your own bounds checks and show WUFFS that works, or you can write code which clearly can't have a bounds miss with no checks, that works too. But you can't just "forget" or "screw up" those won't compile.
This would be frustrating in a general-purpose language. WUFFS doesn't have a "Hello, World" program because it lacks both strings and the idea of outputting to the screen. But WUFFS isn't a general-purpose language; it is, however, the correct way to write the code that takes image files from some dubious source and processes them.
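Since I can't write accurate WUFFS from memory, here is the idea transposed into C for contrast: C happily compiles the unchecked read, while WUFFS refuses to compile the indexing expression unless the bound is proven:

    #include <stdint.h>

    static uint32_t palette[16];

    /* idx comes straight from an untrusted file. */
    uint32_t color_unchecked(uint8_t idx) {
        return palette[idx];         /* out of bounds for idx >= 16; C compiles it anyway */
    }

    uint32_t color_checked(uint8_t idx) {
        if (idx >= 16) { return 0; } /* the proof WUFFS forces you to write */
        return palette[idx];         /* now provably in range */
    }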
When Apple announced they wanted to provide proper security for vulnerable iPhone users, this is what that would look like coming from a company which actually cared about security. What you got is what it looks like from a company which prioritised marketing.
> Not sure what the answer is for existing memory-unsafe code.
It's an engineering approach that involves writing "A buffer overflow issue was addressed with improved memory handling" an awful lot. Hopefully one day they will finish improving the memory handling!
Of course, it's not necessary to use a language where "Give control of your phone to remote attackers" is one of the possibilities at all.
General purpose languages, whether that's something like C or Rust or even Javascript are not the appropriate tool. Turing Completeness is a bad idea from a security point of view, not a wonderful feature.
On Android you can disable automatic link preview and downloading of MMS messages. You can also swap out the application that handles text messages entirely.
Also the application that handles text message is itself sandboxed and limited to a fixed set of permissions (to be fair, that include messaging other humans, so an exploit would still be very bad, just not "remote root" bad).
That's the part that is still unclear with this BLASTPASS business. Surely iOS isn't running the messaging app as a device root, right? There's some other presumably-unpatched privilege elevation attack going on?
The Pixel is decent, if using GrapheneOS that is. Not sure if any system is good by default. Apple fans think their defaults are somehow more private or secure, mostly due to marketing.
Decent? From a security perspective it's superior to the iPhone. As for Graphene, unless you've personally vetted the code I don't see how it can be trusted. And I won't even go into the drama that OS comes with.
Then what do you call the custom OS that ships on the Pixel? Of course it's a custom built OS that's designed to work with the custom hardware on the Pixel.
>One device series does not offer a glimpse of the market.
Security is defined by the marriage of software and hardware. The reason the Pixel is so secure is because of this. The OP made a blanket statement which did not apply to the Android ecosystem.
Lockdown mode means you're able to use less stuff. In this case the "pass" feature doesn't exist in Lockdown mode and that's the attack target AIUI. So, if most people don't use it (because less stuff works) then it can be "successful" statistically because maybe the attackers aren't targeting the stuff you're allowed to use and you don't get exploited.
But this isn't a stable solution really. If Lockdown is popularized, NSO obviously will attack the things which are available in Lockdown. Apple shows no sign of only allowing it in Lockdown once it's actually safe - instead they've just arbitrarily decided what to allow and what not to allow.
Not entirely true, because NSO (or their clients) have very specific target requirements. If they want to target another Jamal Khashoggi and Lockdown Mode is used, they have their incentive to buy and create the demand for any 0day that they can then re-sell to their lovely clients. No statistics needed for this; the incentive is purely greed.
The downsides of using Lockdown Mode are minimal and well worth it if you are an at-risk target. It's about reducing the attack surface and potentially getting notified if nation state actors try to attack your phone.
That's needlessly defeatist and "it's so bad out there there's no point in trying" is a thought terminating cliché.
We can and should improve development practices, mitigate vulnerabilities (ASLR, WAFs, etc.), isolate systems from each other, and model threats in a way that we know where and how to do those things. It's not easy, but just moaning "everything sucks" isn't how to fix it.
Apple, Google, etc. have whole teams of talented people dedicated to doing exactly these types of things, and they undoubtedly help prevent many vulnerabilities from escaping the labs. Yet vulnerabilities are still created and exploited despite their best efforts. As long as software is created by imperfect humans, it will reflect the imperfections of its creators.
It's not knowable for any, but it is knowable for some. You just have to build systems that are in the some and are inherently safe. Difficult, not impossible.
I agree, I just wanted to clarify that isn't mathematically impossible to make provably secure systems. It's just hard enough that it's not often done.
As a sibling commenter wrote: they do exactly that. But just as we fail to see all the cases prevented by vaccines, you only see the rare cases where their work fails.
That depends entirely on what the software needs to do.
For image decoding in particular, you can put the software into an exceptionally restrictive sandbox, or use a language that builds in the same restrictions.
No I/O. No system calls. Just churn internally and fill a preallocated section of memory with RGBA.
The broader system will still have weaknesses, but it won't have this kind, and this kind keeps happening.
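As a rough illustration of that "no syscalls, just fill a buffer" shape, here is a hedged sketch using Linux seccomp strict mode as a stand-in (iOS uses its own sandbox profiles, and decode_into() is a hypothetical pure-computation decoder):

    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <unistd.h>

    static uint8_t rgba[1024 * 1024 * 4];    /* preallocated output, no malloc later */

    static void decode_into(uint8_t *out, size_t cap) {
        for (size_t i = 0; i < cap; i++) out[i] = 0;  /* stub: pure computation only */
    }

    int main(void) {
        /* After this call the process may only read(), write(), and _exit();
           any other syscall kills it on the spot. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) _exit(1);
        decode_into(rgba, sizeof rgba);      /* exploited code can compute or die */
        write(1, rgba, sizeof rgba);         /* hand the pixels back over a pipe */
        _exit(0);                            /* _exit is on the strict-mode whitelist */
    }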
That's an awfully big preallocated array you have there. It would be pretty inefficient for that section of memory to be copied around, right? Let's map it into both processes. Also, image decoding is pretty hard, let's offload some of it to dedicated hardware. Of course, that hardware needs to have access to it mediated by the kernel. And the hardware needs to be able to access that shared memory, which was of course allocated correctly and the IOMMU setup was done correctly…
You see how even simple things are difficult to secure when they have to be implemented in practice?
Mapping pure RGBA across processes is safe, but also a single extra copy is not a big performance impact in the first place for an image decoder.
Configuring the IOMMU is one of the easiest parts of doing it in hardware. That's not going to make things "difficult to secure". And allocating the chunk of memory is trivial.
If you've seen an exploit caused by a big pre-allocated array of untrusted RGBA data, please explain how.
(If you mean they put evil data through it and then used a separate exploit to run it, that's not a vulnerability, that's just "data transfer exists".)
And you seeing someone screw up an IOMMU doesn't disqualify it from being one of the easiest parts of a hardware decoder.
Code to calculate size of preallocated array is incorrect. Size ends up too small or underflows.
Buffer is reused across calls. Buffer is actually mapped across processes and thus page-aligned. Code to check how much space is needed checks number of pages versus actual number of bytes, and fails to clear leftover data correctly.
Code receives RGBA buffer but expects some other encoding. Accidentally reads out of bounds as a result.
You can definitely say “oh these are stupid and I wouldn’t screw this up” but people do and that’s what really matters.
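For the first scenario, the classic form of the screw-up is a multiply that wraps. A hedged sketch in C:

    #include <stdint.h>
    #include <stdlib.h>

    /* Buggy: w * h * 4 is computed in 32 bits and wraps for large dimensions,
       e.g. w = h = 0x10000 yields size 0, so the buffer is far too small. */
    void *alloc_rgba_buggy(uint32_t w, uint32_t h) {
        uint32_t size = w * h * 4;
        return malloc(size);
    }

    /* Fixed: guard the multiplication before widening it to size_t. */
    void *alloc_rgba_fixed(uint32_t w, uint32_t h) {
        if (w == 0 || h == 0 || w > SIZE_MAX / 4 / h) return NULL;
        return malloc((size_t)w * h * 4);
    }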
> Code to calculate size of preallocated array is incorrect. Size ends up too small or underflows.
If you go outside the array you copied/mapped out of the sandbox, then that doesn't let the attacker code escape the sandbox, you just put some of your own data onto the screen.
If you mean the sandbox isn't given enough memory, then that will make the sandbox exit when it hits unmapped addresses.
And how did you screw up length x width x 4?
> Buffer is reused across calls. Buffer is actually mapped across processes and thus page-aligned. Code to check how much space is needed checks number of pages versus actual number of bytes, and fails to clear leftover data correctly.
The sandboxed process doesn't have any way to exfiltrate data. At most it can display it back to you, which is not really any worse than innocent code which could also send back the leftover data.
> Code receives RGBA buffer but expects some other encoding. Accidentally reads out of bounds as a result.
Reads out of bounds and does what with it? That doesn't sound like a vulnerability to me. It might display private data or crash, but that's entirely of its own volition. The behavior would be the same between innocent code in the sandbox and malicious code in the sandbox.
There’s a million ways to screw that up. People botch SCM merges. People are hungover. People are distracted. People are tired. People are heartbroken. People are going through divorces. People have parents dying. People forget numbers. People make copy-paste mistakes. All the time.
> The sandboxed process doesn't have any way to exfiltrate data.
You can abuse it to gain a (known-page-offset) write primitive in the other, non-sandboxed process to which the buffer is also mapped.
But if you do that every image looks wrong and it's vanishingly unlikely to get into a release.
> You can abuse it to gain a (known-page-offset) write primitive in the other, non-sandboxed process to which the buffer is also mapped.
There's no reason to have the memory mapped into both processes at once, and you can't exploit the bytes you write without a real vulnerability.
Since it's a one-shot write into the buffer, if your intent is using it as an exploit step then you might as well encode an actual image with your exploit-assisting bytes.
> But if you do that every image looks wrong and it's vanishingly unlikely to get into a release.
The code doesn't have to be wrong for every input. It may be wrong just for pathological cases that don't occur in the field unless specifically crafted.
> Since it's a one-shot write into the buffer, if your intent is using it as an exploit step then you might as well encode an actual image with your exploit-assisting bytes.
The assumption was that the code tries to clean up the buffer immediately after use.
> The code doesn't have to be wrong for every input. It may be wrong just for pathological cases that don't occur in the field unless specifically crafted.
I could argue this more but it doesn't matter, that was just a little tangent, getting the size wrong will not let anything out of the sandbox.
> The assumption was that the code tries to clean up the buffer immediately after use.
Cleaning up would be removing the mmap. How are you going to exploit that? Your scenario is not very clear.
I think you're going for a situation where the sandboxed process can write to data in the host process outside the buffer? In a general sense I can imagine ways for that scenario to occur, but I can't figure out how you could get there via mmapping a buffer badly. A buffer mmap won't overlap anything else. If the mmap is too small then either process could read past the end, but would only see its own data (or a page fault).
> Cleaning up would be removing the mmap. How are you going to exploit that? Your scenario is not very clear.
If a buffer is going to be reused across calls, then cleaning after use is not the same thing as unmapping. One example for cleaning up a buffer after use would be zeroing.
If there's a bug in the calculation for the amount of zeroing needed, then leftover attacker-controlled data can bleed back from the sandboxed into the unsandboxed process and survive beyond the current transaction (because the code failed to zero the buffer correctly after use).
In other words, the attacker can now write arbitrary data into the unsandboxed process's memory at a semi-known location (known page offset) inside the mapped buffer. That data may not be very useful on its own, because it's still confined to the mmapped buffer. But it's now relatively well protected from reuse (until the next decoding task arrives).
That's plenty of time to do shenanigans. For example, you can combine it with an (unrelated) stack buffer overflow that may exist in the unsandboxed process, harmless on its own but more powerful if combined with an attacker-controlled gadget in a known location.
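The cleanup bug in that scenario can be as small as rounding to whole pages. A hedged sketch in C (PAGE and the scrub functions are invented for illustration):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE 4096

    /* Buggy: "clears" whole pages, i.e. rounds DOWN, leaving up to PAGE-1
       attacker-written bytes alive at the tail between decoding jobs. */
    void scrub_buggy(uint8_t *buf, size_t used) {
        memset(buf, 0, (used / PAGE) * PAGE);
    }

    /* Fixed: clear exactly the bytes that were written. */
    void scrub_fixed(uint8_t *buf, size_t used) {
        memset(buf, 0, used);
    }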
It's hard to see why the buffer wouldn't be per image. There's no reason to reuse that.
> In other words, the attacker can now write arbitrary data into the unsandboxed process's memory at a semi-known location (known page offset) inside the mapped buffer.
But what is the arbitrary data going to be?
1. If it's gadgets with known lower bits, then you could put that into a plain-old image file, no decoder exploits needed. Also this requires the second dumb mistake of the coder going out of their way to mark the buffer as executable.
2. If it's data you want to exfiltrate, you could just gather that after you trigger your unrelated exploit. This is only useful if everything aligns to drop the private data you want in that specific section of memory, and then the buffer is reused, and then the private data is removed from everywhere else, and then you run an unrelated exploit to actually give you control. This is exceptionally niche.
> It's hard to see why the buffer wouldn't be per image. There's no reason to reuse that.
Premature optimization is a thing. Most software developers are prone to it in one way or another. They may just assume a performance gain, design accordingly and move on. They may be working under a deadline tight enough so they never even consider checking their assumptions.
Or maybe the developer has actually run the experiment and found that reusing the buffer does yield a few percent of extra performance.
> But what is the arbitrary data going to be?
An internal struct whose purpose is to control the behavior of some unrelated aspect in the unsandboxed process. The struct contains a couple of pointers and, if attacker-controlled, ends up giving them an arbitrary process memory read/write primitive.
It sounds like you picked option 1 then, which means you don't need to take control of the sandbox. "Create an image that put arbitrary bytes into the buffer that stores its decoded form." simplifies to just "Create an image." There is no vulnerability here. This is just image display happening in a normal way. It's something to keep an eye on but not important itself. You have to add a vulnerability to get a vulnerability.
The original problem of preventing image decoding exploits has been solved in this hypothetical.
> Your original request was: “If you've seen an exploit caused by a big pre-allocated array of untrusted RGBA data, please explain how.”
I asked that in a context of whether you can contain vulnerabilities in a sandbox. If something doesn't even require a vulnerability, then it doesn't fit.
Also please note the words "caused by". A few helper bytes sitting somewhere are not the cause.
> Which is exactly how exploit chains work.
> A single vulnerability usually doesn’t achieve something dangerous on its own. But remove it from the chain and you lose your exploit.
Being part of an exploit chain doesn't by itself make something qualify as a vulnerability. (Consider arbitrary gadgets already in the program. You can't remove all bytes.) And I've never seen "you can send it bytes" described as a vulnerability before. Not even if you know the bytes will be stored at the start of a page!
What exactly is "an exceptionally restrictive sandbox"?
There are virtual machines such as JVM, V8, or even QEMU. These are sandboxes, which run either some special bytecode or native code with extreme performance drawbacks. Media decoders are performance- and energy-sensitive pieces of software in the end.
And media decoders actually ARE sandboxes of sorts. They are designed to interpret media formats, sometimes even Turing-complete bytecode, in restrictive and isolated environments. And like any sandboxes, they too have bugs.
> And media decoders actually ARE sandboxes of sorts. They are designed to interpret media formats, sometimes even Turing-complete bytecode, in restrictive and isolated environments. And like any sandboxes, they too have bugs.
It's pretty easy to sandbox a simple bytecode, but that's not the bulk of what a media decoder is doing. A plain old decoder is mostly not sandboxing what it does.
And I mean this extremely rhetorically. The software launched along with the iPhone 4S. Going by geekbench, that CPU was 20-30 times weaker than a modern iPhone on a per-core basis.
I know the screens on the new phones are 5x bigger, but there is plenty of room for that sandbox.
A few years ago Kaspersky caught something that had to be removed by their rescue ISO. That ISO was blocked from downloading its updates, and their UK support team couldn’t resolve the issue, so I never got to find out what it was. But it doesn’t look good that there was a block preventing it from working.
The US could demand NSO on a platter if they really wanted to, but they don't. Israeli intelligence often aids the US alphabet boys and the NSA on various offensive operations and intelligence gathering, which is why NSO is allowed to keep doing business by both parties.
And the US barely has an inherent interest in Saudi Arabia. Israel is credited for helping the US "fight terrorism in the Middle East" or maintain those puppets, but really they're just helping us help them. When it comes to things that don't directly benefit Israel, they don't care. Israel has never even fought ISIS for example, the largest recent terrorist threat in the region. And they're allowed to maintain some neutrality in situations like Russia's attack on Ukraine. You might even call them neutral wrt ISIS since they aren't in this mostly symbolic list: https://www.state.gov/the-global-coalition-to-defeat-isis-pa...
For all the things Israel has done against the US's wishes (like West Bank settlement), the US has struggled to do just one thing Israel doesn't like: the Iran nuclear deal. But at least one of our major parties is somewhat willing to push it. I think they're also somewhat annoyed with our focus on Ukraine.
It’s truly shocking how misinformed you are about foreign policy.
Israel attempted to maintain some level of neutrality wrt Russia because when they show preferences, Russia punishes the local Jewish population… which they promptly did as soon as Israel showed any support for Ukraine.
Israel shares a ton of intel with the US regarding many of the local terrorist organizations in the ME. Not to mention they’re flying sorties into Syrian airspace almost nightly. (Infamously, Syrian AA shot down a Russian spy plane, killing 11, thinking they had finally caught an Israeli plane.)
And Israel’s absence from that symbolic list was likely a precondition to get many of those Arab and African nations on the list. Israel has ISIS locally so there’s no doubt they’re fighting isis.
Finally, wrt Jordan’s west bank: they lost it years ago and it’s so odd you keep calling it that… maybe use the actual name for the area and suddenly Israel’s policy will make sense.
> And Israel’s absence from that symbolic list was likely a precondition to get many of those Arab and African nations on the list. Israel has ISIS locally so there’s no doubt they’re fighting isis.
Israel has never fought in any battle involving ISIS or carried out any smaller strikes against them. Some others who aren't in the coalition, like Russia, have. https://en.wikipedia.org/wiki/List_of_wars_and_battles_invol... Their operations in Syria have been against regime forces, Hezbollah, and other targets unfriendly towards Israel, not ISIS.
Even if there's significant ISIS presence in Israel (which I've never heard of), it hasn't convinced them to help fight ISIS next door. Their stance is neutral, and they don't pretend otherwise.
> Israel attempted to maintain some level of neutrality wrt Russia bec when they show preferences, Russia punishes the local Jewish population… which they promptly did as soon as Israel showed any support for Ukraine.
I didn't say there was no reason, because obviously there would be, including trade. They have been allowed to stay mostly neutral.
> Finally wrt the Jordan’s west bank: they lost it years ago and it’s so odd you keep calling it that… maybe use the actual name for the area and suddenly Israel’s policy will make sense.
It's the common name for the region used in America, that's all. I truly don't know what other name you had in mind, but whatever you call it, the US has been unhappy with Israel's policy.
> Even if there's significant ISIS presence in Israel (which I've never heard of), it hasn't convinced them to help fight ISIS next door. Their stance is neutral, and they don't pretend otherwise.
This is simply false. They help a ton, and as I've said, they've directly attacked them both within Israel's 1919 borders and assisted with attacks elsewhere in the region.
> I didn't say there was no reason, because obviously there would be, including trade. They have been allowed to stay mostly neutral.
So then why bring it up? But again, trade had nothing to do with it; it's to protect the innocent people who cannot leave Russia and who will be discriminated against due to their religion (much like you are doing right now).
Are you an NSO psychopath or something?! Those funny propaganda jokes you're spewing are not working. The Israeli Hasbara lies are so bad and funny ("Oh, we won the land" instead of "we are scummy occupiers and land thieves", haha). This Israeli murder cult is sad and pathetic.
You're the one who has no argument, other than pure hatred. Any sane human being would be against criminal genocidal ideologies like Zionism and Nazism. Cheering the monstrous crimes of those political ideologies is nothing but vile hate. It is a murder cult, come with an argument or f off. It is not my job to educate you if you're brainwashed.
We bought a vassal state with highly competent people. They are generally independent in internal affairs and so NSO group would have nothing to do with the government.
But you're right, we don't need that one at all any longer. Someone once told me that Israel would align more closely with Russia if we did that, which is a laughable comeback. Like, okay? Have fun with that.
Retracting the various forms of funding and defense contractor slush funds would be a simple step.
You seem to think NSO is some rogue org within Israel. Couldn’t be further from the truth. It’s simply a plausible-deniability org that works for everyone, and counts every single US intelligence agency among its clients. The fact that you’re all upset about what they’re doing is exactly the point: you’re pointing your finger at NSO when it’s the NSA who’s paying them off.
So at what point does the world bring sanctions against Israel for allowing organizations like this to exist there? Everyone knows NSO is just a dubiously legal version of common APT groups, so how do they still exist after these years?
Game theory. They alert the big players of who is doing what, if they didn’t exist, China and Russia would get all this business and be at the cutting edge (and not share the most pertinent info).
This is actually not specific to Israel; there are similar groups in other countries. FinFisher was somewhere in Benelux, I think? Hacking Team was in Italy. These two were pretty much dismantled but are just regrouping under new names.
This is a problem of all nation states wanting this kind of service, not of Israel in particular.
Any politician who attempted something like that would immediately find themselves on the wrong side of the table with various three letter agencies. You’re incredibly naive if you think Israel or really anyone else is responsible for this. They’re simply doing the bidding of various first world intel orgs that want/need plausible deniability.
2) EU politicians don't know about Android updates because they don't understand how SW works and most probably have iOS anyway. The only time they hear about tech is when some Joe Schmoe insults them on Facebook so they send the courts after Facebook to doxx that person and get Facebook to moderate and ban "hate speech" on their platform. All in a day's work.
Granted, US politicians don't understand how tech works either, judging by the senate hearing of TikTok CEO ("Mr. CEO, can TikTok access my WiFi?"), but they have a powerful local tech industry to tell politicians what they should lobby for, while the EU doesn't.
It's to do with whether Android as a platform is patching older devices.
If it isn't, then it may be better for everyone that those devices are destroyed rather than allowed to be compromised, exposing security risks to the user.
That wasn’t a bad or mock-worthy question at all. TikTok’s app requests access to devices on the user’s local network. Why does it do that? Officially in order to connect to TVs/speakers, but what else could it decide to do with the access it’s granted?
I don't think the EU is that much concerned about E-waste. If they were, they would not waste their time with chargers and instead focus on appliances and cars.
These things are not mutually exclusive; IIRC they are doing appliances and cars too. It's just that phones are often highlighted, and batteries are usually the first thing that fails; they could be easily replaced in almost all cases if not for how electronics are built. Charger standardization was in the pipeline for about a decade, so the fact that it was only passed now shows it was not exactly a priority.
They didn't specify micro-B, just that the phone makers should settle on a port (which is partially why they didn't mandate it; the hope was that the industry would figure it out itself, but it didn't). Micro-B at the time was the most popular port for any handheld appliance; USB-C was only published in 2014.
I just checked for updates for my Apple Watch as well, and I see an update for it ("This update provides important security fixes and is recommended for all users" - watchOS 9.6.2). I remember attempting to update earlier in the week (Tuesday?), and I don't remember seeing anything that day. I don't know if the update is related or not, but I decided to share anyway.
I got a new phone last year because my old phone could not call emergency services. Even if it was still receiving updates, it's not clear that this would have been fixed, though. Google seems to think that local regulations prevent them from fixing this for users of certain of their phones on certain carriers.
I decided to upgrade instead of fixing a 3-year-old phone because even if I could fix the hardware, I still would not be able to safely enable Bluetooth due to an unpatched arbitrary code execution bug in Android. The phone was otherwise perfectly good for my needs. I could not install a custom ROM because my model was not well supported, and I could not rely on such ROMs passing SafetyNet/Play security checks, which the extremely vulnerable OEM ROM would reliably pass.
Maybe not throw away but give away. Just look at how many people buy a new phone every 2 years or even shorter period. Something must happen to their old phones.
People do that exact same thing with iPhones, so clearly Apple must suck too. /s
A lot of phone purchases these days fall into the fashion side of consumerism. Buyers don't actually care about X, Y, or Z new features. They just want to show off that they have the new thing.
>>> If the EU is so concerned about ewaste. They need to be more concerned about Android phones that are thrown away because they don’t get updates than a cord.
The issue is that NSO makes spyware available to far more countries by lowering the barrier to entry. Looking at their sales history, they have little criteria for sales besides following the preferences of the Israeli government. They sometimes sell to nations that generally promote human rights, and other times sell to nations that are very repressive yet important allies to Israel.
Yes. There's at least a principled discussion to be had about the pros and cons of something like "full disclosure" vs. "responsible disclosure", but based on everything that's come out about the company I'm pretty comfortable at least provisionally tossing NSO Group in the same bucket as the ghouls who charge prisoners a per-minute fee for phone calls to their families.
Yes, I can totally blame the bully for using the stick he just found, and for selling his services of "beating up blameless victims" to the next-biggest bullies. That's what responsibility looks like. NSO Group are 100% responsible for what happens here. (Which doesn't pre-empt someone else also being 100% responsible, for the record. I know, sounds unintuitive :D)
There are some misconceptions in your comments that I think could be useful to clear up.
When exploiting a running system, your goal typically won't be to disable ASLR (which would only impact newly spawned processes), but instead to 'infoleak' where ASLR has placed important things you care about, so you know where to access them.
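To make the infoleak point concrete: ASLR slides a library by a single random base, so leaking one function pointer reveals every address inside that library. A toy C illustration, with made-up numbers:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uintptr_t leaked_fn  = 0x00007f3a1c45e3a0; /* leaked at runtime via some bug */
        uintptr_t fn_offset  = 0x5e3a0;            /* fixed, known from the binary on disk */
        uintptr_t lib_base   = leaked_fn - fn_offset;
        uintptr_t gadget_off = 0x1234;             /* any other known offset */
        printf("base %#lx, gadget at %#lx\n",
               (unsigned long)lib_base, (unsigned long)(lib_base + gadget_off));
        return 0;
    }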
Modern devices have mechanisms like KPP/KTRR, though, which make it impossible to modify kernel code anyhow.
You also propose that CoreGraphics might not be sandboxed. CoreGraphics is a dynamic library which can be loaded into any process. It's _processes_ that are sandboxed, not dynamic libraries, so CoreGraphics can definitely exist in a process that has a sandbox profile applied just fine.
You also mention that graphics functions might not be sandboxed because they need to access graphics acceleration features. This is a good thought! In fact, the kernel extension that enables hardware graphics acceleration, IOMobileFramebuffer, is accessible from the app sandbox for this very reason. As a point of interest, many vulnerabilities have been discovered in IOMobileFramebuffer over the years -- it's an attractive target specifically because it's reachable from the app sandbox.
Lastly, you mention jailbreaking the sandbox. I know what you mean, but "jailbreaking" typically refers to a series of abilities, such as the ability to control the kernel task, the ability to create RWX pages, the ability to bypass the FreeBSD MAC policies, etc. The ability to bypass sandbox restrictions is only one condition of a jailbreak, and a sandbox escape doesn't imply a full jailbreak. Also, jailbreaking doesn't entirely break "the sandbox" -- it's a selective change that can be applied on a per-process basis.
> You also propose that CoreGraphics might not be sandboxed. CoreGraphics is a dynamic library which can be loaded into any process. It's _processes_ that are sandboxed, not dynamic libraries, so CoreGraphics can definitely exist in a process that has a sandbox profile applied just fine.
Surprisingly, until not so long ago the decoding pipeline had an extra step that performed decoding outside the sandbox; hopefully that's fixed now.
For a "best guess" you're making quite detailed assumptions, and you give no reasons for how you arrived at them.
In the note they state that the exploit was in image processing with PassKit and iMessage.
Your comment reads like an extrapolation of what might have happened, without any actual additional knowledge behind it.
> Do we want to encourage the use of LLMs in discussion?
Quite the opposite, but I think we need a better strategy. Downvoting a single offending comment is giving the spammer precise feedback on what worked and what didn't. Bot accounts should just be banned.
The bug is in the WebP decoder. Shellcode execution has not been a thing on iOS because of codesigning, and ROP is basically not a thing on new devices because of PAC.
> The shellcode would then disable memory protections like ASLR and DEP that normally prevent arbitrary code execution. This would allow the attacker to execute a ROP gadget chain or other payload to jailbreak the sandbox and run remote commands on the device.
This sounds like a chicken-and-egg scenario. Can you clarify how the original shellcode bypasses those protections?
DEP is the Windows name for non-executable memory, i.e., page permissions that disallow execution on specific pages. Depending on the situation, an attacker can, e.g., mmap() a new page with the execute permission set, write their shellcode there, and jump to it. Another way to bypass the NX bit is to reuse gadgets (short snippets of code) that are already present in executable memory, redirecting the instruction pointer to those addresses. Code reuse is generally known as ROP, JOP, etc., and is mitigated by PAC on ARM (v8.3 onward) and CET on Intel (11th Gen onward, I believe).
That being said, Apple implements a ton of mitigations, both on a hardware level and on a software level which generally makes exploits on Apple devices interesting to analyze and see how they bypassed stuff.
Edit: For clarity, Apple requires both codesigning and implements PAC, among others. mmap'ing or ROP won't make the cut in this case.
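To illustrate the mmap() route described above, here is a minimal POSIX C sketch. This is the generic textbook technique, runnable on a stock Linux box; as the edit notes, codesigning/PAC (and W^X policies generally) block it on modern Apple hardware:

    /* Classic NX sidestep: ask for a page that is writable AND
       executable, copy code in, jump to it. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42 ; ret */
        unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

        void *page = mmap(NULL, 4096,
                          PROT_READ | PROT_WRITE | PROT_EXEC,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(page, code, sizeof code);
        int (*fn)(void) = (int (*)(void))page;
        printf("injected code returned %d\n", fn());  /* prints 42 */
        return 0;
    }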
> By crafting a malicious JPEG/PNG that specified inflated dimensions, an attacker could cause the decoders to allocate an overly large buffer to hold the decoded pixel data. When copying pixel data from the file into the buffer, the attacker could overflow the bounds of the buffer and overwrite adjacent memory.
This is such a trivial bug class. Why in the world did they decide to write their own JPEG decoder, only to stumble on seemingly the most basic input-validation case?
I would add that sole reliance on sandboxing, instead of a principled stance on writing an overall secure, high-quality, and highly verified codebase, is obviously failing.
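For what it's worth, the bug class in the quote above tends to look something like this hypothetical decoder fragment; the names are invented and bear no relation to Apple's actual code:

    /* Two classic decoder mistakes in one place: sizing the buffer
       from attacker-controlled header fields (whose product can wrap),
       and copying however many bytes the file actually contains
       without checking against the allocation. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        uint32_t width, height;   /* declared in the file header */
        const uint8_t *pixels;    /* actual pixel stream */
        size_t pixels_len;
    } image_t;

    uint8_t *decode(const image_t *img) {
        /* BUG: no sanity check on width/height, and the multiplication
           can wrap, yielding a small allocation for huge dimensions. */
        size_t alloc = (size_t)img->width * img->height * 4;
        uint8_t *out = malloc(alloc);
        if (!out) return NULL;

        /* BUG: trusts the file's own length, not the allocation --
           when the two disagree, this is a heap buffer overflow. */
        memcpy(out, img->pixels, img->pixels_len);
        return out;
    }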
Grsecurity generally focuses on the kernel side of things, although it does include a number of userspace mitigations as well. Still, when you have such ripe primitives, not even Grsecurity can protect you.
What we really need is to just have radically lower bug density. Buffer overflows need to die. UAFs need to be made far less common. The "distance" between vulns needs to be greatly increased. Having design and validation issues sitting smushed between a dozen memory safety issues is just not something you can deal with through software mitigation techniques.
An app like iMessage is just too sensitive (ie: unauthenticated communication with many image parsers) to be built the way that it is. Fundamentally it just can't be safe without core components being rewritten with memory safety in mind. Grsecurity and other mitigations would be an awesome defense in depth and would be particularly helpful to avoid subsequent privescs, but I'm far more concerned with "anyone can text me an image and own me thanks to 1990s style bugs".
I'm not convinced that that's true. All we know is that one vulnerability was a "buffer overflow" - pretty vague. If this were, for example, an overflow on the heap, which Grsecurity mitigations would even impact it? Improved ASLR? Mprotect restrictions? There are very few mitigations in the Grsecurity patches that even touch userspace in a way that isn't focused on kernel protection.
Maybe if it's a stack-based overflow, something like PAX_RANDUSTACK would have had some impact, but it depends.
And in case this at all comes off as me thinking anything negative about Grsecurity, I assure you that's not the case. I proudly wear the "Grsecurity Cheerleader" badge that Spender threw my way over a decade ago.
I would expect them to get to iCloud via this attack chain even with Advanced Data Protection on, which means that if you are targeted, they not only get data from your iPhone but also all the data you backed up to iCloud from your other Apple devices.
I suspect they would be able to compromise apps like 1Password through this, which means every service whose password and 2FA are stored together in 1Password is compromised.
It is good to know that Lockdown Mode stopped this attack chain.
Given this has happened so many times, the new security posture for a regular security-conscious person should be:
1. Disable iMessage.
2. Enable lockdown mode.
3. Disable iCloud. (If you choose to keep iCloud enabled, definitely enable Advanced Data Protection, disable iCloud web access, and disable Passwords and Keychain syncing. Disable iCloud Mail; it uses third-party Proofpoint for scanning, which is more surface area for compromise.)
4. Don't store passwords and 2FA together in the same system like 1Password. Always use FIDO2 physical-key 2FA where available.
I just want to add this: these people operate pretty much in the open. They're not ashamed of it either, or else they wouldn't put it on their CV:
https://www.linkedin.com/company/nso-group/people/