The current title "Apple is suing smartphone emulation software startup Corellium" strips away a lot of the context of what the article is actually about. In my opinion, the article title (and the HN title I posted and edited to "Apple says researchers can vet its CSAM tools, but sues a startup that can") is not "clickbaity", since it is factual:
> On Monday, Corellium announced a $15,000 grant for a program it is specifically promoting as a way to look at iPhones under a microscope and hold Apple accountable. On Tuesday, Apple filed an appeal continuing the lawsuit.
> ... “With their left hand, they make jail-breaking difficult and sue companies like Corellium to prevent them from existing. Now with their right hand, they say, ‘Oh, we built this really complicated system and it turns out that some people don’t trust that Apple has done it honestly—but it’s okay because any security researcher can go ahead and prove it to themselves.’”
I didn’t get that impression from the title, but I also knew they were referring to Corellium before clicking the link. I interpreted it as, “Apple says researchers can vet its CSAM tools despite aggressively suing the one company that makes such research feasible.”
I had heard nothing about Corellium before, so my reading of the headline was that Apple is retaliating against a company for trying to audit its CSAM tools despite saying that it's totally fine if people want to do those audits.
As I see it, Apple had 3 options for how to deal with this case - in decreasing order of hostility:
1) Sue the company out of existence
2) Buy the company
3) Settle
They chose option #3, which from what I've seen is solely related to DMCA claims. With this in mind, why should Corellium be allowed to continue this work having just gotten out of the proverbial "doghouse"?
Apple's already marked them as a bad actor in this regard and continuing to make noise seems ill-advised.
Actually, Apple chose option #1, and when their ass was handed to them by the judge in the case, they opted for #3 to get out from under it.
No matter how they handled it though, the fact remains that they are providing lip service to the public ("security researchers can easily vet our methods") while continuing to fight against the very same researchers who might want to investigate this new scanning technology.
"Part of the two-year case was settled just last week—days after news of the company’s CSAM technology became public."
And
"On Monday, Corellium announced a $15,000 grant for a program it is specifically promoting as a way to look at iPhones under a microscope and hold Apple accountable. On Tuesday, Apple filed an appeal continuing the lawsuit."
Both the headline and the HN title (which is slightly different) use "is suing"--the present progressive, which in English indicates an ongoing or even perpetual action--in a way that doesn't imply any direct causality with the nested phrase "that can / does just that": Apple is, in fact, suing--and has been, for a long time now!--a startup that not only "can" "vet its CSAM tools" but "does just that" (which maybe itself needs explanation: "just that" doesn't mean "only that" but something like "emphatically that" in this context).
The quote from the Apple executive is misleading to outright dishonest.
> “Security researchers are constantly able to introspect what's happening in Apple’s [phone] software,” Apple vice president Craig Federighi said in an interview with the Wall Street Journal. “So if any changes were made that were to expand the scope of this in some way—in a way that we had committed to not doing—there’s verifiability, they can spot that that's happening.”
Apple uses complex cryptography to shield themselves and their list providers from accountability. You cannot determine if they've included non-child-abuse images in the database through inspection.
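To make the "shielded by cryptography" point concrete, here is a toy sketch in Python (my own simplification using discrete-log blinding; Apple's actual protocol is a more elaborate private set intersection, and every name and parameter here is made up) of why inspecting the on-device database reveals nothing about which images are in it:

```python
import hashlib

P = 2**127 - 1             # a Mersenne prime; a toy group, nothing like real parameters
SERVER_SECRET = 123456789  # fixed here for reproducibility; known only to Apple in this model

def h(data: bytes) -> int:
    """Stand-in for NeuralHash: map an 'image' to a nonzero group element."""
    return (int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1)) + 1

def blind(x: int) -> int:
    """Server-side blinding: x -> x^k mod P, computable only with the secret k."""
    return pow(x, SERVER_SECRET, P)

database = [b"known-image-1", b"known-image-2"]
blinded_db = {blind(h(img)) for img in database}  # this is all the device ships with

# The raw hash of an image that IS in the database never appears on-device...
assert h(b"known-image-1") not in blinded_db
# ...and computing its blinded form requires SERVER_SECRET, which researchers
# never get. So inspecting blinded_db cannot tell you whether any particular
# image (CSAM or a protest photo) was included.
assert blind(h(b"known-image-1")) in blinded_db
```

Under this kind of construction, auditing the on-device list is not just hard but cryptographically pointless: the blinded entries are indistinguishable from random group elements to anyone without the server's key.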
It's crazy that people argue about what is being scanned rather than about the very fact that scanning will take place at all. Imagine a landlord coming to your place and looking around for drugs. That is an insane violation of privacy regardless of intent.
Landlords have the right to enter their property at any time given "reasonable" notice, which is typically 24 hours, and almost all states have provisions for "inspecting the premises". I don't have to imagine; this happens almost daily.
You'd have a darn good case for suing your landlord if they started going through your drawers, though. The inspection is for condition of the building and fixtures.
Nobody seems to remember that the EARN IT Act nearly passed in a bipartisan vote last year in the US. Apple's CSAM Scanning is better than that potentially coming back.
No, but you can verify that only photos uploaded to iCloud are scanned, and can verify that photo metadata beyond the security voucher is not generated or sent to anyone but Apple.
Richard Stallman was right about everything. In twenty years, people will be standing up for Apple as they upload your health data to your employer and insurance provider.
"It won't matter if you eat right and exercise," is a quote I expect to hear.
Because most people don't give a fuck about this topic. It ticks the boxes we need ticked.
You're different, that's cool. Shout it from the rooftops. I support you.
And VW has special software for emission tests.
The whole point of security tests is to test on the same iPhones normal users have, without the doubt cast by special test devices.
Yes, but they're loaner units primarily intended to contractually obligate security researchers to work for Apple. I trust a security researcher using an official Apple device about as much as I'd trust a Trump-funded 2020 election audit.
Apple uses the technical protections on the phone to make it very difficult to actually even be in a position to do security research without also being NDA'd. There is no owner override like there is on, say, M1 Macs. Apple's position is that nobody but them loads OS code onto your iPhone. Not even you - and you can't practically do any research or auditing on things like the CSAM scanner without having the ability to poke around in the OS.
If Apple had an owner override on iOS-fused devices, then we could load our own kernels, call into the neuralhash framework, and so on to actually validate that the system does what it says. But that would also mean that Epic could sell Fortnite skins outside of the App Store, and we can't have that. So instead we need to gag and muzzle security researchers... which also makes them no longer independent auditors of the iOS security model.
Researchers can vet the client-side part of the tools, but per the Security Device program agreement they can’t discuss anything publicly without Apple giving them the go ahead.
And presumably the hash match database downloaded to the device is encrypted and unable to be examined.
Even if they weren’t lying through their teeth, this changes nothing. My objection is to Apple believing that they can use a device I own for proactive law enforcement and then acting on that belief. Nothing about how they do that is even worth considering. Apple is spying on you for the police.
> Pedos are extremely tech-savvy, they need to be to survive.
This is not at all borne out in reality. When the FBI rounded up a big ring of CSAM creators/consumers a few years back it came out that they (the people sharing) had rules for how to interact with the community that would have fully protected them, but many of them were sloppy. Same thing with the amount of CSAM that FB reports.
> [In 2018, Facebook Messenger] was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports.
The narrative here--which I feel like a lot of people aren't grokking somehow--is that to analyze Apple's CSAM tools you need to be able to extract them from a phone to debug and work with them, which involves reverse engineering the implementation; and Apple recently said quite strongly that a reason you can trust their client-side CSAM is because, by virtue of being on the client, security researchers can do this analysis.
Only, simultaneously, Apple hates the idea that people should ever get access to the software that runs on their phones and reverse engineer it. They tend to downplay results in a way that often involves going to war with the security research community; they sued Corellium--which provides tooling to security researchers--and insisted that its clients were doing things that were inherently illegal; and they are so stingy with general access to their devices that not only can you not opt out of their lockdown, they won't sell you special bright-yellow open devices either. After many years of pleading with them, they finally decided to allow some researchers access, but it requires not only being invited but then signing off on gag clauses that are generally considered to violate the ethical responsibilities of practitioners.
It thereby feels like Apple is talking out of both sides of their mouth... though, of course, that's nothing new for them :/. On the one hand, they want to claim that security researchers are important to their overall security strategy; but, on the other, they simultaneously abuse and prosecute people who dare to either directly pull apart their systems or have the audacity to provide the tools required for others to do so.
And, for anyone who is stuck in the mental frame "BuT I tHoUgHt ApPlE lOvEs SeCuRiTy ReSeArChErS", barely over a year ago (wow time flies when you are living alone and physically falling apart during a pandemic, huh? ;P), I wrote a thread on Twitter that documented a ton of the issues that we run into with Apple, including using specific examples, and touched on this lawsuit against Corellium. FWIW, I don't personally know of anyone in the security industry that thinks Apple is doing well on this front, and I doubt many exist.
Also: here is a thread on Twitter from a few days ago (started by Runa Sandvik, the senior director of information security at the New York Times) about Apple's recent statements, as well as a direct link to a reply sub-thread from Kurt Opsahl--the Deputy Executive Director and General Counsel of the EFF--that quickly got updated re the Corellium appeal.
(edit) I am realizing it is probably also worth explaining another key detail here that is probably more than just a bit confusing: one reason this is particular news right now is because, in addition to the big CSAM background story, Apple just announced an appeal of the case they lost to Corellium.
I think it is important to triple underscore that: a lot of people know about how Corellium and Apple recently settled, but that was over other claims that Apple (seemingly) gave up on; Apple can't appeal that AFAIK. However, in December, Apple had most of its (extremely weak...) case dismissed by the judge.
> U.S. District Judge Rodney Smith ruled in favor of Corellium LLC, saying its software emulating the iOS operating system that runs on the iPhone and iPad amounted to “fair use” because it was “transformative” and helped developers find security flaws.
It almost certainly isn't the case that Apple decided to file this appeal because of Corellium's press release, as it almost certainly takes more than a day to put an appeal together and file it ;P. It will be interesting to see if Apple manages to put together a more coherent argument in their appeal.
> Back in December, Apple lost a copyright lawsuit against security research company Corellium, and today, Apple filed an appeal in that case, reports Reuters.
I realize that “child safety features” is the absurd euphemism in the title of the article itself, but the title really should be changed here on HN. Allowing a title like “child safety features” to describe this technology within a community that both knows better and is prominently featured in Google gives that description credence. It should be changed to something like “…invasive screening technology” or “…government backdoor into your iPhone”.
As I understand what has been said, the CSAM detection is part of the iCloud Photos upload process, via attaching some metadata to the photos that are being uploaded. So you could block this service, but it'd be functionally equivalent to just disabling iCloud Photos entirely.
The distinction there is that it's the server doing the matching and reporting, not the client. People expect remote servers to be accessible to government searches, but not their own personal devices.
This distinction is even codified in U.S. law. The government needs a warrant to search your phone, but only needs a subpoena to search a remote server that's storing your files[1].
But yes, I can see why that distinction might feel a little arbitrary at times, particularly in the modern age where cloud storage is so common. Perhaps the 4th amendment should cover third parties storing "papers, and effects" on a person's behalf.
> The distinction there is that it's the server doing the matching and reporting, not the client. People expect remote servers to be accessible to government searches, but not their own personal devices.
Speaking solely for myself, I don't see a meaningful difference between these cases. They're both "content you upload to a server is scanned", with the only difference being that the scan happens immediately before upload rather than sometime after upload.
My opinion would be notably changed if Apple was scanning content you're not uploading, but the current system doesn't seem to allow for that.
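A minimal sketch in Python (hypothetical function names, my own simplification) of the equivalence being argued above: in both designs, the only content scanned is content on the iCloud upload path.

```python
def server_side_scan(photos, icloud_enabled):
    """Classic cloud scanning: client uploads, server scans what it receives."""
    uploaded = list(photos) if icloud_enabled else []
    return uploaded  # the server scans exactly this set

def client_side_scan(photos, icloud_enabled):
    """Apple's described design: the scan runs on-device, but only as a step
    of the iCloud upload pipeline, so it is gated by the same condition."""
    return list(photos) if icloud_enabled else []

photos = ["a.jpg", "b.jpg"]
# With uploads off, neither design scans anything; with uploads on, both
# scan exactly the uploaded set.
for enabled in (False, True):
    assert server_side_scan(photos, enabled) == client_side_scan(photos, enabled)
```

Of course, that gate is just a condition in Apple's own software, so the equivalence holds only as long as Apple keeps it set the way they describe.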
The difference is that Apple need only flip a flag in the process to scan all the pictures on your device, whether they're bound for iCloud or not.
It's their software, and you have only their promise that they scan only pictures meant to be stored in iCloud.
The reason the distinction is meaningful here is that Apple end-to-end encrypts iCloud uploads, meaning that Apple does not have access to the files. Scanning on the device is this weird middle ground where they do sort of have access, and in some cases might very well end up with access (false positives).
Apple doesn't end-to-end encrypt iCloud Photo uploads -- they do transfer and store them encrypted, but they still have a key to view them if they want to.
> iCloud secures your information by encrypting it when it's in transit, storing it in iCloud in an encrypted format, and using secure tokens for authentication. For certain sensitive information, Apple uses end-to-end encryption. This means that only you can access your information, and only on devices where you’re signed into iCloud. No one else, not even Apple, can access end-to-end encrypted information.
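A toy model in Python of the distinction that quote draws (repeating-key XOR stands in for real encryption, and the keys are made up): "encrypted in transit and at rest" still means Apple holds a key, while end-to-end means only your devices do.

```python
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR cipher -- a stand-in for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

photo = b"photo bytes"

# "Encrypted in transit and at rest": the stored blob is protected, but
# Apple holds the key, so Apple can still read the photo on demand.
apple_key = b"apple-server-key"
stored = xor_encrypt(photo, apple_key)
assert xor_encrypt(stored, apple_key) == photo  # Apple can decrypt

# End-to-end: the key lives only on the user's devices, so the same kind
# of ciphertext is opaque to Apple.
device_key = b"user-device-key"
e2e_stored = xor_encrypt(photo, device_key)
assert xor_encrypt(e2e_stored, apple_key) != photo  # wrong key, no access
```

Per the support document quoted above, iCloud Photos falls in the first category, not the second.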
The notification has one title, the title here has another, the article title has another, and I'm immediately assaulted with pop-ups and page scroll shifts... it can be REALLY annoying checking HN sometimes, you know?!
I got an ad overlay from the top of my screen, a growing banner from the bottom, and a fade-in pop-up over that. Within five seconds of loading the page, precisely 0% of it remained unobscured.
Why should I? I don't bother; that way I know which sites not to visit, because they can't put together a usable website. If I go out of my way to force their site to be usable, they see that they still get traffic while doing such things, instead of learning not to or going out of business.
uBO shows you what it has blocked on each site. So you can see which sites would subject you to ads without actually having to expose yourself to the psychological terror of seeing those ads.
It also blocks things that you don't see (tracking), improves performance and prevents malware infections.
Edit: Replies are far more informed than I am. I'm using Firefox on Android for the reasons outlined in my original comment, preserved below
---
> Serious question: Why do you browse the web without an ad blocker? I can't imagine subjecting myself to that kind of torture.
As best as I know: doing this on mobile (assuming that's their platform) requires both:
* a non-iOS device (Android basically)
* a non-Chromium browser on said device (Firefox basically)
That pairing is the only reason I can adblock on mobile. Not sure if things changed on Chrome or related browsers on Android, but as best as I remember, iOS and Android+Chrome aren't adblock-friendly.
NextDNS works really nicely at the OS/app level, and it also blocks ad/tracking content on iOS. It's actually quite easy to roll this yourself, which I intend to do: WireGuard + Pi-hole.
You would need to subject yourself to the pushy notifications, the whims of Brendan Eich, and the dubious approach of the BAT cryptocurrency [1], but the Chromium-based Brave browser on Android has a built-in ad blocker.
You are not wrong; I'm using Vivaldi on Android and it has a built-in ad blocker. On this page, though, there are two big dialogs: one for cookies on top and "2 free stories remaining" on the bottom, so even with an ad blocker only 1/3 of the screen is visible until you manually remove these annoyances.
I turned off adblock to check, and this is madness. Two slide-ins (top & bottom) upselling their paid subscriptions; instantly another overlay sliding in at the top about cookies; scrolling down just to try to bring the content up darkens the screen with another overlay asking for newsletter signup, briefly followed by a slide-in from the right promoting some other product. All before seeing a single word of article content. If anybody ever tells me using adblock is unethical, this is my new go-to example of why I think the opposite.
That's the beauty of using Firefox + uBlock Origin on both your computer and non-iOS mobile: you completely avoid these kinds of irritating annoyances, experience quicker loading and more responsive websites, and save a lot of bandwidth. (On iOS, AdGuard does the same but is limited because of Safari. Google is also working to cripple ad blocking in Chrome, so Firefox is currently the best browser for comprehensively avoiding such ad infestation and trackers online.)
I’ve been using 1Blocker on Mac and iOS for years, and every once in a great while I have it disabled temporarily and can’t believe what people have to deal with, where everything is covered in ads. Every once in a while some feature on a site doesn’t work, so I disable content blockers for that one page temporarily and then all is good, or I can always just open up Chrome for a particular site on those rare occasions.
I’m using AdGuard on iOS. Whereas 1Blocker only allows one type of filtering to be active at a time in the free version, AdGuard can block many things in Safari on iOS for free. Ads, trackers, social widgets, annoyances. All blocked for free by AdGuard.
I recently put NoScript into a less-trust mode and manually enable sites and links if a webpage I want is broken.
It is pretty amazing to auto-bypass paywalls, to see how much faster sites load, and to see how many external JavaScript sources there are on every site by default.
It’s a little annoying when I realize that X or Y page doesn’t look or work right and need to adjust, then reload, maybe a couple times, but overall worth it! Magic is right.
This is outdated reporting, Apple did drop the suit against Corellium a few days back.
Funnily enough, the same NeuralHash has already been generated for completely different images, so good luck explaining to Apple/the FBI why your 4th of July pics burned through all of your safety vouchers.
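For intuition about how that happens, here is a toy perceptual hash in Python (a simple "average hash", not NeuralHash itself, over made-up 4-pixel "images") showing two visibly different images colliding:

```python
def average_hash(pixels):
    """Map an 'image' (list of grayscale pixel values) to a bit string:
    1 if a pixel is above the image's mean brightness, else 0."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

img_a = [10, 10, 200, 200]  # a high-contrast "image"
img_b = [90, 90, 110, 110]  # a completely different low-contrast "image"

assert img_a != img_b
assert average_hash(img_a) == average_hash(img_b) == "0011"  # a collision
```

Real perceptual hashes like NeuralHash are far more robust than this, but the same structural property -- many distinct images necessarily mapping to one hash -- is what makes both accidental and adversarially crafted collisions possible.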
> In the lawsuit, Apple argued that Corellium violated its copyrights, enabled the sale of software exploits used for hacking, and shouldn’t exist. The startup countered by saying that its use of Apple’s code was a classic protected case of fair use. The judge has largely sided with Corellium so far. Part of the two-year case was settled just last week—days after news of the company’s CSAM technology became public.
> On Monday, Corellium announced a $15,000 grant for a program it is specifically promoting as a way to look at iPhones under a microscope and hold Apple accountable. On Tuesday, Apple filed an appeal continuing the lawsuit.
Monday, August 16: Corellium launches its "Open Security Initiative" to fund "research projects designed to validate any security and privacy claims for any mobile software vendor". The announcement prominently lists Apple's privacy and security claims about its CSAM scanning as one of the topics that would be eligible for funding under this initiative. (https://www.corellium.com/blog/open-security-initiative)
August 2019 - Apple sued iOS virtualization provider Corellium for copyright infringement and DMCA violations
December 29, 2020 - Apple loses copyright claims in lawsuit against U.S. security bug startup
August 5, 2021 - Apple announces new protections for child safety
August 17, 2021 - Apple says researchers can vet its child safety features. But it’s suing a startup that does just that.
Unless there is an iTimeMachine, I struggle to see how Apple sued a company in August 2019 for saying in August 2021 that it will help vet CSAM tools announced in August 2021.
Apple settled the lawsuit on August 10, 2021, just to file an appeal on August 17, 2021 (one day after Corellium's announcement). That is the focus of the article, not the original 2019 filing.
From the Reuters link in the article:
> The appeal came as a surprise because Apple had just settled other claims with Corellium relating to the Digital Millennium Copyright Act, avoiding a trial.
> Experts said they were also surprised that Apple revived a fight against a major research tool provider just after arguing that researchers would provide a check on its controversial plan to scan customer devices.
It’s going to be practically impossible for most researchers to vet the CSAM detection tool without Corellium. It’s already very difficult with Corellium, but Apple is going out of their way to ensure such research is infeasible in most cases. That isn’t limited to their CSAM detection tool—hence the lawsuit that predates that tool.
Apple is claiming that researchers can vet the CSAM detection feature while simultaneously attempting to take down organizations that make such research possible. It’s a stupid statement on their part.
Neither the headline nor the HN title say that the lawsuit is "for saying" anything... they both merely--and very correctly--claim that Apple "is suing a startup that does just that". Are you claiming that Apple is not suing Corellium? Alternatively, are you claiming Corellium does not "do[] just that"?
So, it is maybe worth explaining this a bit: Apple lost these claims in December... and like, really badly: the judge outright dismissed them; but then the trial was going to continue with the rest of the claims, which were settled.
Apple is now appealing the claims they lost, not the ones they settled (they can't do that: that would undermine the premise of settling anything at all). Legal complaints are not atomic all-or-nothing affairs in this way.
Thanks for the added context. I assumed it was either something along those lines or the settlement was never actually finalized. It’ll be interesting to see what comes of all of this.
Note to self: make sure every settlement agreement includes a clause like "the agreed-to settlement amount triples on every subsequent legal action by $other_party concerning this subject".