
This is truly a hacker’s retort.

It attacks Cellebrite's ability to operate by casting doubt on the reports generated by the product that their customers may wish to use in court.

It places them in legal peril from Apple, and removes any cover Apple would have to not take legal action. (I assume someone at Apple knew they were shipping their DLLs?)

It makes a thinly-veiled threat that any random Signal user's data may actively attempt to exploit their software in the future and demonstrates that it's trivial to do so.

edited to add a bonus one:

Publish some data about what they are doing to help create a roadmap for any other app that doesn't want their data to be scanned.




All that trouble because a bag conveniently "fell from a truck". All in all, I'm really happy about all this.


I am happy to see the bag survived its most untimely truck tumble while remaining a e s t h e t i c a l l y - - - p l e a s i n g.


What a sturdy bag it is!


I love Kim Zetter's tweet:

Must have been a turnip truck https://twitter.com/KimZetter/status/1384936715769503745

That's a dig at the turnipesque savvy of Cellebrite, which royally stumbled into this.

cf. https://www.dwt.com/blogs/privacy--security-law-blog/2015/03... > FCC chair Wheeler said the FCC “didn’t just fall off the turnip truck.” Through CALEA, CPNI, and CSRIC, Wheeler said the agency has been working to protect consumer privacy.


Yup. Those things have a way of "falling from a truck". Another win for the "fell from a truck" gang ;)


I found that funny too. It sounds to me like a good way to end up with the device to analyze without being constrained by a contract or EULA prohibiting it.


Yes, it's known as a euphemism. TFA just took the joke to hilarious extremes with the photos.

https://www.phrases.org.uk/meanings/fell-off-the-back-of-a-t...


TFA? I know you're referring to Moxie (somehow) but I can't figure out the acronym.


IIRC "The Fucking Article". I think the expletive originally was because people "wouldn't read TFA", but eventually it just kinda became the way to refer to the article linked from a Hacker News thread.


Oooh of course! Thanks!


Also can be "the fine article" when used in a non-antagonistic way!

I think it originated with "RTFM", which was an old Unix admin way of saying "read the fucking manual" (or man page).


Yes RTFM I'm very familiar with, which is why I should've connected the two.


Indeed, how convenient. If it truly did fall off the truck right while he was on a walk, then there is the possibility that this is a rubber ducky attack. This is basically the equivalent of leaving a USB flash drive lying around. I hope the author took the necessary precautions when reverse engineering the device. Companies like Cellebrite have deep connections to certain three-letter communities, which makes staging this sort of attack trivial.


"falling off a truck" is slang for "was stolen".


TIL: https://idioms.thefreedictionary.com/fall+off+a+truck

Edit: looking up a bit more, it seems like this idiom is used to denote goods sold for cheap because they were stolen. Like "Bob is selling genuine iPhones very cheap, I fear they fell from the back of a truck".

Edit edit: I initially took it as "we won't tell how we got this", because I didn't know this idiom, but it seems several people agree with this interpretation. Not necessarily stolen, but obtained from an undisclosed source.


That isn't quite right. Although it's commonly used to describe something that's been stolen, it's more generally used to indicate that the speaker doesn't want to talk about where it came from. That's how it's been used in this article.


Given the sort of business Cellebrite is in, they would probably still want to treat anything connected to it with an overabundance of caution.


I think we can be relatively confident that the connected machine was airgapped and perhaps run in a VM.

Perhaps even in a faraday cage..


> (...) and perhaps run in a VM.

One of the screenshots[0] shows the VMware Tools Service running, so yeah, looks like a virtualized guest.

[0]: https://signal.org/blog/images/cellebrite-dlls-loaded.png


> If it truly did fall off the truck

lol


Seeing reactions like GP’s I’m surprised at how many people don’t know this expression.


I thought it was a euphemism because they couldn't reveal who gave it to them. Confessing it was stolen, even as a euphemism, is too blatant to be taken seriously IMO.


Even if you didn't, I think it's pretty obvious from context.


If English is your second language, you may not have come across it. It's a very informal and infrequent idiom.


FWIW we have the exact same expression in French “c’est tombé du camion”


Ditto for German: "vom Laster gefallen"


Add one more: Anyone publicly attacking Signal's privacy in the future is painting a very large target on their forehead.


There's a difference between attacking their privacy (which is fair and should be done regularly if users' privacy is at stake) and claiming access to secure data that is wholly untrue.


> It attacks Cellebrite's ability to operate by casting doubt on the reports generated by the product that their customers may wish to use in court.

Fortunately, parallel construction means you never really have to throw out bad evidence as long as you can find some good evidence too!


Knowing that at least one row of data in a database might have been modified randomly means you can't fully trust any line in that database.

It reminds me of the story of https://en.wikipedia.org/wiki/Annie_Dookhan


Sure, but the data on the phone will lead you to evidence in the real world, which will be meaningful proof that can be used in court.


You either will have to come up with a plausible (literally, 'warranted') way to have gotten that data in the real world without relying on the data on the phone as the reason you went looking, or it will likely be thrown out due to it being "Fruit of the poisonous tree".


That's precisely what parallel construction is: a lie told by investigators to the court to sidestep the poison tree, bolstered by real evidence specifically gathered to lie outside of the branches of same.

It's a method that crooked law enforcement uses to deceive courts. It's so common as to have its own name now.


Right, but that implies you can construct an entire chain that doesn't include checking their phone. Just pointing out, per the post I was replying to, that it's not simply "use the phone, get other evidence, don't worry about the phone's evidence being thrown out."


That's literally what it is, except "get other evidence" means "construct a plausible story that gets you to the same point".


No, that's not the case here. You don't need a parallel construction in this example, because the UFED extraction (even if tainted by a similar exploit) wasn't illegally obtained.


Didn’t know that, thank you!


Sounds like a US/UK-centric view of things. There are many countries where evidence can be used in court regardless of how it was obtained, Sweden and Germany to name two.


Fair point.


False data isn't FOTPT.

If I search a phone with a tainted UFED and get a conversation between Bob (my subject) and Carl (his friend) about the drugs they're selling, that conversation either exists or doesn't exist. Now, let's assume that the court won't accept this evidence, based on a defense argument that it should be inadmissible after seeing the vulnerabilities of UFED, as detailed by Signal.

The next investigative step is to go interview and search Carl, since there is probable cause to believe that a conversation about their drug dealing occurred on his phone. Unless I a) know my UFED is vulnerable and b) have reason to believe the text conversation between Bob and Carl is fake, my warrant for Carl's phone is valid. Now, I search Carl's phone with the same UFED and find the exact same conversation.

At this point, it's still possible for the UFED to have made the conversation up on both sides and for both extractions, and this would probably make the resulting conversation (and potentially everything in the report) inadmissible, but any admissions from Bob or Carl, including ones based on questions asked about the conversation itself, would still be admissible. I could show the report to Bob and Carl as evidence against them and get a confession, which would be admissible. Additionally, if the court determines that the UFED report is inadmissible based on the potential for it to be forensically unsound, I would still have the phones themselves. UFED (except in rare circumstances) requires the phone to be unlocked before it can make its extraction. As such, I could manually browse the phone to the conversation in question and take photos of the phone with the real conversations between Bob and Carl. I could also verify through an ISP/messenger app that those conversations occurred (for example, metadata demonstrating matching times and message sizes that align with the potentially-fabricated message).

The FOTPT defense only applies to illegally obtained evidence. Assuming you obtained a valid warrant or consent to conduct the search, there was nothing illegal about the UFED extraction that would make the FOTPT defense applicable.


In your example, it's true that you could question Carl, but a warrant to search Carl based on the false data should be overturned later.


It's not knowingly false, so the warrant would remain valid.


https://www.reuters.com/article/us-dea-sod/exclusive-u-s-dir...

> The undated documents show that federal agents are trained to “recreate” the investigative trail to effectively cover up where the information originated, a practice that some experts say violates a defendant’s Constitutional right to a fair trial. If defendants don’t know how an investigation began, they cannot know to ask to review potential sources of exculpatory evidence - information that could reveal entrapment, mistakes or biased witnesses.

If you believe any of "fruit of the poisonous tree" stuff makes any difference, I've got a bridge to sell you. There is clear evidence that DEA agents are trained to create a "clean" story of evidence around the illicit information they get.


It doesn't matter. This story/example from Signal has nothing to do with FOTPT.


Cellebrite acquired BlackBag Technologies recently. BlackBag emerged from Apple's security team.


Fitting name.

Black-bag the opposition politician, and then black-bag her phone.


A little different from brown bag, in which the non-traceable payment travels under the table.


Or dog shit under a door.


Sadly I suspect the people in law enforcement who make purchasing decisions never read the Signal blog, and therefore all these points will be moot.


They don't have to read that.

The defense lawyers have to read it, and the people in law enforcement need to read the cases where judges throw out Cellebrite evidence based on that.


The problem with that is the cases would need to get to the discovery stage.

95% of all criminal cases in the US are pled out, largely because the defendant cannot afford competent legal representation.

This is why all kinds of questionable investigative tactics are still used, even some that have clearly been ruled unconstitutional. They know that most of the time it will not matter; they just need enough to make an arrest. The system will destroy the person anyway, no conviction needed, and most likely the person will plead guilty even if innocent, just to make the abuse stop.


Do all defendants not have the right to an attorney in criminal cases?

If evidence is obviously (or with high chance) disputable, then it puts pressure even in those cases where the attorney recommends the defendant to eventually plead.

Or maybe the attorneys available to those who can't afford are just crap in general?


>>Do all defendants not have the right to an attorney in criminal cases?

In theory, yes, but in most cases that will be an overworked lawyer with hundreds or thousands of other active cases, who will not spend time on anything other than negotiating the best deal.

Even if you happen to get a good public defender who has the ability to devote lots of time to your individual case, they would have no budget to hire an expert witness to present the vulnerabilities in the Cellebrite software to a jury.

Your best bet would be if the ACLU or EFF took an interest in your case, but they take on very few cases relatively speaking and tend to focus on precedent setting cases, or cases that have high public interest (or can be made to have high public interest)

>Or maybe the attorneys available to those who can't afford are just crap in general?

In some cases they are incompetent; however, in most cases they are just underfunded and massively overworked. In most jurisdictions the public defender's office is funded at 1/10 or less of what the prosecutor's office is.


The worst enterprises using Cellebrite don't really have to worry about defense lawyers.


No, but Cellebrite does because their credibility is what sells their products to law enforcement. It wouldn't kill all their sales, but enough to be painful.


If you, as I do, hold that personal devices should be private, then you're probably happy with the extraction tools being weak. Let them remain complacent.


That won't do much. In order to throw the evidence out, it needs to be shown that something was actually compromised by the tool, not just that it is possible.

Consider https://www.securityweek.com/forensics-tool-flaw-allows-hack.... Have any cases been thrown out? I don't think so.


This.


They won't be moot when defense lawyers bring them up.


Doesn't matter. When you can go through every message on someone's phone going back years, I'm sure you can find something to put nearly anyone in prison for.

No need to tell the court how you found out about the lawnmowing for the neighbour that was never reported to the IRS...


When you can call into question the data integrity of the items on the device and whether the information from that device is accurate or was inserted by the machine used to break into it, that is some very basic fourth amendment stuff that could possibly get all items taken from the device deemed inadmissible.


Eh, this goes two ways. Cellebrite is rarely going to produce the only meaningful evidence that proves a single element of the offense. Instead, it is often used to further an investigation in order to find evidence that is more damning and of higher evidentiary value. Fortunately for law enforcement, the integrity of the Cellebrite-obtained data isn't all that important if it leads to further evidence that is more significant.


I don’t think that’s true. There’s a legal idea of “fruit of the poisonous tree”[0] that basically says you can’t use bad evidence, either in court or as an excuse to collect more, valid evidence. The defense attorney would say “if it hadn’t been for that completely untrustworthy Cellebrite evidence, the police wouldn’t have been able to get that search warrant they used to find the gun at his house, so we want that thrown out.” And the judge would probably go along with that, and if they don’t an appeals court probably would.

[0] https://www.law.cornell.edu/wex/fruit_of_the_poisonous_tree


> I don’t think that’s true. There’s a legal idea of “fruit of the poisonous tree”[0] that basically says you can’t use bad evidence, either in court or as an excuse to collect more, valid evidence.

I think the police have been using "parallel construction" to get around that for some time.

https://en.wikipedia.org/wiki/Parallel_construction

> Parallel construction is a law enforcement process of building a parallel, or separate, evidentiary basis for a criminal investigation in order to conceal how an investigation actually began.[1]

> In the US, a particular form is evidence laundering, where one police officer obtains evidence via means that are in violation of the Fourth Amendment's protection against unreasonable searches and seizures, and then passes it on to another officer, who builds on it and gets it accepted by the court under the good-faith exception as applied to the second officer.[2] This practice gained support after the Supreme Court's 2009 Herring v. United States decision.


While I'm sure it happens, I don't think that "evidence laundering" is particularly common, especially at the federal level. Cases I ran required an "initial notification" that succinctly described how our agents were notified of the potential criminal activity. The fear of having a case thrown out, or being turned down months into a high-level investigation because an attorney is uncomfortable with the likely outcome, is huge in ensuring a valid investigation is run.

Now, that's not to say that cops wouldn't do this in order to finally get a case around a particular subject who was able to sidestep previous investigations or something. I just doubt that it happens often enough to be worthwhile.


A defense team would need to show that the report had indeed been spoiled by such an exploit as demonstrated by the Signal team. Just because the possibility exists doesn't mean it happened. If there is a significant evidence report from a Cellebrite pull, it almost always means that it either successfully unlocked the device, or acquired a full physical image, or both.

A report doesn't have to be generated by PA. A forensic examiner is free to use other methods to examine the binary. So long as the examiner can explain all the actions and any artifacts that would be left behind.


Correct!

Plus, most law enforcement seizes the device and keeps it until after the trial. If there were valid arguments against the authenticity of data in the extraction report, it would be easy to verify that data's existence by checking the phone, re-extracting the data using a known-clean UFED, etc. This isn't the end of the world by any means for legal mobile device searches.


Except that if the device is compromised, it could have changed the data on the phone. The phone, as evidence, can't be trusted anymore.


Signal never indicated this in the blog. They said that the phone would have a file that could be used to victimize the UFED PC after the extraction completes. It's plausible that the UFED could be infected post-extraction with malware that re-establishes a connection to the phone to infect it in reverse, but this is extremely unlikely and it would be easy to determine (assuming the UFED still exists and hasn't been meaningfully modified since the extraction).

For the UFED Touch, for example, the device runs the extraction and saves the result to a flash drive or external drive. This is then reviewed with the UFED software on another machine (laptop, desktop, whatever). What you're describing would mean that the extraction takes place one-way (the Touch requests an ADB or iTunes backup process, and the phone responds with the backup files), then the malicious file in the backup pops and executes a command that runs software on the phone, thus infecting the phone with false data to cover the tracks and make the data in the report match the device. This is unreasonably complex, and I doubt any judge would accept it as enough to consider the data inadmissible. Let alone the fact that the data likely exists elsewhere in a verifiable way (Facebook Messenger, WhatsApp, Google Drive, etc.), which the initial extraction results should give the cops probable cause to obtain.


Even if parallel construction wouldn't exist, in Germany, for example, illegally obtained evidence is still valid in court.


To be more pedantic, evidence derived from illegally obtained evidence may be admissible but afaik the illegally obtained evidence is not.

In the Metzler kidnapping case, the first confession, which had been obtained under threat of torture, was not admissible in court.


Parallel construction is still a possibility.

Not to mention regimes that don’t actually care about things like evidence being “legally admissible”.


They, and the people who support them, do. Moreover, not all of their users are involved in ethically questionable activity. Many of them are scrupulously following due process to thwart people doing really bad things.


Good. They will then collect useless data that a lawyer will destroy in court.


It was brazen enough to pop an iTunes GUI a few years back.


IANAL, and I don't speak for Apple, but as far as I can tell, the part about Apple is nonsense.

CoreFoundation is open source: https://github.com/opensource-apple/CF

libdispatch is open source: https://apple.github.io/swift-corelibs-libdispatch/post/libd...

ASL is open source: https://opensource.apple.com/source/syslog/syslog-349.1.1/li...

The objective C runtime is open source: https://github.com/opensource-apple/objc4/blob/master/APPLE_...

icu, pthread, zlib, and libxml did not even originate at Apple.


Yes, but the copy shipped is signed by Apple, i.e. they did not compile their own copy.


> signed by Apple

Signed by Apple's key!


MobileDevice.framework isn't open source.


> It makes a thinly-veiled threat that any random Signal user's data may actively attempt to exploit their software in the future and demonstrates that it's trivial to do so.

That does not make me feel good about Signal.


I've seen this reaction a few times. Can you say more? Presumably Signal users value privacy, and the implication is that when hacking tools used to violate that privacy are applied to a device running Signal, it may try to interfere and prevent the extraction to some degree. This seems like an ideal feature for a private messenger.

In contrast, it would strike me as strange if a Signal user switched to another messenger that allowed the data extraction because they were uncomfortable with Signal blocking it.


It's kind of hard to argue about having random unknown data lying around, but...

The insinuation that my device may be host to something potentially malicious is concerning. It gets a little worse that it can change, without notice. I'd tend to trust Signal and their security, but the potential for that mechanism to be hijacked is always present. They've certainly made it hard, though, and I think the folks at Signal are probably clever enough that my anemic observations don't tell the whole story.


Everyone can inspect the Signal app's source code, though, and make sure that nothing funny is happening with the "aesthetically pleasing" files it downloads.


If they publish the aesthetically pleasing file source code doesn't that defeat the point?


> The insinuation that my device may be host to something potentially malicious is concerning. It gets a little worse that it can change, without notice.

Do you use an iPhone or a phone with Google Play Services on it?

Those both have the technical capability of receiving targeted updates.


Not just that. In the case of Google Play Services, SafetyNet also downloads random binary blobs from Google and executes them as root. Makes me feel a lot… safer about my phone.


I really seriously doubt that anyone would ever advance the idea that Signal had deliberately framed them by creating false data on their phone. I don't see this as much more than pointing out that Cellebrite has vulnerabilities, just like the ones they exploit.


You wouldn't imply that Signal had framed you. You would imply that someone else had framed you using the same vulnerabilities as Signal has now indicated exists. i.e. You can't trust Cellebrite because it's now known to be trivial to subvert their software. It's also difficult for Cellebrite to prove that there aren't remaining vulnerabilities in their software since Signal didn't disclose the problems they found and won't do so unless Cellebrite discloses the exploits they claim to be using in Signal.


It depends on the standard of reliability required. A good defence legal team might use this to argue that the phone data wouldn't be sufficient evidence in the actual criminal proceedings, you do need to prove things beyond all reasonable doubt there; however, even with all these caveats it would be sufficient to use that phone data for investigative purposes and as probable cause for getting a warrant for something else, and then use that for the actual conviction.

For example, let's say they extract the Signal message data from your phone. You successfully convince everyone that this data might have been the result of some tampering. However, that doesn't prevent investigators from using that message history to find the other person, and get them to testify against you.


That’s not true, at least if we’re talking fourth amendment issues in the US. If the evidence was thrown out, any additional information gleaned from that evidence could be thrown out too. And in a scenario like what you described, it likely would.

That’s not a guarantee, of course, and it could be possible for police to corroborate that you had contact with someone else in another way (through records from a wireless carrier or by doing shoe-leather investigative work) and try to get data on that person to get them to testify, but if their only link was through messages that had been deemed inadmissible, they can't use that witness.

The more likely question in a scenario you describe would be if the compromised Signal data would be enough to raise questions about the validity of all the data on the device. I.e., if Signal is out, can they use information from WhatsApp or iMessage or whatever. Past case law would suggest that once compromised, all of the evidence from the device is compromised — but a judge might rule otherwise.

It would be cool if Signal or another app could use those exploits they’ve uncovered to inject randomized data into the data stores of other messaging applications too. You know. Just as an experiment.


Wrong. You're talking about "fruit of the poisonous tree", which only involves evidence found as a result of an illegal search. A legal search that results in "false" evidence, which is later used to find "real" evidence will still meet sufficiency in court, because the now-"real" evidence is still admissible, just like the "false" evidence is admissible. It's just that the defense can have the "false" evidence thrown out because it's not verifiable.

In the example being discussed here, it's absolutely still fine for LE and there isn't a need to find a secondary source that side-steps the FOTPT argument. In fact, I could drag a subject into an interview, take his phone and present a warrant to search his phone.

If he refuses to unlock the phone, I can say "fine, we'll use our special software instead" and take it back to another room. After the interview, I give him back his phone and immediately bring in his co-conspirator. If I show the conspirator fake messages between him and my first subject and get him to testify against my subject, that's still admissible. If this is the case, why would using potentially-false/unverifiable Cellebrite data be inadmissible?


You can't just claim an unknown entity framed you and hope to get anywhere. Heck, you could just as well claim that Cellebrite themselves had it in for you.

Cellebrite has never claimed any particular exploits in Signal. Signal is exploitable in this particular way for entirely obvious and common reasons.


You'd claim that the tooling used and thus the evidence is unreliable. Not because of yourself or anybody targeting yourself, but due to other actors attacking Cellebrite and leaving you as collateral damage. You'd base this on testimony from other (court-authorized) experts, perhaps even the CEO of a major privacy app. Would be an interesting trial to follow in the US, not sure I'd want to be the defendant though.


It's about casting doubt on their software and its trustworthiness.

In computer forensics it's ALL about being able to verify, without a shadow of doubt, that something is what they say it is. Chain of custody rules everything. This blasts a huge gaping hole in all that. He's proven that the chain of custody can be tampered with, undetected. Files can be planted, altered, or erased. Reports can be altered. Timestamps can be changed. The host OS can be exploited. It calls all past and future Cellebrite reports into question. Cellebrite can no longer guarantee that their software acts in a consistent, reliable, verifiable way. It leaves doubt.
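The tamper-evidence this comment is describing usually rests on cryptographic hashes recorded at acquisition time. A minimal sketch (hypothetical data, not Cellebrite's actual format) of why a single flipped byte is detectable, and why the guarantee only holds if the digest itself came from tooling you still trust:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of an evidence blob."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical extraction blob; in practice this would be the raw image file.
original = b"message 1: meet at 5pm\nmessage 2: bring the documents\n"
recorded = sha256_digest(original)  # digest recorded at acquisition time

# Later, an attacker (or a compromised tool) flips a single byte.
tampered = bytearray(original)
tampered[0] ^= 0x01

# The change is detectable by anyone holding the original digest...
assert sha256_digest(bytes(tampered)) != recorded
# ...but if the tool computing and reporting the digest is itself compromised
# (the scenario Signal demonstrated), the report can simply lie about it.
```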


> In computer forensics it's ALL about being able to verify, without a shadow of doubt that something is what they say it is

Mostly. The other side gets all the evidence that the opposing side sees. They both get a chance to review it.

> Chain of custody rules everything.

Agree.

> This blasts a huge gaping hole in all that.

Not really. The process goes in two steps. First, all the data is pulled from the phone in a chain-of-custody manner; in an adversarial case, both sides can do this. The data is moved to a Windows box, and then the analysis is done. As I understand it, the analysis portion is where things can explode. In the hands of someone skilled in forensics, the extracted data would be saved to some other device, possibly to be shared with the other side, before the risky, potentially explosive analysis is done. It is very unlikely that the data from all previous cases exists on that device and nowhere else.

Therefore,

> It calls all past and future Cellebrite reports into question.

is not true, as the extracted files are likely not on the collecting Windows device.

In any case, it is not clear how many uses of this device are in actual legal environments.


> As I understand it, the analysis portion is where things can explode.

This very blog post says they have found similar vulnerabilities in both steps.


If the data collection step can possibly be affected by things like media-file exploits, then that would be a much bigger problem by itself. Cellebrite would have no reason to execute or interpret anything off the target device in this stage. If they were doing that, the Signal article would have pointed that out first.


By merely backing up a phone with Cellebrite, it may run exploits.

Exploits may fuck up the phone, the backup, and even the Cellebrite host OS.

As such, the phone, the backed-up data, and the reports are useless going forward.


Why would it have to be an unknown entity? I imagine in at least some court cases there could be potential antagonists to pin the blame on.


You can claim that by having signal on your phone, it probably compromised the evidence gathering and you didn't know about it and you don't know how, so that evidence is not trustworthy. Kind of like police opening anti-tamper / anti-shoplifting seals which ruin the item they are trying to confiscate with a large amount of dye.



