It attacks Cellebrite's ability to operate by casting doubt on the reports generated by the product that their customers may wish to use in court.
It places them in legal peril from Apple, and removes any cover Apple would have to not take legal action. (I assume someone at Apple knew they were shipping their DLLs?)
It makes a thinly-veiled threat that any random Signal user's data may actively attempt to exploit their software in the future and demonstrates that it's trivial to do so.
edited to add a bonus one:
It publishes some data about what they are doing, to help create a roadmap for any other app that doesn't want its data to be scanned.
I found that funny too. It sounds to me like a good way to end up with the device to analyze without being constrained by a contract or EULA prohibiting it.
IIRC "The Fucking Article". I think the expletive originally was because people "wouldn't read TFA", but eventually it just kinda became the way to refer to the article linked from a Hacker News thread.
Indeed, how convenient. If it truly did fall off a truck right while he was on a walk, then there is the possibility that this is a rubber-duckie attack. This is basically the equivalent of leaving a USB flash drive lying around. I hope the author took the necessary precautions when reverse engineering the device. Companies like Cellebrite have deep connections to certain three-letter communities that make staging this sort of attack trivial.
Edit: looking up a bit more, it seems like this idiom is used to denote goods sold for cheap because they were stolen. Like "Bob is selling genuine iPhones very cheap, I fear they fell from the back of a truck".
Edit edit: I initially took it as "we won't tell how we got this", because I didn't know this idiom, but it seems several people agree with this interpretation. Not necessarily stolen, but obtained from an undisclosed source.
That isn't quite right. Although it's commonly used to describe something that's been stolen, it's more generally used to indicate that the speaker doesn't want to talk about where it came from. That's how it's been used in this article.
I thought it was a euphemism because they couldn't reveal who gave it to them. Confessing it was stolen, even as a euphemism, is too blatant to be taken seriously IMO.
There's a difference between attacking their privacy (which is fair and should be done regularly if users' privacy is at stake) and claiming access to secure data that is wholly untrue.
Knowing that at least one row of data in a database might have been modified at random means you can't fully trust any line in that database.
You will either have to come up with a plausible (literally, 'warranted') way to have gotten that data in the real world without relying on the data on the phone as the reason you went looking, or it will likely be thrown out for being "fruit of the poisonous tree".
That's precisely what parallel construction is: a lie told by investigators to the court to sidestep the poison tree, bolstered by real evidence specifically gathered to lie outside of the branches of same.
It's a method that crooked law enforcement uses to deceive courts. It's so common as to have its own name now.
Right, but that implies you can construct an entire chain that doesn't include checking their phone. Just pointing out, per the post I was replying to, that it's not simply: use the phone, get other evidence, don't worry about the phone's evidence being thrown out.
No, that's not the case here. You don't need a parallel construction in this example, because the UFED extraction (even if tainted by a similar exploit) wasn't illegally obtained.
Sounds like a US/UK-centric view of things. There are many countries where evidence can be used in court regardless of how it was obtained, Sweden and Germany to name two.
If I search a phone with a tainted UFED and get a conversation between Bob (my subject) and Carl (his friend) about the drugs they're selling, that conversation either exists or doesn't exist. Now, let's assume that the court won't accept this evidence, based on a defense argument that it should be inadmissible after seeing the vulnerabilities of UFED, as detailed by Signal.
The next investigative step is to go interview and search Carl, since there is probable cause to believe that a conversation about their drug dealing occurred on his phone. Unless I a) know my UFED is vulnerable and b) have reason to believe the text conversation between Bob and Carl is fake, my warrant for Carl's phone is valid. Now, I search Carl's phone with the same UFED and find the exact same conversation.
At this point, it's still possible for the UFED to have made the conversation up on both sides and for both extractions, and this would probably make the resulting conversation (and potentially everything in the report) inadmissible, but any admissions from Bob or Carl, including ones based on questions asked about the conversation itself, would still be admissible. I could show the report to Bob and Carl as evidence against them and get a confession, which would be admissible.

Additionally, if the court determines that the UFED report is inadmissible based on the potential for it to be forensically unsound, I would still have the phones themselves. UFED (except in rare circumstances) requires the phone to be unlocked before it can make its extraction. As such, I could manually browse the phone to the conversation in question and take photos of the real conversations between Bob and Carl. I could also verify through an ISP or messenger app that those conversations occurred (for example, metadata showing matching times and message sizes that align with the potentially-fabricated messages).
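To make that last verification step concrete, here's a minimal sketch of how the metadata cross-check could work. The CSV layout and field names here are invented for illustration; real carrier records and extraction reports won't look like this.

~~~
import csv

def load_metadata(path):
    """Return a set of (timestamp, size) pairs from a CSV export."""
    with open(path, newline="") as f:
        return {(row["timestamp"], row["size_bytes"]) for row in csv.DictReader(f)}

# Hypothetical exports: one from the extraction report, one from the provider.
extraction = load_metadata("ufed_report_messages.csv")
provider = load_metadata("provider_records.csv")

# Messages in the report with no matching provider record deserve scrutiny;
# those are the candidates for having been fabricated by a tampered tool.
for timestamp, size in sorted(extraction - provider):
    print(f"no provider record for message at {timestamp} ({size} bytes)")
~~~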
The FOTPT defense only applies to illegally obtained evidence. Assuming you obtained a valid warrant or consent to conduct the search, there was nothing illegal about the UFED extraction that would make the FOTPT defense applicable.
~~~
The undated documents show that federal agents are trained to “recreate” the investigative trail to effectively cover up where the information originated, a practice that some experts say violates a defendant’s Constitutional right to a fair trial. If defendants don’t know how an investigation began, they cannot know to ask to review potential sources of exculpatory evidence - information that could reveal entrapment, mistakes or biased witnesses.
~~~
If you believe any of this "fruit of the poisonous tree" stuff makes any difference, I've got a bridge to sell you. There is clear evidence that DEA agents are trained to create a "clean" story of evidence around the illicit information they get.
The defense lawyers have to read it, and the people in law enforcement need to read the cases where judges throw out Cellebrite evidence based on that.
The problem with that is the cases would need to get to the discovery stage.
95% of all criminal cases in the US are pled out, largely because the defendant cannot afford competent legal representation.
This is why all kinds of questionable investigative tactics are still used, even some that have clearly been ruled unconstitutional. They know that most of the time it will not matter; they just need to get enough to make an arrest. The system will destroy the person anyway, no conviction needed, and most likely the person will plead guilty even if innocent, just to make the abuse stop.
Do all defendants not have the right to an attorney in criminal cases?
If evidence is obviously (or very likely) disputable, then it still provides leverage even in those cases where the attorney recommends that the defendant eventually plead.
Or maybe the attorneys available to those who can't afford one are just crap in general?
> Do all defendants not have the right to an attorney in criminal cases?
In theory yes, but in most cases that will be an overworked lawyer who has hundreds or thousands of other active cases, and who will not spend time on anything other than negotiating the best deal.
Even if you happen to get a good public defender who has the ability to devote lots of time to your individual case, they would have no budget to hire an expert witness to present the vulnerabilities in the Cellebrite software to a jury.
Your best bet would be if the ACLU or EFF took an interest in your case, but they take on very few cases, relatively speaking, and tend to focus on precedent-setting cases, or cases that have high public interest (or can be made to have high public interest).
> Or maybe the attorneys available to those who can't afford one are just crap in general?
In some cases they are incompetent; however, in most cases they are just underfunded and massively overworked. In most jurisdictions the public defender's office is funded at 1/10 or less of what the prosecutor's office is funded at.
No, but Cellebrite does because their credibility is what sells their products to law enforcement. It wouldn't kill all their sales, but enough to be painful.
If you, as I do, hold that personal devices should be private, then you're probably happy with the extraction tools being weak. Let them remain complacent.
That won't do much. In order to throw the evidence out, it needs to be shown that something was actually compromised by the tool, not just that it is possible.
Doesn't matter. When you can go through every message on someone's phone going back years, I'm sure you can find something to put nearly anyone in prison for.
No need to tell the court how you found out about the lawnmowing for the neighbour that was never reported to the IRS...
When you can call into question the data integrity of the items on the device, and whether the information from that device is accurate or was inserted by the machine used to break into it, that is some very basic Fourth Amendment stuff that could possibly get all items taken from the device deemed inadmissible.
Eh, this goes two ways. Cellebrite is rarely going to result in the only meaningful evidence that proves a single element of the offense. Instead, it is often used to further an investigation in order to find evidence that is more damning and of a higher evidentiary value. Fortunately for law enforcement, the integrity of the Cellebrite-obtained data isn't all that important if it leads to further evidence that is more significant.
I don’t think that’s true. There’s a legal idea of “fruit of the poisonous tree”[0] that basically says you can’t use bad evidence, either in court or as an excuse to collect more, valid evidence. The defense attorney would say “if it hadn’t been for that completely untrustworthy Cellebrite evidence, the police wouldn’t have been able to get that search warrant they used to find the gun at his house, so we want that thrown out.” And the judge would probably go along with that, and if they don’t an appeals court probably would.
> I don’t think that’s true. There’s a legal idea of “fruit of the poisonous tree”[0] that basically says you can’t use bad evidence, either in court or as an excuse to collect more, valid evidence.
I think the police have been using "parallel construction" to get around that for some time.
> Parallel construction is a law enforcement process of building a parallel, or separate, evidentiary basis for a criminal investigation in order to conceal how an investigation actually began.[1]
> In the US, a particular form is evidence laundering, where one police officer obtains evidence via means that are in violation of the Fourth Amendment's protection against unreasonable searches and seizures, and then passes it on to another officer, who builds on it and gets it accepted by the court under the good-faith exception as applied to the second officer.[2] This practice gained support after the Supreme Court's 2009 Herring v. United States decision.
While I'm sure it happens, I don't think that "evidence laundering" is particularly common, especially at the federal level. Cases I ran required an "initial notification" that succinctly described how our agents were notified of the potential criminal activity. The fear of having a case thrown out, or being turned down months into a high-level investigation because an attorney is uncomfortable with the likely outcome, is huge in ensuring a valid investigation is run.
Now, that's not to say that cops wouldn't do this in order to finally get a case around a particular subject who was able to sidestep previous investigations or something. I just doubt that it happens often enough to be worthwhile.
A defense team would need to show that the report had indeed been tainted by such an exploit as demonstrated by the Signal team. Just because the possibility exists doesn't mean it happened. If there is a significant evidence report from a Cellebrite pull, it almost always means that it either successfully unlocked the device, or acquired a full physical image, or both.
A report doesn't have to be generated by PA (Cellebrite's Physical Analyzer). A forensic examiner is free to use other methods to examine the binary, so long as the examiner can explain all the actions taken and any artifacts that would be left behind.
Plus, most law enforcement seizes the device and keeps it until after the trial. If there were valid arguments against the authenticity of data in the extraction report, it would be easy to verify that data's existence by checking the phone, re-extracting the data using a known-clean UFED, etc. This isn't the end of the world by any means for legal mobile device searches.
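As a rough sketch of what that re-verification could look like, assuming both extractions are sitting on disk as directories (the directory names are invented):

~~~
import hashlib
from pathlib import Path

def hash_tree(root):
    """Map each file's relative path to its SHA-256 digest."""
    root = Path(root)
    return {p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

original = hash_tree("extraction_original")      # the disputed extraction
reextract = hash_tree("extraction_known_clean")  # fresh pull, clean UFED

# Any file that differs, or appears in only one extraction, is where a
# tampering claim would have to point; identical trees undercut it.
for rel in sorted(set(original) | set(reextract)):
    if original.get(rel) != reextract.get(rel):
        print("mismatch:", rel)
~~~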
Signal never indicated this in the blog. They said that the phone would have a file that could be used to victimize the UFED PC after the extraction completes. It's plausible that the UFED could be infected post-extraction with malware that re-establishes a connection to the phone to infect it in reverse, but this is extremely unlikely, and it would be easy to determine (assuming the UFED still exists and hasn't been meaningfully modified since the extraction).
For the UFED Touch, for example, the device runs the extraction and saves the result to a flash drive or external drive. This is then reviewed with the UFED software on another machine (laptop, desktop, whatever). What you're describing would mean that the extraction takes place one-way (the Touch requests an ADB or iTunes backup process, and the phone responds with the backup files), and then the malicious file in the backup pops and executes a command that runs software on the phone, thus infecting the phone with false data to cover the tracks and make the data in the report match the device. This is unreasonably complex, and I doubt any judge would accept it as enough to consider the data inadmissible. Let alone the fact that the data likely exists elsewhere in a verifiable way (Facebook Messenger, WhatsApp, Google Drive, etc.), which the initial extraction results should give the cops probable cause to obtain.
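For a feel of what that one-way pull looks like, here is a minimal sketch using the stock Android Debug Bridge rather than Cellebrite's actual (non-public) tooling; it assumes adb is installed and the phone is unlocked with USB debugging enabled.

~~~
import hashlib
import subprocess

# The host only *requests* a backup; the phone streams bytes back and
# the user confirms on the device. Nothing from the phone runs here.
subprocess.run(["adb", "backup", "-f", "extraction.ab", "-all"], check=True)

# At this point no file from the phone has been parsed or executed on
# the host. The risk window opens later, when review software starts
# interpreting the files inside the backup.
with open("extraction.ab", "rb") as f:
    print("acquisition SHA-256:", hashlib.sha256(f.read()).hexdigest())
~~~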
They, and the people who support them, do. Moreover, not all of their users are involved in ethically questionable activity. Many of them are scrupulously following due process to thwart people doing really bad things.
> It makes a thinly-veiled threat that any random Signal user's data may actively attempt to exploit their software in the future and demonstrates that it's trivial to do so.
I've seen this reaction a few times. Can you say more? Presumably Signal users value privacy, and the implication is that when hacking tools used to violate that privacy are applied to a device running Signal, it may try to interfere and prevent the extraction to some degree. This seems like an ideal feature for a private messenger.
In contrast, it would strike me as strange if a Signal user switched to another messenger that allowed the data extraction because they were uncomfortable with Signal blocking it.
It's kind of hard to argue in favor of having random unknown data lying around, but...
The insinuation that my device may be host to something potentially malicious is concerning. It gets a little worse in that it can change without notice. I'd tend to trust Signal and their security, but the potential for that mechanism to be hijacked is always present. They've certainly made it hard, though, and I think the folks at Signal are probably clever enough that my anemic observations don't tell the whole story.
Everyone can inspect the Signal app's source code, though, and make sure that nothing funny is happening with the "aesthetically pleasing" files it downloads.
> The insinuation that my device may be host to something potentially malicious is concerning. It gets a little worse in that it can change without notice.
Do you use an iPhone or a phone with Google Play Services on it?
Those both have the technical capability of receiving targeted updates.
Not just that. In the case of Google Play Services, SafetyNet also downloads random binary blobs from Google and executes them as root. Makes me feel a lot… safer about my phone.
I really seriously doubt that anyone would ever advance the idea that Signal had deliberately framed them by creating false data on their phone. I don't see this as much more than pointing out that Cellebrite has vulnerabilities, just like the ones they exploit.
You wouldn't imply that Signal had framed you. You would imply that someone else had framed you using the same vulnerabilities that Signal has now indicated exist. i.e. You can't trust Cellebrite because it's now known to be trivial to subvert their software. It's also difficult for Cellebrite to prove that there aren't remaining vulnerabilities in their software, since Signal didn't disclose the problems they found and won't do so unless Cellebrite discloses the exploits they claim to be using in Signal.
It depends on the standard of reliability required. A good defence legal team might use this to argue that the phone data wouldn't be sufficient evidence in the actual criminal proceedings, where you do need to prove things beyond all reasonable doubt; however, even with all these caveats, it would be sufficient to use that phone data for investigative purposes and as probable cause for getting a warrant for something else, and then use that for the actual conviction.
For example, let's say they extract the Signal message data from your phone. You successfully convince everyone that this data might have been the result of some tampering. However, that doesn't prevent investigators from using that message history to find the other person, and get them to testify against you.
That’s not true, at least if we’re talking Fourth Amendment issues in the US. If the evidence was thrown out, any additional information gleaned from that evidence could be thrown out too. And in a scenario like the one you described, it likely would be.
That’s not a guarantee, of course, and it could be possible for police to corroborate that you had contact with someone else in another way (through records from a wireless carrier or by doing shoe-leather investigative work) and try to get data on that person to get them to testify, but if their only link was through messages that had been deemed inadmissible, they can’t use that witness.
The more likely question in a scenario you describe would be if the compromised Signal data would be enough to raise questions about the validity of all the data on the device. I.e., if Signal is out, can they use information from WhatsApp or iMessage or whatever. Past case law would suggest that once compromised, all of the evidence from the device is compromised — but a judge might rule otherwise.
It would be cool if Signal or another app could use those exploits they’ve uncovered to inject randomized data into the data stores of other messaging applications too. You know. Just as an experiment.
Wrong. You're talking about "fruit of the poisonous tree", which only involves evidence found as a result of an illegal search. A legal search that results in "false" evidence, which is later used to find "real" evidence will still meet sufficiency in court, because the now-"real" evidence is still admissible, just like the "false" evidence is admissible. It's just that the defense can have the "false" evidence thrown out because it's not verifiable.
In the example being discussed here, it's absolutely still fine for LE and there isn't a need to find a secondary source that side-steps the FOTPT argument. In fact, I could drag a subject into an interview, take his phone and present a warrant to search his phone.
If he refuses to unlock the phone, I can say "fine, we'll use our special software instead" and take it back to another room. After the interview, I give him back his phone and immediately bring in his co-conspirator. If I show the conspirator fake messages between him and my first subject and get him to testify against my subject, that's still admissible. If this is the case, why would using potentially-false/unverifiable Cellebrite data be inadmissible?
You can't just claim an unknown entity framed you and hope to get anywhere. Heck, you could just as well claim that Cellebrite themselves had it in for you.
Cellebrite has never claimed any particular exploits in Signal. Signal is exploitable in this particular way for entirely obvious and common reasons.
You'd claim that the tooling used and thus the evidence is unreliable. Not because of yourself or anybody targeting yourself, but due to other actors attacking Cellebrite and leaving you as collateral damage. You'd base this on testimony from other (court-authorized) experts, perhaps even the CEO of a major privacy app. Would be an interesting trial to follow in the US, not sure I'd want to be the defendant though.
It's about casting doubt on their software and its trustworthiness.
In computer forensics it's ALL about being able to verify, without a shadow of a doubt, that something is what they say it is. Chain of custody rules everything. This blasts a huge gaping hole in all that. He's proven that the chain of custody can be tampered with, undetected. Files can be planted, altered, or erased. Reports can be altered. Timestamps can be changed. The host OS can be exploited. It calls all past and future Cellebrite reports into question. Cellebrite can no longer guarantee that their software acts in a consistent, reliable, verifiable way. It leaves doubt.
> In computer forensics it's ALL about being able to verify, without a shadow of a doubt, that something is what they say it is
Mostly. The other side gets all the evidence that the opposing side sees. They both get a chance to review it.
> Chain of custody rules everything.
Agree.
> This blasts a huge gaping hole in all that.
Not really. The process goes in two steps. The first is to pull all the data from the phone, in a chain-of-custody manner, onto the Windows box; in an adversarial case, both sides can do this. The second is the analysis, and as I understand it, the analysis portion is where things can explode. In the hands of someone skilled in forensics, the extracted data would be saved on some other device first, possibly to be shared with the other side, and only then would the risky, potentially explosive analysis be done. It is very unlikely that the data from all previous cases exists on that one device and nowhere else.
Therefore,
> It calls all past and future Cellebrite reports into question.
is not true, as the extracted files are likely not on the collecting Windows device.
In any case, it is not clear how many uses of this device are in actual legal environments.
If the data collection step can possibly be affected by things like media-file exploits, then that would be a much bigger problem by itself. Cellebrite would have no reason to execute or interpret anything off the target device in this stage. If they were doing that, then the Signal article would have pointed that out first.
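To illustrate that point: a collection stage can treat everything coming off the device as opaque bytes, hashing and storing files without ever interpreting them. A minimal sketch, with made-up paths:

~~~
import hashlib
from pathlib import Path

SOURCE = Path("device_dump")     # raw files pulled from the phone
MANIFEST = Path("manifest.txt")  # hashes recorded before any analysis

with MANIFEST.open("w") as out:
    for path in sorted(SOURCE.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            out.write(f"{digest}  {path.relative_to(SOURCE)}\n")

# Nothing above parses a media file, so a booby-trapped file has no
# parser to exploit until the separate analysis stage begins.
~~~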
You can claim that by having Signal on your phone, it probably compromised the evidence gathering, and you didn't know about it and you don't know how, so that evidence is not trustworthy. Kind of like police opening an anti-tamper/anti-shoplifting seal that ruins the item they are trying to confiscate by releasing a large amount of dye.