Exploiting vulnerabilities in Cellebrite UFED and Physical Analyzer (signal.org)
1897 points by derekerdmann on April 21, 2021 | 332 comments



Wow, that video made my day. This bit is key:

> "For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question."

They may have just gotten a lot of evidence collected using Cellebrite from phones with (or without) Signal installed on them thrown out of court.
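
To make that concrete: once an exploit is running inside the report generator itself, any checksum the tool computes over its own output proves nothing, because the compromised code can tamper first and checksum second. A minimal Python sketch of the idea (the record format is invented for illustration; nothing here is Cellebrite-specific):

    import hashlib, json

    def finalize_report(records):
        # The tool serializes its findings and stamps a checksum over them.
        body = json.dumps(records, sort_keys=True).encode()
        return {"body": body, "sha256": hashlib.sha256(body).hexdigest()}

    def verify_report(report):
        return hashlib.sha256(report["body"]).hexdigest() == report["sha256"]

    records = [{"app": "sms", "text": "meet at 5"}]

    # Exploit code running in the same process edits the records *before*
    # the checksum is computed, so the checksum blesses the tampered data.
    records.append({"app": "sms", "text": "planted message"})

    print(verify_report(finalize_report(records)))  # True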

I don't recall the details, but there was an absolute unsubstantiated speculative and surely fictional rumor of at least one entirely theoretical zero-day non-gif formatted image file that exploited a similar class of vulnerability in what was probably not a market leading tool used tangentially for the same purposes, floating around well over a decade ago as well.

I for one am very glad that these hypothetical issues have almost surely been fixed.


The video made my inner child feel truly vindicated with my choice of username.


Same here.


Same.


Protip: nice try, but very new accounts on HN show up green for some period of time.


well, close enough for me.


/u/hprotagonist - I've always thought you had the best username on HN, for what it's worth!


i'm consistently amazed it was available.


Mwahahahaaa!


I’m envious.


…of your awesome username. I wasn’t being snarky!


This is truly a hacker’s retort.

It attacks Cellebrite's ability to operate by casting doubt on the reports generated by the product that their customers may wish to use in court.

It places them in legal peril from Apple, and removes any cover Apple would have to not take legal action. (I assume someone at Apple knew they were shipping their DLLs?)

It makes a thinly-veiled threat that any random Signal user's data may actively attempt to exploit their software in the future and demonstrates that it's trivial to do so.

edited to add a bonus one:

Publish some data about what they are doing to help create a roadmap for any other app that doesn't want their data to be scanned.


All that trouble because a bag conveniently "fell from a truck". All in all I'm really happy for all this.


I am happy to see the bag survived its most untimely truck tumble while remaining a e s t h e t i c a l l y - - - p l e a s i n g.


What a sturdy bag it is!


I love Kim Zetter's tweet:

Must have been a turnip truck https://twitter.com/KimZetter/status/1384936715769503745

That's a dig at the turnipesque savvy of Cellebrite, which royally stumbled into this.

cf. https://www.dwt.com/blogs/privacy--security-law-blog/2015/03...

> FCC chair Wheeler said the FCC “didn’t just fall off the turnip truck.” Through CALEA, CPNI, and CSRIC, Wheeler said the agency has been working to protect consumer privacy.


Yup. Those things have a way of "falling from a truck". Another win for the "fell from a truck" gang ;)


I found that funny too. It sounds to me like a good way to end up with the device to analyze without being constrained by a contract or EULA prohibiting it.


Yes, it's known as a euphemism. TFA just took the joke to hilarious extremes with the photos.

https://www.phrases.org.uk/meanings/fell-off-the-back-of-a-t...


TFA? I know you're referring to Moxie (somehow) but I can't figure out the acronym.


IIRC "The Fucking Article". I think the expletive originally was because people "wouldn't read TFA", but eventually it just kinda became the way to refer to the article linked from a Hacker News thread.


Oooh of course! Thanks!


Also can be "the fine article" when used in a non-antagonistic way!

I think it originated with "RTFM" which was an old unix admin way of saying read the fucking manual (or man page).


Yes RTFM I'm very familiar with, which is why I should've connected the two.


Indeed, how convenient. If it truly did fall off the truck right while he was on a walk, then there is the possibility that this is a rubber ducky attack. This is basically the equivalent of leaving a USB flash drive lying around. I hope the author took the necessary precautions when reverse engineering the device. Companies like Cellebrite have deep connections to certain three-letter communities that make staging this sort of attack trivial.


"falling off a truck" is slang for "was stolen".


TIL: https://idioms.thefreedictionary.com/fall+off+a+truck

Edit: looking up a bit more, it seems like this idiom is used to denote goods sold for cheap because they were stolen. Like "Bob is selling genuine iPhones very cheap, I fear they fell from the back of a truck".

Edit edit: I initially took it as "we won't tell how we got this", because I didn't know this idiom, but it seems several people agree with this interpretation. Not necessarily stolen, but obtained from an undisclosed source.


That isn't quite right. Although it's commonly used to describe something that's been stolen, it's more generally used to indicate that the speaker doesn't want to talk about where it came from. That's how it's been used in this article.


Given the sort of business Cellebrite is in, they would probably still want to treat anything connected to it with an overabundance of caution.


I think we can be relatively confident that the connected machine was airgapped and perhaps run in a VM.

Perhaps even in a Faraday cage...


> (...) and perhaps run in a VM.

One of the screenshots[0] shows the VMware Tools Service running, so yeah, looks like a virtualized guest.

[0]: https://signal.org/blog/images/cellebrite-dlls-loaded.png


> If it truly did fall off the truck

lol


Seeing reactions like GP’s I’m surprised at how many people don’t know this expression.


I thought it was a euphemism because they couldn't reveal who gave it to them. Confessing it was stolen, even as a euphemism, is too blatant to be taken seriously IMO.


Even if you didn't, I think it's pretty obvious from context.


If English is your second language, you may not have come across it. It's a very informal and infrequent idiom.


FWIW we have the exact same expression in French “c’est tombé du camion”


Ditto for German: "vom Laster gefallen"


Add one more: Anyone publicly attacking Signal's privacy in the future is painting a very large target on their forehead.


There's a difference between attacking their privacy (which is fair and should be done regularly if users' privacy is at stake) and claiming access to secure data that is wholly untrue.


> It attacks Cellebrite's ability to operate by casting doubt on the reports generated by the product that their customers may wish to use in court.

Fortunately, parallel construction means you never really have to throw out bad evidence as long as you can find some good evidence too!


Knowing that at least one row of data in a database might have been modified randomly means you can't fully trust any one line in the database completely.

It reminds me of the story of https://en.wikipedia.org/wiki/Annie_Dookhan


Sure, but the data on the phone will lead you to evidence in the real world, which will be meaningful proof that can be used in court.


You either will have to come up with a plausible (literally, 'warranted') way to have gotten that data in the real world without relying on the data on the phone as the reason you went looking, or it will likely be thrown out due to it being "Fruit of the poisonous tree".


That's precisely what parallel construction is: a lie told by investigators to the court to sidestep the poison tree, bolstered by real evidence specifically gathered to lie outside of the branches of same.

It's a method that crooked law enforcement uses to deceive courts. It's so common as to have its own name now.


Right, but that implies you can construct an entire chain that doesn't include checking their phone. Just pointing out, per the post i was replying to, it's not simply use the phone, get other evidence, don't worry about the phone's evidence being thrown out.


That's literally what it is, except "get other evidence" means "construct a plausible story that gets you to the same point".


No, that's not the case here. You don't need a parallel construction in this example, because the UFED extraction (even if tainted by a similar exploit) wasn't illegally obtained.


Didn’t know that, thank you!


Sounds like a US/UK-centric view of things. There are many countries where evidence can be used in court regardless of how it was obtained, Sweden and Germany to name two.


Fair point.


False data isn't FOTPT.

If I search a phone with a tainted UFED and get a conversation between Bob (my subject) and Carl (his friend) about the drugs they're selling, that conversation either exists or doesn't exist. Now, let's assume that the court won't accept this evidence, based on a defense argument that it should be inadmissible after seeing the vulnerabilities of UFED, as detailed by Signal.

The next investigative step is to go interview and search Carl, since there is probable cause to believe that a conversation occurred about their drug dealing on his phone. Unless I a) know my UFED is vulnerable and b) have reason to believe the text conversation between Bob and Carl is fake, my warrant for Carl's phone is valid. Now, I search Carl's phone with the same UFED and find the exact same conversation.

At this point, it's still possible for the UFED to have made the conversation up on both sides and for both extractions, and this would probably make the resulting conversation (and potentially everything in the report) inadmissible, but any admissions from Bob or Carl, including based on questions asked about the conversation itself, would still be admissible. I could show the report to Bob and Carl as evidence against them and get a confession, which would be admissible. Additionally, if the court determines that the UFED report is inadmissible based on the potential for it to be forensically unsound, I would still have the phones themselves. UFED (except in rare circumstances) requires the phone to be unlocked before it can make its extraction. As such, I could manually browse the phone to the conversation in question and take photos of the phone with the real conversations between Bob and Carl. I could also verify through an ISP/messenger app that evidence of those conversations occurred (for example, metadata demonstrating matching times and message sizes that align with the potentially-fabricated message), as sketched below.

The FOTPT defense only applies to illegally obtained evidence. Assuming you obtained a valid warrant or consent to conduct the search, there was nothing illegal about the UFED extraction that would make the FOTPT defense applicable.
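
A rough sketch of the metadata cross-check described above: match each extracted message against carrier records by time and size. Both record formats here are made up for illustration, not any real tool's or carrier's schema.

    from datetime import datetime, timedelta

    # Hypothetical records: one set from the extraction report, one from
    # carrier metadata obtained by legal process.
    extracted = [{"ts": datetime(2021, 4, 1, 17, 3, 12), "size": 48}]
    carrier_log = [{"ts": datetime(2021, 4, 1, 17, 3, 14), "size": 48}]

    def corroborated(msg, logs, slack=timedelta(seconds=30)):
        # A message is corroborated if the carrier saw a transmission of
        # the same size at roughly the same time.
        return any(abs(msg["ts"] - e["ts"]) <= slack and msg["size"] == e["size"]
                   for e in logs)

    for msg in extracted:
        print(msg["ts"], "corroborated:", corroborated(msg, carrier_log))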


In your example, it's true that you could question Carl, but a warrant to search Carl based on the false data should be overturned later.


It's not knowingly false, so the warrant would remain valid.


https://www.reuters.com/article/us-dea-sod/exclusive-u-s-dir...

> The undated documents show that federal agents are trained to “recreate” the investigative trail to effectively cover up where the information originated, a practice that some experts say violates a defendant’s Constitutional right to a fair trial. If defendants don’t know how an investigation began, they cannot know to ask to review potential sources of exculpatory evidence - information that could reveal entrapment, mistakes or biased witnesses.

If you believe any of "fruit of the poisonous tree" stuff makes any difference, I've got a bridge to sell you. There is clear evidence that DEA agents are trained to create a "clean" story of evidence around the illicit information they get.


It doesn't matter. This story/example from Signal has nothing to do with FOTPT.


Cellebrite acquired BlackBag Technologies recently. BlackBag emerged from Apple's security team.


Fitting name.

Black-bag the opposition politican, and then black-bag her phone.


A little different from brown bag, in which the non-traceable payment travels under the table.


Or dog shit under a door.


Sadly I suspect the people in law enforcement who make purchasing decisions never read the Signal blog, and therefore all these points will be moot.


They don't have to read that.

The defense lawyers have to read it, and the people in law enforcement need to read the cases where judges throw out Cellebrite evidence based on that.


The problem with that is the cases would need to get to the discovery stage.

95% of all criminal cases in the US are pled out, largely because the defendant cannot afford competent legal representation.

This is why all kinds of questionable investigative tactics are still used, even some that have clearly been ruled unconstitutional. They know that most of the time it will not matter; they just need to get enough to make an arrest. The system will destroy the person anyway, no conviction needed, and most likely the person will plead guilty even if innocent, just to make the abuse stop.


Do all defendants not have the right to an attorney in criminal cases?

If evidence is obviously (or with high chance) disputable, then it puts pressure even in those cases where the attorney recommends the defendant to eventually plead.

Or maybe the attorneys available to those who can't afford are just crap in general?


>>Do all defendants not have the right to an attorney in criminal cases?

In theory yes, but in most cases that will be an overworked lawyer with hundreds or thousands of other active cases, who will not spend time on anything other than negotiating the best deal.

Even if you happen to get a good public defender that has the ability to devote lots of time to your individual case, they would have no budget to hire an expert witness to present the vulnerabilities in the Cellebrite software to a jury.

Your best bet would be if the ACLU or EFF took an interest in your case, but they take on very few cases relatively speaking and tend to focus on precedent setting cases, or cases that have high public interest (or can be made to have high public interest)

>Or maybe the attorneys available to those who can't afford are just crap in general?

In some cases they are incompetent; however, in most cases they are just underfunded and massively overworked. In most jurisdictions the public defender's office is funded at 1/10 or less of what the prosecutor's office is.


The worst enterprises using Cellebrite don't really have to worry about defense lawyers.


No, but Cellebrite does because their credibility is what sells their products to law enforcement. It wouldn't kill all their sales, but enough to be painful.


If you, as I do, hold that personal devices should be private, then you're probably happy with the extraction tools being weak. Let them remain complacent.


That won't do much. In order to throw the evidence out, it needs to be shown that something was actually compromised by the tool, not just that it is possible.

Consider https://www.securityweek.com/forensics-tool-flaw-allows-hack.... Have any cases been thrown out? I don't think so.


This.


They won't be moot when defense lawyers bring them up.


Doesn't matter. When you can go through every message on someone's phone back for years, I'm sure you can find something to put nearly anyone in prison for.

No need to tell the court how you found out about the lawnmowing for the neighbour that was never reported to the IRS...


When you can call into question the data integrity of the items on the device and whether the information from that device is accurate or was inserted by the machine used to break into it, that is some very basic fourth amendment stuff that could possibly get all items taken from the device deemed inadmissible.


Eh, this goes two ways. Cellebrite is rarely going to result in the only meaningful evidence that proves a single element of the offense. Instead, it is often used to further an investigation in order to find evidence that is more damning and of a higher evidentiary value. Fortunately for law enforcement, the integrity of the Cellebrite-obtained data isn't all that important if it leads to further evidence that is more significant.


I don’t think that’s true. There’s a legal idea of “fruit of the poisonous tree”[0] that basically says you can’t use bad evidence, either in court or as an excuse to collect more, valid evidence. The defense attorney would say “if it hadn’t been for that completely untrustworthy Cellebrite evidence, the police wouldn’t have been able to get that search warrant they used to find the gun at his house, so we want that thrown out.” And the judge would probably go along with that, and if they don’t an appeals court probably would.

[0] https://www.law.cornell.edu/wex/fruit_of_the_poisonous_tree


> I don’t think that’s true. There’s a legal idea of “fruit of the poisonous tree”[0] that basically says you can’t use bad evidence, either in court or as an excuse to collect more, valid evidence.

I think the police have been using "parallel construction" to get around that for some time.

https://en.wikipedia.org/wiki/Parallel_construction

> Parallel construction is a law enforcement process of building a parallel, or separate, evidentiary basis for a criminal investigation in order to conceal how an investigation actually began.[1]

> In the US, a particular form is evidence laundering, where one police officer obtains evidence via means that are in violation of the Fourth Amendment's protection against unreasonable searches and seizures, and then passes it on to another officer, who builds on it and gets it accepted by the court under the good-faith exception as applied to the second officer.[2] This practice gained support after the Supreme Court's 2009 Herring v. United States decision.


While I'm sure it happens, I don't think that "evidence laundering" is particularly common, especially at the federal level. Cases I ran required an "initial notification" that succinctly described how our agents were notified of the potential criminal activity. The fear of having a case thrown out, or being turned down months into a high-level investigation because an attorney is uncomfortable with the likely outcome, is huge in ensuring a valid investigation is run.

Now, that's not to say that cops wouldn't do this in order to finally get a case around a particular subject who was able to sidestep previous investigations or something. I just doubt that it happens often enough to be worthwhile.


A defense team would need to show that the report had indeed been spoiled with such an exploit as demonstrated by the Signal team. Just because the possibility exists doesn't mean it happened. If there is a significant evidence report from a Cellebrite pull, it almost always means that it either successfully unlocked the device or acquired a full physical image, or both.

A report doesn't have to be generated by PA. A forensic examiner is free to use other methods to examine the binary, so long as the examiner can explain all the actions and any artifacts that would be left behind.


Correct!

Plus, most law enforcement seizes the device and keeps it until after the trial. If there were valid arguments against the authenticity of data in the extraction report, it would be easy to verify that data's existence by checking the phone, re-extracting the data using a known-clean UFED, etc. This isn't the end of the world by any means for legal mobile device searches.


Except that if the device is compromised, it could have changed the data on the phone. The phone, as evidence, can't be trusted anymore.


Signal never indicated this in the blog. They said that the phone would have a file that could be used to victimize the UFED PC after the extraction completes. It's plausible that the UFED could be infected post-extraction with malware that re-establishes a connection to the phone to infect it in reverse, but this is extremely unlikely and it would be easy to determine (assuming the UFED still exists and hasn't been meaningfully modified since the extraction).

For the UFED Touch, for example, the device runs the extraction and saves the result to a flash drive or external drive. This is then reviewed with the UFED software on another machine (laptop, desktop, whatever). What you're describing would mean that the extraction takes place one-way (the Touch requests an ADB or iTunes backup process, and the phone responds with the backup files). Then the malicious file in the backup pops and executes a command that runs software on the phone, thus infecting the phone with false data to cover the tracks and make the data on the report match the device. This is unreasonably complex, and I doubt any judge would accept it as enough to consider the data inadmissible. Let alone the fact that the data likely exists elsewhere in a verifiable way (Facebook Messenger, WhatsApp, Google Drive, etc), which the initial extraction results should give probable cause to the cops to obtain.


Even if parallel construction wouldn't exist, in Germany, for example, illegally obtained evidence is still valid in court.


To be more pedantic, evidence derived from illegally obtained evidence may be admissible but afaik the illegally obtained evidence is not.

In the Metzler kidnapping case, the first confession, which had been obtained under threat of torture, was not admissible in court.


Parallel construction is still a possibility.

Not to mention regimes that don’t actually care about things like evidence being “legally admissible”.


They, and the people who support them, do. Moreover, not all of their users are involved in ethically questionable activity. Many of them are scrupulously following due process to thwart people doing really bad things.


Good. They will then collect useless data that a lawyer will destroy in court.


It was brazen enough to pop an iTunes GUI a few years back.


IANAL, and I don't speak for Apple, but as far as I can tell, the part about Apple is nonsense.

CoreFoundation is open source: https://github.com/opensource-apple/CF

libdispatch is open source: https://apple.github.io/swift-corelibs-libdispatch/post/libd...

ASL is open source: https://opensource.apple.com/source/syslog/syslog-349.1.1/li...

The objective C runtime is open source: https://github.com/opensource-apple/objc4/blob/master/APPLE_...

icu, pthread, zlib, and libxml did not even originate at Apple.


Yes, but the copy shipped is signed by Apple. i.e. they did not compile their own copy.


> signed by Apple

Signed by Apple's key!


MobileDevice.framework isn't open source.


> It makes a thinly-veiled threat that any random Signal user's data may actively attempt to exploit their software in the future and demonstrates that it's trivial to do so.

That does not make me feel good about Signal.


I've seen this reaction a few times. Can you say more? Presumably Signal users value privacy, and the implication is that when hacking tools used to violate that privacy are applied to a device running Signal, it may try to interfere and prevent the extraction to some degree. This seems like an ideal feature for a private messenger.

In contrast, it would strike me as strange if a Signal user switched to another messenger that allowed the data extraction because they were uncomfortable with Signal blocking it.


It's kind of hard to argue about having random unknown data lying around, but...

The insinuation that my device may be host to something potentially malicious is concerning. It gets a little worse that it can change, without notice. I'd tend to trust Signal and their security, but the potential for that mechanism to be hijacked is always present. They've certainly made it hard, though, and I think the folks at Signal are probably clever enough that my anemic observations don't tell the whole story.


Everyone can inspect the Signal app's source code, though, and make sure that nothing funny is happening with the "aesthetically pleasing" files it downloads.


If they publish the aesthetically pleasing file source code, doesn't that defeat the point?


> The insinuation that my device may be host to something potentially malicious is concerning. It gets a little worse that it can change, without notice.

Do you use an iPhone or a phone with Google Play Services on it?

Those both have the technical capability of receiving targeted updates.


Not just that. In the case of Google Play Services, SafetyNet also downloads random binary blobs from Google and executes them as root. Makes me feel a lot… safer about my phone.


I really seriously doubt that anyone would ever advance the idea that Signal had deliberately framed them by creating false data on their phone. I don't see this as much more than pointing out that Cellebrite has vulnerabilities, just like the ones they exploit.


You wouldn't imply that Signal had framed you. You would imply that someone else had framed you using the same vulnerabilities as Signal has now indicated exists. i.e. You can't trust Cellebrite because it's now known to be trivial to subvert their software. It's also difficult for Cellebrite to prove that there aren't remaining vulnerabilities in their software since Signal didn't disclose the problems they found and won't do so unless Cellebrite discloses the exploits they claim to be using in Signal.


It depends on the standard of reliability required. A good defence legal team might use this to argue that the phone data wouldn't be sufficient evidence in the actual criminal proceedings, you do need to prove things beyond all reasonable doubt there; however, even with all these caveats it would be sufficient to use that phone data for investigative purposes and as probable cause for getting a warrant for something else, and then use that for the actual conviction.

For example, let's say they extract the Signal message data from your phone. You successfully convince everyone that this data might have been the result of some tampering. However, that doesn't prevent investigators from using that message history to find the other person, and get them to testify against you.


That’s not true, at least if we’re talking fourth amendment issues in the US. If the evidence was thrown out, any additional information gleaned from that evidence could be thrown out too. And in a scenario like what you described, it likely would.

That’s not a guarantee, of course, and it could be possible for police to corroborate that you had contact with someone else in another way (through records from a wireless carrier or by doing shoe leather investigative work) and use try to get data on that person to get them to testify, but if their only link was through messages that had been deemed inadmissible, they can’t use that witness.

The more likely question in a scenario you describe would be if the compromised Signal data would be enough to raise questions about the validity of all the data on the device. I.e., if Signal is out, can they use information from WhatsApp or iMessage or whatever. Past case law would suggest that once compromised, all of the evidence from the device is compromised — but a judge might rule otherwise.

It would be cool if Signal or another app could use those exploits they’ve uncovered to inject randomized data into the data stores of other messaging applications too. You know. Just as an experiment.


Wrong. You're talking about "fruit of the poisonous tree", which only involves evidence found as a result of an illegal search. A legal search that results in "false" evidence, which is later used to find "real" evidence will still meet sufficiency in court, because the now-"real" evidence is still admissible, just like the "false" evidence is admissible. It's just that the defense can have the "false" evidence thrown out because it's not verifiable.

In the example being discussed here, it's absolutely still fine for LE and there isn't a need to find a secondary source that side-steps the FOTPT argument. In fact, I could drag a subject into an interview, take his phone and present a warrant to search his phone.

If he refuses to unlock the phone, I can say "fine, we'll use our special software instead" and take it back to another room. After the interview, I give him back his phone and immediately bring in his co-conspirator. If I show the conspirator fake messages between him and my first subject and get him to testify against my subject, that's still admissible. If this is the case, why would using potentially-false/unverifiable Cellebrite data be inadmissible?


You can't just claim an unknown entity framed you and hope to get anywhere. Heck, you could just as well claim that Cellebrite themselves had it in for you.

Cellebrite has never claimed any particular exploits in Signal. Signal is exploitable in this particular way for entirely obvious and common reasons.


You'd claim that the tooling used and thus the evidence is unreliable. Not because of yourself or anybody targeting yourself, but due to other actors attacking Cellebrite and leaving you as collateral damage. You'd base this on testimony from other (court-authorized) experts, perhaps even the CEO of a major privacy app. Would be an interesting trial to follow in the US, not sure I'd want to be the defendant though.


It's about casting doubt on their software and its trustworthiness.

In computer forensics it's ALL about being able to verify, without a shadow of doubt, that something is what they say it is. Chain of custody rules everything. This blasts a huge gaping hole in all that. He's proven that the chain of custody can be tampered with, undetected. Files can be planted, altered or erased. Reports can be altered. Timestamps can be changed. The host OS can be exploited. It calls all past and future Cellebrite reports into question. Cellebrite can no longer guarantee their software acts in a consistent, reliable, verifiable way. It leaves doubt.


> In computer forensics it's ALL about being able to verify, without a shadow of doubt, that something is what they say it is

Mostly. The other side gets all the evidence that the opposing side sees. They both get a chance to review it.

> Chain of custody rules everything.

Agree.

> This blasts a huge gaping hole in all that.

Not really. The process goes in two steps. One is to pull all the data from the phone, in a chain-of-custody manner. In an adversarial case, both sides can do this.

First comes collection: moving the data to a Windows box. Next is the analysis. As I understand it, the analysis portion is where things can explode. If it is in the hands of someone skilled in forensics, the extracted data would be saved on some other device, possibly to be shared with the other side, before the risky, potentially explosive analysis is done. It is very unlikely that all previous cases exist on that device and nowhere else.

Therefore,

> It calls all past and future Cellebrite reports into question.

is not true, as the extracted files are likely not on the collecting Windows device.

In any case, it is not clear how many uses of this device are in actual legal environments.
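
For what it's worth, the standard mitigation for the collection step is to hash the extraction image at acquisition time and record that hash somewhere the analysis box can never touch. A minimal sketch, assuming the image has already been written to extraction.bin (a made-up filename):

    import hashlib, json, time

    def acquire_hash(image_path):
        # Hash the raw extraction image in chunks at acquisition time.
        h = hashlib.sha256()
        with open(image_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Record this off-box (printed log, separate evidence system). A later
    # compromise of the analysis machine then can't silently rewrite the
    # image without the mismatch being detectable.
    record = {"image": "extraction.bin",
              "sha256": acquire_hash("extraction.bin"),
              "acquired_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())}
    print(json.dumps(record))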


> As I understand it, the analysis portion is where things can explode.

This very blog post says they have found similar vulnerabilities in both steps.


If the data collection step can possibly be affected by things like media file exploits then that would be a much bigger problem by itself. Cellebrite would have no reason to execute or interpret anything off the target device in this stage. If they were doing that then the Signal article would have pointed that out first.


Merely backing up a phone with Cellebrite may run exploits.

Exploits may fuck up the phone, the backup, and even the Cellebrite host OS.

As such, the phone, the backed-up data, and the reports are useless going forward.


Why would it have to be an unknown entity? I imagine in at least some court cases there could be potential antagonists to pin the blame on.


You can claim that by having Signal on your phone, it probably compromised the evidence gathering and you didn't know about it and you don't know how, so that evidence is not trustworthy. Kind of like police opening anti-tamper / anti-shoplifting seals, which ruin the item they are trying to confiscate with a large amount of dye.


Cellebrite doesn't even have a bug bounty programme or a contact for reporting bugs.

Last year I managed to gain partial access to one of their systems and it took me weeks of emailing their internal email addresses to finally get the bug fixed. They were total ass about it.

Now I've got complete access to their entire database and I don't know what do. Can HN advise?


Well definitely don't share it with DDOS secrets, news outlets or any other major company that would potentially report on it. That would be very bad press for Cellebrite, and if they connected it to you they could be very annoying - though again, they would need pretty good evidence connecting it to you and things like TOR and proper privacy practices would make that very difficult. So definitely don't do that, or use something like tails to post it to multiple SecureDrop outlets.


This definitely isn't a trap.


Post it here and let folks take a look.


Really REALLY bad idea - this is one of law enforcement's larger pet gadgets and companies, so the GP would not only have a particularly enthusiastic mob coming after them, said mob's pitchforks would have automatic cannon launchers and EMPs and push-button-activated nunchucks and all kinds of other crazy things that aren't legal for standard-issue pitchforks.

So if the database is fingerprintable to the GP specifically in any way, they're very very dead. And the random username doesn't even count here; they probably didn't post from Tor, so their real IP is connected to this post.


I think it's a fair guess that a security researcher like that knows how to post on hn without leaving their home address. It's not particularly difficult.


Well this would also require him to have been properly anonymous when reporting the bugs last year.


Unfortunately HN / Cloudflare / Google still block some Tor users with the privacy invasive software known as ReCAPTCHA. Not sure if they’ve switched to hcaptcha yet.


This is irrelevant to the thread. ReCAPTCHA isn't a privacy invasion at all for this use case.


> In completely unrelated news, upcoming versions of Signal will be periodically fetching files to place in app storage. These files are never used for anything inside Signal and never interact with Signal software or data, but they look nice, and aesthetics are important in software.

I wish I could see those files in action...


I wonder if the intention here is to deter Cellebrite from parsing Signal files? Or to pressure them into fixing their security vulnerabilities?


> Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding. We have a few different versions of files that we think are aesthetically pleasing, and will iterate through those slowly over time.

Pretty sure it's the former, since the above is a way to ensure that Cellebrite can't just gather all implied exploit files and make sure they've got those specific problems all patched. This is, quite literally, an informational attempt at guerilla/asymmetric warfare, where Signal is trying to make engaging with them too costly, while also landing a few blows quite a bit above its weight class. Cellebrite now has to decide whether to keep after an adversary that is hard to pin down, ambushes them, and has shown it can hit them really hard where it matters (credibility, and thus their pocket book).
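
The sharding part is easy to picture: hash the identifier into a bucket and compare against a rollout percentage, so a given install consistently does or doesn't receive a file. This is only a sketch of the general technique (Signal hasn't published their mechanism; the salt and percentage here are invented):

    import hashlib

    def in_rollout(phone_number: str, salt: str, percent: int) -> bool:
        # Deterministically map a phone number to a bucket 0-99; same
        # input always yields the same answer.
        digest = hashlib.sha256((salt + phone_number).encode()).digest()
        return int.from_bytes(digest[:4], "big") % 100 < percent

    # e.g. roll a given file out to ~5% of installs
    print(in_rollout("+15551234567", "file-v1", 5))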


This indeed looks like a FUD statement, implying that they can have an infinite number of potential vulnerabilities. Realistically though, writing parsers that do not yield control of your whole device is not that complex. The people exploiting iOS zero days can certainly do it.
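
As a sketch of the sort of containment that's table stakes elsewhere: parse untrusted media in a throwaway child process, so the worst a hostile file can do is kill or hang that process. This is a toy; real isolation would add seccomp/job objects, dropped privileges, and no network.

    import subprocess, sys

    PARSER = r"""
    import sys
    # Imagine real media parsing here, in a disposable process. A hostile
    # file can at worst take down this process, not the analysis host.
    data = open(sys.argv[1], "rb").read()
    sys.exit(0 if data[:2] == b"\xff\xd8" else 1)  # trivial JPEG check
    """

    def parse_untrusted(path, timeout=10):
        try:
            proc = subprocess.run([sys.executable, "-c", PARSER, path],
                                  timeout=timeout)
            return proc.returncode == 0
        except subprocess.TimeoutExpired:
            return False  # a hang means reject; the host is unaffected

    print(parse_untrusted("sample.jpg"))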


You're not wrong at all, but if they're shipping these garbage ancient versions of ffmpeg, there are likely oodles of other bugs lurking around. And, if Cellebrite acts like most other companies who've had their awful security exposed, they will fix only this bug and leave everything else.


It's not that hard but neither is shipping patched versions of ffmpeg. This company will have some catching up to do.


But it might be easier for Cellebrite to just stop exfiltrating data from Signal. Of course, other apps could discover similar vulnerabilities.


That's not enough. With file system permission, Signal could place files anywhere (like prepared gifs in the Pictures folder).

I think this taints any phone having Signal installed.


Signal is capable of finding more exploits given more time. The important piece is that there now exists reasonable doubt about data from Cellebrite, so it is not so good as evidence.


Nah, Cellebrite will panic for a bit at the possibility of facing repercussions but ultimately not commit enough effort to change anything. Cellebrite's counterparties, however, might not be so complacent.


Signal should generalise this into a library so that other app vendors can include these perfectly cromulent files


That would reveal all the exploits to Cellebrite, which Signal is trying to avoid.


I imagine many fellow app vendors, who may or may not maintain good relationships with Signal, might possibly have found a USB drive containing the relevant data on the street. (Pure speculation, I don't know anything about Moxie, but judging by his tone, I wouldn't be shocked.)


hehe.

Now imagine if Hack Back laws actually passed... companies like Whisper Systems would have had impunity for even more shenanigans :)


or just flipping them off, which seems OK too.


I don't get it, can anyone elaborate on what they are talking about there?


They are implying that future versions of Signal will drop random files on your phone that "may or may not" cause damage to Cellebrite systems.

They are basically putting the threat out that if you use Cellebrite on Signal in the future, you might not get the data you expect, and at worst, it may corrupt the report/evidence.

This also brings into question the chain of custody, as an untrusted device being imaged can alter reports of unrelated devices.


Damn, a chain of custody where the thing in evidence is also part of not only its own chain but also those of other evidence acquired afterwards? I can't imagine what kind of case law exists around that, but I'm sure it's hilarious!


> also those of other evidence acquired afterwards

And prior extracts on the device.


Which is what I don't really understand - it seems like Cellebrite could spin this in their favor so law enforcement would need to purchase a new kit for each device?


Signal is going to start attacking third-party tools once it's installed on your phone.

It's as though Theo decided that OpenSSH should respond to portscanners by trying to pwn the source systems.


No, because that would be active retaliation.

More realistically it is like dropping a file named DONT_RUN_THIS_BLOWS_UP_YOUR_COMPUTER.exe on your private file server. You never run it, but maybe somebody exploits your file server, gets all your files, and automatically runs them?

Oh well.


It really is like dropping a file named DONT_RUN_THIS_BLOWS_UP_YOUR_COMPUTER.exe on your private file server - but contrary to your expectations, it's not "oh well". If you placed it there with the intent to trap someone who you expect to be looking at your computer, you may well be liable if their computer blows up. There's no significant difference from active retaliation - the consequences are there, the intent is there, the act is there, it's pretty much the same.

Of course, if some criminal exploits your file server, they are not likely to press charges, but if it triggers on law enforcement who have a warrant to scan your fileserver, that's a different issue.

You'd be just as liable as for physical boobytraps on your property, with pretty much the same reasoning.


The beauty though, is that law enforcement now can't even know before plugging in and scanning a device whether they'll actually be pwned.

They have to use the exploit to figure out if the phone can nuke that hardware's usability in the future or integrity of any locally stored, non-offsited data.

UNLESS Cellebrite can publicly produce, for a court of law, proof that any potential exploit isn't a valid concern, which means spilling implementation details about how the device works.

Nobody can continue to shut up AND maintain the status quo. Either everyone clams up, and Signal can sow reasonable doubt without challenge, crippling Cellebrite's value as a forensic tool. Or someone has to open up about the details of their tool, which, like it or not, will speak very loudly about the ways and methods behind these exploits.

The Checkmate is implied, and oh my, is it deafening.


> if you placed it there with the intent to trap someone who you expect to be looking at your computer, you may well be liable if their computer blows up

Liable for what? You haven’t promised that the code is safe, and they chose to run it.

> there's no significant difference from active retaliation

There is a significant difference: in active retaliation you choose to attack someone else's computer, while with a trap file the attacker chooses to run files they have stolen from you. Big difference.

> You'd be just as liable as for physical boobytraps on your property, with pretty much the same reasoning.

The reasoning is different, lethal or injurious man traps are prohibited because you don’t respond to trespassing with lethal force and you don’t know who or what may trigger the trap. Man traps that lock the intruder in a room without injuring them are fine, and used in high security installations.


And why shouldn’t OpenSSH do that?


Because I have zero interest in running attack software.


Signal wants to pick a fight with a grey company that gets money for cracking apps? not a good idea


They're already picking a fight with Cellebrite simply by existing, as Signal is antithetical to everything that Cellebrite stands for.


buying a safe != killing the guy that's invading your house


I think this would be more like including exploding dye packs in your bags of money.


One could view making an E2E-encrypted app that causes problems for police as "not a good idea", but there has to be somebody who does it.


Cellebrite's initial response[1] includes this gem

"We have strict licensing policies that govern how customers are permitted to use our technology and do not sell to countries under sanction by the US, Israel or the broader international community."

And these policies are obviously quite effective at preventing such uses.

[1] https://www.theregister.com/2021/04/21/signal_cellebrite/


ah. Those must also be modules that fell from the truck.... (these companies clearly have trucks like sieves) I have no doubt that Cellebrite is already on the phone ordering more secure, err, 'trucks'


This is something I have personally looked at as an owner of a UFED Touch device (1st gen). By default your software runs in a non-privileged account, but who's to say one of the files isn't just straight up being read by FFmpeg and adding or removing evidence from the final report.

The official Cellebrite policy has always been "don't worry, if you get stuck, we can send you an expert to testify to the reliability of the scientific evidence due to previous cases", but what happens when the pyramid of previous cases falls apart? Do you suddenly own a paperweight?

I've also published papers (with NIST's help) on using consumer grade hardware for forensics and why testing your tools across a wide variety of scenarios is critical.
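
One cheap validation technique in that spirit: run the same source evidence through two independent tools and diff the resulting file inventories by hash. A sketch (the directory names are placeholders):

    import hashlib, os

    def inventory(root):
        # Hash every file a tool produced, as a fingerprint of its output.
        out = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                p = os.path.join(dirpath, name)
                with open(p, "rb") as f:
                    out[os.path.relpath(p, root)] = hashlib.sha256(f.read()).hexdigest()
        return out

    a, b = inventory("tool_a_output"), inventory("tool_b_output")
    for path in sorted(set(a) | set(b)):
        if a.get(path) != b.get(path):
            print("MISMATCH:", path, a.get(path), b.get(path))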


> As just one example (unrelated to what follows), their software bundles FFmpeg DLLs that were built in 2012 and have not been updated since then.

This purported vulnerability does not rely on FFmpeg, hence the disclaimer.


As a Signal user and moxie fan I love that post, but I worry that it places Signal in legal peril from Apple.

My fear, and prediction, is that the authorities will frame this as an even more egregious attack on law enforcement and that interfering with investigations is a crime (I'm not a lawyer, but I play one in hacker news comments, and that sounds like a crime). They'll lean on the app stores and the app stores will lean on or remove Signal.


1. Any app could do it.

2. Signal stirred FUD in a blog post. That's a very different thing from actually doing it.


Well, if you read the whole blog post, it certainly seems like they're actually doing it.


Nah. The cost/benefit of saber rattling makes tons of sense while the cost/benefit of actually doing it makes much less sense. Probably.

No amount of certainty about Marlinspike's actions should comfort Cellebrite, though, because Moxie Marlinspike isn't the only person allowed on the app store.


I’m not sure what you mean. The end of the post pretty clearly describes the framework they’re using to roll out these exploits as latent files within the Signal app.


The end of the post is extremely specifically and carefully not describing a framework for rolling out files to exploit these vulnerabilities; those files as described do nothing, and serve only aesthetic purposes. While it's easy to read that as a wink that they are exploiting the vulnerabilities they found while maintaining plausible deniability that they aren't, it's equally possible it's the other way around: they aren't rolling out exploits but want people reading the blog post to believe that they are. Or that they want to lay out the framework so that others can do so, but aren't actually going to follow through themselves. As written it's essentially unverifiable, obviously on purpose.


The optimal thing for them to do would be to build the framework and ship partially corrupted JPEGs that don't actually do anything nasty to Cellebrite. Cellebrite can verify that the machinery is there (not a totally idle threat) but no one can prove that Signal has actually done anything illegal. Cellebrite then wastes a bunch of time gathering and analyzing the files without actually learning anything from it. They also get more incentive to start finding and fixing their software's vulnerabilities, which throws off work schedules.

And Signal can develop a few land mines to deploy at any time, and just... hold on to them for a rainy day.
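
Generating such decoys is trivial: a file with a valid JPEG signature followed by random garbage looks collectable to a scraper but carries no payload and simply fails to decode. A sketch of the idea only; Signal has said nothing about doing this:

    import os

    def decoy_jpeg(path, size=4096):
        # SOI + APP0 + "JFIF\0": enough of a header to get picked up and
        # parsed, followed by noise that most decoders just error out on.
        header = bytes.fromhex("ffd8ffe000104a46494600")
        with open(path, "wb") as f:
            f.write(header + os.urandom(size - len(header)))

    decoy_jpeg("aesthetically_pleasing_01.jpg")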


Even if Signal put the files out there and explicitly owned up to it, I struggle to see how it could be even remotely illegal. It's not their fault some other company's faulty product falls apart when it hits bad data in their app.


Agreed. Obviously, I'm not a lawyer so who knows. But it seems ridiculous that you could break into someone else's device and run all of their files and then come after them legally because a file you effectively stole didn't run properly on your computer.

At some point if someone breaks through multiple levels of advanced security to, say, steal your gun and then shoot themselves in the face with it, whose fault is that really...


I have a newfound perspective on the malware/spyware industry after watching The Dissident.

I am SO IMPRESSED with this middle finger from the Signal team.

https://www.imdb.com/title/tt11382384/


I don't understand the seeming incongruity between these two statements:

On the one hand:

> One way to think about Cellebrite’s products is that if someone is physically holding your unlocked device in their hands, they could open whatever apps they would like and take screenshots of everything in them to save and go over later. Cellebrite essentially automates that process for someone holding your device in their hands.

But on the other hand:

> We are of course willing to responsibly disclose the specific vulnerabilities we know about to Cellebrite if they do the same for all the vulnerabilities they use in their physical extraction and other services to their respective vendors, now and in the future.

If UFED just copies data from unlocked phones, why would they be using vulnerabilities to do so?

I guess my question is, is Cellebrite capable of copying locked devices, or more to the point - has vulnerabilities to unlock devices without knowing the access PIN?


> is Cellebrite capable of copying locked devices, or more to the point - has vulnerabilities to unlock devices without knowing the access PIN?

Yes, they even brag about it in their marketing materials: https://www.cellebrite.com/en/a-practical-guide-to-checkm8/

That's a public vulnerability; it's anyone's guess how many nonpublic ones they're using.


Cellebrite claims,

"Lawfully access locked devices with ease Bypass pattern, password or PIN locks and overcome encryption challenges quickly on popular Android and iOS devices"

https://www.cellebrite.com/en/ufed/


>If UFED just copies data from unlocked phones, why would they be using vulnerabilities to do so?

I think what he meant to say was that if Cellebrite is used on your locked phone, it could be the equivalent of a person having your unlocked phone in their hands where they can do whatever they want. Only Cellebrite doesn't just look at random things; it grabs everything.

Cellebrite devices are also frequently used by phone carriers to image your old phone and transfer the data to your newly purchased phone, to give you a clearer idea of what they can do.


They could use vulnerabilities to extract more data. It's probably common to do some obfuscation of data, which Cellebrite might have reverse engineered.


Based on the post, it sounds like there's some data parsing going on (possibly to present the data in a user-friendly way?), and the parsing step uses outdated versions of software (such as ffmpeg) which have well-documented vulnerabilities in them.
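
This is checkable if you have the installer: PE files embed a link-time timestamp, so flagging ancient bundled DLLs takes a few lines. A sketch, assuming the third-party pefile package (pip install pefile) and a made-up install path:

    import datetime, glob
    import pefile

    CUTOFF = datetime.datetime(2015, 1, 1)

    for path in glob.glob(r"C:\Program Files\SomeForensicTool\*.dll"):
        # TimeDateStamp is the linker's build timestamp in the PE header.
        ts = pefile.PE(path, fast_load=True).FILE_HEADER.TimeDateStamp
        built = datetime.datetime.utcfromtimestamp(ts)
        if built < CUTOFF:
            print(path, "linked", built.date(), "-- audit me")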


Truly a jaw-dropping blog post. As the top comment currently states, Apple may be legally required to, at the very least, comment on this situation.


> Apple may be legally required to at the very least, comment on this situation.

"Required" to comment? By whom and for what reason?


Sending a cease and desist to Cellebrite for shipping their DLLs in their product I imagine.

Obviously there may be some backchannel, but that is probably how it would go if you assume Apple and Cellebrite have no relationship.


They could send such a C&D, and they may be inclined to for the sake of public perception, but what would "legally require" them to do so?


If they ignore Cellebrite using their stuff they may have waived their right to be upset about someone else doing the same thing.


Copyright law isn't the same thing as Trademark law. Copyrights don't expire because the rightsholder failed to enforce their rights, only trademarks do.


Vigilantibus non dormientibus aequitas subvenit ("equity aids the vigilant, not those who sleep on their rights").

The principles of laches may still apply and provide a defence for Cellebrite if Apple makes an "unreasonable delay".


Would they have a relationship? Cellebrite is an ant to Apple. Why would Apple risk credibility and security by helping them?


Because Apple cares a lot about their security, and the perception of security, and they have a lot of lawyers with nasty teeth. Using stolen unlicensed Apple technology to hack people's iOS devices is like eating a bear cub in front of mama bear.


>"Required" to comment? By whom and for what reason?

By stockowners who might not like their valuable IP used in this way or by this company without permission?


Don't confuse the thing where a company has to defend its trademarks to keep them with copyrights (which do not require active defense to maintain).

This, if Apple does pursue it, is a copyright matter, not trademark.


Now if only I could use legitimate tools to access my own Signal data on an iOS device.


Have you tried going out for a walk and looking for trucks with small packages falling off?


doesn't an iTunes backup contain all the app data?


I own a Cellebrite, and yeah you are right. The Cellebrite box is nothing other than a phone backup tool. The nice thing it does is implement every backup sync protocol for every version of every mobile OS so you don't have to spend a whole day trying different combinations of iTunes and such.

The "Physical Analyzer" is just a forensics tool. There are dozens of competitors out there that will take a phone and surface the things that might be interesting in a court case or law enforcement investigation.

The product Signal didn't talk about - which I think is the one they are upset about - is Cellebrite Premium. That is their service where law enforcement can send locked or damaged devices to their lab and get back an image to load into PA. However in 99% of cases devices are either accessed because they are running old software with public vulnerabilities, or using the magic phrase "would you mind unlocking your phone so we can clear this matter up?"


I tried to use adb backup to back up my Chrome history/tabs, but it's empty because it has android:allowBackup=false. So unless they're also rooting the phone (which usually wipes the data) or have some 0-day privilege escalations, some apps can't be backed up this way.
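
The check is scriptable, roughly like this (the 1 KiB threshold is a crude heuristic; adb still prompts for confirmation on the device, and adb backup is deprecated on recent Android):

    import os, subprocess

    def try_backup(package, out="backup.ab"):
        # If the app sets android:allowBackup="false", the resulting file
        # is essentially just the backup header, a few dozen bytes.
        subprocess.run(["adb", "backup", "-f", out, package], check=True)
        size = os.path.getsize(out)
        return size, size > 1024  # crude: bigger probably means real data

    print(try_backup("com.android.chrome"))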


I don't know much about Android so I can't help you, but it is much more likely that the company that employs dozens of mobile forensics and hardware engineers and has close relationships with Android device makers figured out a workaround.


Not obviously for Signal


By a truly unbelievable coincidence, I was recently out for a walk when I saw a small package fall off a truck ahead of me.

Nailed it!


So I wonder, why disclose this?

This will just prompt Cellebrite to improve its security process and sandbox the entire tool.

If they wanted to destroy the credibility of the tool, using the vulnerabilities to silently tamper with the collected data or even leaking it online would be a much better option and hit them without any warning, not only jeopardizing those cases but forever casting doubt on not just Cellebrite but their competitor tools.


> sandbox the entire tool.

Sandboxing doesn't really help. The problem isn't that the tool is used to infect the rest of the system, but that the tool itself is compromised, the reports it generates are compromised, and past reports may be compromised. Unless you're pushing that data outside the sandbox (which is a hole in the sandbox, and while much more limited might also be an exploit vector or a way to cause problems in the data) it's still fair game if the sandboxed tool is compromised.

There's multiple reasons to disclose it. First, because as another comment noted it attacks the credibility of the company, and credibility is very important for tools used in court.

Second, because their main goal is to protect Signal, not attack Cellebrite. Making Signal a problem to attempt to gather data about will possibly make them just blacklist Signal as an app they gather for. This could be temporary, but since Signal alluded to many exploits and that they have a bunch queued up for the future, it will always be a risk for Cellebrite to attempt to gather info from Signal, so they might just continue to skip it.


This is too quick a dismissal: if they sandboxed each extraction tool they'd be more likely to be able to say that a compromised tool did not compromise the entire system or data collected by other tools. This is exactly why programs like browsers, messaging clients, etc. have moved things like media decoders into separate processes, especially since those tools can be sandboxed quite aggressively whereas a monolithic program will use a fair number of different permissions.


Sure, sandboxing all the individual components and not just the whole would help. That's not what was being suggested though, and it is a significantly more complex and labor-intensive task than even just fixing all the included libraries to be more recent and not have known exploits (even though it would pay dividends later). I wasn't dismissing sandboxing as an effective tool, just noting that it's likely not all that effective to put it all in one sandbox as suggested (and since actually fixing all the problems is likely a lot of work, it doesn't negate the effectiveness of Signal's strategy by providing an easy solution to the problem).


Any court case where Cellebrite's tools have been used is now in jeopardy, since the defence can just say that they were hacked by someone else. There's now a reasonable argument that Cellebrite's reports can't be trusted. This damages their reputation with governments too.


Not really. The same circumstances exist for almost all digital evidence. Of course, a lot of Cellebrite usage is extrajudicial already.


If you're failing some basic security it isn't going to give much confidence.

But users also now don't know whether their systems will explode if they try to gather Signal (or other app) data.


The quality of forensics software is extremely low; a similar story was once written about EnCase, and had zero impact on any legal case anywhere.



As in https://www.securityweek.com/forensics-tool-flaw-allows-hack...., yet it is used in cases large and small: civil, criminal, federal, and state.


Matt Blaze did some research on this, and it seems to turn out that when you put an argument like this in front of a judge or jury, ultimately you have to back it up with evidence that it actually happened; it's not enough to say that the potential existed. Which makes sense, because the potential exists for a lot of stuff, including stuff we don't often talk about.


I think this post by Signal is not much beyond exceedingly well-crafted nerd sniping [edited for clarity]


This is unlikely to be the case, despite the vulnerabilities that are described.

The process of e-discovery is rife with risks of this sort. When you forensically collect data from a random set of devices belonging to a party, they may contain porn, HIPAA-protected data, GDPR-protected data, virus samples, malware, who knows what else.

The short version: even if ingesting this data crashes the device, there are mitigations and protections that will allow the evidence to ultimately be produced.

A crash of the Windows host during collection will not invalidate the case.

disclosure: ex-CSO of Relativity, leading provider of e-discovery software.


The vulnerability claimed here doesn't necessarily crash the computer running the software. It runs arbitrary code, and said code is able to modify the data Cellebrite extracts. It is not clear whether it is possible to detect whether data collected in the past is compromised.

There may be mitigations, but without knowing the full details of the exploit, it sounds a lot like reasonable doubt to me. A good lawyer would spin it exactly that way, putting any cases without sufficient corroborating evidence in jeopardy.


If the data is off on another server, it seems unlikely that past cases can be compromised.

There are a whole set of rules about challenging evidence, including electronic evidence. Keep in mind that the other side gets a crack at it also. It is unlikely that the whole case would be thrown out because of a corrupted file. Reasonable doubt is not part of the forensic process--this is what a jury needs to consider to render a verdict.

As pointed out elsewhere, many uses of this tool are extrajudicial.


> If the data is off on another server, it seems unlikely that past cases can be compromised.

That isn't the point being made here. The data could have been compromised at the point when it was gathered, not any time later.


Whether Cellebrite is secure or not really doesn't have much impact on Signal. Shore up Cellebrite's security or don't; either way, it's pretty much the same threat to users. But calling them out like this could force them to placate their customers by spending money on software security --- something they apparently haven't been doing --- and inflicting costs on your adversary is good praxis.


I bet Sun Tzu would also extol the value of allowing your enemy to remain complacent. This disclosure may make them harden up.


Sandboxing doesn't fix the problem. The problem isn't the same as a consumer app where you're trying to protect the OS from being rooted. Their problem is they need to protect the integrity of the report it generates because that's the thing that makes them money.

edit:

One thing they could try to do is to sandbox the parser itself to lower attack surface area... but the damage is done here and I really doubt they will win a security tit-for-tat with Signal.


The public disclosure about the Apple DLLs could potentially be used to drag Apple into any legal case between somebody versus Cellebrite. The disclosure needs to be public versus private or under seal or whatever to absolve the Cellebrite counterparty of any liability from reverse engineering. Suddenly Apple is now in potential collusion with Cellebrite. Or maybe not. This public disclosure makes the threat of Discovery a bit less toothless.

IANAL but I could imagine Cellebrite has existing or pending litigation where this disclosure upsets their position.


I think they mention Apple in an attempt to force them to defend their copyright, else they risk losing it.

I assume Apple would rather file for copyright infringement than risk being accused of collusion and losing the copyright on iTunes or parts of it.


That's not how copyrights work, you're confusing it with trademarks.


Right, but Apple now has to choose between suing Cellebrite and (tacitly) condoning their behavior. Not the same as losing the copyright, for sure, but still.

Not clear which the parent was talking about. Maybe both?


does cellebrite appear in any legit court cases? from this blog post it sounds like only "authoritarian" regimes use it. i doubt it would appear in any legit case. it's a shady tool. they'll use it to gather info but will not present this info directly in court, instead use it to gather legitimate proof, if needed.


Yes. You’ll find many results for the keyword ‘cellebrite’ on law.justia.com, canlii.org and austlii.edu.au. These free databases are far from comprehensive and most cases won’t go on appeal or end up in a legal research database at all, so this is just the tip of the iceberg.


Cellebrite tech is widely used by western authorities (as well as some “authoritarian” regimes)


It seems to be a retaliatory measure against this:

> When Cellebrite announced that they added Signal support to their software, all it really meant was that they had added support to Physical Analyzer for the file formats used by Signal.

Your point about potential judicial impact is valid, but it would require Signal to monitor cases involving Cellebrite and step forward, unprompted, to help the defense. Furthermore, Cellebrite's clients seem to include entities that do not care so much about a fair trial.


Probably disclosure is the best option.

Silently tampering with the data might cross a legal line. Doing this might put at risk current or past cases where there is a legitimate reason to use this sort of tool.

Privacy can be hard. While I 100% defend everybody's right to privacy, I can also see the need for the capability to break it. Maybe the answer for this is very tight regulation around the uses of this kind of hardware/software, but that regulation would have to keep up with the pace of technology.


> Privacy can be hard. While I 100% defend everybody's right to privacy, I can also see the need for the capability to break it. Maybe the answer for this is very tight regulation around the uses of this kind of hardware/software, but that regulation would have to keep up with the pace of technology.

I've always wondered if there could be a cryptographic solution to this. Issuing decryption keys to governments seems ripe for abuse, but some sort of multi-party scheme, where governmental and non-governmental entities have to cooperate (with actual multi-party key material) to perform a decryption authorized by a warrant, could be an interesting approach. The non-governmental entities would act as a check on the usage and frequency of those warrants and on the eventual publication of their usage.
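
A toy illustration of the multi-party idea (a 2-of-2 XOR split of a key; a real design would use proper threshold cryptography, and the party labels are just for the example):

    import secrets

    def split_key(key: bytes):
        share_gov = secrets.token_bytes(len(key))                 # held by the government
        share_ngo = bytes(a ^ b for a, b in zip(key, share_gov))  # held by the overseer
        return share_gov, share_ngo

    def recombine(a: bytes, b: bytes) -> bytes:
        # both parties must cooperate; either share alone is uniform noise
        return bytes(x ^ y for x, y in zip(a, b))

    key = secrets.token_bytes(32)
    gov, ngo = split_key(key)
    assert recombine(gov, ngo) == key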

Personally, I think strong encryption should be a requirement for digital evidence, but even that can be forged.

strange times we live in.


> I've always wondered if there could be a cryptographic solution to this. Issuing decryption keys to governments seems ripe for abuse, but some sort of multi-party scheme, where governmental and non-governmental entities have to cooperate (with actual multi-party key material) to perform a decryption authorized by a warrant, could be an interesting approach. The non-governmental entities would act as a check on the usage and frequency of those warrants and on the eventual publication of their usage.

I would be willing to consider a solution where you need to have agreement from the UN Security council (maybe just the permanent members) in order to decrypt whatever you want. If you can get those countries to agree, it's probably important.


For one thing, it could otherwise waste a lot of time for the poor white hat hacker who tries to figure out why this oddly formatted file suddenly exists in the app data.

And it doesn't destroy the credibility of the tool to silently mess with its data. People have to know it's happening.


Signal may have had countermeasures in place long before the blog post, as well.


> Also of interest, the installer for Physical Analyzer contains two bundled MSI installer packages named AppleApplicationsSupport64.msi and AppleMobileDeviceSupport6464.msi. These two MSI packages are digitally signed by Apple and appear to have been extracted from the Windows installer for iTunes version 12.9.0.167.

Couldn't Apple now sue Cellebrite?


Yup, Signal just handed Apple's lawyers a loaded gun. Who knew I'd be enjoying Signal and Apple tag-teaming Cellebrite.


I wonder if they will. That would show that their "our customers' privacy is the most important thing for us" stance is something they seriously mean.


A reminder that you can pair lock your iPhone to prevent analysis by Cellebrite or similar tools: https://arkadiyt.com/2019/10/07/pair-locking-your-iphone-wit...


All of these are based on the assumption that the attacker has physical access to the unlocked phone, right?

I'm trying to understand the risk profile here.

I guess I see the value for, e.g., a border crossing, where they can inconvenience you and ask you to unlock your phone, but instead of flicking through your messages briefly, they authorize a pairing and quickly back up your entire disk contents. You expected a quick perusal by a human, but unknowingly gave them a lot more. If you've blocked pairing, they can't get nearly as much data as quickly.

But if you're being investigated for committing a crime, everything we think we know about device unlocking is still true, right? They'd need me to unlock it before it'd trust a new device to pair to, and they'd need a court order to get me to unlock it for them. Five quick taps of the power button and biometric unlocks are off--now they need my passcode.

Perhaps there's still value, even in that case, in that if I were compelled via court order to give my passcode, they still can't quickly / easily dump the disk contents from a device pairing. Although I imagine if you have the passcode there are probably many other ways of accomplishing the same result.


> unlocked phone

Well, mostly yes, assuming Cellebrite doesn't have 0-days or other exploits that can be delivered by sending an SMS to the device or similar. Using Cellebrite's software you can also send silent SMS, so it's not far off either.

A German Cellebrite ambassador showed me and some colleagues the tools mentioned in the blog post and told us he participates in law enforcement raids. At 6 in the morning they raid the houses of the suspects, detain them, and immediately ask for PINs and passwords. He said that surprisingly often it works and no further decryption attempts have to be made.


> if I were compelled via court order to give my passcode

In the US that won't work if unlocking the device requires a password or pin. In practice, you can't be compelled to provide that unless you openly admit that you know it. (Even then, the 5th amendment might afford you some protection.) YMMV, IANAL, etc.


IANAL but AFAIK you can be held in contempt of court if you don't provide it when asked, provided they're already convinced the device is yours and has incriminating evidence on it. Article below, although it looks like fairly recent (not super established) jurisprudence.

[1] https://goldsteinmehta.com/blog/can-the-police-force-you-to-...


At issue there is the "foregone conclusion" exception to the 5th amendment. As far as I understand things you've lost the case by that point anyway.

Even then, I believe there would still be the additional issue of demonstrating that the defendant actually knows the password. Which is why I previously mentioned that if you openly admit to knowing the password then you likely have a problem. (This came up in a case where the defendant admitted to the FBI that he was capable of decrypting an external hard drive but refused to do so because "we both know what's on there".)


> At issue there is the "foregone conclusion" exception to the 5th amendment.

The unclear part seems to be how strong the evidence needs to be that the device is yours and that the evidence is on there. In this case, his sister testified to both. But would it be strong enough with forensic evidence alone? Unclear.

> As far as I understand things you've lost the case by that point anyway.

Perhaps, but there may still be significance to what's on that drive. There may be incriminating evidence there for other crimes for which you're not yet being prosecuted.

> Even then, I believe there would still be the additional issue of demonstrating that the defendant actually knows the password.

They're saying the sister's testimony was sufficient to prove that he knew the passwords previously.

Proving present-day capability to decrypt doesn't seem to be necessary, at least in the article I linked.

> The federal court denied the Motion to Quash and directed Doe to unlock the devices for the investigators. Doe did not appeal, but he refused to unlock some of the devices, claiming that he had forgotten the passwords. He was eventually held in contempt by the District Court, and the Court ordered that he remain in federal custody until he was willing to unlock the devices.

The accused claimed he could not decrypt the hard drive because he had forgotten the passwords, but he was still being held in contempt.


> The unclear part seems to be how strong the evidence needs to be that the device is yours and that the evidence is on there.

The 5th amendment protects you from having to testify against yourself, but it doesn't protect you from having to turn over incriminating evidence against yourself. The 4th amendment protects your stuff, but only up to the point of requiring probable cause for a warrant.

At issue in these sorts of encrypted storage scenarios is whether you would be incriminating yourself by demonstrating that you know the password. Knowing the password for an encrypted device basically proves that it's your device, so forcing you to decrypt a device would amount to forcing you to testify against yourself in the event that there is doubt about whether the device is yours.

So to force you to decrypt a device there needs to be a warrant for the contents, and there needs to be no doubt about the fact that it is indeed your device. So it needs to be a foregone conclusion that the device is yours, but only probable cause is required to believe that something specific and illegal is stored on the device.


Thanks. That helps a lot to clarify.


Do we (reasonably) know if this still works?


There was a vulnerability in this technique that was fixed in iOS 11: https://labs.f-secure.com/advisories/apple-ios-host-pairing-.... If someone found another vulnerability and shared it with Cellebrite, then it doesn't work. If they haven't, then it still does.


This still works as written. Just test it yourself with a Mac and Apple Configurator.


I mean, “the iPhone prevents well-behaved software from accessing data without a password” and “software, known to exploit vulnerabilities to get around security features, currently doesn’t have any such exploits” are very different.


Every stone we can put in the way of surveillance helps.


My question was ambiguous, what I meant was whether or not there were any known exploits to work around pair locking (all of my iOS devices are pair-locked). I didn't know about the exploit that lights0123 linked to, but it appears that has been fixed.


Are there similar features for Android devices? A database of Cellebrite-resistant phones, perhaps?


How does Cellebrite maintain the chain of custody if they need to modify (hack) the device to get access? I was under the impression that if any file is modified, the chain of custody is no longer in good standing, and the data therefore cannot be used as evidence.


You wish. All they need to do is track which files they modify and not touch the other ones. I doubt it would be fruitful to explore that angle in court.


I attended a conference back in the early 00's where a member of GCHQ presented. He spoke about assisting after 9/11, obtaining forensic evidence from hard drives, and the lengths his team had to go to to make sure that no files were changed while creating a clone of the drives. He stated that they cannot just turn on a seized computer, as about 800 files would be altered before Windows had even booted to the login screen. This, he stated, would destroy the chain of evidence. He might have been full of shit though.
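
For what it's worth, the standard way to demonstrate that is to hash the cloned image at acquisition and re-verify it later; a minimal sketch (the file name is made up, and in practice this is paired with a hardware write blocker):

    import hashlib

    def sha256_of(path: str, chunk: int = 1 << 20) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    acquired = sha256_of("drive.img")  # recorded in the chain-of-custody log
    # ...later, before the image is relied on:
    assert sha256_of("drive.img") == acquired, "image differs from acquisition"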


I hope Cellebrite users like the rhythm and lyrics of Never Gonna Give You Up.


This rocks so hard. Also the Prodigy soundtrack! Takes me back.


Hack the Planet o/


This is a really great piece. I have two observations.

1) "..saw a small package fall off a truck ahead of me..."

2) The very last paragraph is just great!


> One way to think about Cellebrite’s products is that if someone is physically holding your unlocked device in their hands, they could open whatever apps they would like and take screenshots of everything in them to save and go over later. Cellebrite essentially automates that process for someone holding your device in their hands.

Aren't Cellebrite products/services more advanced than that? I mean don't they use publicly unknown zerodays to extract data from locked phones?


AFAIK there are various “levels” of Cellebrite’s products, from “I’m a phone shop and I want something help me make a phone backup so I can restore it” all the way to “tools to break into locked iPhones”.


They are typically more advanced than just extracting data from a phone. Not sure to what extent they advertise it brazenly, though. Fairly certain they blog about it a lot.


The Cellebrite ambassador we talked to (as a private company) basically bragged that they were the ones who unlocked the San Bernardino iPhone. I'm sure they brag even more towards government officials and law enforcement.



Hm, interesting. So there are articles saying it's Cellebrite https://www.reuters.com/article/us-apple-encryption-cellebri..., others saying it was unnamed professional hackers, and then your article where all three possibilities are mentioned.


Yeah, that wasn't them... so whoever that was lied


Signal has just added files to compromise Cellebrite to their default installation!

"In completely unrelated news, upcoming versions of Signal will be periodically fetching files to place in app storage.

These files are never used for anything inside Signal and never interact with Signal software or data, but they look nice, and aesthetics are important in software.

Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding.

We have a few different versions of files that we think are aesthetically pleasing, and will iterate through those slowly over time. There is no other significance to these files."


This is pretty irksome. I get how satisfying it must feel, but the one thing I want as a Signal proponent is for the app to be boring and reliable. That means make it easy to use enough to be mainstream, squash bugs, and do all the lovely security work you do.

That does not mean adding stuff like untraceable cryptocurrency payments or very publicly tweaking the noses of law enforcement, and bragging about how you're putting exploits in your app to hack them.

This isn't 1993 and the last thing we need is more pretexts to ban E2E encrypted apps in the countries where they're needed the most. I think this trades a moment's satisfaction for a very bad long-term outcome.


The problem with being boring while attacking powerful institutions (like LEOs or nation states) is that it only works as long as you're small enough to stay below their radar. After a certain point, the material reality will sink in that you're a threat, and they're going to take action against you regardless of how you carry yourself. It's totally possible, given that we're starting to hear more and more accounts of powerful people using Signal, that we're approaching that tipping point, and a more gloves-off approach might be necessary.

That being said, I agree with you 100% on the cryptocurrency payments issue and think that was a misstep on their part.


Signal isn't going to actually do it, they know how that would end, they're just playing the FUD game in the other direction. Which I am 100% on board with.


Maybe the one thing worse than boasting that you're putting malware in your product is boasting about it and not doing it.


They are not putting malware into their app. They are adding aesthetically pleasing files to their app. Is it Signal's fault if someone else's software doesn't work properly with them? How can Signal test every piece of software to make sure it's compatible with their own software? Especially when the other software is using Signal's data in an unintended way.

It's not Signal's job to secure 3rd party software; that's entirely on the 3rd party.


Intent matters; come on, your arguments are ridiculous. If one of the aesthetically pleasing files turns out to contain an exploit targeted at Cellebrite software, it wouldn't be hard to convince a jury that this isn't a coincidence but intentional malware, especially combined with this wink-wink bragpost.

It's not Signal's job to secure third-party software, and they can intentionally post incompatible data, but it definitely is their job (just as it is everyone else's, mandated by criminal law) to abstain from any activities that would tamper with evidence. If that incompatible data isn't limited to randomness or crashes but contains, quoting the article, "a file that executes arbitrary code on the Cellebrite machine" or can "undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine", then it obviously was intentionally made that way, which crosses the line, and at that point yes, it's definitely Signal's fault. If that actually gets executed on a machine owned not by Signal but by e.g. some law enforcement agency, then Signal and any involved developers personally may face criminal charges.

It does not even need to involve any computer-specific laws (though those are also likely to apply): if there's an incident where evidence got disrupted, if there's evidence that the incident was caused by "aesthetically pleasing files" developed by Signal, and there's some evidence (e.g. this blog post) that they made these files knowing they might result in other evidence being destroyed, that's completely enough. That's tampering with evidence, a felony. Go directly to jail, do not collect $200, and don't expect sympathy from law enforcement and the legal system.

In my personal opinion this is all FUD and scaremongering, because actually doing so carries little benefit and high risk for Signal. But, of course, no one can be sure.


You're allowed to host malware if you want. If you tell people not to run it then how can it be your fault? That's not tampering with evidence.


Because Cellebrite's software is shown to be exploitable, how do you know it was Signal's files and not someone else's corrupting it?


Malware? Any countermeasure against state surveillance is a good thing for us.


Is it malware if users desire for their devices to be resistant to surveillance tools?


Those are not related issues - it's malware or not malware based on what it does or did (e.g. did it corrupt data on someone else's computer system because it was intended to do just that thing?) regardless of the reason for placing it there.

If you can't figure out a way to satisfy your desire for your devices to be resistant to surveillance tools with legal means, well, then you can't satisfy that desire.

Furthermore, not only the ends don't justify the means, the ends can be prohibited too - if you explicitly design something to destroy your own data knowing that this data would get used in a criminal investigation, that may be a crime on its own (tampering with evidence/obstruction of justice, location matters of course); you don't have to testify against yourself, but destroying evidence is a crime even if it's your property (e.g. throwing your gun into a river after a shooting so it wouldn't be found) and furthermore in that case the court may be allowed to assume that the destroyed evidence was unfavorable to you, that the data contained the damning things they expected to find there - so if you want to protect your devices from surveillance tools operated with a legal warrant, you might want to consult a lawyer to find out if that's a good idea in your jurisdiction, it may well be worse for you than doing nothing.


> If you can't figure out a way to satisfy your desire for your devices to be resistant to surveillance tools with legal means, well, then you can't satisfy that desire.

I fear this part of your argument is fallacious. A peaceful and hilarious response to an abuse is not illegal. And if it is, it should not be. This post just highlights how petty the means and methods used by "some companies" are. Moxie is rightfully mocking Cellebrite and its customers, and it's fantastic.

About the obstruction of justice... If governments around the world are having an appetite for abuses, we shall not fall for it. There are other ways to get to criminals. Tell state officers they can do their job without abusing the private sphere of citizens.

It's Cellebrite devices that should be made illegal, because "justice" should not depend on surveillance tools. Just days ago the same US IC shared a memo/report about the rise of authoritarianism and how western democracies are threatened by it. Let's stop this double standard.


It is software that many of us are perfectly content with possessing on our devices. Cellebrite may not want it on their devices, but it's not like we consent to Cellebrite copying over our data. Holding us or Signal responsible for this is akin to holding a dog or its owner responsible for a burglar getting mauled - it would be one matter if the dog mauled a guest, but you can hardly blame either when the entry was unauthorised. In this case, exploiting this can be considered a form of digital rights management.

As for tampering with evidence, your claims are certainly overbroad; otherwise one could consider Signal's vanishing messages and deliberate lack of logging to be tampering with evidence. In any case, an individual is hardly responsible for those actions; they presumably did not deliberately seek to destroy Cellebrite data, since Signal may choose to do this entirely at random.


goodware ??


The client code is open source, so it should be pretty easy to tell if they actually do it.


I would assume their aesthetically pleasing files would not be part of their client code since the clients are fetching these files from Signal.

So whether they're fetching aesthetically pleasing files would be easy to tell, but it's trickier to tell whether those files actually carry a payload.


Automatically open the files with Cellebrite software tools in a sandbox environment?


Wouldn't necessarily work:

> Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding.

You would have to have an already existing Signal account that's been around for some time and hope you get sent one of these files.
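
For illustration, the gate could be as simple as something like this (a guess at the shape only; the hash choice and 2% rate are invented, not Signal's actual code):

    import hashlib

    def in_shard(phone_e164: str, percent: float = 2.0) -> bool:
        digest = hashlib.sha256(phone_e164.encode()).digest()
        bucket = int.from_bytes(digest[:4], "big") % 10_000
        return bucket < percent * 100  # e.g. ~2% of accounts get a file

    print(in_shard("+14155550123"))

So even with a long-lived account, most pulls would come back empty, which is exactly what makes blind testing expensive.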


Signal has a strong ideology. If you don’t want to be a part of that then don’t use the app.


Secure messaging apps aren't very thick on the ground. I don't have to agree with Signal on every issue, I just want them to ease off the 4-Loko.


I agree and find it more than irksome -- it really makes me want to stop using Signal and stop recommending it to anyone; moreso even than the recent scamcoin integration.

The reason I want e2e encryption is because I want control of my devices, control of my information, control of what's going on. It's not Moxie's phone to drop random files on to, regardless of purpose. It's my phone, and I consider programs that are doing things that I'm not aware of malware.

(Admittedly, Android is rife with stuff I don't want going on, so it's not really my phone, it's Google's and Motorola's and a bunch of other entities who have their tentacles in it, but still...)

Maybe the last paragraph is a joke, and they have no intention of randomly placing files on unwitting client machines. It's open source, so I could compile the client myself and make sure it's not doing anything funny. What a pain though. At that point, so much trust is lost in the organization and codebase that really I need to find some other messaging protocol / app / network.


I think there's a chance the two topics you mention might be connected. The loud backlash to the cryptocurrency scheme took many by surprise and seeded doubt about Moxie's ideological commitment. This post might be seen as an attempt to recover some street cred.


Signal is almost like anti-virus software against Cellebrite.


>We are of course willing to responsibly disclose the specific vulnerabilities we know about to Cellebrite if they do the same for all the vulnerabilities they use in their physical extraction and other services to their respective vendors, now and in the future.

Mwahahahaha. Tell us how you hack our app and everyone else's, and we'll tell you how we hacked yours.

The middle finger is strong with this one.


If it's true that you can grab a piece of Cellebrite hardware without too much difficulty (eBay, etc. - and note I'm not speaking from expertise, so someone please fact-check me), I'd find it hard to believe Apple wouldn't have done this kind of inspection themselves and/or noticed those DLLs being shipped.

Curious if there'll be a response of sorts.


I am reasonably confident that Apple is a Cellebrite customer. Their security team certainly has access to forensic tools from other vendors. That team also spawned BlackBag Technologies, which is now part of Cellebrite.


Apple uses Cellebrite devices in their stores to transfer data from devices, I believe.


>By a truly unbelievable coincidence, I was recently out for a walk when I saw a small package fall off a truck ahead of me. As I got closer, the dull enterprise typeface slowly came into focus: Cellebrite.

That's just hilarious! A nice way of saying "we got our hands on one of these boxes, but we don't want to reveal how. It fell off a truck."


"I was recently out for a walk when I saw a small package fall off a truck ahead of me."... I I laughed :)))


Crossing the streams: the US Postal Inspection Service (which hosts iCOP, detailed in a recent Yahoo story) is a Cellebrite customer:

https://www.uspis.gov/wp-content/uploads/2020/02/FY-2019-ann... (p. 35)

See: https://news.ycombinator.com/item?id=26892180 https://news.yahoo.com/the-postal-service-is-running-a-runni...


While this report is entertaining to read, I have to wonder about possible downstream repercussions of the implications within the last paragraph; if you're in police custody or worse and your Signal app contains some 'aesthetically pleasing files' that interfere with the authoritarian software, it's likely going to be your ass on the line for all sorts of charges.

Don't get me wrong, the implication is enough to discredit Cellebrite, but my initial thoughts are that either this bluff gets called, or there's a non-zero risk of someone landing in even hotter water down the line for using Signal. Of course, this assumes that you're not already neck-deep for having encrypted data and upholding your right to privacy.


They literally said the unit fell off a truck. Funny...

Correct me if I am wrong, but did they really say they were going to be doing active attacks against Cellebrite units? Also funny... but they probably are not actually going to be doing that.


They didn't actually say anything of the sort. They may have implied some stuff. Anything they did imply wouldn't be an active attack though, it would be a passive one, triggered only if Cellebrite tried to gather data from the Signal app on phones. Not gathering info from a phone, or not gathering Signal data from a phone, would both be ways Cellebrite could avoid this potential passive attack.


The digital equivalent of "stop hitting yourself". Notwithstanding their crypto issue, this gives me renewed confidence in Signal's team.


To me it seems more like the equivalent of leaving booby trapped packages to be found by porch pirates. Or putting laxatives (or worse) in your sandwich to get back at the unknown coworker stealing your lunch. Both of which are considered illegal in the US.

Assuming these files actually contain exploits. Maybe they do maybe they don't. You feeling lucky Cellebrite?


The question of whether damaging reports would be illegal is separate from whether booby traps are illegal. And they're not, in the broad case: Booby trapped packages are only illegal if they cause bodily harm or damage or are negligent along those lines.


The actual crime would be different; the former would be assault, while the latter would be unauthorized access under the CFAA, but I think the principle applies to both. In both cases the intent to cause harm exists, and doesn't become irrelevant just because the victim had to put themselves in a situation to trigger that harm. If anything I think there is a stronger case against leaving exploits around where software might scan them, because while stealing is illegal, scanning files typically is not (if you have legal rights to access the device, via ownership or warrant).


Ahhh... Au contraire. The Cellebrite tool exploits a device for root. Technically that is sidestepping most permissions frameworks as a matter of expediency, but I assure you, scanning things and being able to access them by default is not a given. Access controls are digital fiefdoms unto the implementer's design, and if a scan is responded to with a malicious payload, that is nothing more than a quirk of the configuration of that device.

If you want to start trying to project human legality into the computing world, you're going to have a really, really bad time. Human legal logic and digital logic do not at all mix.

Things get even hairier with things like a hard disk full of a nation state's classified info, where a root terminal has been left open.

The computer will not argue a lick about producing those contents, but I assure you, someone else will most vigorously object.

And by the CFAA, and everything else under the sun, law enforcement wants extra-privileged access reserved for themselves which no inherent property of digital logic or programming need guarantee.

Practically implementing such programs/filesystems/systems-as-a-whole is an exercise left to the reader. As are the consequences of doing so in a particularly authoritarian-leaning society at the moment.


I read nothing about an attack of any method or type at all.

If Cellebrite decides to punch a spiky rock they could have just not done that in the first place.


https://www.iowajustice.com/ is amazing at UFED defense.


> Since almost all of Cellebrite’s code exists to parse untrusted input that could be formatted in an unexpected way to exploit memory corruption or other vulnerabilities in the parsing software, one might expect Cellebrite to have been extremely cautious. [...]

Yeah, but they probably figured they're not being attacked. But now? Now they'll have to figure they are.


my TL;DR (but its a damn good read and funny too)

- Cellebrite helps oppressive regimes read your messages

- Signal keeps your messages private

- Cellebrite announces "Signal support"

- Signal finds 9 years of vulnerabilities in Cellebrite

- Signal permanently pwns Cellebrite

You come at the king, you'd best not miss.


To be fair, this vulnerability disclosure is worthless, because it wasn’t actually disclosed—the true vulnerability is purported to be in the file that Signal uses to execute arbitrary code, of which the details are not shared. We are relying on pure trust that the video demonstration of the purported vulnerability is not a forgery.

Additionally, I see from the video that the purported vulnerability is present in UFED version 7.40.0.229. There is nothing stopping Cellebrite from patching this purported vulnerability, and shipping trustworthy versions of UFED going forward.

If there is a concern that the purported vulnerability still exists, the burden of proof will be with the person claiming the vulnerability exists, for each new version of UFED. Cellebrite doesn’t even need to implement actual code, but merely increment the UFED version number. It will be an endless cat and mouse game driven by baseless claims from both sides.

Since this vulnerability has not been reproduced by third parties, it could be equally likely that Signal is using a psyop rather than exploiting a genuine vulnerability. In either scenario, it casts doubt on Cellebrite; the damage is done by convincing you, the reader.


Many vulnerabilities are disclosed without simultaneous disclosure of the PoC. That doesn't make it worthless.

Also, not disclosing specifics is reasonable here, given that the vendor is themselves known for using, hoarding, and selling access to 0days.

There is no obligation for a researcher to share their research with such a corrupt vendor.


I agree that the vendor is detestable, but we must decouple that sentiment from the idea that this blog post compromises Cellebrite’s product or credibility in any way.

As it stands, the vulnerability is not reproducible by anyone other than Signal. Reproducibility is key in the scientific method and in the court of law.


The post seems to provide a straightforward map to at least one vulnerability. If they have FFmpeg DLLs that have not been updated since 2012 and they are used on files found on the file system, you just need to find a relevant vulnerability and craft an appropriate media file. These vulnerabilities are well known and well documented. The post just doesn't point to the specific DLL & CVE, but otherwise it seems like it would be relatively easy to figure out.
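
Even the filenames narrow it down, since FFmpeg's Windows DLLs carry the library's major version; a rough triage sketch (the install path and the version cutoffs here are guesses):

    import re
    from pathlib import Path

    # rough cutoffs: majors below these shipped well before 2016
    STALE = {"avcodec": 57, "avformat": 57, "avutil": 55}

    def stale_ffmpeg_dlls(install_dir: str):
        for dll in Path(install_dir).rglob("av*-*.dll"):
            m = re.fullmatch(r"(avcodec|avformat|avutil)-(\d+)\.dll", dll.name)
            if m and int(m.group(2)) < STALE[m.group(1)]:
                yield dll  # candidate for long-patched, well-documented CVEs

    for dll in stale_ffmpeg_dlls(r"C:\Program Files\Cellebrite"):
        print(dll)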


"These two MSI packages are digitally signed by Apple"

Couldn't Apple simply revoke the signature?


The digital signature is not needed to run the software.


The DLL will be flagged as malware by major AVs if they revoke it.


This isn't the first time moxie0 found something important on a street, is it?


I'd also like to get really excited about this. Can someone ELI5?


>Since almost all of Cellebrite’s code exists to parse untrusted input that could be formatted in an unexpected way to exploit memory corruption or other vulnerabilities in the parsing software, one might expect Cellebrite to have been extremely cautious.

>Looking at both UFED and Physical Analyzer, though, we were surprised to find that very little care seems to have been given to Cellebrite’s own software security.

People keep saying this. It has never changed since the 90s. There is no bar to become a "software engineer".


Damn. That video says it all.


Something about this smells a little off.

If Moxie can get his hands on these devices and hack them, why can't Apple or Google, with all their resources, seem to be capable of REing them to fix the mobile device bugs they currently exploit?

Tinfoil hat perspective suggests they don't want to.


i find it remarkably unbelievable someone would put a cellebrite bag in the back of a truck given the price alone.. and the timing too. sure


"fell off the back of a truck" is an idiom [1]. It's not meant literally.

[1] https://www.phrases.org.uk/meanings/fell-off-the-back-of-a-t...


It's like "a little bird told me".


The "parallel construction" of the civilian world.


they actually put a photo of it on the street in the post too


oh ok thats a new one for me obviously, but the street photo got me. lol


I bet the tool was bought from a supplier but Signal team can't disclose it because source protection.


[flagged]


This is irresponsible and inflammatory. Has nothing to do with the discussion. Please stop.


“Fell off the back of a truck” is slang for “obtained through illicit means.” It comes from the excuse that criminals used to give when caught with stolen property: I didn’t steal it, it just fell off a truck.


Thank you for clarifying; English is not my main language and I took that at face value. The going-for-a-walk story and the picture were too good.


Any idea what this means? It is at the bottom of the article:

"In completely unrelated news, upcoming versions of Signal will be periodically fetching files to place in app storage. These files are never used for anything inside Signal and never interact with Signal software or data, but they look nice, and aesthetics are important in software.[...]"


They're alluding to the fact that they can randomly pop Cellebrite installations by planting anti-Cellebrite malware on their users' phones.


> By a truly unbelievable coincidence, I was recently out for a walk when I saw a small package fall off a truck ahead of me. As I got closer, the dull enterprise typeface slowly came into focus: Cellebrite. Inside, we found the latest versions of the Cellebrite software, a hardware dongle designed to prevent piracy (tells you something about their customers I guess!), and a bizarrely large number of cable adapters.

Does anyone who finds a package that fell off a truck first take a picture of it, then pick it up and go home to open it?

Even if someone picks up the package, taking a picture of it usually doesn't come to mind. It's an unusual bit in the story. Unless they went back and put the bag on the road just for the sake of recreating the story.

How does something like a small briefcase just "fall from a truck"? By what mechanism? A briefcase would be stored inside the cabin.

If you're the author, can you explain my suspicion?


“Fell off the back of a truck” is usually a metaphor for either stolen or otherwise acquired in a way they don’t want to explain.


Thanks, I didn't catch this.


It's a loquacious take on "fell off the back of a truck".


It's a joke.


Sorry, please go ahead and downvote my comment so it's not cluttering things up :)


plausible deniability



