
From their FAQ:

“Permission Slip helps you exercise your right to privacy under the California Consumer Privacy Act (CCPA) by acting as your ‘authorized agent’ and sending data requests to companies for you.”

https://www.permissionslipcr.com/faq.php

The California Attorney General is monitoring companies’ compliance with authorized agent requests:

“The sweep also focuses on businesses that failed to process consumer requests submitted via an authorized agent, as required by the CCPA. Requests submitted by authorized agents include those sent by Permission Slip, a mobile application developed by Consumer Reports that allows consumers to send requests to opt-out and delete their personal information.”

https://oag.ca.gov/news/press-releases/ahead-data-privacy-da...



Okay, that link explains a little bit; enough that I'd now be willing to check it out, compared to other tools like DeleteMe.

But it would have been nice to see that same information on the landing page for the software. Or at least have that link included in the original post here on HN.


Well, that was a fast fail. I tried having them request that Home Depot stop selling my data, and the program seems to have hung on a blank page.

I hope the problem is that their servers are overloaded due to "HN Hug-o-Death", and not something else.


Ok, we've changed to that from https://www.permissionslipcr.com/ above.


There are a lot more words there, but no more explanation of what it is.



Free as in "open source," not free as in free software. You can't do squat with the software: you can't submit bug fixes or fork it and roll out your own. This is just about complying with license requirements.

Oh, and last I heard the GitHub repo was very far behind their internal live branches; they push to GitHub as an afterthought.


You are assuming rights that free / open-source software never promised.

The company owns the code copyright, so they can choose to release any part of the code they wish; they can delay as much as they want.

You never had the right to submit bug fixes to the original project. Open source is about source code, not about having a responsive community.


I'm basing these "assumptions" on what Matrix offers. Why is Matrix the polar opposite on these "issues", then?

What benefit do you get from the "centralization" of Signal at the hands of Moxie that you can't get from the decentralized nature of Matrix, which is similar to what we today call email?

Sure, that's why I said "complying with open source" while not being free software. Sure, it's not actively hostile toward users the way WhatsApp is, but it isn't much better anyway. I'm not asking for a right to submit bug fixes; the reason I'm sticking with it is that they grant me that right, and for me that's a big deal. The whole concept of "free software" is about the community helping each other build software that solves problems, not about complying with the letter of the license and stopping there.

>The company owns the code copyright, so they can choose to release any part of the code they wish; they can delay as much as they want.

So tomorrow a severe flaw is found in the code of the app or the server, and the company can decide to wait it out, fix the issue internally after a long time without users knowing about it, and then release the fix as a normal update. Sure, they have every "right" to do that, but does it really fit what we today associate with the free software community? Imagine that with, say, the Linux kernel: people would sit on zero-day bugs with fixes ready just because they "own" the copyright.


> I'm basing these "assumptions" on what Matrix offers. Why is Matrix the polar opposite on these "issues", then?

Because it has different objectives?

> What benefit do you get from the "centralization" of Signal at the hands of Moxie that you can't get from the decentralized nature of Matrix, which is similar to what we today call email?

If this is actually a good faith question (despite the overwhelming amount that's been written by a lot of people explaining exactly this point), I suggest you stop and think about what the answer might be. Then maybe go and read what Moxie (and others) have written setting out their values and why they chose a particular direction. You don't have to agree with it, but consider that others do.

> The whole concept of "free software" is about the community helping each other build software that solves problems, not about complying with the letter of the license and stopping there.

No, "Free Software" is about ensuring the end user's freedoms (see Stallman etc.). The Signal client and server are both open source and free: if you want to take them apart, modify them, and run your own instances, you're free to do so. The restriction that Signal likes to impose is about connecting to their instance of the service. This has nothing to do with Free Software.

> So tomorrow a severe flaw is found in the code of the app or the server, and the company can decide to wait it out, fix the issue internally after a long time without users knowing about it, and then release the fix as a normal update. Sure, they have every "right" to do that, but does it really fit what we today associate with the free software community? Imagine that with, say, the Linux kernel: people would sit on zero-day bugs with fixes ready just because they "own" the copyright.

This is literally how modern software development works for most large projects. Open source doesn't grant you the right to view every single commit that eventually makes it into the tree at the point that it is created.


What stops you forking and running your own server? Plenty of people have done that.


Can you drop a few links to such Signal servers?


Not public ones, no.


ProtonMail managed to figure it out after the DDoS attack they faced 5 years ago:

https://protonmail.com/support/knowledge-base/email-ddos-pro...


A reminder that you can pair lock your iPhone to prevent analysis by Cellebrite or similar tools: https://arkadiyt.com/2019/10/07/pair-locking-your-iphone-wit...
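
As I understand the linked article, the mechanism is device supervision plus a Restrictions payload that sets the allowHostPairing key to false. As a sanity check, here's a minimal Python sketch that verifies an exported profile actually carries that restriction; the file name is hypothetical, and this only works on unsigned profiles (signed .mobileconfig files are CMS-wrapped and won't parse with plistlib):

    import plistlib

    def blocks_host_pairing(profile_path):
        # A Restrictions payload (PayloadType com.apple.applicationaccess)
        # must set the supervised-only allowHostPairing key to False.
        with open(profile_path, "rb") as f:
            profile = plistlib.load(f)
        return any(
            payload.get("PayloadType") == "com.apple.applicationaccess"
            and payload.get("allowHostPairing") is False
            for payload in profile.get("PayloadContent", [])
        )

    print(blocks_host_pairing("pair-lock.mobileconfig"))  # hypothetical path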


All of these are based on the assumption that the attacker has physical access to the unlocked phone, right?

I'm trying to understand the risk profile here.

I guess I see the value for, e.g., a border crossing, where they can inconvenience you and ask you to unlock your phone, but instead of flicking through your messages briefly, they authorize a pairing and quickly back up your entire disk contents. You expected a quick perusal by a human, but unknowingly gave them a lot more. If you've blocked pairing, they can't get nearly as much data as quickly.

But if you're being investigated for committing a crime, everything we think we know about device unlocking is still true, right? They'd need me to unlock it before it'd trust a new device to pair to, and they'd need a court order to get me to unlock it for them. Five quick taps of the power button and biometric unlocks are off--now they need my passcode.

Perhaps there's still value, even in that case, in that if I were compelled via court order to give my passcode, they still couldn't quickly or easily dump the disk contents via a device pairing. Although I imagine if you have the passcode there are probably many other ways of accomplishing the same result.


> unlocked phone

Well, mostly yes; that assumes Cellebrite doesn't have 0-days or other exploits that can be triggered by sending an SMS to the device or similar. Cellebrite's software can also send silent SMSes, so it's not far off either.

A German Cellebrite ambassador showed me and some colleagues the tools mentioned in the blog post and told us he takes part in law enforcement raids. At 6 in the morning they raid the suspects' houses, detain them, and immediately ask for PINs and passwords. He said that surprisingly often this works and no further decryption attempts have to be made.


> if I were compelled via court order to give my passcode

In the US that won't work if unlocking the device requires a password or pin. In practice, you can't be compelled to provide that unless you openly admit that you know it. (Even then, the 5th amendment might afford you some protection.) YMMV, IANAL, etc.


IANAL, but AFAIK you can be held in contempt of court for not providing it when asked, if they're already convinced the device is yours and has incriminating evidence on it. Article below, although it looks like fairly recent (not super established) jurisprudence.

[1] https://goldsteinmehta.com/blog/can-the-police-force-you-to-...


At issue there is the "foregone conclusion" exception to the 5th amendment. As far as I understand things you've lost the case by that point anyway.

Even then, I believe there would still be the additional issue of demonstrating that the defendant actually knows the password. Which is why I previously mentioned that if you openly admit to knowing the password then you likely have a problem. (This came up in a case where the defendant admitted to the FBI that he was capable of decrypting an external hard drive but refused to do so because "we both know what's on there".)


> At issue there is the "foregone conclusion" exception to the 5th amendment.

The unclear part seems to be how strong the evidence needs to be that the device is yours and that the evidence is on there. In this case, his sister testified to both. But would it be strong enough with forensic evidence alone? Unclear.

> As far as I understand things you've lost the case by that point anyway.

Perhaps, but there may still be significance to what's on that drive. There may be incriminating evidence there for other crimes for which you're not yet being prosecuted.

> Even then, I believe there would still be the additional issue of demonstrating that the defendant actually knows the password.

They're saying the sister's testimony was sufficient to prove that he knew the passwords previously.

Proving present-day capability to decrypt doesn't seem to be necessary, at least in the article I linked.

> The federal court denied the Motion to Quash and directed Doe to unlock the devices for the investigators. Doe did not appeal, but he refused to unlock some of the devices, claiming that he had forgotten the passwords. He was eventually held in contempt by the District Court, and the Court ordered that he remain in federal custody until he was willing to unlock the devices.

The accused claimed he could not decrypt the hard drive because he had forgotten the passwords, but he was still being held in contempt.


> The unclear part seems to be how strong the evidence needs to be that the device is yours and that the evidence is on there.

The 5th amendment protects you from having to testify against yourself, but it doesn't protect you from having to turn over incriminating evidence against yourself. The 4th amendment protects your stuff, but only up to the point of requiring probable cause for a warrant.

At issue in these sorts of encrypted storage scenarios is whether you would be incriminating yourself by demonstrating that you know the password. Knowing the password for an encrypted device basically proves that it's your device, so forcing you to decrypt a device would amount to forcing you to testify against yourself in the event that there is doubt about whether the device is yours.

So to force you to decrypt a device there needs to be a warrant for the contents, and there must be no doubt that the device is indeed yours. It needs to be a foregone conclusion that the device is yours, but only probable cause is required to believe that something specific and illegal is stored on it.


Thanks. That helps a lot to clarify.


Do we (reasonably) know if this still works?


There was a vulnerability in this technique that was fixed in iOS 11: https://labs.f-secure.com/advisories/apple-ios-host-pairing-.... If someone found another vulnerability and shared it with Cellebrite, then it doesn't work. If they haven't, then it still does.


This still works as written. Just test it yourself with a Mac and Apple Configurator.


I mean, “the iPhone prevents well-behaved software from accessing data without a password” and “software, known to exploit vulnerabilities to get around security features, currently doesn’t have any such exploits” are very different.


Every stone we can put in the way of surveillance helps.


My question was ambiguous, what I meant was whether or not there were any known exploits to work around pair locking (all of my iOS devices are pair-locked). I didn't know about the exploit that lights0123 linked to, but it appears that has been fixed.


Are there similar features for Android devices? A database of Cellebrite-resistant phones, perhaps?


I wonder if the intention here is to deter Cellebrite from parsing Signal files? Or to pressure them into fixing their security vulnerabilities?


> Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding. We have a few different versions of files that we think are aesthetically pleasing, and will iterate through those slowly over time.

Pretty sure it's the former, since the above is a way to ensure that Cellebrite can't just gather all the implied exploit files and make sure those specific problems are all patched. This is, quite literally, an informational attempt at guerrilla/asymmetric warfare: Signal is trying to make engaging with it too costly, while also landing a few blows well above its weight class. Cellebrite now has to decide whether to keep after an adversary that is hard to pin down, ambushes them, and has shown it can hit them really hard where it matters (credibility, and thus their pocketbook).
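
As an illustration, here's a minimal sketch of how that kind of probabilistic, shard-based placement might work. All the names and numbers are hypothetical; Signal hasn't published its implementation:

    import hashlib

    NUM_SHARDS = 100
    ENABLED_SHARDS = frozenset({7, 42})  # hypothetical; rotated over time
    MIN_ACTIVE_DAYS = 30                 # hypothetical "active install" floor

    def shard_of(phone_number: str) -> int:
        # Deterministically map a phone number to one of NUM_SHARDS buckets.
        digest = hashlib.sha256(phone_number.encode("utf-8")).digest()
        return int.from_bytes(digest[:4], "big") % NUM_SHARDS

    def should_place_files(phone_number: str, active_days: int) -> bool:
        # Only long-standing installs in a small subset of shards receive
        # the files, so no single seizure reveals the whole collection.
        return (active_days >= MIN_ACTIVE_DAYS
                and shard_of(phone_number) in ENABLED_SHARDS)

Because the enabled shards are few and rotate slowly, Cellebrite can never be confident that any one extraction has shown them every file variant.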


This indeed looks like a FUD statement, implying that they have an effectively infinite supply of potential vulnerabilities. Realistically, though, writing parsers that don't yield control of your whole device is not that complex. The people exploiting iOS zero-days can certainly do it.
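
To make that concrete, here's a minimal sketch (Python on Linux/macOS; a hardened version would add seccomp or platform equivalents) of running an untrusted parser in a throwaway, resource-limited child process so a compromised parser can't take over the host tool:

    import multiprocessing as mp
    import resource

    def _parse_worker(path, conn):
        # Cap memory and forbid spawning processes before touching
        # untrusted input; an exploited parser stays confined here.
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
        resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))
        try:
            with open(path, "rb") as f:
                data = f.read()
            conn.send({"bytes": len(data)})  # stand-in for real parse output
        except Exception as exc:
            conn.send({"error": str(exc)})
        finally:
            conn.close()

    def parse_untrusted(path, timeout=5.0):
        recv_end, send_end = mp.Pipe(duplex=False)
        proc = mp.Process(target=_parse_worker, args=(path, send_end))
        proc.start()
        proc.join(timeout)
        if proc.is_alive():
            proc.kill()  # a hung (or exploited) parser is simply terminated
            return {"error": "parser timed out"}
        return recv_end.recv() if recv_end.poll() else {"error": "parser crashed"}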


You're not wrong at all, but if they're shipping these garbage ancient versions of ffmpeg, there are likely oodles of other bugs lurking around. And, if Cellebrite acts like most other companies who've had their awful security exposed, they will fix only this bug and leave everything else.


It's not that hard but neither is shipping patched versions of ffmpeg. This company will have some catching up to do.
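
Even a crude CI gate would catch the kind of staleness the post describes. A hedged sketch; the version floor is made up for illustration, and the regex only handles release builds:

    import re
    import subprocess

    MIN_FFMPEG = (4, 4)  # hypothetical floor; set per your patch policy

    def bundled_ffmpeg_version(ffmpeg_path="ffmpeg"):
        # "ffmpeg -version" prints e.g. "ffmpeg version 4.4.1 Copyright ..."
        out = subprocess.run([ffmpeg_path, "-version"],
                             capture_output=True, text=True, check=True).stdout
        match = re.search(r"ffmpeg version (\d+)\.(\d+)", out)
        if not match:
            raise RuntimeError("could not parse ffmpeg version output")
        return tuple(int(part) for part in match.groups())

    if bundled_ffmpeg_version() < MIN_FFMPEG:
        raise SystemExit("bundled ffmpeg is older than the supported floor")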


But it might be easier for Cellebrite to just stop exfiltrating data from Signal. Of course, other apps could discover similar vulnerabilities.


That's not enough. With file system permissions, Signal could place files anywhere (like prepared GIFs in the Pictures folder).

I think this taints any phone with Signal installed.


Signal is capable of finding more exploits given more time. The important piece is that there now exists reasonable doubt about data extracted by Cellebrite, so it's not so good as evidence.


Nah, Cellebrite will panic for a bit at the possibility of facing repercussions but ultimately not commit enough effort to change anything. Cellebrite's counterparties, however, might not be so complacent.


Signal should generalise this into a library so that other app vendors can include these perfectly cromulent files.


That would reveal all the exploits to Cellebrite, which Signal is trying to avoid.


I imagine many fellow app vendors, who may or may not maintain good relationships with Signal, might possibly have found a USB drive containing the relevant data on the street. (Pure speculation; I don't know anything about Moxie, but judging by his tone, I wouldn't be shocked.)


hehe.

Now imagine if hack-back laws had actually passed... companies like Whisper Systems would have had impunity for even more shenanigans :)


or just flipping them off, which seems OK too.


The warrant is actually quite specific, down to the exact URL paths of the shells to be searched and removed.

See pages 20 and 21 of https://www.justice.gov/opa/press-release/file/1386631/downl...


How many IPs are they allowed to break into to find the webserver behind those domains?


Why would it not be a valid warrant? The web shells are evidence of a crime.

Also, the typical remedy for a defective warrant is suppression of seized evidence, not criminal prosecution.


It is not a crime because they had a search warrant signed by a judge. https://www.justice.gov/opa/press-release/file/1386631/downl...

