I'm kinda curious how the law would treat a dead man's switch that automatically wipes the phone if you haven't unlocked it for N hours (say, 24 or 48). (Assuming it was set up well before any event that prompted the phone's seizure, of course.) Could they somehow charge you for not warning the police about the auto-wipe when they took your phone? Does the answer change if you were officially under arrest and had a right to remain silent?
This seems like a pretty good idea in any case. If the seizing party can't crack the passcode anyway then it's a no-op. If they can then presumably they won't/can't do it right away, so it would add a bit of defense in depth.
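Neither iOS nor Android exposes a supported way to do this, so purely as a sketch of the logic being discussed (the function name and the 48-hour threshold are invented for illustration):

```python
WIPE_AFTER_SECONDS = 48 * 3600  # hypothetical 48-hour threshold from the comment above

def should_wipe(last_unlock_ts: float, now: float) -> bool:
    """Trip the dead man's switch once the device has gone too long
    without an unlock. A real implementation would run this from a
    privileged daemon and call into the OS's secure-erase machinery."""
    return (now - last_unlock_ts) > WIPE_AFTER_SECONDS

# Last unlocked 50 hours ago: the switch trips.
assert should_wipe(last_unlock_ts=0, now=50 * 3600)
# Last unlocked 10 hours ago: nothing happens.
assert not should_wipe(last_unlock_ts=0, now=10 * 3600)
```

Note the legally relevant property from the comment above - that the switch was set up well before any seizure - lives entirely in when the daemon was installed, not in anything visible in the check itself.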
> Does the answer change if you were officially under arrest and had a right to remain silent?
Being formally under arrest doesn't affect whether you have the right to remain silent. It affects whether the police are required to tell you that you do.
Fascinating how this varies between nations and cultures. In Norwegian case law, refusing to explain yourself can be considered indirect evidence of guilt.
I remember a report from a recent Norwegian criminal trial, where the judge himself warned the accused that refusing to give an explanation could reflect badly on the question of whether she was guilty or not.
In the US, the Fifth Amendment provides the opposite protection: you cannot be compelled to testify against yourself, and choosing to take the Fifth cannot be considered evidence of guilt. There's some evidence that, in practice, this isn't always true, but judges are required to explain it clearly to jurors.
I wonder how much this has to do with a nation's format of policing and especially interrogation.
The legal protection for refusing to speak in the US is the Fifth Amendment, which of course predates most modern police tactics. But there are basically no calls to change that, and it has a lot of cultural support too - there's plenty of media where "I ain't sayin' nothin'" marks a tough or well-informed character instead of a guilty one.
The other side of which is that US police have very few boundaries in interrogations other than giving a Miranda warning and avoiding physical violence. A lot of police forces rely heavily on the Reid Technique, which presumes the suspect is guilty and has a long history of producing false confessions. They're also free to outright lie about both the state of the evidence and how a confession will be handled.
I don't know a great deal about Norwegian policing, but just hearing this I would predict that "brought in for questioning" doesn't have the same "try to drag a confession out of you" associations it does in the US.
Refusing to provide an alibi is going to cause you problems in the US as well, but generally in the US defendants talk through their lawyers, not directly.
In Germany - and I assume in the States as well - being a witness places far more legal duties on you than being a suspect. A witness, for example, can be obligated to give testimony. Which is why police sometimes intentionally declare a suspect a witness.
In the US you have the 5th Amendment, which means you cannot be compelled to testify against yourself. Witnesses who might incriminate themselves would get an immunity deal before taking the stand. The only time you cannot invoke the 5th is during cross-examination, after you've agreed to testify.
In certain cases in which your testimony cannot be used against you, the court can compel you to testify; if you refuse, you can be charged with contempt. A common example would be a court ordering a reporter to disclose their source.
But the prosecution classifying a defendant as a witness would not fly.
> The jury was instructed that they may find the failure by the store to retain (and subsequently provide to the other party) the additional footage may be considered an attempt to hide evidence that Brookshire Brothers' management knew would be damaging to their case.
> The Texas Supreme Court reversed, ordering a new trial, stating that it was abuse of discretion by the trial court to issue a spoliation inference instruction in this case, that the court should have imposed a different corrective measure on Brookshire Brothers (a less severe sanction), and that a spoliation inference instruction to the jury is only warranted in egregious cases of destruction of relevant evidence.
I think it is essentially identical to deletion policies - except you probably don't have any legal requirement for minimum retention. (Not a lawyer, admittedly.)
Essentially they need to subpoena you to formally tell you to preserve all potential evidence: stop the deletions, or take backups such that the day-to-day deletions are irrelevant. The latter is a fine but important distinction - shredding extra copies of your own is okay. You wouldn't get arrested for copying a customer's account information to do profit-margin math and then shredding the copy when done to ensure their privacy.
If you are unbound by other regulations, there is nothing illegal about reimaging your device every 24 hours.
If they were trying to keep a low investigation profile and never told you to preserve the data, the destruction of evidence is on their incompetence.
If you implemented it post subpoena you are at fault of course.
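To make the routine-deletion vs. legal-hold distinction concrete, here's a toy sketch (all class and field names are invented for illustration) of a retention policy with a hold flag that freezes deletion once you've been formally told to preserve:

```python
from datetime import datetime, timedelta

class DocumentStore:
    """Toy sketch of a routine deletion policy plus a legal hold.
    Not a real retention system; names are made up."""
    def __init__(self, retention_days: int):
        self.retention = timedelta(days=retention_days)
        self.legal_hold = False          # set True once you're told to preserve
        self.docs = {}                   # doc id -> creation time

    def purge(self, now: datetime):
        if self.legal_hold:              # preservation duty overrides routine deletion
            return
        self.docs = {doc_id: created for doc_id, created in self.docs.items()
                     if now - created <= self.retention}

store = DocumentStore(retention_days=30)
store.docs = {"old": datetime(2018, 1, 1), "new": datetime(2018, 6, 1)}
store.purge(datetime(2018, 6, 15))       # routine purge: "old" goes, "new" stays
assert "old" not in store.docs and "new" in store.docs
store.legal_hold = True                  # subpoena arrives: freeze everything
store.purge(datetime(2020, 1, 1))
assert "new" in store.docs               # nothing deleted while on hold
```

The point of the sketch is that the same `purge` code is fine before the subpoena and a problem after it - the legality turns on the flag, not the deletion logic.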
That's a good question. I don't know how related it is, but I haven't seen anything about the legality or illegality of warrant canaries[1], so there might be something to it.
Further down the Wikipedia page there's a link to Moxie Marlinspike saying warrant canaries don't work, because not updating them is legally the same as disclosing you've received a secret warrant. In other words, you have to lie in your canary or violate the warrant.
Has that been tested at the appellate level? The government can compel silence (gag orders), but the government can't compel speech, which is what updating a warrant canary is. This seems ready-made for the Supreme Court.
There's also the question of a more manual implementation, rather than a self-initiated statement.
If someone just so happens to ask me on Twitter each day if I received an order, and I say no, but on day 99 I don't reply or say "I'd rather not answer", does that muddy the waters a little?
So the law says NOT to notify users about secret subpoenas. If you do, no matter how cute you think your method is, IMO you broke the law. "Oh, but I just took a file offline." Nope.
That's not how warrant canaries work, though there is no case law I'm aware of that determines whether or not they are actually legal.
The idea is that you have a message you update regularly to specify whether you have received an NSL -- you never delete the latest version. If you get an NSL you comply by doing nothing (and by your inactivity you've signalled that the warrant canary was tripped). There is a valid freedom-of-speech question (at least in the US) about whether you can be compelled to continue updating the message -- you can be forced to be silent but can you be forced to proactively lie when the NSL forced you to be silent?
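The consumer side of that scheme can be mechanical: a verifier only needs to check that the latest canary is fresh. A toy sketch of that check (the 35-day window is an assumption for a monthly-republished canary; real canaries also carry a cryptographic signature and often a recent news headline to prove both authorship and freshness):

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=35)  # assumed window for a monthly-republished canary

def canary_tripped(last_published: date, today: date) -> bool:
    """A canary that has gone stale is treated as a warning sign,
    not as proof that an NSL was received."""
    return (today - last_published) > MAX_AGE

assert not canary_tripped(date(2018, 6, 1), date(2018, 6, 20))  # fresh: fine
assert canary_tripped(date(2018, 6, 1), date(2018, 8, 1))       # stale: warning
```

This is exactly why the scheme relies on inaction: the operator never publishes "we got an NSL" - the verifier infers it from silence.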
Though, of course, they could subpoena the signing key for the canary and destroying the key would be destruction of evidence. A quorum system for signing might be more robust against this, but I have my doubts.
And of course quite a few folks think that warrant canaries wouldn't work in any case[1].
Schneier is right. Warrant canaries are just another side of the same coin as the sovereign citizen movement. Word games and magical thinking are not going to fool any judges.
While I do think secret warrants are unjust most of the time (and nobody can verify that they are justly applied when applied), the intent of the law is obviously to not let anyone know about the warrant; if you do, you have broken the law regardless of how you did it. Any judge that rules otherwise is engaging in judicial activism.
Maybe, though I would say that it is odd that some companies (who presumably have legal counsel) have decided to implement warrant canaries. Quite a few have "activated" them, though it's quite possible some lawyer told them to knock it off. It is at least a little less crazy than arguing that you are a free inhabitant and that you don't need a driver's license.
In Australia we have an explicit law which makes it illegal to talk about the existence or non-existence of a journalist surveillance warrant (though in Australia you might not even be aware of such a warrant's existence). This means that any discussion of such warrants is technically illegal -- making warrant canaries impossible to implement here. The minimum sentence is 2 years, and it's specifically targeted toward journalists (and affects anyone who shares already-public information -- so retweeting such a story on Twitter would be a serious crime).
> Maybe, though I would say that it is odd that some companies (who presumably have legal counsel) have decided to implement warrant canaries
Warrant canaries are an effective PR move irrespective of the legality of tripping them. They only stop working as a PR move if you don't trip them after getting a secret warrant and that warrant later becomes non-secret.
I think it's a bit strong to conclude that if a judge disagrees with you on this topic it is activism. This is relatively untested/uncharted legal territory and, at least in the US, it is complex. For places like Australia that have no guaranteed first amendment rights, I would agree that it is much more cut and dry because there are other laws already in place that can limit your speech through due process.
The same is true about due process in the US, but there are limitations on what it can apply to regarding speech. The US government can legally stop you from speaking on certain matters via court order. But speech is explicitly separated from lack of speech in the US, and the two are treated as totally separate things. There is legal standing that non-speech cannot be considered speech - this goes hand in hand with the 5th Amendment and how a person's refusal to speak and provide testimony against themselves cannot, in and of itself, be considered evidence against them. So a lack of speech cannot be considered evidence of guilt.
Also, there is a substantial body of law protecting people (even government employees) from being forced to say anything by the government. So, for others here wondering if the US can require them to keep updating a canary: it almost certainly cannot. There may be a way around that, because the US government does have a fairly broad ability to regulate businesses, so it could in theory pass legislation requiring businesses to update one... maybe. But I doubt such a law would survive challenges, as it would be fought on First Amendment grounds, and other compelled-speech requirements on businesses have generally been geared toward information sharing, notification of legal rights, and other things that protect consumers. This is something entirely different and doesn't fall into those categories. Those requirements are all structured around spreading truthful information to keep consumers informed, whereas here the government would be requiring businesses to lie - something that could easily be argued is against consumer interest.
Thank you for this post. People seem to be treating warrant canaries as a sort of “gotcha!” defense that no judge would take seriously, but you’ve given a good explanation of why it’s not.
It should be noted that there is currently no public case law on whether warrant canaries are actually legal. So really, until this topic goes before a court in a landmark case, your guess is as good as mine.
I agree there are several theoretical reasons why warrant canaries might actually be a useful tool, but it's just as likely that intentionally constructing a scenario where you are implicitly telling people about a gag order through a bunch of hurdles would not be considered following the spirit of the law.
For instance, if you get an NSL you can't tell your family about it. When going to see your lawyer, you need to omit the reason why you're seeing a lawyer -- which is basically de-facto requiring you to actively lie to your family (because "I can't tell you why I'm seeing my lawyer" is arguably code for "I have received an NSL" if your family is aware that you might get an NSL one day).
I personally think this is massively unjust (and in Australia, we have explicit laws to disallow speaking about the existence or non-existence of any such secret warrants -- which makes even attempting to set up a warrant canary a crime with a minimum 2 year sentence).
I’d expect a court to find against this in the same way it would find against someone communicating the information in a foreign language. It’s the intent and how you use the file that matters to them.
The sticking point is whether you can be compelled to proactively lie by an NSL (when an NSL is actually a gag order, preventing you from talking about NSLs), given freedom-of-speech in the US. But quite a few people agree with you that courts probably wouldn't care about this level of pedantry[1].
An NSL or similar already prevents you from saying something and limits your free speech.
The court will, either way, not be impressed by someone communicating that they got a warrant by not communicating in a previously arranged manner (this is basically communicating in a code language).
Only if you're working covertly for three-letter agencies can you lie and get away with it. Though they'd get you out some other way to avoid blowing your cover.
I would argue that you got yourself into this mess. The only valid reason for having a canary is to evade the requirements of the law, so you can't complain.
It's an argument, but I'm no fan of secret orders or the government telling you to lie or keep quiet.
Fundamental rights can be in conflict with each other. It's not uncommon for a judge to impose gag orders, which ostensibly protects a fair trial, but infringes on a person's rights to talk about a case in public.
Well, the right to freedom of movement can be taken away if you've gotten yourself into a mess that lands you in prison (and in tens of other cases, for that matter).
> Could they somehow charge you for not warning the police about the auto-wipe when they took your phone?
They might charge, but "I was arrested, my mind was going nuts... it was set up a long time ago, it never crossed my mind", etc. You need to be doing it on purpose and knowingly.
Well, unless you never needed to reset the switch while you kept evidence on the phone, you have demonstrated that you are aware of the system and actively worked to keep it running. That a system you knowingly and on purpose set up to destroy evidence then worked exactly as intended once arrested isn't a defence.
You are probably better off just pretending it got bricked by some random software update.
I'm not saying there is no legitimate reason for having that system, but if you reset the switch while knowingly having evidence on your phone, then letting it trigger after being arrested is intentionally destroying evidence.
There's already a weak version of this, in that most phones which have biometric authentication stop accepting it if the phone hasn't been unlocked for a few days.
Howdy, digital forensics software developer here. A few points: 1) yes, the police probably should have put the phone in a Faraday bag, but those aren't perfect and municipal law enforcement generally doesn't have the same equipment that state and federal police do. It doesn't excuse the suspect allegedly taking action to destroy evidence; 2) this is probably about the boyfriend, who they suspect committed a shooting - they charged the girlfriend with evidence tampering as a felony, and then the prosecutor has legal leverage to get her to testify against her boyfriend and take a gun off the street; 3) no amount of technical argumentation will save you from a prosecutor, judge, or jury if you do something that causes spoliation of evidence - when your company is sued and your business systems are preserved/collected as a result, don't even think about getting in the way.
How about the threat of a 3rd party _adding_ information to a device? That sounds like another threat if someone wants to frame a suspect (i.e. another reason why devices should be placed in Faraday bags)
Interesting, if I were an attorney I would try to argue that chain of custody is broken if the device is still accepting remote commands. I wonder if it would work in court?
> In 2014, months prior to public knowledge of the server's existence, Clinton chief of staff Cheryl Mills and two attorneys worked to identify work-related emails on the server to be archived and preserved for the State Department. Upon completion of this task in December 2014, Mills instructed Clinton's computer services provider, Platte River Networks (PRN), to change the server's retention period to 60 days, allowing 31,830 older personal emails to be automatically deleted from the server, as Clinton had decided she no longer needed them. However, the PRN technician assigned for this task failed to carry it out at that time
I guess people of varying political viewpoints would differ on whether Clinton had gotten away with anything. But plenty of other cases in which powerful people did not get away with destroying evidence. Or rather, they had the ability to destroy email evidence and didn't, because they knew they wouldn't get away with it. Gen. David Petraeus [0], for example, and the officials currently under the Mueller probe.
You don’t really need to look very far. The moment you have so many people that you can plausibly chalk the deletion up to miscommunication or automated processes you are basically home free.
Or at least, you just get off with a fine; it's the company doing the wrong, after all, and you can't jail a company.
> The moment you have so many people that you can plausibly chalk the deletion up to miscommunication or automated processes you are basically home free.
Having so many people involved is as much a liability as any kind of benefit. It means more people to testify, and if you are involved in a cover-up, more people who need to be willing to join your conspiracy. Bigger companies are also likelier to have better guidelines regarding automated processes - are you suggesting any deletion by an automated process should be judged as suspect?
Not in all cases. But if it hits something like email records, it becomes a bit silly to think they’re doing it for any reasons other than that there’s stuff in the emails that’s going to hurt them (at some point).
Probably not, otherwise the police could be tampering with evidence. Obviously you and I know that airplane mode != changing device data, but courts and juries may not see it that way. Or they will say "You put it in Airplane mode, what else did you do?"
It really depends. I have a friend that used to work for Kroll Ontrack under their forensics unit. She quit after a few years due to the disturbing amount of child porn she had to recover. She knew she was doing good stuff in putting those predators in prison, but it really eats away at you. She's now working on an internal infosec team at Verizon, a job she really loves.
To answer your original question, she has a degree in computer forensics.
I was laid off from a startup at the bitter, wretched end of the dotcom bust in the spring of 2003. Another company two blocks from my apartment in Pasadena, CA was hiring C++ engineers, and I was lucky enough to join Guidance Software to work on EnCase. It's a fun field: bits and bytes, a whole lot of 'em, and the requirement for perfection.
> "Our position is that my client didn't access anything to remotely delete anything," Smalls said. "My client wouldn't have any knowledge how to do that."
That seems like something pretty easily disproven with a subpoena to Apple for records of whether a remote wipe command was issued, no?
Which makes me think the defendant probably indeed didn't remote wipe.
I wonder if it wiped itself after too many wrong password attempts (is that a thing they do?), or as the attorney suggests "days after her phone was seized, Grant got a new phone. Smalls said he didn't know if that had any impact on the data on the phone police had taken" -- does it auto-wipe the old phone in those circumstances sometimes?
> That seems like something pretty easily disproven with a subpoena to Apple for records of whether a remote wipe command was issued, no?
But who wiped it? Was it her, or her boyfriend, or some other friend who thought that she lost her phone? Or did she tell the Apple store that she lost her phone, and they wiped it as a "courtesy"?
Yes, there's an option to have your phone auto-wipe after 10 failed unlock attempts. However, I'm not aware of any way to have it auto-wipe by itself after n-days. Getting a new phone and signing into your Apple ID has no effect on the data stored on your old phone.
Removing a phone that was previously associated with your Apple ID will remotely erase the device. You will get an email confirming the erasure the next time the device is turned on.
Assuming the suspect is telling the truth, that's probably what happened. And it might not even have been the suspect herself; a store employee helping her set up her new phone might have helpfully removed her "lost" phone from her account.
OK, then, does telling the Apple Store that you lost your phone, when it was in fact impounded by police, constitute evidence destruction? I can see how it might.
I bet saying the magic words "my phone is impounded by the police, please wipe my phone" would be. But, assuming the suspect is telling the truth and assuming an Apple employee disassociated the suspect's phone from their iCloud account, let's also assume the suspect lied to the Apple employee.
"I lost my phone and need a new one." "Your phone is lost? Let me disassociate your old one and help you set your new one up. What's your iCloud username, email or phone number?"
Assuming the suspect knew disassociation meant data deletion on their old phone, is it up to the suspect to prevent this from happening? It seems pretty close to invoking the magic words I started out with, especially if this was the suspect's intent going into the store.
Now assuming the suspect didn't know, and did not intend to delete data from the old phone... Now what? Is it acceptable to accidentally destroy evidence? Spoliation of evidence suggests a guilty conscience, but in this case it was an accident.
I didn’t downvote you, but I think you were downvoted because the post you were replying to was asking a legal/ethical question about whether it’s acceptable to accidentally destroy evidence in general, and you gave an unsupported answer and then veered off into a discussion of whether the law is applied to all people fairly.
In my view the question of whether the law is unbiased and “fair” is more important to resolve than whether accidentally erasing a phone seized by law enforcement counts as spoiling evidence.
There’s no point arguing that storing a vat of milk in the sun counts as the law enforcement impounding incorrectly or the suspect deliberately arranging evidence to destroy itself, when the crux of the matter is that the defendant is a black woman in Alabama so has no chance of a fair trial regardless how airtight the case might seem.
I'm pretty sure you have to tell it specifically to do that. I've done it a couple of times and don't recall ever just having it "happen" when I removed the phone from my Apple ID -- I think it was an option you could select to have it remote wiped.
What if I use the gmail/slack/whatsapp website instead of the app, and remotely log the phone out of google/etc if my device is seized. That way the data was only stored in RAM, and they shouldn't be able to access it once they get into the phone. Does that count as destruction of evidence?
If you deliberately do anything which will cause evidence to be placed beyond the reach of law enforcement then you are likely to be hit with charges.
You can quibble over technical details, but at some point a judge will be asked if it fits the charge, and make a layman decision, not a programmer's one.
Then again, if a browser cookie is the only thing providing access to "evidence" on a particular machine, then it wasn't actually on that machine to begin with.
That's far from a mere technical detail, as it also means the person lacked any meaningful physical control of, or proximity to, the evidence.
I recall there are some cases that centered on whether someone was aware of the existence of a browser cache and knew how to clear it. In that case the "evidence" really is on the local machine because that's what the cache is.
>That's far from a mere technical detail, as it also means the person lacked any meaningful physical control of, or proximity to, the evidence.
That would still be irrelevant if their intention for getting themselves to "lack any meaningful physical control of, or proximity to, the evidence" is deemed by a judge to be malicious.
You have badly misunderstood the point the GP was making. The person in question was always in a state of "lack[ing] any physical control of, or proximity to, the evidence". They had no control of this in any way.
This, by the way, is why the technical issues are important; relying solely on the lay person's interpretation is dubious. A court that issues rulings on issues it doesn't understand is inherently unjust.
>The person in question was always in a state of "lack[ing] any physical control of, or proximity to, the evidence". They had no control of this in any way.
Well, if they arranged things so that they are always, from the start, in that position, with the intent to leverage that "lack of control" to not produce evidence (i.e. with some law-breaking in mind), that could still be considered incriminating...
And that, in the end, is a lay person's judgement to make...
I think the point you are missing here is that the person would be purposely altering the state of the device after it entered police custody, with the explicit intention of restricting access to the information that the device either 1) had directly, 2) had stored in cache, or 3) had access to through cookies, logins, etc.
If you are doing something to alter the device itself in any way (i.e. the bits anywhere on the device), it's a pretty straightforward path to the clink.
What isn't clear, though, is if the device was, for example, an "approved" device on some site/services and you logged into your accounts and removed access. Let's say for the sake of argument you had an encrypted chat app on your phone and that service has both web and mobile access. Your phone and laptop are approved devices. The police confiscate your phone. As soon as they release you, you log into your account from your laptop and remove the phone's access. The phone itself hasn't been changed. I wonder what would happen there.
Wouldn't this mean that measures to protect your data from other adversaries (criminals or competitors) would be illegal as well? Unless, of course, "intent" is determined solely by the fact that I am or am not a criminal. But in an age of overcriminalization, where you can indict a ham sandwich if you need to, anyone could be considered a criminal if the government takes enough of an interest in your activities. This would mean that everyone who takes reasonable security precautions is at risk of stacked charges for "destruction of evidence"! Not a great situation to be in for journalists and dissidents.
The question is - have you, realising that you are under police investigation, attempted to destroy information which that investigation is interested in.
Not "Do you have a lock on your phone" or even "Do you have a lock on your phone which causes it to self-wipe after 5 incorrect password attempts" but "Did you, when you realised the police were on to you, deliberately wipe some data to stop you getting into trouble."
>at some point a judge will be asked if it fits the charge, and make a layman decision, not a programmer's one.
There seems to be a double standard in regards to the use of technical vs. layman decisions. I've seen legal cases where the judge is making rulings on extremely technical points of law which are far outside the layman's understanding, but these only seem to happen when there are really expensive lawyers pushing for it. Have a public defender? Layman decisions, especially if they aren't in the defendant's favor.
I wonder if anyone would have the ability to formalize this into actual research to see if there is any truth behind my intuition.
I assume that if you deliberately do anything to alter the state of a device in police custody as evidence, it will be considered tampering with evidence. A similar analogy: "hey, I didn't destroy evidence, I just remotely instructed my phone to encrypt itself. The data is still there, it's not destroyed." That would land you in a federal prison real fast, and rightfully so - you took action to change the device state after it entered police possession and you knew it was evidence. The contents of the RAM would definitely be considered evidence since, by your own explanation, they contain the data that the police are looking for.
It is probably similar to the police seizing your keys or combinations for locks to a storage unit and you changing the lock on the storage unit.
The police can just go to google or slack with a warrant to get the evidence. The physical equivalent would be going to the storage unit proprietor and cutting the lock.
IANAL but I would expect it to count as "hindering a police investigation", obstruction of justice, or something similar.
Depends on how easy it is to prove that you did it. For example, if you're using 2FA with client-specific passwords that all show locked-out accounts, then it's probable that they could request the access-control logs from your 2FA provider.
If only one or two such services were "timed out" then it's going to be harder to prove.
I’m curious what sort of notice was given to this person that the phone was evidence. For example was she arrested, and had her phone on her, and then the police never returned it, or was there a search warrant/subpoena and she was given a receipt for items held under that order?
How long until we find out that attempting to use a GrayKey will trigger the self-destruct feature built into iOS by Apple as part of that patch where they disable USB data when the phone is locked?
The FBI has been doing this for some years now. There was an amusing story a while back about how Verizon put a new cell on top of the building where the FBI had their special shielded room, and the signal was so strong it was able to penetrate the shielding and some phones were lost. I could only find this brief mention here: https://spectrum.ieee.org/tech-talk/telecom/wireless/when-th...
One possible countermeasure: detect the lack of any electromagnetic field paired with no vibrations at all and a colder temperature; ergo, the phone isn't in someone's pocket in a no-reception area but very likely was put in a shielded box. Phone sensors can reveal a lot of what is happening around them.
If it's your safe, then the phone very likely could detect its position very near to it before being shielded, so it can temporarily disable the automatic wipe. That includes your bank's or some relative's safe, of course.
As I wrote, sensors can be used to detect various conditions, including accidents. Vibration, temperature, field, orientation, position and dynamic variations of the above all can be used to extrapolate data about what is happening outside the phone.
Don't take this to the letter; of course an automatic wipe of the phone should take into account a huge number of false positives before kicking in.
That's a field where AI could play a role in profiling users even without accessing audio, video and other personal data.
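As a rough illustration of that sensor-fusion idea (every threshold here is an invented, uncalibrated guess; real-world tuning against false positives would be the hard part):

```python
def likely_in_shielded_box(signal_dbm: float, vibration_rms: float,
                           temp_c: float, consecutive_checks: int) -> bool:
    """All three conditions must hold across many consecutive samples
    before the phone even considers itself 'boxed'. Thresholds are
    illustrative only."""
    no_signal = signal_dbm < -110       # effectively no reception
    motionless = vibration_rms < 0.01   # accelerometer nearly flat
    cool = temp_c < 20.0                # not warmed by a pocket or hand
    return no_signal and motionless and cool and consecutive_checks >= 720

# 720 samples at one per minute is roughly 12 hours of sustained readings.
assert likely_in_shielded_box(-120, 0.001, 15.0, 800)
assert not likely_in_shielded_box(-120, 0.001, 15.0, 10)   # too few samples
assert not likely_in_shielded_box(-60, 0.001, 15.0, 800)   # still has reception
```

The long consecutive-sample requirement is exactly the false-positive guard mentioned above: a tent in the wilderness or a thick-walled house would have to look like a Faraday box for many hours straight before anything drastic happened.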
There are many office/lab environments where personal electronic devices cannot be present for various reasons (security, EMI, etc.). Metal safes and lock boxes are common places to store PEDs when entering those environments.
"detect the lack of any electromagnetic field paired with no vibrations at all and colder temperature"
You go hiking far outside of civilization and go to sleep in a tent: there are no electromagnetic fields around you at all (none that the phone can detect, anyway), there are no vibrations at all, and the temperature is cold because the phone is not on your body.
Or for a slightly more realistic example - I stayed with a friend recently who lives in a house from the 12th century - walls in there are about a meter thick, in certain rooms there is absolutely zero reception from any network, wifi doesn't penetrate at all. The no vibrations and low temperature points also apply there.
He's suggesting some sensor checks, not giving a total framework to account for every edge case. If you were to implement something like this, you would take many cases, including edge cases, into account.
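To make the idea concrete, here is a minimal sketch of such a sensor check. All names, thresholds, and the scoring scheme are invented for illustration; a real implementation would read the platform's actual sensor APIs and, as noted above, use a much richer model to handle tents, thick walls, and other edge cases.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SensorSnapshot:
    signal_dbm: Optional[float]  # cellular RSSI; None = no detectable signal
    accel_variance: float        # variance of accelerometer magnitude
    temp_delta_c: float          # temperature change since last reading (degC)

def shielded_box_score(window: List[SensorSnapshot]) -> float:
    """Return a 0..1 score; higher = more likely inside a shielded box.

    Each cue alone is weak (a tent, a thick-walled house, airplane mode),
    so the score only maxes out when all three cues hold across the
    entire observation window.
    """
    if not window:
        return 0.0
    no_signal = all(s.signal_dbm is None for s in window)
    motionless = all(s.accel_variance < 0.01 for s in window)
    cooling = sum(s.temp_delta_c for s in window) < -2.0  # net drop > 2 degC
    cues = [no_signal, motionless, cooling]
    return sum(cues) / len(cues)

# A wipe policy would demand a sustained maximum score plus a long grace
# period, precisely to absorb the false positives discussed above.
readings = [SensorSnapshot(None, 0.001, -0.5) for _ in range(6)]
print(shielded_box_score(readings))  # all three cues present -> 1.0
```

The point of the scoring approach (rather than a single boolean) is that the policy layer can require agreement from every cue for hours before doing anything destructive.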
Regarding the scene from "Snowden" where he places the phones in a microwave oven: I was surprised to find that this doesn't work, at least not with my grandma's oven.
Can they just take them offline? Or do they at some point need to be online for the evidence gathering?
I wonder how they can prove that she did it. Or is it "it's your phone, you know the password, so let's try to convict you"? If it was programmed to be erased before the phone was taken / the crime was committed, she is not guilty, I guess.
Does anyone know what the standard of proof is for destruction of evidence? It seems a simple defense in a case like this is to have previously shared one's iCloud password with one's entire family. Each family member then creates reasonable doubt for any family member who is tried for the crime.
I guess it didn't have a passcode? Just let the police try whatever they have; as long as you have the iOS 12 update that disables accessories and thwarts "GrayKey", there's no need to remotely wipe it.
It only locks out accessories if the device has been locked for over an hour, so make sure to lock your device at least an hour before getting arrested, or reboot it. Also make sure the option is enabled under passcode settings; I think it's disabled by default.
The phone is the physical evidence. Would you not consider a flash drive as physical evidence? Even at the most pedantic level, the data is still stored physically as the state of the logic gates.
Nobody cares about who performed the mechanical act of destroying, especially since they (e.g. Apple) had no intent and were totally unrelated to the crime.
So that won't fly as an excuse to a judge. In general, pedantic hair-splitting arguments are more likely to turn against the person making them.
You can’t expect Apple to do an investigation of every phone they wipe at a customer’s request. What about Google Docs? Even if the user clicks the “delete” button, Google performs the physical act of deletion. So should clicking that button trigger a pop-up with the message, “Deletion will occur once we’ve verified that you are not under investigation by the authorities in your jurisdiction”? Or should it silently send the “deleted” document to an archive that Google must maintain indefinitely for the purpose of responding to subpoenas?
I believe the GDPR allows a company to refuse (or delay) processing your request to delete their data about you if they have a legitimate reason, and being legally obligated to hold onto that data is one of the legitimate reasons. However, GDPR is a big law and I am not a European lawyer (I am neither, in fact), so I'm curious if my understanding is wrong. What section of GDPR are you thinking of and what exceptions does it have?
The GDPR has some exceptions for legal requests and generally defers to local laws and regulations that specify further. To my knowledge, if you get a letter from the police/state saying some data is needed for a court case, you can safely ignore all deletion requests for that data until the state/police request is no longer valid (i.e., they have copied it off your server).
However, once they have the data (and after asking the forensics team if it's okay) you can certainly follow up on the request and it's probably good manners to inform people that there is a legal obligation holding up the deletion of some data (unless the warrant prohibits that).
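That "defer, then follow up once the hold lapses" behavior can be sketched very simply. This is a hypothetical in-memory model, not any real provider's API; the store, hold set, and function names are all invented, and the reference to Art. 17(3)(b) (erasure may be refused where processing is required by a legal obligation) is the relevant GDPR exception as I understand it.

```python
# Invented in-memory state for illustration.
legal_holds = set()    # user_ids with an active legal hold
pending = set()        # erasure requests deferred by a hold
store = {"alice": {"docs": ["draft.txt"]}}

def request_erasure(user_id):
    """Handle a GDPR erasure request, deferring it under a legal hold.

    Cf. Art. 17(3)(b) GDPR: erasure may be refused or deferred while the
    data is needed to comply with a legal obligation.
    """
    if user_id in legal_holds:
        pending.add(user_id)
        return "deferred (legal hold)"
    store.pop(user_id, None)
    return "erased"

def release_hold(user_id):
    """Lift the hold and follow up on any deferred erasure request."""
    legal_holds.discard(user_id)
    if user_id in pending:
        pending.discard(user_id)
        store.pop(user_id, None)  # now honor the original request
```

For example: add "alice" to `legal_holds`, and her erasure request is deferred rather than executed; once `release_hold("alice")` runs (after the forensics team copies the data), the deferred deletion goes through automatically.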
This again raises the issue of speech vs. absence of speech. What if a cloud provider has applications that confirm deletions initiated by the user? A secret warrant prohibits disclosure of the warrant's existence, but now we're talking about a requirement to actively lie to users. I really don't think this is hair-splitting.
Well, not if you're physically located in the U.S. at the time, but the GDPR affects non-EU businesses and governments as long as the person involved is an EU citizen.
No it doesn't! Citizenship has nothing to do with the law; it's residency. An EU citizen living in New York has exactly zero to do with the GDPR. An American citizen living in Paris, though, would be covered by the law.
However it does apply to EU companies regardless of where the data subject is, and given that Apple is clearly an EU company if you see how its business is structured to (illegally) avoid taxes[1], it would apply in both cases.
But, more importantly, the GDPR doesn't help if the data is needed for a criminal investigation. There are very clear exemptions to the GDPR protections, and this is one of them.
> Well, not if you're physically located in the U.S. at the time, but the GDPR affects non-EU businesses and governments as long as the person involved is an EU citizen.
In what court would you bring a case against the United States under the GDPR?
If the business operates in the EU, it generally has a subsidiary in an EU country (most companies have subsidiaries in Ireland that own all of their "IP" for tax avoidance reasons), and thus can be very trivially fined, as they operate as an EU company.
I get your point, but practically most large companies have EU subsidiaries (and in many cases, structure their businesses to exploit the benefits of EU nations like Ireland) and thus must follow EU laws anyway.
Maybe the detective should have to undergo some extra training? I guess they've probably figured it out by now, but seriously, in 2018 you're going to allow this to happen? Outsmarted by someone who sounds like a gangster. This has been an issue for years.

I had a friend who was arrested for illegal fishing (5 fish out of season), and they took his phone. He panicked because he didn't have a password on his phone and had done something that might have gotten him in trouble. Being fairly experienced with tech at the time, I said without even thinking that I would just remote-wipe it if that happened to me. He asked how. I said, well, just sign into Google Device Manager. For those wondering, he had some marijuana stuff on his phone (which is now legal in Canada), pictures of his plants or something, nothing crazy. We were young and never thought about legal ramifications. But this has been possible for at least half a decade; there should be a standard procedure for taking a person's phone into custody.
Not really sure what you mean by that? I also knew my opinion would be unpopular here on HN, perhaps because of my tone, I'm not sure. But really, if I were the family of a murder victim and the detectives lost incriminating evidence on the phone in this way, I would be upset. This has been in the news many times. We know better than that, and in my opinion the detective screwed up. It appears almost everyone here knows about Faraday cages; it's common knowledge. So why did this happen again in 2018? Have a good day, everyone.