One of our many lawyers can relate to us how meaningful the complaint about the word count in the prosecution's brief is. Maybe it's a big deal; I have absolutely no clue about that point.
But the central argument in this piece, as I read it, is that the DOJ is simply criminalizing URL editing. That is a gross oversimplification of what happened. The CFAA is constructed not to criminalize accidental or reckless unauthorized access; it uses a "knowing" standard instead. The DOJ's argument in the Auernheimer case is that the defendant knew he shouldn't have had access to information tied to ICC-IDs, just as he'd have known had he tried to loop through Social Security numbers in some other application.
There are plenty of sane arguments (see Orin Kerr for a good survey) that what Auernheimer did shouldn't have constituted unauthorized access. I don't happen to agree with any of the ones I've heard, but, more importantly, I have a hard time believing that those arguments are so dispositive that they indicate malfeasance on the part of the prosecutors.
To me, the central problem with the CFAA isn't that it's easy to trip. Rather, it's that the sentencing is totally out of whack, in two ways: (1) that CFAA reacts in a particularly noxious catalytic way with other criminal statutes to accelerate minor infractions into significant felonies, and (2) that sentences scale with "damages", which have the effect of creating sentences that scale with the number of iterations in a for(;;) loop, which is nonsensical.
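The second point can be made concrete with a toy sketch. Assume (purely for illustration; the dollar figure below is invented, not from any statute or the actual case) a damages theory priced per record. The notional loss, and with it the sentencing exposure, then grows linearly with the loop counter:

```python
# Toy model of per-record "damages": the per-record figure is
# invented for illustration only.
COST_PER_RECORD = 5.00

def notional_damages(iterations: int) -> float:
    """One act, repeated N times by a loop, yields N times the 'loss'."""
    return iterations * COST_PER_RECORD

# The same script run with a bigger loop bound produces a bigger
# "loss" -- and, under damages-scaled guidelines, a longer sentence.
for n in (10, 10_000, 100_000):
    print(f"{n:>7} iterations -> ${notional_damages(n):,.2f}")
```

The conduct is identical at every iteration; only the loop bound changes, which is exactly why sentences keyed to aggregate "damages" produce such strange results here.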
The problem is not simply that once prosecuted, defendants face unjust sentences. It's worse: the oversentencing creates a perverse incentive for prosecutors, turning run-of-the-mill incidents into high-profile vanity cases that lock the DOJ into pointlessly aggressive prosecutions.
To me, it makes sense that what Auernheimer did should have been illegal, but it makes no sense at all that he's serving a custodial sentence over it.
(I did read the whole article; I didn't find the user-agent and responsible disclosure points particularly compelling, but maybe you did; I'm happy to opine about them as well. It's my judgment, not the article's overt wording, that the argument revolves around URL editing.)
I think my thoughts on the CFAA have evolved. I agree it's not easy to trip. I agree sentences are the problem. But as far as I can tell, the US Sentencing Commission is full of crazy people. The Sentencing Guidelines are bizarre. And the whole process has caused judges to abdicate their good sense and anchor their sentences to this messed up document.
If we can't trust sentencing as a process, and I'm beginning to believe we can't, maybe sensible laws can nonetheless be ultimately unreasonable in context.
Oh, let me be clear: the law has to change. I just don't think the definition of "unauthorized access" needs to be so dramatically narrowed as Robert Graham does.
Where were you when the Sentencing Guidelines were proposed in 1987? When they became law on November 1, 1989? The guidelines at that time were all about throwing drug dealers into jail for extended periods, but because you weren't a drug dealer, so what? Now those same guidelines are being used against average computer users. Because they said nothing before, it's too late now. What was the quote from the German pastor Niemoller? "First they came for the Socialists..."
"Dissenting Justice Scalia believed the sentencing commission to be an unconstitutional delegation of legislative power by Congress to another agency, because the guidelines established by the Sentencing Commission have the force of law: a judge who disregards them will be reversed. Scalia noted that the guidelines were 'heavily laden (or ought to be) with value judgments and policy assessments' rather than merely technical. Scalia also disputed the majority's assertion that the sentencing commission was in the judicial branch rather than the legislative, saying the commission 'is not a court, does not exercise judicial power, and is not controlled by or accountable to members of the Judicial Branch.'"
My view, having read most of the prosecution's statement, as well as having chatted with weev about this, is that the system was open to the public.
Here's why. The prosecution details the steps Spitler took: downloading the iPad image, decrypting it, finding the URL the system used (I'm guessing by running strings), spoofing an iPad browser request via the User-Agent string, and providing the user ID (the ICC-ID) to obtain an email address.
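Mechanically, the final step amounts to an ordinary GET with two things set by the client: a spoofed iPad User-Agent header and an ID in the query string. A minimal sketch of that request, assuming stand-in values throughout (the endpoint path, parameter name, UA string, and ICC-ID below are illustrative, not AT&T's actual ones):

```python
import urllib.request

# Illustrative spoofed device header; not the exact string used.
IPAD_UA = ("Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X) "
           "AppleWebKit/531.21.10")

def build_request(base_url: str, iccid: str) -> urllib.request.Request:
    """Build the GET the script would send: no password, no token,
    just an ID in the URL and a User-Agent claiming to be an iPad."""
    return urllib.request.Request(
        f"{base_url}?ICCID={iccid}",        # ID passed in the query string
        headers={"User-Agent": IPAD_UA},    # spoofed device header
    )

req = build_request("https://example.com/ipad/lookup",
                    "89014100000000000000")
```

Note what's absent: no credential of any kind accompanies the request, which is the whole basis of the "open to the public" argument.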
IANAL, but it seems they may be trying to make the case either that the User-Agent string was equivalent to a password, or that decrypting the image was what constituted exceeding access. If decrypting the image was the issue, then I imagine this would be filed with all the other similar cases (DeCSS, etc.), but it wouldn't constitute identity fraud. If the User-Agent string is seen as the password, then that is the weakest security system I've ever seen.
I haven't actually kept up with how AT&T fixed it, but a rational response would be to make users authenticate with their own password before the system spits out information like an email address. If you present someone else's user ID but have no password (or session token, etc.), I'd suggest that's impersonation, but not identity theft or fraud. If we're going to criminalize impersonation, I guess the Saturday Night Live cast needs to find a new career.
That said, I totally understand why weev contacted reporters and not AT&T. We're in an age where contacting large corporations about security fixes typically results in a gag order on the security researcher and no fix (hi Cisco). By contacting a reporter, he increased the chance that the story would get out and AT&T would fix the issue.
Finally, a lot of people have suggested that weev "deserved" to go to jail for other things he's done. I'm not denying he's a troll, and he has done some over the top things. However, it's not illegal to be a troll, and while one might say he should be in jail for other things he has done, he is currently in jail for this. IMHO, the punishment not only outweighs the crime, but in conjunction with the other abuses of CFAA prosecution we've seen lately (such as Aaron Swartz), I think it's time we stop allowing the government to use poster children like weev as punching bags for obvious career boosting agendas.
A security system's weakness does not grant permission to break it. That gets said every time we have this conversation, but I guess it needs to be said again.
When does accessing a system turn into breaking a security system? Let's say I am trying to access an Internet Explorer only website with Firefox, and it gives me an error. I change my user agent, and it lets me in. Did I just commit a crime?
According to the law, and what seems like common sense to me, when you know or should have known that you were accessing something you weren't meant to.
And who gets to define when you're "meant" to? The law we're talking about was written in 1986, before the web even existed. Haven't we already had this conversation with regards to Google (the debacle over the robots file) and other systems?
Finally, even if it is concluded that weev committed a crime, something with which I disagree, would you say it's OK to punish it with nearly four years in prison, denial of medical care, and solitary confinement for using email? All of those things have happened since he was indicted.
Same people who decide every other time the law calls for consideration of intent and mental state of defendants (which is a lot) -- the judge and jury.
And that gets back to what the authors of the article are talking about. The judge and jury have no idea what this long-haired, bearded internet troll actually did. So they accepted the prosecution's assertion of, "He's a witch!" and handed down a guilty verdict.
He accessed data belonging to other people that he should not have, and knew he should not have. (And then went on to make very unwise statements about his intentions of how to handle that data.)
That's all that really matters to the judge and jury. The technical aspects don't matter much to them.
Also: If ease of access to information means anyone can take it, do you mean to say the NSA should take whatever they want because people don't encrypt their data?
If that information is so valuable, then shouldn't some burden be placed on AT&T for negligence? If these had been health care records, AT&T would have been required to notify users and possibly pay a fine. Perhaps it makes sense to have similar laws in place to protect all data, as Europe does (see the EU Data Protection Directive).
The point here is, if we want to nail weev to a cross, AT&T should be nailed up right next to him.
Some blame should surely be placed at AT&T's feet too, but IMO not as much. Going back to the door analogy, whoever carelessly leaves the door unlocked will definitely get less sympathy (e.g. insurance may decline to cover the loss), but that doesn't mean they're anywhere near as guilty as the thief who actually committed the burglary.
I don't think the door analogy entirely works here. Here's why:
For a typical burglary, person A leaves their door unlocked, and person B walks in. The items clearly belong to person A, and when person B takes them and walks out, theft has clearly occurred.
In this case, person A walks near person B's house, and sees that person B has laid the possessions of person C all over the sidewalk. Person A brings out their duplicator machine, creates mirror images of all person C's items, takes those mirror images, and walks away.
While there is a question of whether person A should have duplicated those items, person C is sitting across town clueless as to what's going on. There's also the question of whether person B should have left things all over the sidewalk, or should have placed the things behind the door.
If we begin comparing accessing a website to opening a door, that creates a lot of legal confusion. IANAL, but IIRC, the current legal understanding is that a computer on a network falls under the jurisdiction of the network. If that's the case, and we consider the Internet to be a public place, then a web server placed on the Internet becomes public, unless there's a password on it. If, instead, we consider web servers to be like doors, where you need permission to access them, then anyone who spiders a website might be considered guilty of attempted breaking and entering. For another example, does it make more sense to allow smartphone apps full access to your phone by default, or should permission be granted for special capabilities? AFAIK, consent in this area is not very well defined.
In the traditional sense of theft, there is an object that I once had in my possession and it has now been taken from me. That doesn't really work so well with digital media where the supply issue goes away.
There's a lot more to this discussion, but I'm curious what the next response will be :)
I have a pretty good understanding of what he actually did, and when I think about the implication of immunizing every similar action by anyone on the Internet --- any vulnerability triggered by a preauth GET handler --- I have no trouble seeing why what he did was illegal. You can safely monkey around with other people's systems under that reading of the CFAA. But, once you find yourself getting private information about other users, you know something's wrong, and you need to stop right away. He didn't. Coming into that knowledge and then continuing to exploit the system is the crux of the prosecution's case here, not the nature of URLs.
But, again: I think this case didn't deserve to be prosecuted, and I think CFAA's sentencing should be revised to ensure that in the future prosecutors have no incentive to push pointless cases like it.
We aren't arguing about what he did. We are arguing about why he did it. Did you seriously just not read the entire conversation above the comment? The technical aspects of exactly what he did aren't important.
Add to that the multiple perverse incentives in every direction. People are discouraged from being curious, regardless of whether they intend any harm. Meanwhile, building a system that fails to protect customers' most valuable details is punished very lightly, if at all. Companies are, in theory, only obligated to disclose breaches that could reveal customer data if they learn of them. The easiest route for a company is a lackadaisical approach to security: expend the bare minimum of effort on audits and hope that any breaches are sophisticated enough to go undetected. We've seen this approach play out in the real world with lots of companies.
We should be thankful that braggarts and clowns like Anonymous et al. exist, because they bring to light many breaches and weak security systems that would otherwise have been kept secret.
I didn't find the user-agent and responsible disclosure points particularly compelling
I think you may be right from a legal perspective, but I find it troubling that the law is so structured. When dealing with a system that's designed to serve some information to the public but not other information, it's critical that there be no ambiguity about what a given person is allowed to access.
I do not mean that all security mechanisms must be effective, else the issue of unauthorized access would be moot, but that no reasonable, technically adept person could fail to recognize the security mechanism as a security mechanism. In the case of a website or web service, a number of well-known, industry-standard mechanisms exist, and it's reasonable to expect people to use them.
There are plenty of sane arguments, sure. Weev is clearly scum; it's possible that he was doing this entirely maliciously. That's not the argument the prosecutors are making, though, nor have they established beyond reasonable doubt that any of his actions were illegal.
But while anyone can edit the URL, most people don't. For that reason, the prosecutors insist that it's illegal. On page 32, they describe a hypothetical "judicial law clerk" who is a "reasonably sophisticated computer user". They point out that this clerk would search in vain for hyperlinks, and thus not be able to access the information, since such hyperlinks don't exist.
This is a clever trick of the prosecutors. It exploits the fact that the way the judge is going to handle this case is to give the brief to the young clerk who spends a lot of time on Facebook, where "heavy Facebook use" is the proxy for "reasonably sophisticated computer user".
HN user Rayiner is a law clerk in a US appeals court, and he's pretty handy with assembler from what I recall. This is a ridiculous straw man argument that badly misrepresents the claims in the brief.
Overall, I think this article is terribly poorly written. An inability to handle basic grammar is not a good foundation for parsing legal arguments, and much of the author's argument is predicated on the assumption that lawyers and judges do not understand computers.
This is a ridiculous straw man argument that badly misrepresents the claims in the brief.
Graham isn't hypothesizing an ignorant law clerk. He is responding to the prosecution's legal brief, which does so. It isn't a "straw man argument" to point out that they've constructed a hypothetical "sophisticated computer user" who doesn't know the first thing about HTTP.
Ignorance of the law is no excuse, but ignorance of everything else is fine if you are the law.
No, but nor do I think much of the author's snide dismissal of law clerks as 'people who use Facebook a lot' (who, by implication, are incapable of parsing the defense team's arguments). This is a popular trope on HN, but not a very well-founded one. There is intense competition for clerking assignments, which means they go mostly to the cream of the academic crop. Good law students and lawyers are the kind of people who can accurately assess their own level of knowledge on a particular subject and rectify it through research, because their professional reputation depends on the ability to do so.
Frankly, I would trust a law clerk who knew nothing about computers to understand the subject better after study than I would a programmer who knew nothing about law.
I agree with your opinion of law clerks as generally competent people, which probably extends somewhat to technology with the younger set.
That said, I know a lot of young, competent engineers and scientists who know next to nothing about the workings of computers and networks. They could figure out a lot if they had the time to put into it (I've seen a couple switch into development successfully), but usually they don't and their knowledge is of the surface-level stuff. That could still help with gut checks about what's reasonable behavior online for a casual user, but it's far from the nuanced understanding necessary to understand the ramifications of and make calls about things like the various applications of the CFAA in cases involving more advanced users.
Most people are not curious about technology and generally don't have a good understanding of other people's curiosity about the subject. Should they be the ones to judge whether someone was just playing around or trying to attack something? Or should it be people with that curiosity, who have experience playing around with security?
I would say that programmers' interpretation of the law via intent and current context in tech cases is frequently more consistent with what a just society needs than most judges' attempts at maintaining consistency with past rulings until a higher circuit corrects the precedent. I wouldn't dismiss the whole class as overenthusiastic amateurs.
I may just not be seeing the value in the judges' attempts at finding consistency, though, and I'm curious as to why they strive so hard for it versus trying to find the correct interpretation. My understanding is that that's just an attribute of the common law system. If someone could tell me why that's valuable (perhaps for consistency of enforcement/predictability of outcomes?), that'd be great. Sorry for the tangent, but it's something I'm curious about.
I get where you're coming from, but I'm not willing to join you over there - I honestly think your position is flawed and that programmers are terrible at judging such issues.
I may just not be seeing the value in the judges' attempts at finding consistency, though, and I'm curious as to why they strive so hard for it versus trying to find the correct interpretation.
This is very much an epistemological question. I'm personally a utilitarian, but as we are not granted the gift of foresight, I accept that we need to work within an established framework (i.e. maintaining consistency with precedent), because what is correct is not nearly as obvious as we would like it to be (e.g. in this article I think the assumption about what user agent strings are for is too pat by far). A good, accessible, and affordable book on this subject is Bad Acts and Guilty Minds by Leo Katz - written by a law professor but for a lay audience. I would be a good deal more utilitarian than he is, but then I'd have approached the defense of Weev's case far differently too.
I'm still kind of boggled that they were unable to get Weev on criminal harassment. Or anything else, for that matter, given that IIRC he had no employment of record but was independently wealthy and bragged about doing computer crime for cash. He absolutely belongs in prison; just not, perhaps, for this specific charge.
I would like to see this rhetoric about Weev stop. People are allowing themselves to be distracted by the character of the defendant rather than the stupidity of the laws involved.
Whether or not he belongs in prison is completely irrelevant to the conversation about the sentencing and laws and prosecutorial conduct involved.
Unless, of course, you want to defend bad laws so long as they apply to people who are not you.
PhasmaFelis is being perfectly reasonable in noting that Weev's current prosecution seems inappropriate and dangerously precedent setting, while still noting that Weev is vile scum (by his own admissions) who should have instead been prosecuted for other more real crimes.
But that's not relevant. At all. And, it weakens the criticism of the prosecution: "I hate to defend this guy, but..."
It's akin to saying, "Alan Turing is gay, but he's done some good work in cryptography anyway..." ... that example only seems ridiculous now because social mores have changed.
Weev's character would have relevance in a discussion about whether or not he deserves a Great Justice award, not whether or not the prosecution in this case is just or not.
It's relevant because I've seen more than a few people try to advance Weev as some sort of Aaron Swartz-style culture hero. As a part of this culture, I don't want that to happen. I don't want anyone to ever cite Weev as a personal inspiration, I don't want to see his name listed alongside people like Swartz or Bradley as an innocent hacker victimized for trying to do the right thing. If you want to use him as a test case for an unjust and poorly-interpreted law, that's fine, but don't tell me that the discussion has never been about whether Weev is a great guy, because I've seen it happen; and don't try to tell me that the truth is not relevant.
Weev is proud of hurting innocent people. He brags about it. He wants us to know. And I'm sure as hell not going to try to cover that up on his behalf, or tolerate those who do.
And here's a bit from Kathy Sierra, one of Weev's actual victims, unlike you or me:
"His rise as a folk hero is a sign of how desensitized to the abuse of women online people have become. I get so angry at the tech press, the way they try to spin him as a trickster, a prankster. It’s like they feel they have to at least say he’s a jerk. Openly admitting you enjoy ‘ruining lives for lulz’ is way past being a ‘jerk’. And it wasn’t just my life. He included my kids in his work. I think he does belong in prison for crimes he has committed, but what he’s in for now is not one of those crimes. I hate supporting the Free Weev movement, but I do."
There are a myriad of things that are legal for the government to do that are illegal for a common citizen. There's no irony in that.
If the government does not require a warrant to do something, then it should be legal for anyone to do. After all, the entire purpose of a warrant is to ensure oversight in the use of government power.
The government doesn't require a warrant to take your property by taxation or for eminent domain.
The government doesn't require a warrant to prevent people from entering or leaving the country.
The government doesn't require a warrant to block off city streets or do any of a number of things to public property.
The only things that the Constitution requires the government to get a warrant to do are "search and seizure", which are terms with very specific meanings in the Common Law. The NSA somehow argues that intercepting people's traffic isn't a "search" until an analyst actually looks at it, which I think is a ridiculous argument; however, the response isn't "everything you do needs a warrant", but "that's a search, and searches need warrants".
Your examples are just word games. The intent of a warrant is oversight, all of those examples require oversight, some more so than others, but all of them require some sort of accountability.
Nope. Depending on the locality, the situations where a cop is allowed to shoot you and I'm allowed to shoot you are similar, having something to do with the perception of an immediate threat.
♫ Now every month there is a new Rodney [King] on Youtube. It's just something our generation is used to ♫
...
Citizens in the US have a duty to de-escalate the situation, a 'duty to retreat', unless they're backed into a metaphorical corner ('castle doctrine').
Police are presently seen as having a duty to escalate: to allow someone potentially hostile to back down and leave without handcuffs is seen as a dangerous failure, extending even to periods when the officer is off duty. Meek compliance with 'lawful orders' is the ultimate goal, and people will be bossed around, arrested, tortured (who the fuck thought 'drive stun' mode was a good idea?), or shot for failing to show appropriate amounts of submissiveness.
Assault against a police officer is seen as a crime against the state, whereas assault against a citizen is essentially mandated for a police officer to do their job.
The rules for actual murder are only slightly less asymmetrical.
Examples abound.
...
First breaking off civil relations with the citizenry via the drug war, then paramilitarizing our police forces post-9/11, and finally having their behavior revealed by YouTube and smartphones has severely damaged the credibility of police in this country, good and bad. It's going to take some severe changes to bring it back - changes explicitly designed to "make it harder for them to do their job", as they would describe it.
"Citizens in the US have a duty to de-escalate the situation, a 'duty to retreat', unless they're backed into a metaphorical corner ('castle doctrine')."
From Wikipedia:
"A Stand-Your-Ground law is a type of self-defense law that gives individuals the right to use deadly force to defend themselves without any requirement to evade or retreat from a dangerous situation. It is law in certain jurisdictions within the United States."
http://en.wikipedia.org/wiki/Stand-your-ground_law
This is the type of law that allowed Trayvon Martin's killer to walk away as an innocent man.
This is the type of law that allowed Trayvon Martin's killer to walk away as an innocent man.
Please stop. Zimmerman's legal team never even mentioned SYG. It wouldn't have made sense, since their claim was that at the time of the shooting he was pinned on his back and unable to move. In such a situation, no one has a "duty to retreat".
I'm not claiming SYG is good or bad law, but if you'd like to argue against it please do so in a sensible manner.
"The "stand your ground law" was not used by the Zimmerman defense team during the trial, although it was considered at an earlier time. Some sources have pointed out that “Stand Your Ground” was mentioned in the Jury Instructions preceding the trial,[308] however, this is part of the required Jury Instructions in all Florida murder trials in which the defendant claims “Justifiable Use of Deadly Force” as part of their defense."
And:
"The police chief said that Zimmerman was released because there was no evidence to refute Zimmerman's claim of having acted in self-defense, and that under Florida's Stand Your Ground statute, the police were prohibited by law from making an arrest."
Honestly, I don't think it's unreasonable to think that SYG played a role in the jury's decision-making process. But hey, don't take my word for it, what about the reaction of the Governor of Florida (again from Wikipedia):
"Three weeks after the shooting, Florida Governor Rick Scott commissioned a 19-member task force to review the Florida statute that deals with justifiable use of force, including the Stand Your Ground provision."
If that's still too tenuous a connection for you, let's hear from one of the jurors on the case:
"An anonymous member of the jury appeared on Anderson Cooper 360 on July 15 to discuss how Florida's Stand Your Ground law provided a legal justification for Zimmerman's actions. According to the juror, neither charge against Zimmerman applied 'because of the heat of the moment and the Stand Your Ground.'"
http://thewabashc3.blogspot.fr/2013/07/timothy-johnson-media...
So yeah, I really do think it's "sensible" to think that SYG helped Trayvon Martin's killer walk away as an innocent man.
I recommend Radley Balko's recent book Rise of the Warrior Cop, which explains how we got to the point where the police have a duty to escalate rather than defuse situations.
Other parts of the world take the idea of excessive use of police force somewhat more seriously and are wary of it.
There are differences between police and citizens, but you're right that "in theory" they're much narrower than is generally perceived. In practice, however, it seems that possibly-not-really-justified killings by cops are given more of the benefit of the doubt than those by private citizens. (The recent Zimmerman case seems like an exception to this, however. It's probably best not to speculate why.)
I think there is a clear distinction that you can make between an SQL injection attack and the unsecured API that weev accessed. SQL injection attacks depend on inserting malicious code into an application in order to traverse that application and access systems that stand behind it. The point of SQL injection is to circumvent restricted permissions that the owner of the server has attempted to impose.
What weev did was quite different in that he accessed this web service exactly the way it was intended to be accessed. Even if he was not the intended consumer of this data, his access never exceeded the defined and expected parameters of the API. Furthermore, he didn't circumvent [1] any access restrictions; rather, access restrictions were never imposed. weev had no information available to him about AT&T's intent to disclose or not disclose customer emails; as far as he was concerned, the existence of this API could have been a purposeful rather than simply negligent disclosure on AT&T's part.
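The distinction between the two paragraphs above can be shown side by side (the parameter, path, and table names here are invented for illustration). An injection payload escapes its role as data and rewrites the query the server owner intended to run; enumeration sends only well-formed requests to a parameter the API itself defines:

```python
# SQL injection: input crafted to break out of its slot as data and
# subvert the query the owner intended to run (illustrative payload).
injection_value = "0 OR 1=1; DROP TABLE subscribers; --"

# Enumeration: each request is exactly the request the API was built
# to answer; only the ID value changes from one call to the next.
def enumerate_ids(start: int, count: int) -> list:
    return [f"/account?id={start + i}" for i in range(count)]

urls = enumerate_ids(1000, 3)
# Every URL produced above is well-formed input to the API as designed.
```

In the first case the server is made to do something its owner never programmed it to do; in the second, it does precisely what it was programmed to do, just for a requester the owner didn't anticipate.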
I think the reason the weev case rankles is that web developers do this kind of thing all the time. What is the difference between what weev did here and what Padmapper did when it built a product on top of Craigslist's data? Despite Eric DeMenthon's protests to the contrary, a strong argument could be made that Padmapper's intent was to cause severe commercial harm to Craigslist, which is conceivably why he got sued. In spite of the civil case, however, criminal charges are almost unthinkable.
Also, how often do we read about someone's project being hampered when a private Google API is turned off? [2] Anyone that builds a commercial product on top of something like this would be deemed a fool, but I've never seen anyone accuse a developer who is using this kind of API of acting criminally.
What is the difference, under the law, between someone accessing a private Google API and the private AT&T API that weev accessed? To a web developer with zero documentation and no information beyond the knowledge that the API URL exists, there is no apparent difference other than what content was being served. So, if that is the case, at what point should web developers accessing undocumented APIs begin to worry about their criminal liability?
[1] Shouldn't it be circumvention, not authorization, that defines criminal access under the law?
> What weev did was quite different in that he accessed this web service in exactly the way it was intended.
So is a thief who walks through a door carelessly left unlocked "accessing it exactly in the way it was intended." It's what he does afterwards that makes the difference.
> What is the difference, under the law, between someone accessing a private Google API and the private AT&T API that weev accessed? As a web developer with zero documentation, zero information beyond simply knowledge of the API URL's existence, there is no apparent difference beyond what content was being served by these APIs. So, if that is the case, at what point should web developers accessing undocumented APIs begin to be concerned about their criminal liability?
When the content you get back from a URL is other people's private data, it doesn't take a genius to figure out that maybe there's some criminal liability there.
> By that logic, a thief who walks through a door carelessly left unlocked is also accessing it "exactly in the way it was intended." It's what he does afterwards that makes the difference.
If he takes some pictures and leaves, he certainly isn't guilty of breaking and entering.
Point taken, intent does matter. But there is a large difference between taking the information you found to the black market and taking it to a media organization.
e.g. Homakov's hack of GitHub didn't deserve jail time, as it was for publicity, not malevolence.
I agree. I think the case against Aurenheimer is ridiculous and the sentence a travesty. But I don't think it's reasonable to take that conclusion and work it back to "anything you can do with a URL that doesn't say user/password is fair game".
I have trouble agreeing with this. I know nothing of the law around this, but I also realise that, given the international nature of the internet, the law probably doesn't mean much in practice. Would Aurenheimer be prosecuted if he were Chinese?
The grandparent has a point about status 200, especially in regard to this case. If a website is returning 200s for a GET request, then you are implicitly 'authorized' to see that page. The counterpoint about SQL injection is also valid, but SQL injection wasn't used here, just plain old GET requests.
It's difficult to draw real-world comparisons to things like this, so I don't think you can simplify it down to locked/unlocked doors or public/private property.
If I go to cia.gov/supersecretfiles and it returns something... did I just "hack" the CIA? It doesn't make sense to me.
In northern Maine, everyone I know keeps their house doors unlocked and their keys sitting in the ignition of their cars. However, it's still illegal to steal their cars and enter their houses.
There doesn't even need to be a metaphor here: the data physically existed on a private server, and weev was not authorized to access it.
No, he would not have been prosecuted if he were Chinese.
The point about "200" status codes is sophistry. We all know that not every 200 code is actually a deliberate authorization. If you believe otherwise, then any SQL injection attack that uses GETs and generates a 200 must be authorized.
Certainly the conclusion can't be that the legality of your actions depends on the reaction of an automated system at the other end of a pipe that you don't control?
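To make that worry concrete, here's a toy Python handler (entirely hypothetical, names invented) that returns 200 for data its operator never meant to expose. The status code reflects what the code happens to do, not what anyone authorized.

```python
# Hypothetical in-memory "database" of other people's data.
FAKE_DB = {"1001": "alice@example.com", "1002": "bob@example.com"}

def handle_get(path):
    """Naive router: returns (status, body) for a GET request path."""
    if path.startswith("/user/"):
        uid = path[len("/user/"):]
        # Bug: no authorization check at all; any id that exists is served.
        if uid in FAKE_DB:
            return 200, FAKE_DB[uid]
    return 404, ""

# Guessing an adjacent id still yields a 200, though nobody "authorized" it.
assert handle_get("/user/1002") == (200, "bob@example.com")
assert handle_get("/user/9999") == (404, "")
```

The automated system at the other end of the pipe is just executing whatever its author wrote, bugs included.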
I have no problem with basing it off intent, but the focus should be on prosecuting, for gross negligence, whoever put that data out there in the first place.
The legality of your actions depends on whether you know, as you interact with the automated system, that you have managed to find a path to data that you should not have had access to.
So, if by incrementing ICC-IDs, you found random technical data about AT&T provisioning, it would be very hard to argue that you were knowingly accessing it without authorization. But when the information you find is so personal that your first instinct is to chat about selling it to spamming rings, you are on considerably less safe footing.
I am ambivalent about software liability. Vulnerable software is much more common than most people think it is, and it would be a shame if ill-conceived liability rules created a situation for startups analogous to that of medical malpractice insurance. On the other hand, liability laws would be hugely lucrative for me.
Putting the burden on a user to "know" whether they are authorized or not seems crazy, even if they talked about selling to spammers.
Hypothetically, the police give me a police report number that I can access at police.gov/crimes/:reportno.
I discover that if I increment/decrement these I can get ALL reports. I then build a cool mashup of crimes in the area on a Google map.
It turns out the police didn't intend that; am I now a criminal (because of the police's intent)?
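For what it's worth, the "increment the report number" step in that hypothetical is a one-line loop, which is part of why it feels so innocuous to a developer. A sketch, with the URL scheme invented for illustration:

```python
def report_urls(base, start, count):
    # Enumerate sequential report ids: exactly the increment/decrement
    # pattern described in the hypothetical above.
    return [f"{base}/crimes/{n}" for n in range(start, start + count)]

urls = report_urls("https://police.example.gov", 1000, 3)
assert urls == [
    "https://police.example.gov/crimes/1000",
    "https://police.example.gov/crimes/1001",
    "https://police.example.gov/crimes/1002",
]
```

Under the "knowing" standard discussed above, the legal question wouldn't be the loop itself but whether you knew the reports you were pulling weren't meant to be yours.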
"Attacking" - are databases people? Do they have rights?
I'm stumbling around trying to figure out what the right balance is too, but I think the existing laws we have around fraud and privacy are all that we need. That is, we don't need to criminalize accessing inadvertently public information; we just need to criminalize exploiting it.
Exploiting it is criminalized, but exploiting it is harder to detect and enforce. It is easy to read server logs and parse them for "crimes"; that's the lazy man's way to enforce the law.
The government would normally file an application for leave to file a brief in excess of the appellate rules' word limit. U.S. federal courts normally grant these applications when made by the government. When made by defendants in criminal cases, it's much more of a maybe. However, courts as a rule do not like verbose briefs.
On another note, it seems to me that in these computer cases the law really doesn't care what's "under the hood." It doesn't matter if it's javascript or java, or if maybe someone could have jumped on an open Wi-Fi network.
The law lives in a conservative analogue world and will continue to do so for years to come.
If you read the IRC logs, weev and Spitler's intent was obviously malicious. I think many things in this case are true at the same time: oversentencing and overzealous prosecution, yes, but how do you then prosecute someone like this? There was a reason they included the IRC logs in the prosecution: intent counts, not just the physical actions of the accused.
I'm wondering why he even appealed this. It seemed pretty straightforward what he did. What exactly is he appealing on? He was only sentenced to 3 years, and he'd be out in less than 2 if he stays out of trouble. For a hacker, I'd say he got off pretty light considering what others have gotten.
Not less than 2. There is no federal parole. You earn good time at the rate of 55 days per year. So the maximum "Weev" could earn is 165 days. He has already been in seg once for rules violations so it's unlikely he would get the full 55 days, at least for this year.
He would be eligible for a halfway house; in his case that would be within three months of his mandatory release date.
So in any case he is going to spend more than two years in a federal prison. Doing time is not easy if you fight the prison system, and according to reports this is what he has been doing.
His sentence also undoubtedly contained a supervised release provision. So if he violates the conditions of his release (probably no computer use, that's a standard one) he goes back inside for the duration of the supervised release period.
Federal prison is no joke. There are very good reasons to appeal.
† http://www.volokh.com/2013/01/28/more-thoughts-on-the-six-cf...