An Example of Forensic Science at Its Worst: US v. Brig. Gen. Jeffrey Sinclair (zdziarski.com)
114 points by jzdziarski on Aug 24, 2014 | 34 comments



Ironically, the amount of pressure from the top involved in this case (and related ones) caused several prosecutions of sexual misconduct in the military to be dismissed. You read that right. There's a doctrine called "unlawful command influence" where, if the posture of the Commander in Chief on down suggests that there is a "right verdict" independent of the facts, then the military courts should dismiss cases rather than having military officers be, essentially, ordered to convict.

A certain US president who is a Constitutional law scholar has run into this at least three times and professes to be really surprised every time it happens.

see generally: http://www.nytimes.com/2013/07/14/us/obama-remark-is-complic...


To be fair, the Uniform Code of Military Justice is wholly separate from the Constitution and the modern form (including the unlawful command influence clause[1]) has only really existed since 1950.

He probably should have been more wary after commenting on the Chelsea Manning case, but it's worth noting that there have been many judges who haven't dismissed cases after his remarks, so at the very least the propriety of his statement (which was fairly tame and limited to calling for discharge once convicted) is debatable.

[1] http://www.au.af.mil/au/awc/awcgate/ucmj.htm#837.%20ART.%203...


We have something similar going on in the UK with Cameron making commentary and publicly taking sides on court proceedings.

http://www.bbc.co.uk/news/uk-england-london-25350419

http://www.bbc.co.uk/news/uk-politics-28014035


While that's not good, it's a very different thing. Those two examples involved normal civilian courts, the proceedings of an independent judiciary; a President or Prime Minister has the "power of the bully pulpit", but no formal power. Our minimalist Constitution goes so far as to say Federal judges' compensation "shall not be diminished during their Continuance in Office", i.e. the other branches of government can't dock their pay.

In military courts, the judges and jury are in the chain of command, so as patio11 put it, "the military courts should dismiss cases rather than having military officers be, essentially, ordered to convict." Hence the formal name "unlawful command influence".


What this really highlights is that lay people trust computer forensics people to be experts, and often that trust is misplaced. Unfortunately, this isn't the only example of misplaced trust. In several places, Zdziarski laments that computer forensics doesn't live up to the standards of proper "science." Yet his trust in the rest of forensic "science" is also misplaced. Pretty much everything besides DNA forensics, and maybe fingerprints, is junk: http://lst.law.asu.edu/FS09/pdfs/Koehler4_3.pdf (page 4).

Indeed, the National Science Foundation has been looking into the field and has been shocked by how unfounded it is: http://www.nsf.gov/pubs/2013/nsf13120/nsf13120.jsp ("While the report acknowledges that 'the forensic science disciplines have produced valuable evidence that has contributed to the successful prosecution and conviction of criminals as well as to the exoneration of innocent people,' it cites a need for systematic research to validate the various disciplines’ underlying assumptions and methodologies, adding that the 'forensic science ... communities will be improved by opportunities to collaborate with the broader science and engineering communities.'")

All that is just a polite way of saying: "holy shit your techniques lack real scientific foundations; you guys need to hire real scientists."


On top of the junky aspects of forensic "science" are many bad practices. 1600+ cases in Massachusetts were called into question due to a crime lab employee who told police and prosecutors what they wanted to hear. She was a hero until she wasn't.

She was convicted of a crime and got a short prison term. The investigation found she "acted alone."

Is she an outlier?


A sad read on many levels.

"I investigated a little. As it turns out, this particular manufacturer first copies the iOS file system onto the Windows partition that their software is running on, and then pulls the timestamp information off of the copy of the data."

"Irresponsible, to say the least. But this is the quality of the forensics software assisting our government and military. Poorly written, over-priced assumptionware."

Also, the OP's professional frustration is endearing -- typical woes of an expert who "knows and cares", observing snakeoil barons become rich on bullshit :-)


I think you should understand that America's economy has a long and illustrious history of "snakeoil barons becoming rich on bullshit". It's kind of the very basis of our whole system.


"...This, and many other types of artifacts are often either completely overlooked by numerous commercially sold, expensive-as-hell tools, or in the case of at least one tool – seemingly made up data. All of these came into play in this case and would later play a role in its outcome..."

So -- prosecutors and investigators don't understand the tools they use to search phones, judges give them wide leeway in reporting such "evidence", AND some commercial tools make up data to return?

Does this not deeply concern anybody in any of the branches of the government? I hope I'm not being alarmist, but that statement, if true, seems to me to be incredibly damning of the entire criminal justice system. It's like saying the prosecutors determine who's guilty simply by selecting them for trial. The rest of the work is simply acquiring and configuring the appropriate "tools". (Note that it doesn't say they do this on purpose. I guess that's something.)


I wonder if anyone has ever been wrongfully convicted because of poor forensic software. Those programmers are incredibly unethical people.

I'm familiar with the food-on-the-table argument, but wow, possibly getting innocent people locked up just because you don't know what you're doing is something. It can't feel good to receive the double whammy of knowing you're incompetent and that you're incompetent enough to ruin lives.

Would I do it? I don't know. Through good fortune, I've never been, as a programmer, in the situation where I worried about losing my job and being unable to find one. So maybe those of us who feel we would never do something like this (work on critical health software while ignorant, or on forensic software while clueless) just don't know that it is easy when your livelihood is threatened. Still really unethical, though.


I used to work in pressure equipment control software.

Quite literally, the software malfunctioning (ignoring physical safety mechanisms which should prevent it) could cause an explosion that would destroy the lab, send the equipment rocketing into the air if not properly bolted into concrete, and kill anyone who happened to be in the area when it happened. The pressure differential of the shockwave was enough to seriously injure you with sudden compression and was strong enough to tear limbs off. Even with physical failsafes, that only reduced the damage to a sudden jet of highly pressurized, unbreathable air, which was dangerous in its own ways.

I would have quit that job (even if it meant going back to painting houses or tutoring high school kids or serving coffee) if it had for a moment required that I not take every opportunity to make sure the software was as safe and predictable as it needed to be. This included consulting with staff from Boeing on good engineering practices for mission-critical safety gear, using languages outside of the most popular ones (i.e., languages with good tooling for safety and reliability), etc.

I don't think it's acceptable that we hold people who make forensic software to a different standard than those who make airplanes or industrial safety systems - they're ruining lives just the same.

> I wonder if anyone has ever been wrongfully convicted because of poor forensic software.

Not only this: people have been convicted because digital expert witnesses were just lazy in their analysis and only looked at the surface level.

One of my favorite digital forensic instructors from my time in college almost helped send a guy to prison for child pornography by half-assing his analysis. He then decided that he was going to do the very best job he could even if he hated the guy's guts, took a second look at the evidence before submitting his report, and discovered that it was likely a coworker who had looked up the child porn on the accused guy's work computer.


People have literally died due to buggy software: https://en.wikipedia.org/wiki/Therac-25

I think it's reasonable to assume that people have also been wrongfully convicted due to buggy forensics software as well, given just how hard it is to get it right.


My sources say: yes.

There exists software designed to confuse forensic tools and convincingly plant evidence; and while that software is imperfect, the tools are largely powerless against it.


I've been a digital forensics software developer for a decade. There's plenty of buggy software, yes, but I've never heard of software that deliberately plants evidence.

FLETC and other training classes instill in investigators the necessity of manually verifying and testing their results for their reports and/or court testimony.

Most cases involve child pornography. Most of the time it's known child pornography (i.e., there's a known victim, and there are likely other folks who have been convicted of possessing the images). Most of the time the suspect takes a plea bargain.

It's the rare case that involves figuring out who-did-what-when events on a computer based on trace artifacts. So, the good news is that they are not the norm; the bad news is that only the best examiners (on either prosecution or defense) can typically handle them.


>work on critical health software while ignorant

This struck a nerve because that's how I feel in my position. I really had no idea what went into the products that I'm now working on. I feel ignorant, but someone's giving my code the okay and occasionally throwing some back, so I must be doing something right. Someone said, "He looks like he can handle this job" and here I am with no prior experience in this particular field (I did have some programming experience). I wonder how many juniors these forensic software companies employ?

On one hand, mission-critical companies should only hire people who are experts in programming, experts in safe and reliable programming, and experts in the domain they're programming in. (Note: actual experts, not "i've-done-it-once-or-twice-so-i'm-the-department-expert-because-no-one-else-is-really-working-on-it" experts.)

On the other hand, it's incredibly hard to find all three of those things in sufficient numbers to staff a team capable of handling large and complicated software. With employees in general changing positions more often (sometimes on a whim alone, never mind what the world needs), it's even harder to cultivate 5+ year employees who really know their way around the software and the business.

There is just a huge volume of information and practice required to become that individual, and you run into daily practical concerns if you hire only these experts. You have to be fed challenges that teach you so you can eventually become a real expert, but you need competent oversight -- a mentor, really. And sometimes you just don't have the luxury of tackling problems that are 'small enough'. Sometimes your mentor really doesn't have time to teach to the level of detail needed because, let's face it, there's still a business behind all of this. And then they should be competent in security, performance, and a host of other skills, because software is just so connected and reliant on the other parts of the system.

So now you need someone with all of those previous skills and the ability to teach people well. These newly minted experts will have to pass down the knowledge as well.

How do you cultivate experts without also creating a little danger to your customers and their customers and maybe even the general population? At some point, that trainee is going to have to make changes and implement things without a safety net behind him. He's got to do this enough until he becomes that actual expert, but that takes a really fucking long time, and I don't think the market is really motivating people to take that route ("Why not just make some websites for 80k a year and save myself all of that stress and an early death?").

If someone else makes a big mistake, most of that time you spent to become better becomes worthless when your firm hits the front page. Now you're associated with someone else's problem and that's going to affect your chances of a job. So not only do you have to watch out for your own mistakes, but also your peers' mistakes as well. While you may not have to resort to flipping burgers, you probably won't be able to get a job in a critical environment for a while.


Not to diminish the difficulty of accomplishing this, but there is a good answer to your question: it requires both a strong culture of quality assurance (this is what you're talking about with mentors and watching each others' backs), and a strong specialization of quality assurance, made up of the experts. Unfortunately, many software organizations treat the specialization like a joke of a useless roadblock and only pay lip service to the culture. I think this may be because lots of software truly doesn't need to be very high quality, while other software truly does, and determining which kind you're making and justifying the additional cost of quality (which is enormous) is not straightforward.

In the case of this specific article, I find it odd that the blame is laid on the non-forensic-expert software engineers, rather than the companies that employ them, seemingly without the support of a tightly integrated set of experts. There is nothing wrong with dividing labor between creating software and defining and verifying what the software does. Both jobs are difficult in specialized ways. It seems that, in the opinion of one expert at least, these companies are just doing a poor job of QA.


All the time.

If you want to be horrified, read about the ludicrous bullshit that is a breathalyzer. The fundamental problem seems to be threefold: first, as anyone not an idiot would expect, animals process alcohol differently, so at best you have a model relating alcohol intake to alcohol present in the lungs. Second, the programming appears to be ludicrously bad [1]. Pigs and manufacturers have fought and fought to avoid having to share the code behind these devices or present any proof that they measure what they purport to measure besides "Trust me." When competent engineers have finally gotten their hands on the code, you'll be unsurprised to discover it's full of bugs. Third, even accepting the BAC-to-gas model arguendo, it's an open question whether these devices measure what they purport to measure.

Just a few representative issues when competent software engineers have got their hands on the code:

   - Of the available 12 bits of A/D precision, just the 4 most-significant
   bits are used in the actual calculation. This sorts each raw blood-alcohol
   reading into one of 16 buckets. (I wonder how they biased the rounding on
   that.)
   - Out-of-range A/D readings are forced to the high or low limit. This must
   happen with at least 32 consecutive readings before any flags are raised.
   - There is no feedback mechanism for the software to ensure that actuated
   devices, such as an air pump and infrared sensor, are actually on or off
   when they are supposed to be.
   - The software first averages the initial two readings. Then it averages the
   third reading with that average. Then the fourth reading is averaged in,
   etc. No comments or documentation explains the use of this formula, which
   causes the final reading to have a weight of 0.5 in the final value and the
   one before that to have a weight of 0.25, etc. (See the sketch after this
   list.)
   - Out-of-range averages are forced to the high or low limit too.
   - Static analysis with lint produced over 19,000 warnings about the code
   (that's about three errors for every five lines of source code).
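That averaging scheme deserves to be seen in numbers. Here's a toy reconstruction in Python, built purely from the description above (my own sketch, not the device's source code), feeding unit impulses through to expose the hidden weighting:

    # Reconstruction of the "average of averages" scheme described above.
    def running_average(readings):
        avg = (readings[0] + readings[1]) / 2.0   # average the first two
        for r in readings[2:]:
            avg = (avg + r) / 2.0                 # fold each new reading in
        return avg

    # Feed a 1.0 impulse through each position to see that reading's weight.
    n = 5
    for i in range(n):
        readings = [0.0] * n
        readings[i] = 1.0
        print(i, running_average(readings))
    # Prints weights 0.0625, 0.0625, 0.125, 0.25, 0.5: the last reading
    # carries as much weight as all the earlier readings combined, which is
    # presumably not what anyone calibrating the device intended.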
I would suggest we move to an impairment model of punishment rather than a blood-alcohol one. Regardless, it's a mockery of a justice system to have defendants convicted based on the flawed source code of a flawed model of human biology, one that prosecutors have successfully loopholed their way into not having to defend, and that has essentially no publicly available evidence proving it measures what it purports to measure.

see also http://digital.law.washington.edu/dspace-law/bitstream/handl...

[1] http://embeddedgurus.com/barr-code/2009/11/breathalyzer-sour...

edit: and just so we're clear, drunk drivers are utter assholes. But it's a basic principle that you need a high likelihood of guilt to punish people. These breathalyzers seem like utter bullshit, used mostly as a pseudo-scientific gloss to help pigs + prosecutors punish those whom they wish to punish.


So, I read through the article and could not find what was concretely wrong, technically, with the data. Does he state that anywhere? Thanks.


The details are of course confidential. He says:

"There are reasons I’m not going to dig into the details of the case. Certain people that were involved could easily be pinpointed by revealing technical details that could be pieced together with news reports, and help build a story in your mind that would probably be inaccurate. The details aren’t so important as the errors that were made. All you need to know from a technical perspective is right here: some of the types of information that these commercial tools were (and likey still are) misreporting is significant. Evidence and timestamps of a device erasure event. Evidence of a backup restore event. Application usage dates. Application deletion events and timestamps. File access times. This, and many other types of artifacts are often either completely overlooked by numerous commercially sold, expensive-as-hell tools, or in the case of at least one tool – seemingly made up data. All of these came into play in this case and would later play a role in its outcome."


"The details are of course confidential. He says:"

I kept having to skip paragraphs to get to the juicy bits. Eventually, I gave up and closed the tab. I don't much like this guy's writing style, as it constantly goes off on tangents that don't really add anything to the story, much less the baity title.


Me neither. It seemed like a fascinating read, but I wish he had trimmed the soapbox talk about his own credentials and bad software engineers, and instead provided the facts about the case (at least what changed the case and could have been revealed).


Yeah, he doesn't really discuss it. It's just the usual mix of crypto-anarcho/libertarian/fundamentalist paranoia I've come to expect.

He may indeed not be able to reveal certain details of the case. However, he'd certainly be capable of reproducing whatever bugs he found in the unnamed software (likely EnCase or FTK) and discussing them. Instead of science, though, he merely provides lip service thereto, along with political commentary, irrelevant asides, and run-on sentences.


Sadly, forensic science can actually be a lot worse than the example here.

http://www.huffingtonpost.com/2011/09/01/michael-west-fabric...


An Example of Pathetically Self-Aggrandizing Twaddle, Nearly at Its Worst.


You seem to have called a spade a spade and some folks don't seem to like that. Yes, that page (the article and the byline) does seem a little anxious to establish the author's credentials. But that should be taken separately from the content.

I'm surprised that the article got so many up-votes considering it has so little technical detail.


Jeffrey Sinclair is the name of the first commander of Babylon 5. I had to check that it was a real name, but it seems that it is also the name of a real brigadier general.


This article makes me wonder: does forensic software undergo any kind of 3rd party testing for accuracy? Or are we literally just taking someone's word that it is producing accurate results?


There's some, but it's far too little to be comprehensive. Some law enforcement labs have entire QA departments dedicated to validating internal procedures, including how particular tools should be used.
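To make "validating internal procedures" concrete: at its simplest, tool validation means building evidence with a ground truth you control and checking that the tool reproduces it. A toy known-answer test in Python (the tool name, flag, and output format are hypothetical stand-ins for whatever a lab actually runs):

    import os, subprocess, tempfile

    # Ground truth: a file whose modification time we set ourselves.
    fd, path = tempfile.mkstemp()
    os.close(fd)
    truth = 1400000000                        # a fixed, known epoch timestamp
    os.utime(path, (truth, truth))

    # Run the tool under test and parse the timestamp it reports.
    # "exampletool --mtime" is a made-up interface; substitute the real one.
    out = subprocess.run(["exampletool", "--mtime", path],
                         capture_output=True, text=True).stdout
    reported = int(out.strip())

    # The actual validation: the tool must reproduce the known answer.
    assert reported == truth, "tool misreported a timestamp it was handed"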


The courts' interaction with science has always been tenuous; the story of arson investigators, fire science, and criminal justice is similarly terrifying: http://www.newyorker.com/magazine/2009/09/07/trial-by-fire


TL;DR: "computer says no" all over again.


The OP should start hacking:

http://www.sleuthkit.org/
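For the unfamiliar: The Sleuth Kit reads filesystem metadata straight out of a disk image instead of copying files onto the analysis machine first, which sidesteps exactly the timestamp-trashing mistake the article describes. A rough sketch using its Python bindings, pytsk3 (the image name and file path are hypothetical placeholders):

    import pytsk3

    # Open a raw disk image and the filesystem inside it.
    img = pytsk3.Img_Info("evidence.dd")      # hypothetical image file
    fs = pytsk3.FS_Info(img)

    # Metadata comes from the on-image inode itself, so these are the
    # original timestamps; no intermediate copy exists to pollute them.
    f = fs.open("/private/var/mobile/Library/SMS/sms.db")  # illustrative path
    meta = f.info.meta
    print("modified:", meta.mtime)            # epoch seconds, from the image
    print("accessed:", meta.atime)
    print("changed: ", meta.ctime)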


The author states they were shocked at how Sinclair seemed to get off easy, with just a slap on the wrist. I'm not sure why that's shocking.

It's been painfully obvious for more than a decade that there exists a certain class of people in the US who are above the law, e.g. James Clapper.


Well, it looked like the powers-that-be wanted to make an example out of him. So I could see it being surprising that he got a slap on the wrist after having those "guns" pointed at him.


I read that line to mean they were shocked that he received as much punishment as he did. The phrase just before that talks about damaging the prosecution's case.



