Hacker News new | past | comments | ask | show | jobs | submit login

In a country of 330,000,000 people, those who have shot up schools may number around 330, or about 0.0001%.

It is impossible to assign any single cause to events with such low occurrence rates unless you have billions of examples.

(Yes, data from the rest of the world can provide counterfactuals, but it is still mathematically impossible because there are hundreds of variables at play.)
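The base rate above can be pushed one step further: even a hypothetical, very accurate predictor of such a rare event would be swamped by false positives. A minimal Python sketch of the Bayes' rule arithmetic (the 99% accuracy figures are invented purely for illustration):

```python
# Base-rate illustration: even a very accurate hypothetical predictor
# of a one-in-a-million event is dominated by false positives.
# All accuracy numbers below are made up for illustration.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(event | predictor fires), via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

base_rate = 330 / 330_000_000        # ~0.0001%, as in the comment above
ppv = positive_predictive_value(base_rate, 0.99, 0.99)
print(f"P(event | flagged) = {ppv:.6f}")   # roughly 1 in 10,000
```

Even at 99% sensitivity and 99% specificity, nearly all flagged individuals are false positives, which is one way to see why assigning causes at this base rate is so hard.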




It is possible, and airliner crashes show that. They happen even less often, and almost every single one had a precisely defined cause discovered after rigorous investigation. The reason we do this despite the events' small chance of occurrence is that the impact is enormous, and we give it as much scrutiny as we possibly can to prevent it from happening again. The same level of scrutiny is not applied to school shootings.


I've yet to meet a human being that came with an operator's manual, a maintenance manual, schematics, engineering diagrams, a diagnostic and logging system designed to survive catastrophic system failure, or a multi-billion dollar megacorp full of thousands of people whose jobs depend on being able to perform root cause analysis after catastrophe. There's a whole dimension of failure analysis that is uniquely permitted by human-designed engineering systems, thanks to our comprehensive understanding of their construction and capabilities, which allows us to narrow down root causes for failures with great precision.

We can do that to some extent with people, for their less complex and more common failure mechanisms (medicine)... but I don't think "deciding to go on a shooting spree at a school" is the kind of failure mode you can pinpoint to some specific hardware or software phenomenon (at least not today), let alone how to fix it. The level of scrutiny applied to human-designed engineering systems, when instead applied to interpreting tail-end-of-distribution human behavior, does not yield an improvement in root cause specificity, and likely wouldn't yield any actionable design improvements.

Instead we can really only suggest process controls, based on inferences about which particular parts of the process have the most influence on the outcomes. There are so many inputs into the black box of human behavior, and so many functionally unverifiable assumptions about the mentality of the tail-end-of-distribution people who become mass shooters, that it's extremely challenging to identify process controls that would be both effective and feasible to implement. Harder still when many suggested process controls also contain embedded political goals that have little to do with controlling the process, but which can be speciously furthered under the guise of preventing high-impact system failures.

All this to say: this is a hard problem, composed of thousands of overlapping adjacent elements, and we have very little insight (or reliable insight-gathering mechanisms) into the system under investigation. So throwing money or effort at the investigation is a gamble that a feasible solution exists at all, rather than a process of concrete deduction with a deterministic endpoint. Maybe the optimal amount of scrutiny should be different.


You say this like there are no humans involved in the manufacture, maintenance, and flying of airplanes.


On the contrary: where humans are directly involved in the processes of aviation, we're stuck with process control most of the time, and it's clearly insufficient against motivated malfeasance.

When an airplane crashes due to a design failure, thorough investigation often yields a specific design element or combination of elements acting as a root cause; and we have a much broader range of corrective actions available, including modifications to the design for improved redundancy, fault tolerance, higher likelihood of correct manufacture, lower defect rate, easier cockpit control, etc. We can also make process changes, like modifying the maintenance schedule, updating the maintenance checklist, pre-screening critical components for correlated early signs of defect, updating pilot training manuals, etc.

There are also plenty of cases where aircraft crashes occurred due to malfeasance. In rare cases, there might be a design mechanism that can prevent malfeasance, but aircraft are overwhelmingly designed to be built, maintained, and operated by people who aren't trying to misuse them for mass violence (sort of like malls, or public schools). But there's legions of process controls implemented on the hiring and training of the people directly involved with aircraft manufacture, maintenance, and operation.

And yet, Boeing still made a plane that dropped out of the sky twice. And that wasn't even necessarily malfeasance: a good chunk of it can likely be chalked up to second- or third-order effects of cost-saving measures and siloed design teams. I guess you could make an argument for malfeasance, but something something incompetence indistinguishable from malice... in any case, the technical analysis of the design problem is totally solved, but what about the process control by which the design came to be flawed in the first place? I genuinely don't know whether they've modified or added to their process controls to prevent the same design flaw, let alone others not directly related.

Scores of new process controls were put into place in the wake of 9/11 (which, incidentally, caused roughly the same order of magnitude of casualties as all US mass shootings since, I think; there are a few different numbers floating around, depending on the threshold at which a mass shooting is recorded). Many additional "process controls", if you'll allow the tortured reading of the concept, were implemented outside of aviation: a multi-trillion dollar, two-decade war campaign, passenger luggage and body scanning, and mass surveillance systems, to name a few. Twenty years later, none of this prevented a pilot from doing some unauthorized loops in the Seattle sky before crashing, and very likely nothing except a lack of killing intent prevented him from flying his plane into a building downtown. Were the process controls implemented after 9/11 effective? I genuinely don't know the answer. Maybe aviation terrorism just has a low base rate to begin with. Maybe I'm uninformed, and our process controls have foiled numerous attempts at terrorism. It's hard to tell with rare events.
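The "hard to tell with rare events" point can be made concrete with a back-of-the-envelope power calculation: how long you would need to observe a rare event to tell whether a process control actually cut its rate. A sketch using the standard normal approximation for comparing two Poisson rates (the event rates themselves are invented for illustration):

```python
# How many years of observation does a before/after comparison need to
# detect that a process control halved a rare event's rate?
# Normal approximation to a two-sample Poisson rate test; the rates
# below are illustrative assumptions, not real figures.

def years_needed(rate_before, rate_after, z_alpha=1.96, z_beta=0.84):
    """Observation years per period (before and after) for ~80% power
    at a two-sided 5% significance level."""
    effect = rate_before - rate_after
    return (z_alpha + z_beta) ** 2 * (rate_before + rate_after) / effect ** 2

# Suppose the event happened ~0.5 times/year before a control was added,
# and the control truly halves that rate (both numbers made up):
print(f"{years_needed(0.5, 0.25):.0f} years of observation per period")
```

With these made-up numbers you'd need on the order of a century of data in each period, which is one way to see why the effectiveness question is genuinely hard to answer from event counts alone.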

Unlike with engineering design work, where you can deduce real responsibility for failure and synthesize a test case to prove the soundness of a design, debugging complex and rare psychological or sociological phenomena is substantially more inexact. Mass shooters are a statistical anomaly against the population as a whole. People are aware of process controls and can deliberately plan subversion tactics. The snarled mess of genetic predispositions and environmental insults that drives someone to terrorism is not guaranteed to be the same across multiple people, especially at the long tail of the distribution. Preventative policy proposals can be misguided or undermined for numerous unrelated reasons, never achieving the desired effect.

None of this is to say we shouldn't try to find and implement effective process controls on mass violence... I just hope to offer an explanation why mass violence can't be approached with the same rigor as human-designed engineering systems with any predictable return on investment.


A 25-year hiatus on funding research on gun violence didn’t help. https://www.nytimes.com/2021/03/27/us/politics/gun-violence-...



