
That's not the point I'm making. Obviously it's bad to wrongly convict people.



I guess I don't know what your point is.

I thought when you called it a "tricky one" you were expressing that it might be a bad thing if it were difficult to convict someone based primarily on audit logs.

But if you don't want people to be wrongly convicted, then surely that's a good thing, right? As we know, there's no guarantee a particular audit log is correct.


> But if you don't want people to be wrongly convicted, then surely that's a good thing, right?

Think of it like a diagnostic test, like covid tests. That sort of test has 2 measures, not one (anyone who just says "Test X is 95% accurate!" is selling you something) - sensitivity and specificity. Sensitivity is the percentage of actually-positive cases the test correctly flags as positive (true positives out of all actual positives), and specificity is the percentage of actually-negative cases it correctly clears (true negatives out of all actual negatives).

I don't want people to be wrongly convicted, no, so I want legal tests to have a very high specificity. But I could do that easily, by just throwing every case out as not guilty. The hard bit is raising sensitivity at the same time. You can't just say "if you don't want people to be wrongly convicted", because that justifies far too many things.
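To make the trade-off concrete, here's a minimal sketch (names and data are illustrative, not from the thread) showing why "throw every case out" trivially maximises specificity while sensitivity collapses to zero:

```python
# Hypothetical illustration: treat a verdict procedure as a diagnostic test
# and score it against ground truth (True = actually guilty).

def sensitivity_specificity(verdicts, truths):
    """Both arguments are lists of booleans: True = guilty verdict / truth."""
    tp = sum(v and t for v, t in zip(verdicts, truths))          # rightly convicted
    tn = sum(not v and not t for v, t in zip(verdicts, truths))  # rightly acquitted
    fp = sum(v and not t for v, t in zip(verdicts, truths))      # wrongly convicted
    fn = sum(not v and t for v, t in zip(verdicts, truths))      # wrongly acquitted
    sens = tp / (tp + fn) if (tp + fn) else float("nan")  # caught / all guilty
    spec = tn / (tn + fp) if (tn + fp) else float("nan")  # cleared / all innocent
    return sens, spec

# The "acquit everyone" procedure: no one is ever wrongly convicted
# (specificity = 1.0), but no guilty person is ever caught (sensitivity = 0.0).
truths = [True, True, False, False, False]
acquit_all = [False] * len(truths)
print(sensitivity_specificity(acquit_all, truths))  # (0.0, 1.0)
```

The point carries over directly: "no wrongful convictions" alone can't be the whole criterion, because a degenerate procedure satisfies it perfectly.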

> As we know, there's no guarantee a particular audit log is correct.

There's no guarantee anything is correct. Three witnesses could have colluded and someone might go to jail for it, but unless there's a reason to think they colluded, we don't assume that. That's the problem I'm talking about: how do we get a feel for software systems without assuming like the Post Office that they work, or like you that they don't work?



