Hacker News

Explainable isn't the same as justifiable. An expert system may be able to readily explain that it determined the chance of recidivism is high due to factors x, y, and z. However, there is no guarantee that those factors correlate with recidivism at all since expert systems were simply built to emulate (human) expert decisions.

While it's very easy to do machine learning _incorrectly_, it is also possible to reasonably attribute factors to outcomes based on large quantities of data. You can also look at LIME/SHAP and other factor-contribution metrics. This seems like a significant improvement over expert systems.
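To make the "factor contribution" idea concrete, here is a toy sketch of permutation importance, one of the simpler attribution techniques in the same family as LIME/SHAP. Everything in it (the fake dataset, the threshold "model") is an illustrative assumption, not anything from the article: we shuffle one feature's column and measure how much accuracy drops, which tells us how much the model actually relies on that feature.

```python
import random

# Toy dataset: each row is (x, y, z); the label depends only on x.
random.seed(0)
rows = [(random.random(), random.random(), random.random()) for _ in range(500)]
labels = [1 if x > 0.5 else 0 for x, _, _ in rows]

# A stand-in "model" that happens to use only feature 0.
def model(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(data):
    return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

def permutation_importance(feature):
    """Drop in accuracy when one feature's column is shuffled."""
    col = [r[feature] for r in rows]
    random.shuffle(col)
    shuffled = [r[:feature] + (v,) + r[feature + 1:] for r, v in zip(rows, col)]
    return accuracy(rows) - accuracy(shuffled)

for f in range(3):
    print(f"feature {f}: importance {permutation_importance(f):.3f}")
```

Shuffling the feature the model depends on destroys accuracy, while shuffling the ignored features changes nothing; that asymmetry is the attribution. LIME and SHAP are more principled versions of the same question, which is exactly why attribution still isn't justification: a model can lean heavily on a feature that has no causal link to recidivism.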



> Explainable isn't the same as justifiable. An expert system may be able to readily explain that it determined the chance of recidivism is high due to factors x, y, and z. However, there is no guarantee that those factors correlate with recidivism at all since expert systems were simply built to emulate (human) expert decisions.

Re-reading the article now, it may very well be an expert system. The "big data" mentioned isn't training data, but pre-existing databases like bank account info, salaries, contracts, property ownership, etc. FTA:

> Disciplinary officials need to help scientists train the machine with their experience and knowledge accumulated from previous cases. For instance, disciplinary officials spent many hours manually tagging unusual phenomenon in various types of data sets to teach the machine what to look for.

As far as explaining the results, it's hard to draw any conclusion without knowing exactly what "not very good at explaining the process it has gone through" means.



