
You don't generally look at neural network output like that.

There is generally a threshold: below X, not the class; at or above X, the class. Then you run the network with that same threshold on a known data set and compute a confusion matrix, which tells you about the error. I don't even want to know what a confusion matrix analogue for 3D geometry would look like, but I'm sure they have something.
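For a plain binary classifier that evaluation is only a few lines. A minimal sketch (the scores and labels here are made up purely for illustration; sklearn's confusion_matrix does the counting):

    import numpy as np
    from sklearn.metrics import confusion_matrix

    # Made-up model scores and ground-truth labels, for illustration only.
    scores = np.array([0.91, 0.08, 0.62, 0.35, 0.77, 0.49])
    labels = np.array([1, 0, 1, 0, 1, 1])

    threshold = 0.5  # less than this: not the class; equal or more: the class
    preds = (scores >= threshold).astype(int)

    # Rows are true classes, columns are predictions:
    # [[TN, FP],
    #  [FN, TP]]
    print(confusion_matrix(labels, preds))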

This is literally the process one goes through when taking part in this. And the error rate (specifically the lack of errors) is what everybody is talking about. A score of 90 is about as accurate as we can get with experimental measurement. It's likely that at this point the source of error is in the data set (we can only train on data we experimentally measure, and those measurements are not perfect). It's also possible that, at this point, the model has generalized so well that when it deviates from an experimental measurement it's actually correct, and the experimental value was the one that was wrong.

So no, the outstanding question is not going to be "how good is the confidence metric at telling the user to trust or not trust the results." Nobody is going to be looking at confidence values when the model gives an output; they are going to be looking at the overall error rate across a broad spectrum of proteins to get a sense of its accuracy.
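To make that concrete, a toy sketch of the kind of aggregate evaluation I mean (the per-protein scores are invented; in CASP the real metric is GDT_TS on a 0-100 scale, which is where the 90 figure comes from):

    import numpy as np

    # Invented per-protein accuracy scores on a held-out benchmark set,
    # standing in for something like GDT_TS across many targets.
    per_protein_scores = np.array([92.4, 88.1, 95.0, 71.3, 90.7, 93.2])

    # The headline numbers people actually quote: the aggregate across
    # the whole benchmark, not a per-prediction confidence value.
    print("median score:", np.median(per_protein_scores))
    print("fraction >= 90:", np.mean(per_protein_scores >= 90.0))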



