
Really good humor :)

But in reality, a sparse NN simply loses performance, i.e. it loses precision and recall. Lower precision means a larger probability of errors; lower recall means that if you work with a piece of information made up of several parts, e.g. predicates, the network will not see all of the predicates.
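For reference, a minimal sketch of how precision and recall are computed (plain Python; the counts are hypothetical, just to illustrate the definitions):

    def precision_recall(tp, fp, fn):
        # tp: true positives, fp: false positives, fn: false negatives
        precision = tp / (tp + fp)  # share of the model's answers that are correct
        recall = tp / (tp + fn)     # share of the true items the model found
        return precision, recall

    # e.g. 8 of 10 predicates found, with 2 false alarms:
    # precision_recall(tp=8, fp=2, fn=2) -> (0.8, 0.8)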

To be concrete: a well-trained full-scale NN is usually considered to achieve 70-90% precision and recall, but if you use only a small fraction of the weights, performance usually drops to about 40-70%, which is still good enough for many cases given the savings in size and computation.
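The comment doesn't say how the small fraction of weights is selected; one common approach (an assumption here, not something stated above) is magnitude pruning, where only the largest weights by absolute value are kept:

    import numpy as np

    def magnitude_prune(w, keep_fraction=0.1):
        # Zero out all but the top `keep_fraction` of weights by magnitude.
        threshold = np.quantile(np.abs(w), 1.0 - keep_fraction)
        return np.where(np.abs(w) >= threshold, w, 0.0)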



