
I think one of the issues is that fixing a problem is a lot harder in ML than in software engineering. You know that the model fails on this particular data point, but that knowledge doesn't translate directly into a fix. In software engineering, once you've identified a bug, written a fix, and opened a pull request, then as long as you can test the code against the conditions it failed on, the problem is solved. With modern ML, and especially with neural nets, unless you have a way to spin up a data engine to track the failures you're seeing and collect similar points, your problem is not fixed.
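To make the contrast concrete, here's a minimal sketch in Python of what a bare-bones version of that "data engine" loop could look like (all names here are hypothetical, not from any particular system): record each failing point so similar cases can be mined later and folded back into evaluation and training.

    import json

    def log_failure(example, prediction, label, path="failures.jsonl"):
        # Append the failing case to a JSONL file so similar points can be
        # mined later and added to the eval/training sets.
        with open(path, "a") as f:
            f.write(json.dumps({"example": example,
                                "prediction": prediction,
                                "label": label}) + "\n")

    def evaluate(model, dataset):
        # The single known failure is only a symptom; keep collecting them
        # so the fix can be verified against a whole slice of similar data.
        for example, label in dataset:
            prediction = model(example)
            if prediction != label:
                log_failure(example, prediction, label)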



Certainly resonates with me. When the problem is something like a list being the wrong shape or two lists not maintaining parallel order, it's basically invisible unless you load the mental model of the code and think deeply about it. It's not like you're going to notice that your list of length 2,340,383 should start with [0.12, 1.67, 0.66, ...] instead of [0.412, 0.567, 0.23, ...].
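As a rough illustration (hypothetical names, not from any particular codebase), this is the kind of cheap sanity check that turns those invisible bugs into loud failures:

    def check_parallel(features, labels, expected_head=None, tol=1e-6):
        # Parallel lists must stay the same length, or pairs silently shift.
        assert len(features) == len(labels), (
            f"length mismatch: {len(features)} vs {len(labels)}")
        # Spot-check a few known leading values so a reordered or rescaled
        # list fails loudly instead of silently skewing results downstream.
        if expected_head is not None:
            for got, want in zip(features, expected_head):
                assert abs(got - want) < tol, f"expected ~{want}, got {got}"

    # e.g. check_parallel(scores, ids, expected_head=[0.12, 1.67, 0.66])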


I remember running into a paper from Google, circa 2017 IIRC, discussing the maintainability issues with machine learning models, but I haven't been able to track it down since. Does anyone know which one this is and have a link?



Looks like I just found a similar one from the same group too. Thanks!

https://proceedings.neurips.cc/paper_files/paper/2015/file/8...



