Hacker News

It's been a long time since I've done anything in machine vision, but, at least back in the day, industrial applications tended to stay a lot simpler than other uses of machine vision: lower-resolution images, black-and-white imaging, support vector machines instead of neural nets (let alone deep learning), all that good stuff. They could get away with it because they can much more carefully control the input domain - controlled lighting conditions, consistent orientation of the thing being imaged, and so on. So they don't need 10^9 or more weights' worth of ocean-boiling complexity the way you would for something like self-driving cars, or for impressing your friends with your ImageNet performance.

And if you can get away with running a linear SVM on a 1-megapixel black-and-white image, that's roughly a million weights - a few megabytes in float32 - so your model fits in 1GB with a couple of orders of magnitude to spare.
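A quick back-of-envelope check on that claim, in plain Python (the pixel count and float32 weights are assumptions, since the comment doesn't pin them down):

```python
# Back-of-envelope memory check for a linear SVM on a 1-megapixel
# grayscale image: one weight per pixel, plus a bias term.
n_pixels = 1_000_000          # assumed: 1 MP, single channel
bytes_per_weight = 4          # assumed: float32
model_bytes = (n_pixels + 1) * bytes_per_weight

one_gb = 1024 ** 3
print(model_bytes)            # ~4 MB of weights
print(one_gb // model_bytes)  # how many such models fit in 1GB
```

So a dense linear model at that resolution is a few megabytes, not gigabytes; the headroom only shrinks if you move to kernel SVMs, where the stored support vectors dominate.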

Ok, what you said about lower-res images makes sense. Lower variation in images maybe means you could get away with fewer weights/more quantization - you could afford to lose more information in the model. Maybe 1GB can be sufficient then.

There's no reason to use an SVM over a (C)NN nowadays though.


Sure there is. With an SVM, you can pick different kernels to more carefully engineer specific behaviors: what kinds of errors your model is likely to make, and so on. You can get a good, stable model on less training data, which is great when your training data is expensive to produce. (A situation that I'm guessing is not at all uncommon in industrial automation.) You get all that crispy crunchy large-margin goodness. Stuff like that.
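As a toy illustration of the kernel-choice point: the kernel is the similarity measure the margin is computed in, so swapping it changes what decision boundaries (and error modes) you can get. A stdlib-only sketch of three standard kernels, with made-up example vectors:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def linear_kernel(x, y):
    # Plain inner product: linear decision boundaries only.
    return dot(x, y)

def poly_kernel(x, y, degree=3, c=1.0):
    # Implicitly maps into the space of all monomials up to `degree`.
    return (dot(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    # Local similarity: decays with squared distance, so far-away
    # points barely influence the boundary near x.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

a, b = [1.0, 0.0], [0.0, 1.0]
print(linear_kernel(a, b))   # 0.0 - orthogonal, "unrelated" linearly
print(poly_kernel(a, b))     # 1.0 - the +c term still couples them
print(rbf_kernel(a, b))      # between 0 and 1, set by gamma
```

Each choice bakes in a different prior about the data, which is exactly the kind of engineering lever a fixed CNN architecture doesn't expose as directly.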

I'd absolutely focus on ANNs if I were an academic researcher, because that's the hot flavor of the month that's going to get your career the attention it needs to bring in funding, jobs, etc. I'd also pick it for Kaggle-type stuff, where there's effectively no real penalty for generalizing poorly. Bonus points if you consume more energy to train your model than Calgary does to stay warm in the winter.

In a business setting, though, I would only default to ANNs if it were holistically the best option for the problem domain. By "holistically" I mean, "there's more to it than chasing F1 scores at all costs." The business considerations that caused Netflix to never try productionizing the prize-winning recommendation engine, for example, are always worth thinking about. Personally, I'm disinclined to look past linear models - not even as far as kernel methods - without strong reason to believe that I'm dealing with a curve that can't be straightened with a simple feature transformation. Complexity is expensive, and needless complexity is a form of technical debt.
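A minimal sketch of what "straightening a curve with a simple feature transformation" means, on hypothetical data from an exponential curve - after a log transform the relationship is exactly linear, so ordinary least squares recovers it:

```python
import math

# Assumed toy data: y = 2 * e^(0.5 x), nonlinear in x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]

# Feature transformation: log(y) = log(2) + 0.5 * x is linear in x.
zs = [math.log(y) for y in ys]

# Ordinary least squares on (x, log y).
n = len(xs)
mean_x = sum(xs) / n
mean_z = sum(zs) / n
slope = (sum((x - mean_x) * (z - mean_z) for x, z in zip(xs, zs))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_z - slope * mean_x

print(round(slope, 3))                # 0.5 - the rate constant
print(round(math.exp(intercept), 3))  # 2.0 - the scale factor
```

When a transform like this exists, the linear model is cheaper to train, easier to audit, and carries none of the operational debt of a deep net.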


> You can get a good, stable model on less training data, which is great when your training data is expensive to produce

Huh? SVMs don't perform better than NNs on less training data.

I'm sorry, but the rest of what you said is out of date and wrong. CNNs work better than SVMs for CV tasks. There's no reason to use SVMs anymore for CV, and no one in their right mind does.
