
You can do this now(?!)

In fact, something as simple as naive Bayes will work reasonably well for that.

I'm not sure if you're aware, but in (say) image classification it's pretty common to take a pre-trained net, freeze the weights of every layer except the last, and then retrain that last layer for a new classification task. You can even drop the last layer entirely, train an SVM on the extracted features, and get entirely adequate results in many domains.

Here's an example using VGG and Keras: http://blog.keras.io/building-powerful-image-classification-...

And here's a similar thing for Inception and TensorFlow: https://www.tensorflow.org/versions/r0.8/how_tos/image_retra...

Transfer learning at the moment works within one domain (say, images), because the low-level features are still similar, but not between different domains of data.


Sure, that's what I was pointing out.

However: Zero-Shot Learning Through Cross-Modal Transfer http://arxiv.org/abs/1301.3666
