In fact something as simple as naive Bayes will work reasonably well for that.
I'm not sure if you are aware, but in (say) image classification it's pretty common to take a pre-trained net, freeze the weights of every layer except the last, and then retrain that last layer for a new classification task. You can even drop the last layer entirely, feed the extracted features to an SVM, and get entirely adequate results in many domains.
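To make the freeze-and-retrain idea concrete, here's a rough Keras sketch (the 10-class head, dataset variables, and training settings are placeholders of my own, not from any particular tutorial):

    from tensorflow.keras.applications import VGG16
    from tensorflow.keras import layers, models

    # Load a net pre-trained on ImageNet, without its classification head.
    base = VGG16(weights="imagenet", include_top=False, pooling="avg")
    base.trainable = False  # freeze all pre-trained weights

    # Attach a fresh final layer for the new task (hypothetical 10 classes).
    model = models.Sequential([
        base,
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(new_task_images, new_task_labels, epochs=5)  # placeholder data

Only the small dense layer gets trained, so this works even with fairly little labeled data for the new task.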
Transfer learning at the moment works within one domain (say images), because the low-level shapes are still similar, but not between different domains of data.
Here's an example using VGG and Keras: http://blog.keras.io/building-powerful-image-classification-...
And here a similar thing for Inception and TensorFlow: https://www.tensorflow.org/versions/r0.8/how_tos/image_retra...
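For the "drop the last layer and use an SVM" variant, a rough sketch (not taken from either link; the train/test variables are placeholders) might look like:

    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.applications.vgg16 import preprocess_input
    from sklearn.svm import LinearSVC

    # Use the pre-trained net purely as a fixed feature extractor.
    extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

    def features(images):
        """images: float array of shape (n, 224, 224, 3), values in [0, 255]."""
        return extractor.predict(preprocess_input(images), verbose=0)

    # train_images / train_labels are assumed to exist for the new task.
    # clf = LinearSVC().fit(features(train_images), train_labels)
    # predictions = clf.predict(features(test_images))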