How does this differ from active learning? When would you use which if you don't have a sufficiently large training dataset? Would you combine both approaches? If so, how?
Better still, train a learning machine (or a collection thereof) based on their expertise, and use that machine to augment the experts and reduce tedium/repetition, or even reduce error rates below what you'd see in the field otherwise.
I've been pondering this analogy. I like it, but if AI is the new electricity, do we need more Edisons or Teslas? Everyone jumping into AI and learning deep learning seems to be learning how to generate electricity itself, rather than creating the light bulb and the other things that run on electricity: building user-facing apps on top of it.
I don't know, just check out a few Kaggle competitions and how pragmatically the winning teams approach their solutions. It's most often a combination of tried-and-true techniques, used in an ensemble, with some smart feature selection. Anecdotally, there's plenty of ready-to-use ML tech available nowadays that I, as a novice, was able to go from zero to a working Gradient Boosting classifier within a few days. For me that's the definition of applying the techniques without trying to earn a PhD in the field.
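To illustrate how short that zero-to-working path can be, here's a minimal sketch using scikit-learn's GradientBoostingClassifier on one of its built-in toy datasets. The dataset choice and hyperparameters are illustrative assumptions, not the commenter's actual setup:

```python
# Minimal "zero to working Gradient Boosting classifier" sketch.
# Dataset and parameters are illustrative, not tuned for a competition.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a built-in binary classification dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a gradient-boosted ensemble of shallow decision trees.
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
clf.fit(X_train, y_train)

# Evaluate on the held-out split.
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.3f}")
```

A real Kaggle entry would add cross-validation, feature engineering, and likely an ensemble of several such models, but the core workflow really is this small.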
"Ugh" is not a substantial rejoinder. It's an apt analogy in my opinion. People keep waiting for AI to break out, and the argument here is it's not alone a product, it's a requisite tool for building them. AI, of course, is dependent on copious quality data, such that it fuels AI.