> Fragile, secretive and specific techniques ... parameters will be supplied to you for a price by the inventors-researchers' companies
This seems a strange/dated/paranoid view of machine learning. Perhaps it was true historically (do you have any references?), but it doesn't ring true of the field as I've seen it these days.

Hyper-parameter selection can be tricky, and some papers do hand-wave about it when evaluating models, although I think reviewers are increasingly picking up on this kind of flaw.

At any rate, you'll find a lot of useful literature on hyper-parameter optimisation techniques, especially for the more popular and general ML models. It's recognised as an important and interesting (albeit sometimes hard and fiddly) problem, not something to be swept under the rug, and not the stuff of conspiracies.
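To make that concrete, here's a minimal sketch of one of the simplest such techniques, an exhaustive grid search with cross-validation, using scikit-learn. The model, dataset, and candidate values are placeholders I've picked for illustration, not anything from a specific paper:

```python
# A minimal sketch of open, reproducible hyper-parameter search,
# assuming scikit-learn; the model, grid, and dataset are illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate hyper-parameter values -- arbitrary choices for the example.
param_grid = {
    "C": [0.1, 1, 10],
    "gamma": ["scale", 0.01, 0.001],
}

# Exhaustive search over the grid with 5-fold cross-validation.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # the best combination found on the grid
print(search.best_score_)   # its mean cross-validated accuracy
```

The point is that the whole procedure is a dozen lines of open-source tooling, and fancier methods (random search, Bayesian optimisation, etc.) are just as openly published and implemented.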