That's nice to hear! I'm doing my master's thesis on statistical learning and I often think about how unglamorous this field has become. No Bayesianism, less engineering; but having probabilistic guarantees on your out-of-sample results, as well as on sample complexity, no matter the underlying distribution, can be quite beneficial.
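To make the "no matter the underlying distribution" point concrete, here is a minimal sketch of the classic distribution-free guarantee for a finite hypothesis class, via Hoeffding's inequality plus a union bound. The function name and parameters are my own illustration, not from any particular library:

```python
import math

def pac_sample_complexity(h_size: int, epsilon: float, delta: float) -> int:
    """Number of i.i.d. samples sufficient so that, with probability
    at least 1 - delta, EVERY hypothesis in a finite class of size
    h_size has empirical risk within epsilon of its true risk.
    Derived from Hoeffding's inequality plus a union bound:
        n >= ln(2 * h_size / delta) / (2 * epsilon**2).
    The bound holds for any data distribution (distribution-free)."""
    return math.ceil(math.log(2 * h_size / delta) / (2 * epsilon ** 2))

# e.g. 1000 hypotheses, 2% tolerance, 95% confidence
n = pac_sample_complexity(1000, 0.02, 0.05)
```

Note the characteristic trade-offs: the sample size grows only logarithmically in the class size and in 1/delta, but quadratically in 1/epsilon.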
If you have time, I'd also read something like David MacKay's information theory textbook [1] for more of a Bayesian perspective. Interviewers did seem to appreciate having multiple perspectives on and interpretations of basic results, though it's less practical.
Given you're making recommendations on the topic of statistics, exactly how many companies did you talk to in order to reach the recommendations you're providing, and what, if any, bias was there in your job search?
Good points. I'm coming from a computational physics research background and applied for a few data science positions at startups, a large social network, and a private ML research group, so not that many overall; beware of the small sample size.
The smaller startups seemed to want more "data engineering" experience.
What do you mean by "what, if any, bias was there in your job search"?
Using concentration inequalities on Lipschitz, convex learning algorithms to derive generalization bounds. The seminal papers here are "Stability and Generalization" by Bousquet and Elisseeff (2002), and those by Shalev-Shwartz.
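A sketch of the kind of bound this yields, assuming the common form from that line of work: an algorithm with uniform stability beta satisfies, with probability at least 1 - delta, a generalization gap of at most 2*beta + (4*n*beta + M)*sqrt(ln(1/delta)/(2n)), and for L2-regularized ERM with an L-Lipschitz convex loss one can take beta on the order of L**2 / (lam * n) (exact constants vary across statements; the function and its signature are my own illustration):

```python
import math

def stability_generalization_bound(n: int, lipschitz: float, lam: float,
                                   loss_bound: float, delta: float) -> float:
    """Bousquet & Elisseeff (2002) style guarantee: with probability
    at least 1 - delta over the n training samples,
        true_risk <= empirical_risk + 2*beta + (4*n*beta + loss_bound)
                                      * sqrt(ln(1/delta) / (2*n)),
    where beta is the uniform stability of the algorithm. For
    regularized ERM with an L-Lipschitz convex loss and regularization
    strength lam, beta is taken here as lipschitz**2 / (lam * n)
    (an assumed, order-of-magnitude form; constants differ by paper)."""
    beta = lipschitz ** 2 / (lam * n)
    return 2 * beta + (4 * n * beta + loss_bound) * math.sqrt(
        math.log(1 / delta) / (2 * n))

# The gap shrinks as the training set grows: note that with
# beta ~ 1/n the term 4*n*beta stays constant, so the whole
# bound decays like 1/sqrt(n).
gap_small = stability_generalization_bound(100, 1.0, 0.1, 1.0, 0.05)
gap_large = stability_generalization_bound(10000, 1.0, 0.1, 1.0, 0.05)
```

The key point of the stability route is that no capacity measure of the hypothesis class (VC dimension, covering numbers) appears; the bound depends only on how much the algorithm's output changes when one training point is replaced.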