I am currently going through Statistical Rethinking [0] by R. McElreath. So far it's been great; all the code examples and practice problems help comprehension.
I've also heard good things about Probabilistic Programming and Bayesian Methods for Hackers [1] by Cam Davidson-Pilon.
Bayesian Data Analysis by Gelman (one of the authors behind Stan) is considered one of the best resources on this subject, so I suggest giving it a read. It's not the easiest book to work through, though.
Here's a course on YouTube from another author behind Stan. I'm working through it myself, so I can't say how good or bad it is; I think it uses the book too.
Why do you say that, if I may? I looked at the index and preface and it did not give me that impression at all. The only thing that surprised me is that I was reading “foundations” as “introduction”, while at least part of the book is focused more on “foundational theory”/“underpinnings” (e.g. measure-theoretic aspects, or how you can construct a lambda calculus that should demonstrate that general probabilistic programming, as in languages such as Anglican [1], can be achieved).
[1] I only have hands-on experience with Anglican and PyMC3. They differ in approach: in Anglican you write Anglican code (not Clojure) that is parsed as a model, whereas in PyMC you write Python code that constructs the PyMC model by calling PyMC library functions. To me, the former is closer to the idea of probabilistic programming, being Turing-complete and all, but perhaps this is not a productive distinction.
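To make the PyMC side of that concrete, here is a minimal toy sketch of what I mean by "Python code that constructs the model" (this assumes the PyMC3 API; the coin-flip data and variable names are made up for illustration, not taken from any of the books or repos above):

    # Toy sketch of the PyMC3 style: ordinary Python calls into the library
    # to assemble the model object, then asks it to sample.
    import numpy as np
    import pymc3 as pm

    flips = np.array([0, 1, 1, 0, 1, 1, 1, 0, 1, 1])  # made-up coin-flip data

    with pm.Model() as model:
        p = pm.Beta("p", alpha=1, beta=1)               # prior on P(heads)
        obs = pm.Bernoulli("obs", p=p, observed=flips)  # likelihood of the flips
        trace = pm.sample(1000, tune=1000)              # posterior draws (NUTS by default)

The point is that `model` is just a Python object built up by those library calls, whereas in Anglican the program itself is the model.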
https://github.com/melling/Probabilistic_Programming
Currently trying to learn RStan (and R)
Found interesting examples for the Kaggle Titanic competition:
https://wiekvoet.blogspot.com/2015/09/predicting-titanic-dea...
https://www.r-bloggers.com/2015/10/predicting-titanic-deaths...
Putting examples here:
https://github.com/melling/Probabilistic_Programming/tree/ma...
Anyway, if anyone else has stumbled upon any other blogs, books, etc., I could use more good resources.