The frequency with which Kevin Murphy's ML book gets left out of these lists is almost bewildering.
If I had to choose one book as the ML bible, it would be Murphy's (as contrasted with Bishop and ESL), for the following reasons:
1. It uses CS jargon. (Bishop's book, while great, uses math/physics notation and jargon, which adds a barrier to entry.)
2. It is more up to date and comprehensive. (It covers everything from probabilistic models and traditional models to neural networks, up to around 2015, unlike, say, ESL, which is more introductory.)
3. Everything in deep learning past 2015 is better learnt through papers and video lectures than any book. (A lot of it is intuition, not established truth. There is a certain authority to books that belies our lack of understanding of NNs. Opinions on popular operations such as Dropout, Batch Norm, and saliency maps have changed drastically over the last few years.)
4. My ML professor used it for our upper-level grad ML course, and I came out very satisfied. (Nothing quite like personal validation.) I have read ESL and found it better suited as reading for an intro ML course. I tried reading Bishop and didn't like it :|
Personally, I had a really poor experience with Murphy's book because I bought one of the first editions (I don't recall whether it was the first or the second). The list of known errors is really long [1], and the author didn't even bother to organize it properly. The author even decided to rewrite a chapter because it contained too many errors [2].
Looks like the Murphy book is $80 in hardback, and there's a similarly priced Kindle version that doesn't even have a cover image. Feels like this book is really aimed at the college textbook market, which is likely why it hasn't gotten wider coverage.