There is a certain degree of contradiction between "a practical guide" and a book that purports to cover "(almost) all parsing methods, not just the popular ones".
As a matter of practicality, you could probably get away with just (a) finite automata, both in the hand-coded lexer style and regular expression style tools like lex; (b) LL(k), both in hand-coded recursive descent and tools like antlr; and (c) GLR, when you need to pull out bigger guns.
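For a sense of what the hand-coded recursive-descent style looks like, here's a rough Python sketch for a made-up little arithmetic grammar (not from the book or from any particular tool, just an illustration):

    # A minimal recursive-descent sketch for the toy grammar
    #   expr   -> term ('+' term)*
    #   term   -> factor ('*' factor)*
    #   factor -> NUMBER | '(' expr ')'
    import re

    def tokenize(text):
        # Split input into numbers, operators, and parentheses; spaces are skipped.
        return re.findall(r"\d+|[+*()]", text)

    class Parser:
        def __init__(self, tokens):
            self.tokens = tokens
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, expected=None):
            tok = self.peek()
            if tok is None or (expected is not None and tok != expected):
                raise SyntaxError(f"expected {expected!r}, got {tok!r}")
            self.pos += 1
            return tok

        def expr(self):
            value = self.term()
            while self.peek() == "+":
                self.eat("+")
                value += self.term()
            return value

        def term(self):
            value = self.factor()
            while self.peek() == "*":
                self.eat("*")
                value *= self.factor()
            return value

        def factor(self):
            if self.peek() == "(":
                self.eat("(")
                value = self.expr()
                self.eat(")")
                return value
            return int(self.eat())

    print(Parser(tokenize("2 + 3 * (4 + 1)")).expr())  # 17

Each nonterminal becomes one method, which is why this style scales so well for hand-written parsers.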
It's practical in the sense that it contains the information required to actually implement the different parsing methods instead of just getting lost in the theory. I don't think it's a contradiction at all.
According to TFA,
"No advanced mathematical knowledge is required; the book is based on an intuitive and engineering-like understanding of the processes involved in parsing, rather than on the set manipulations used in practice."
I'm guessing this drives the "practical" moniker.
A little googling uncovers a Second Edition, also available as a PDF.
The thing is, the mathematics behind parsing, such as they are, are not complicated. The concepts of first and follow sets, if not their names, are required to see why a grammar runs into conflicts with a simple LL or LALR parser. Set notation for formal definitions of state machines may be slightly daunting, but the intuitions are easily grasped from a few diagrams.
I'm probably too close to this domain to have a useful perspective for those not familiar with it though.
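For the curious: first sets in particular are easy to compute by fixed-point iteration. Here's a rough Python sketch over a made-up two-rule grammar (the grammar and its dict representation are just illustrative, not how the book presents it):

    # Compute FIRST sets by iterating until nothing changes.
    # "" stands for the empty string (epsilon); A below is nullable.
    grammar = {
        "S": [["A", "b"], ["c"]],
        "A": [["a", "A"], [""]],
    }

    def first_sets(grammar):
        first = {nt: set() for nt in grammar}
        changed = True
        while changed:
            changed = False
            for nt, productions in grammar.items():
                for prod in productions:
                    for sym in prod:
                        if sym == "":              # epsilon
                            add = {""}
                        elif sym in grammar:       # nonterminal: use its FIRST set
                            add = first[sym]
                        else:                      # terminal: FIRST is the symbol itself
                            add = {sym}
                        before = len(first[nt])
                        first[nt] |= add - {""}
                        if len(first[nt]) != before:
                            changed = True
                        if "" not in add:          # stop unless this symbol can vanish
                            break
                    else:
                        # Every symbol in the production was nullable.
                        if "" not in first[nt]:
                            first[nt].add("")
                            changed = True
        return first

    print(first_sets(grammar))  # S: {a, b, c}, A: {a, ''} (set ordering may vary)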
Completely disagree -- there are tons of parsing papers that are extremely notation-heavy and difficult to follow. Even other authors say so; this is from David Gries' 1972 paper "Describing an Algorithm by Hopcroft":
"In [3], Hopcroft gives an algorithm for minimizing the number of states in a finite automaton. [...] Unfortunately the algorithm, its proof of correctness and the proof of running time, are all very difficult to understand. We present here a "structured", top-down approach to the presentation of the algorithm which makes it much clearer. [...] Such a structured approach to presenting an algorithm seems to be longer and require more discussion than the conventional way. If the reader wishes to complain about this, he is challenged to first read Hopcroft's original paper and see whether he can understand it easily. The advantages of our approach will thus be clear."
And this is just a DFA minimization algorithm! It's not an undecidability proof or anything like that. And Hopcroft's original paper (PDF: ftp://reports.stanford.edu/pub/cstr/reports/cs/tr/71/190/CS-TR-71-190.pdf) isn't nearly as dense or symbol-heavy as a lot of parsing papers out there.
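For contrast with the notation-heavy presentations, the core idea of DFA minimization fits in a few lines. This is a rough Python sketch of the naive partition-refinement (Moore-style) approach, not Hopcroft's asymptotically faster algorithm, and the DFA is a toy example I made up:

    # Split states into accepting / non-accepting, then keep refining blocks
    # until states in the same block behave identically on every symbol.
    def minimize(states, alphabet, delta, accepting):
        partition = [set(accepting), set(states) - set(accepting)]
        partition = [block for block in partition if block]

        def block_of(state, blocks):
            for i, block in enumerate(blocks):
                if state in block:
                    return i

        changed = True
        while changed:
            changed = False
            new_partition = []
            for block in partition:
                # Group states by which block each input symbol sends them to.
                groups = {}
                for s in block:
                    key = tuple(block_of(delta[s][a], partition) for a in alphabet)
                    groups.setdefault(key, set()).add(s)
                if len(groups) > 1:
                    changed = True
                new_partition.extend(groups.values())
            partition = new_partition
        return partition

    # Toy DFA over {0, 1} accepting strings that end in "1"; B and C are equivalent.
    states = {"A", "B", "C"}
    alphabet = ["0", "1"]
    delta = {
        "A": {"0": "A", "1": "B"},
        "B": {"0": "A", "1": "C"},
        "C": {"0": "A", "1": "C"},
    }
    accepting = {"B", "C"}

    print(minimize(states, alphabet, delta, accepting))  # e.g. [{'B', 'C'}, {'A'}]

Hopcroft's version gets the better running time by processing only the smaller half of each split, which is exactly the part that's hard to follow in the original paper.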
There is a wide gap between reading papers that begin "A grammar G is a 4-tuple G=(N, sigma, P, S)" and are full of proofs and lemmas, and actually implementing algorithms.
If you want that sort of succinctness, it is already covered in many compiler text books.
This is meant to complement the existing literature with broad and deep coverage of parsing specifically. There is more to parsing than just the front end of a compiler :-)
Love this book! It's the encyclopedia of parsing: it contains enough information to get a basic understanding of almost any parsing topic, and it also includes an awesome 417-entry annotated bibliography pointing to the primary sources.
''' The printed book contains only the about 400 literature references that are referred to in the book itself, all of them with annotations. The complete list of literature references comprises about 1700 entries of which around 1100 are annotated. It can be found here. It consists of augmented versions of the Table of Contents, Chapter 18, the Authors' Index, and the Subject Index, each reflecting in its way the added entries. '''