Best as in best written, along with K&R—Brian Kernighan co-authoring both can't be an accident!—it's just a pleasure to read. I use it as a model when writing documentation myself. They get you started using AWK literally on page 1. Later they get into the details of particular use cases—profiling, making DSL interpreters/compilers, random text generation, making indexes, a graph-drawing language, databases, data validation, etc. All in under 200 pages!
And best as in most useful—I use AWK every day for all kinds of things: web scraping, rearranging and creating data for programs, meta-programming, etc. It's so easy to make whatever tool's needed for a job.
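To give a flavor of the scraping side, here's a minimal sketch (the HTML snippet is made up, and this is a crude regex approach, not a real HTML parser): set AWK's record separator to `<` so every record begins with a tag name, then pull the `href` values out of `<a>` tags.

```shell
printf '<p>see <a href="https://example.com/a">one</a> and <a href="https://example.com/b">two</a></p>\n' |
awk 'BEGIN { RS = "<" }                   # each record now begins with a tag name
     /^a[ \t]/ {                          # keep only <a ...> tags
       if (match($0, /href="[^"]*"/))     # crude match; fine for simple, regular pages
         print substr($0, RSTART + 6, RLENGTH - 7)   # strip href=" and the closing quote
     }'
# prints:
# https://example.com/a
# https://example.com/b
```

In practice you'd replace the `printf` with `curl -s URL` or similar. This kind of thing breaks on attributes in odd orders or single quotes, which is why pipelines often lean on a real HTML-aware tool first.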
You say you use AWK on a daily basis, including for web scraping. Most programmers use Python libraries like Scrapy and BeautifulSoup, or R libraries like rvest and RCurl. How do you parse HTML with AWK—as part of a pipeline including wget, hxselect, and lynx, or just with AWK's regular expressions? I couldn't find many examples beyond a basic Rosetta Code script and some random blog posts. Can you share an example script?
https://archive.org/details/pdfy-MgN0H1joIoDVoIC7