As the article states in the introductory paragraph, this problem encompasses more than just counting strings. It also involves "some fundamental tasks of natural language processing (NLP): tokenization (dividing a text into words), stemming, and part-of-speech tagging for lemmatization", so a little more work is required here.
Yep, if you don't need any of a library's fancy NLP features, then something like this is the most straightforward approach. (In the article I did give a plain Python solution using split() to tokenize and Counter to find the hapaxes, in the function called "word_form_hapaxes".)
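For reference, here's a minimal sketch of that split()/Counter approach; the lowercasing is my assumption here, and the article's actual function may normalize tokens differently:

```python
from collections import Counter

def word_form_hapaxes(text):
    """Return words that occur exactly once, by surface form.

    Sketch of the plain-Python approach: whitespace tokenization
    via split(), counting via Counter. Case-folding is an
    assumption; punctuation is not stripped.
    """
    # Crude tokenization: lowercase, then split on whitespace.
    counts = Counter(text.lower().split())
    # A hapax is any word form with a count of exactly one.
    return [word for word, count in counts.items() if count == 1]
```

That's the whole trick when you only care about surface forms; the extra work in the article comes from needing real tokenization, stemming, and lemmatization on top of this.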