Syntax highlighting works great for code because code is highly structured, mostly normalized data (hence BNF being a thing). Even though we call them "languages", programming languages are much closer to spreadsheets than to natural languages, and that's exactly why highlighting is so useful: code isn't meant to be read left to right, top to bottom. There's a lot of back-and-forth skimming to work out what the code is doing. Natural language, by contrast, is meant to be read from beginning to end; we do skim it, but that skimming is almost entirely contextual and only slightly syntactic. The two just aren't alike enough for something like this to carry over.
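To make that concrete, here's a toy sketch (the token classes and regex rules are my own invention, not any real highlighter's) of why rigid structure makes highlighting nearly trivial: a handful of patterns can classify almost every token.

    import re

    # Toy tokenizer: because code is rigidly structured, a few regular
    # expressions can classify nearly every token. Real highlighters
    # work the same way, just with much larger rule sets.
    TOKEN_RULES = [
        ("keyword", r"\b(?:if|else|for|while|def|return)\b"),
        ("string",  r'"[^"]*"'),
        ("number",  r"\b\d+\b"),
        ("comment", r"#[^\n]*"),
        ("name",    r"[A-Za-z_]\w*"),
    ]
    MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_RULES))

    def highlight(source):
        """Yield (token_class, lexeme) pairs; a renderer maps classes to colors."""
        for m in MASTER.finditer(source):
            yield m.lastgroup, m.group()

    for cls, tok in highlight('if total > 9000: return "over"  # edge case'):
        print(f"{cls:8} {tok}")

Try doing the same for an English sentence and the rule set explodes immediately; there's no small, closed set of patterns that tells you what role a word is playing.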
As an armchair linguist, I get frustrated by how computer nerd types keep comparing programming languages to natural languages. Beyond some superficial similarities, the two have very little in common: they are acquired, used, and evolved in completely different ways. One of the worst examples of this bad analogy is sigils in Perl, which Larry Wall insists are a good thing because they mirror plurals in English.
The analysis itself could be IMMENSELY useful in things like typesetting. There's a whole assortment of stylistic rules that hinge on rather subtle distinctions (e.g. the space between initials is usually set slightly smaller), and they're rather hard to automate. If text could be parsed and tagged like that, it might pave the way for much more advanced typesetting algorithms.
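As a sketch of where that could lead: suppose a tagger had already labeled runs of personal initials; a tiny post-processing pass could then apply the thin-space rule mentioned above. The regex stand-in and the choice of U+2009 are my assumptions for illustration.

    import re

    # Hypothetical pass for one typesetting rule: between personal
    # initials ("D. E. Knuth"), replace the full word space with a
    # thin space (U+2009). The regex is a crude stand-in for real
    # linguistic tagging, which is the hard part.
    THIN_SPACE = "\u2009"
    INITIAL_PAIR = re.compile(r"(?<=\b[A-Z]\.) (?=[A-Z]\.)")

    def thin_initials(text):
        return INITIAL_PAIR.sub(THIN_SPACE, text)

    print(thin_initials("The works of J. R. R. Tolkien and D. E. Knuth."))
    # Spaces between J./R./R. and D./E. become thin spaces; the space
    # before each surname is left alone.

The regex version can't tell name initials from other single-letter abbreviations (it would also thin "U. S. A."), which is exactly why proper parsing and tagging would have to come first before rules like this could be trusted.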
By the way, syntax highlighting works for code only as long as you keep seeing the same color scheme. Switch schemes and much of the benefit is lost until you relearn which color means what.