Cool idea. Parsing LaTeX into an AST with awareness of math structure (e.g. recognizing symbolic equivalences such as x^{1/2}=\sqrt{x}) would be a good start.
Before catching errors, I think an easier target for a tool like this would be to enforce notation conventions, e.g. permutations are always written π rather than σ, and vectors are always \vec{v} rather than \mathbf{v}. This would be useful for multi-author texts like Wikipedia.
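Such notation enforcement doesn't even need a full parser. As a rough sketch (the rule table and function names here are hypothetical, just to illustrate the idea), a linter could map discouraged macros to suggested replacements with plain regular expressions:

```python
import re

# Hypothetical notation rules: discouraged pattern -> suggestion template.
# Backreferences like \1 carry matched arguments into the suggestion.
RULES = {
    r"\\mathbf\{([a-z])\}": r"use \\vec{\1} for vectors",
    r"\\sigma": r"use \\pi for permutations",
}

def lint(source: str) -> list:
    """Return (line_number, message) pairs for discouraged notation."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            for match in re.finditer(pattern, line):
                findings.append((lineno, match.expand(message)))
    return findings

doc = r"The permutation \sigma acts on \mathbf{v}."
for lineno, msg in lint(doc):
    print(lineno, msg)
```

A real tool would of course need to restrict matching to math mode and let each project supply its own rule table, but the per-project convention file is the essential ingredient either way.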
Parsing LaTeX in full would be a heavy task because of LaTeX's innumerable packages.
I would instead recommend a compiler that translates a subset of clean math notation (which could be identical to the math subset of LaTeX) into native LaTeX, includes a lint pass, and also supports special graphics packages such as PGF/TikZ. This approach would keep the compiler small and maintainable while remaining fully compatible with LaTeX.