If that's what it does, then it's "cheating" in the sense that people think they're interacting with an LLM when they're actually interacting with an LLM plus a chess engine. This could give the impression that LLMs generalize far more broadly than they actually do, when it's really just a special-purpose hack. It's a bit like putting invisible guardrails on a popular, difficult test road for self-driving cars: it might lead you to think the car can drive that well on other difficult roads too.
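To make the hypothesis concrete, here's a minimal sketch (in Python, using the python-chess library) of what such a hybrid could look like. Everything here is an illustrative assumption, not a claim about how any real system works: the FEN-detection heuristic, the `llm` callable, and the Stockfish path are all hypothetical.

```python
import re

import chess
import chess.engine

# Hypothetical heuristic: treat a prompt as a chess query if it contains a
# FEN position string. Any real routing logic, if one exists, is unknown.
FEN_RE = re.compile(
    r"([rnbqkpRNBQKP1-8]+/){7}[rnbqkpRNBQKP1-8]+ [wb] [KQkq-]+ [a-h1-8-]+ \d+ \d+"
)


def answer(prompt: str, llm, engine_path: str = "/usr/bin/stockfish") -> str:
    """Route chess positions to an engine; send everything else to the LLM."""
    match = FEN_RE.search(prompt)
    if match is None:
        return llm(prompt)  # ordinary prompt: let the model answer
    board = chess.Board(match.group(0))
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        result = engine.play(board, chess.engine.Limit(time=0.1))
    # The engine's move comes back as if the LLM produced it.
    return board.san(result.move)
```

From the outside, a user querying this wrapper would see strong chess play and might credit it to the LLM, which is exactly the misleading impression described above.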