Think of this as closer to an optimizing backend for GHC than a completely brand-new compiler. It uses GHC to parse, typecheck and desugar Haskell to an intermediate form called Core and uses that to do its own optimization and code generation.
A core design principle of Haskell is that while the whole language has gotten relatively complex thanks to all its language features and extensions, almost everything can be simplified to a really small and elegant core language. This core language is a typed lambda calculus that looks a lot like a subset of Haskell, except with a few changes: there is no type inference (every binder carries an explicit type) and strictness is explicit (a case expression always evaluates its scrutinee, which isn't true in surface Haskell).
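For a feel of what that looks like, here's a small definition alongside a hand-written approximation of the Core GHC produces for it (you can see the real output with -ddump-simpl; the exact form varies by GHC version):

    -- Surface Haskell:
    double :: Int -> Int
    double x = x + x

    -- Approximate Core (a sketch, not verbatim GHC output):
    --
    --   double :: Int -> Int
    --   double = \ (x :: Int) -> + @Int $fNumInt x x
    --
    -- Every binder has an explicit type, and both the type argument
    -- (@Int) and the Num dictionary ($fNumInt) are spelled out, so
    -- there is no inference left to do.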
GHC then uses this pared-down version of Haskell (appropriately called Core) for the rest of its optimization and compilation. This means that once the first pass is done with type inference, type checking, typeclass resolution and a lot of other high-level transformations, the rest of the compiler doesn't have to worry about them at all. This makes GHC's optimizations easier to implement and maintain, and it lets us add features to Haskell without needing to change the backend.
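One way to see why the backend never has to think about typeclasses: after desugaring, a class is essentially a record of functions and a constraint is just an extra argument. A rough sketch of that translation in plain Haskell (the names here are illustrative, not GHC's internal ones):

    -- A class and a function that uses it:
    class MyEq a where
      myEq :: a -> a -> Bool

    same :: MyEq a => a -> a -> Bool
    same x y = myEq x y

    -- Conceptually, desugaring turns the class into a record of
    -- methods (a "dictionary")...
    data MyEqDict a = MyEqDict { dictEq :: a -> a -> Bool }

    -- ...an instance into a value of that record type...
    myEqDictInt :: MyEqDict Int
    myEqDictInt = MyEqDict { dictEq = (==) }

    -- ...and the constraint into an ordinary argument:
    sameCore :: MyEqDict a -> a -> a -> Bool
    sameCore d x y = dictEq d x y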
Is there any chance they might miss an opportunity for optimisation by throwing out the sugar, thereby discarding information about the programmer's intent?
There is, but it's pretty low because language extensions are designed with Core in mind and the desugaring tends to be relatively straightforward. In most cases, the semantics of the "sugar" is defined in terms of a simpler subset of the language, so there is very little the compiler could do differently.
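Do-notation is a good example: the Haskell Report defines it directly as sugar for (>>=), so the desugared program is the same program by definition:

    -- With the sugar:
    echo :: IO ()
    echo = do
      line <- getLine
      putStrLn line

    -- Without it, following the Report's desugaring rule:
    echo' :: IO ()
    echo' = getLine >>= \line -> putStrLn line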
More importantly, any modern compiler is very far from producing optimal programs. Performance could be improved pretty much anywhere, and the tradeoffs involved make optimizing over the "whole" sugared language a far lower priority than pretty much anything else.
We can test that. Is it the final target, machine code? If not, are there any optimizations that can happen between it and machine code? If it's not machine code and can still be improved, then it's probably safe to classify it as an IL. Or the common-sense variant: it's not the final language, so it's an intermediate language. :)