It would be interesting to see how this rule ("x = (y + c1) + c2 => x = y + (c1 + c2)") was encoded in the compiler. I guess it has a more complicated form than a (simple and declarative) term rewriting rule, which you might use at the AST transform stage.
There is a quote from Muchnick's textbook: "Like constant folding, algebraic simplifications and reassociation are best structured in a compiler as a subroutine that can be called from any other phase that can make use of it".
But it seems that in some cases, in the later phases with lower IRs, you just can't use this "subroutine" as is.
This transformation is encoded exactly as a simple, declarative term rewriting rule, and it may be called from any phase of the compiler except the last one, after the IR has passed through the backend and IR objects have acquired machine code attributes: registers and code ordering. But we guarantee that by that phase there are no terms left in the IR to which this and other similar simple rules apply.
During IR "lowering" we do not rebuild the whole IR into some other form; we only rebuild the parts of it that are too high-level for the backend. Additions are obviously not among them, so there is no problem with applying this transformation to lowered IR.
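For what it's worth, a declarative encoding of this rule can stay very small. Here is a rough sketch in Python, with invented IR node classes (`Const`, `Var`, `Add`) and a hypothetical `reassociate` helper; it is only an illustration of the shape such a rule might take, not the actual compiler's code:

```python
# Sketch: the reassociation rule (y + c1) + c2 => y + (c1 + c2)
# as a single declarative rewrite over a tiny toy IR.
from dataclasses import dataclass


@dataclass
class Const:
    value: int


@dataclass
class Var:
    name: str


@dataclass
class Add:
    lhs: object
    rhs: object


def reassociate(node):
    """If node matches (y + c1) + c2, fold the constants and
    return y + (c1 + c2); otherwise return the node unchanged."""
    if (isinstance(node, Add)
            and isinstance(node.rhs, Const)
            and isinstance(node.lhs, Add)
            and isinstance(node.lhs.rhs, Const)):
        folded = Const(node.lhs.rhs.value + node.rhs.value)
        return Add(node.lhs.lhs, folded)
    return node


expr = Add(Add(Var("y"), Const(1)), Const(2))
print(reassociate(expr))  # Add(lhs=Var(name='y'), rhs=Const(value=3))
```

Because the rule only pattern-matches on addition nodes and constants, nothing about it depends on how high- or low-level the rest of the IR is, which is exactly why it remains usable after lowering.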