Honestly I'm not sure. Modern GCs are pretty good, and manual memory management isn't a free lunch either. One major advantage of a GC is that it can batch all the `free`s together, which is a lot more efficient than the manual approach where you free objects individually, in a pseudo-random order.
Rust is for memory safety. Garbage collection is another way to achieve some form of memory safety, while being much simpler and faster to write than Rust.
ZGC (JVM) now has guaranteed maximum pause times of 500 microseconds, with average pause times of just 50 microseconds: https://malloc.se/blog/zgc-jdk16
Pause time is not the same as cycle time, but pause time is what matters for responsiveness.
Sure, but deferring collection requires increasing memory use. Since the OP was discussing microcontrollers, that's generally not desirable because memory is in short supply. A 1ms GC cycle is totally tolerable for most microcontroller applications though.
I was responding to your statement regarding “on desktops and servers”, which I believe doesn’t quite represent the state of the art anymore. I certainly agree that 1 ms is already pretty short.