So in other words, this is analog computation on digital hardware on an analog substrate - the analog-to-digital step eliminates the noise of our physical reality, and the subsequent digital-to-analog step reintroduces a certain flexibility of design thinking.
I wonder: is there ever a case where analog-on-digital is better to work with as an abstraction layer, or is it always easier to work with digital signals directly?
Yes, at least I think so. That's why we so often try to approximate analog with pseudo-continuous data patterns like samples and floats. Even going beyond electronic computing, we invented calculus in response to similar limitations of discrete data in mathematics.
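To make the "samples and floats" point concrete, here's a minimal Python sketch (the names like `analog_signal` and the choice of a 440 Hz tone are just illustrative):

```python
import math

# A "continuous" analog signal, modeled as a pure function of time.
# (Illustrative: a 440 Hz sine tone.)
def analog_signal(t: float) -> float:
    return math.sin(2 * math.pi * 440 * t)

SAMPLE_RATE = 48_000  # samples per second, a common audio rate
DURATION = 0.01       # seconds of signal to capture

# The pseudo-continuous approximation: a finite list of floats, where
# each float is itself a finite-precision stand-in for a real number.
samples = [analog_signal(n / SAMPLE_RATE)
           for n in range(int(SAMPLE_RATE * DURATION))]
```

Both discretizations are lossy in their own way: sampling bounds the frequencies you can represent (Nyquist), and floats bound the values you can distinguish.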
Of course, these are all just approximations of analog. As with emulation, there are limitations, penalties, and inaccuracies that will inevitably kneecap applications compared to a native implementation running on bare metal (though it seems that humans lack a proper math coprocessor, so calculus could be considered no more "native" than traditional algebra).
We do sometimes see specialized hardware emerge (DSPs in the case of audio signal processing, FPUs in the case of floats, NPUs/TPUs in the case of neural networks), but these are almost always highly optimized for operations specific to analog-like data patterns (e.g., Fourier transforms) rather than true analog operations. This is probably because scalable/reliable/fast analog memory remains an unsolved problem (semiconductors are simply too useful).
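As a rough illustration of what "optimized for analog-like data patterns" means, here's a naive discrete Fourier transform in Python (the textbook O(n²) formulation, not what a DSP actually runs) - note that it consumes discrete samples, not a continuous waveform:

```python
import cmath
import math

SAMPLE_RATE = 8_000  # Hz (illustrative)
samples = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)  # a 440 Hz tone
           for n in range(256)]

# Naive discrete Fourier transform: X[k] = sum_i x[i] * e^(-2*pi*j*k*i/n).
# The operation is defined entirely over discrete samples -- there is no
# true analog signal anywhere in sight.
def dft(xs: list[float]) -> list[complex]:
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(xs))
            for k in range(n)]

spectrum = dft(samples)

# Find the dominant frequency bin (first half only; the spectrum of a
# real-valued signal is mirror-symmetric).
peak = max(range(1, len(spectrum) // 2), key=lambda k: abs(spectrum[k]))
print(f"dominant frequency ~ {peak * SAMPLE_RATE / len(samples):.1f} Hz")
```

Real DSPs accelerate the FFT variant of this (O(n log n)) with hardware multiply-accumulate units, but the point stands: it's digital arithmetic over samples all the way down.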