I'm not talking about using an NN for oversampling; I'm talking about whether an NN could learn to reduce aliasing when implementing nonlinear functions without any external oversampling.
Oversampled DSP algorithms work by upsampling, performing the nonlinear processing at the higher rate, filtering out the harmonics above the original Nyquist frequency, and finally downsampling again. We do it this way because it's convenient, easy to understand, and based on proven mathematics. But nothing says these steps have to be distinct.
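For concreteness, here is a minimal sketch of that conventional chain, assuming a 4x ratio, tanh as a stand-in nonlinearity, and scipy's polyphase resampler handling the interpolation and anti-alias filtering (all of these are illustrative choices, not a specific real implementation):

```python
# Minimal sketch of the conventional chain (illustrative choices:
# 4x oversampling, tanh as a stand-in nonlinearity, scipy's polyphase
# resampler doing the up/down conversion and filtering).
import numpy as np
from scipy.signal import resample_poly

def oversampled_distortion(x, ratio=4):
    up = resample_poly(x, ratio, 1)    # upsample + interpolation filter
    y = np.tanh(3.0 * up)              # nonlinear processing at the high rate
    return resample_poly(y, 1, ratio)  # anti-alias filter + downsample
```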
An oversampled DSP algorithm looks like a regular DSP algorithm from the outside, perhaps with some extra state and latency required for the internal oversampling. You can also implement such an oversampled algorithm entirely at the original sample-rate clock; it just means the processing internally handles several samples per outer-loop sample.
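To illustrate what "internally oversampled at the original clock" means, here is a hypothetical single-sample processor: one sample in, one sample out per call at the base rate, with `ratio` sub-samples computed inside. The class name, filter lengths, and the tanh stage are illustrative only.

```python
import numpy as np
from scipy.signal import firwin

class InternallyOversampled:
    def __init__(self, ratio=4, taps_per_phase=16):
        self.L = ratio
        h = firwin(ratio * taps_per_phase, 1.0 / ratio) * ratio  # interpolation FIR (high rate)
        self.branches = h.reshape(taps_per_phase, ratio).T       # polyphase branches
        self.g = firwin(ratio * taps_per_phase, 1.0 / ratio)     # decimation FIR (high rate)
        self.x_hist = np.zeros(taps_per_phase)  # base-rate input history
        self.z_hist = np.zeros(len(self.g))     # high-rate history of the nonlinear output

    def process(self, x_n):
        # Shift in the newest base-rate sample.
        self.x_hist = np.roll(self.x_hist, 1)
        self.x_hist[0] = x_n
        # Interpolate `ratio` high-rate samples and distort them.
        high = np.tanh(3.0 * (self.branches @ self.x_hist))
        # Feed them through the decimation filter, keeping one output per call.
        self.z_hist = np.roll(self.z_hist, self.L)
        self.z_hist[:self.L] = high[::-1]       # newest high-rate sample first
        return float(self.g @ self.z_hist)
```

From the caller's point of view this is just a stateful one-in/one-out processor at the original rate; the oversampling only shows up as extra internal work and a fixed delay.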
Since neural networks excel at modeling "black boxes" as one amorphous blob that we don't understand, I wonder if a NN could learn to model such an internally oversampled algorithm fairly accurately, and what the computational complexity would be.
Since the oversampling and filtering steps can be modeled as linear convolutions with wider internal state at the original sample rate, I'm almost certain this will work with the right NN topology; an NN can clearly implement the oversampling itself.
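The linear part of that claim is just the polyphase identity: zero-stuffing followed by an interpolation FIR at the high rate is numerically identical to running the filter's polyphase branches as ordinary convolutions at the original rate. A quick check of this, assuming an arbitrary 64-tap firwin design and L = 4:

```python
# Polyphase identity: zero-stuff + high-rate FIR equals L ordinary
# convolutions running entirely at the original sample rate.
import numpy as np
from scipy.signal import firwin, lfilter

L = 4
h = firwin(64, 1.0 / L) * L            # interpolation filter designed at the high rate
x = np.random.randn(256)               # base-rate signal

xs = np.zeros(len(x) * L)
xs[::L] = x                            # zero-stuffed high-rate signal
ref = lfilter(h, 1.0, xs)              # reference: filter at the high rate

poly = np.zeros_like(ref)
for p in range(L):
    poly[p::L] = lfilter(h[p::L], 1.0, x)  # branch p runs entirely at the base rate

print(np.allclose(ref, poly))          # True
```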
And so my question is: could treating the combined oversampled processing as one step, and training an NN on that, potentially result in a more efficient implementation than doing it naively, especially for heavy distortion that needs high oversampling ratios?