The filter is just an SVF with a precomputed expo scale, "bent over" at the top to correct for quantisation. From that, the two SVF coefficients ω/Q and ω*Q are recalculated on every control update, and of course, since you can't divide on an Arduino, it uses a lookup table of reciprocals, just as for the blep. I could probably use a "wider" table of 16-bit values for better precision.
It's amazing how untapped this field is. I know there's a certain cost in transferring audio data to the GPU and back to the CPU, but I can imagine particularly detailed reverbs being implemented on GPUs.
I'm honestly really curious about this. Could you elaborate?