There isn't any reason you can't run a neural net on a CPU. It's still just a bunch of big matrix operations. The advantage of the GPU is that it's a lot faster, but "a lot" might mean 1 second versus 10 seconds, and for some applications 10 seconds of inference latency is just fine (I have no idea how long this particular model would take). All the major ML libraries will run in CPU-only mode if you request it.
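To illustrate the "just a bunch of matrix operations" point, here's a toy two-layer network forward pass in plain NumPy, which runs entirely on the CPU. The layer sizes and random weights are arbitrary placeholders (a real model would load trained weights from a checkpoint):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP: input -> hidden (ReLU) -> output.
# Sizes are placeholders; weights are random instead of trained.
W1 = rng.standard_normal((784, 128))
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 10))
b2 = np.zeros(10)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # matrix multiply + ReLU
    return h @ W2 + b2                # another matrix multiply

x = rng.standard_normal((1, 784))     # one fake input vector
logits = forward(x)
print(logits.shape)                   # (1, 10)
```

In PyTorch the equivalent is keeping the model and tensors on the `"cpu"` device (e.g. `model.to("cpu")`), which is the default when no GPU is present; a GPU just makes those same matrix multiplies faster.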