It's not really random access. I bet the graph can be pipelined so that you only ever keep a "horizontal cross-section" of it in memory, scanning through the parameters from top to bottom of the graph.
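A minimal sketch of that idea, assuming a hypothetical on-disk layout with one weight file per layer (the tiny sizes and the ReLU MLP are just for illustration):

    import numpy as np

    # Hypothetical layout: one .npy file per layer, written here so the
    # example is self-contained (a stand-in for a 250 GiB checkpoint).
    dim, n_layers = 64, 4
    for i in range(n_layers):
        np.save(f"layer_{i}.npy", np.random.randn(dim, dim).astype(np.float32))

    def streaming_forward(x, n_layers):
        """Hold only one layer's weights (the horizontal cross-section)
        in memory at a time, scanning the graph top to bottom."""
        for i in range(n_layers):
            w = np.load(f"layer_{i}.npy")   # sequential read of this layer
            x = np.maximum(x @ w, 0.0)      # apply the layer (ReLU MLP here)
            del w                           # drop it before loading the next
        return x

    out = streaming_forward(np.random.randn(1, dim).astype(np.float32), n_layers)
    print(out.shape)   # (1, 64); peak weight memory is one layer, not all of them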
Fair point, but you’ll still be bounded by the SSD’s read speed. The access pattern itself matters less than the read cache being << the parameter set size.
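Back-of-envelope (the ~3.5 GB/s figure is an assumed NVMe sequential read rate, not a measured one): if effectively nothing stays cached, every token costs a full scan of the parameters from disk:

    params_bytes = 250 * 2**30        # 250 GiB of parameters
    ssd_bytes_per_s = 3.5e9           # assumed sustained NVMe read speed
    print(f"{params_bytes / ssd_bytes_per_s:.0f} s/token")  # ~77 s per full scan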
You can read bits at that rate, yes, but keep in mind that it’s 250 GiB of /parameters/, and matrix-matrix multiplication is typically somewhere between quadratic and cubic in complexity. Then you get to wait for the page-out of your intermediate results, etc.
It’s difficult to estimate how slow it would be, but I’m guessing unusably slow.
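To put rough numbers on the compute side too (all of these are assumptions: fp16 weights, batch-1 inference at ~2 FLOPs per parameter per token, ~200 GFLOP/s sustained on a CPU):

    # Assumed: fp16 storage, one multiply-add per parameter per token,
    # ~200 GFLOP/s of sustained CPU throughput.
    n_params = 250 * 2**30 / 2          # 250 GiB at 2 bytes/param -> ~1.3e11
    flops_per_token = 2 * n_params      # one multiply-add per parameter
    cpu_flops_per_s = 2e11
    print(f"compute: {flops_per_token / cpu_flops_per_s:.1f} s/token")
    # ~1.3 s of compute on top of ~77 s of disk reads per token, before
    # any paging of intermediate results is even counted.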
That's pretty much what SLIDE [0] does. The driving use case was achieving performance parity with GPUs for CPU training, but presumably the same approach could apply to running inference on models too large to fit in consumer GPU memory.
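A toy sketch of the core trick as I understand it: use locality-sensitive hashing to pick the few neurons likely to fire, instead of evaluating the whole dense layer. SimHash, the single hash table, and all the sizes here are illustrative simplifications, not the paper's actual setup:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_neurons, n_bits = 128, 10_000, 8
    W = rng.standard_normal((n_neurons, d)).astype(np.float32)    # layer weights
    planes = rng.standard_normal((n_bits, d)).astype(np.float32)  # SimHash hyperplanes

    def simhash(v):
        # Sign pattern against random hyperplanes; similar vectors collide often.
        return int("".join("1" if s > 0 else "0" for s in planes @ v), 2)

    # Index each neuron's weight vector by its hash code (one table here;
    # SLIDE uses several tables and rebuilds them during training).
    buckets = {}
    for j in range(n_neurons):
        buckets.setdefault(simhash(W[j]), []).append(j)

    def sparse_forward(x):
        # Evaluate only neurons whose weights hash like the input: those
        # are the ones likely to have a large dot product with it.
        active = buckets.get(simhash(x), [])
        return active, W[active] @ x

    x = rng.standard_normal(d).astype(np.float32)
    active, out = sparse_forward(x)
    print(f"evaluated {len(active)} of {n_neurons} neurons")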