The data is sampled in the Fourier domain. A complete scan (sampling out to the desired Nyquist limit) takes the MRI machine a long time to acquire.
If you can get by with sampling only a subset of that space and reconstructing the rest with a mathematical model, while still getting reasonable accuracy (with respect to diagnosis or whatever criterion matters) relative to full sampling, the MRI session becomes much faster, because you no longer need to acquire all of the data you did before. A toy sketch of the idea is below.
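To make that concrete, here's a toy sketch in plain numpy, not real MRI code: a synthetic image that's sparse in the image domain, sampled at roughly 25% of k-space, then reconstructed by alternating a data-consistency step with soft-thresholding. The sparsity domain, sampling fraction, threshold, and iteration count are all made-up illustration choices; real CS-MRI uses wavelet or total-variation priors and more careful solvers.

    # Toy sketch: reconstruct an image from ~25% of its Fourier samples by
    # enforcing sparsity with iterative soft-thresholding (a stand-in for the
    # wavelet/TV priors used in real compressed-sensing MRI).
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "image": mostly zeros with a few bright spots (sparse in image domain).
    n = 128
    image = np.zeros((n, n))
    image[rng.integers(0, n, 40), rng.integers(0, n, 40)] = 1.0

    # Randomly keep ~25% of k-space samples.
    mask = rng.random((n, n)) < 0.25
    kspace = np.fft.fft2(image) * mask

    def soft_threshold(x, lam):
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    # Iterative reconstruction: enforce data consistency in k-space,
    # then soft-threshold in the (sparse) image domain.
    recon = np.zeros((n, n))
    for _ in range(100):
        k = np.fft.fft2(recon)
        k[mask] = kspace[mask]               # keep the measured samples exactly
        recon = np.real(np.fft.ifft2(k))
        recon = soft_threshold(recon, 0.05)  # sparsity prior

    print("zero-filled error:", np.linalg.norm(np.real(np.fft.ifft2(kspace)) - image))
    print("CS recon error:   ", np.linalg.norm(recon - image))

The zero-filled reconstruction (just inverse-transforming the partial data) is badly aliased, while the iterated version recovers the sparse image much more closely, which is the whole point of spending compute to save scan time.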
I'm not clear on which scenario you're alluding to. Is the expectation to figure out how to sparsely sample the same area, or how to quickly sample a larger area so that a detailed scan can be taken afterward?
It is known that we can reconstruct MR images at full fidelity, with no loss of information, by randomly sampling "k-space" at something like 10% of the usual sampling rate. This leads to much faster acquisitions. I believe Siemens has a product based on this technology that is currently going to market: https://usa.healthcare.siemens.com/magnetic-resonance-imagin...
One issue, though, is that truly random sampling isn't great from a practical point of view. Sampling patterns are constrained by what the scanner hardware can actually do, and there is also the issue of noise.
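For instance, a more scanner-friendly pattern samples whole phase-encode lines rather than individual points, keeps a fully sampled low-frequency center, and thins out the periphery with a variable-density rule. A rough sketch of such a mask (the fractions and the density falloff here are arbitrary choices for illustration, not anything a vendor actually ships):

    # Sketch of a more practical undersampling pattern: choose whole
    # phase-encode lines, always keep a fully sampled low-frequency center,
    # and weight the remaining picks toward the center (variable density).
    import numpy as np

    rng = np.random.default_rng(0)
    n_lines, keep_frac, center_frac = 256, 0.25, 0.08  # assumed values

    center = int(n_lines * center_frac)
    lo, hi = n_lines // 2 - center // 2, n_lines // 2 + center // 2
    lines = np.zeros(n_lines, dtype=bool)
    lines[lo:hi] = True                      # fully sampled center

    # Probability of picking an outer line falls off with distance from center.
    dist = np.abs(np.arange(n_lines) - n_lines // 2)
    prob = 1.0 / (1.0 + dist)
    prob[lines] = 0.0
    prob /= prob.sum()

    n_extra = int(n_lines * keep_frac) - lines.sum()
    picked = rng.choice(n_lines, size=n_extra, replace=False, p=prob)
    lines[picked] = True

    mask = np.tile(lines, (n_lines, 1))      # same lines for every readout column
    print(f"sampled {lines.sum()} of {n_lines} phase-encode lines")

That kind of structured randomness is closer to what acquisitions can actually realize, at the cost of being less "incoherent" than the pure random sampling the theory assumes.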