Hacker News

The data is sampled in the Fourier domain ("k-space"). A complete scan, covering frequencies up to the desired Nyquist limit, takes the MRI machine a long time because it has to acquire all of those samples.

If you can get by with sampling only a subset of this space and reconstruct the rest with a mathematical model, while still achieving reasonable accuracy (with respect to diagnosis or some other criterion) relative to full sampling, the MRI session becomes much faster because you no longer need to acquire all of the data.
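A minimal NumPy sketch of the idea (a toy, not a real MR pipeline; the image and cutoff are made up): a smooth image's energy concentrates at low spatial frequencies, so keeping only the central portion of the Fourier data and inverse-transforming already gives a close approximation of the fully sampled image.

```python
import numpy as np

# Toy "image": a smooth Gaussian blob on a 64x64 grid (an assumption
# for illustration; real MR images are far more structured).
n = 64
yy, xx = np.mgrid[0:n, 0:n] / n
img = np.exp(-((xx - 0.5) ** 2 + (yy - 0.4) ** 2) / 0.02)

# Fully sampled Fourier-domain data, with DC shifted to the center.
kspace = np.fft.fftshift(np.fft.fft2(img))

# Keep only the central band of frequencies (25% of the samples) and
# zero the rest, mimicking a scan that stops short of the full
# Nyquist extent.
mask = np.zeros((n, n), dtype=bool)
lo, hi = n // 2 - n // 4, n // 2 + n // 4
mask[lo:hi, lo:hi] = True
approx = np.fft.ifft2(np.fft.ifftshift(kspace * mask)).real

rel_err = np.linalg.norm(approx - img) / np.linalg.norm(img)
print(f"relative error with 25% of samples: {rel_err:.3f}")
```

For a smooth test image the error is tiny; the interesting (and harder) case is recovering the sharp detail that lives in the discarded high frequencies, which is where the reconstruction model earns its keep.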




It sounds as though you already know this, but it is actually happening: http://mriquestions.com/compressed-sensing.html


I'm not clear which scenario you're alluding to. Is the expectation to figure out only how to sparsely sample the same area, or how to quickly sample a larger area so that a detailed scan can be taken afterward?


The former.

It is known that we can reconstruct MR images at full fidelity, with no loss of information, by randomly sampling "k-space" at something like 10% of the usual sampling rate. This leads to much faster acquisitions. I believe Siemens has a product based on this technology that is currently going to market: https://usa.healthcare.siemens.com/magnetic-resonance-imagin...
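A toy 1-D sketch of that kind of reconstruction, using iterative soft-thresholding (ISTA) with made-up parameters: here the signal is sparse in the sample domain itself, whereas real MR images are sparse in a transform domain such as wavelets, and practical solvers are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8-sparse test signal of length 256 (an assumed toy setup).
n = 256
x_true = np.zeros(n)
x_true[rng.choice(n, size=8, replace=False)] = rng.normal(size=8)

# Randomly sample 25% of "k-space" (the Fourier coefficients).
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, size=n // 4, replace=False)] = True
y = np.fft.fft(x_true) * mask                     # observed samples

zero_filled = np.fft.ifft(y) * (n / mask.sum())   # naive reconstruction

def soft(z, t):
    """Complex soft-thresholding, the proximal operator of the l1 norm."""
    mag = np.abs(z)
    return np.where(mag > t, z * (1 - t / np.maximum(mag, 1e-12)), 0)

# ISTA: alternate a gradient step on the k-space data-consistency
# term with a sparsity-promoting shrinkage.
x = np.zeros(n, dtype=complex)
for _ in range(300):
    residual = np.fft.fft(x) * mask - y
    x = soft(x - np.fft.ifft(residual), 0.01)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"ISTA relative error: {err:.3f}")
```

The threshold and iteration count are arbitrary choices for this toy; the point is only that the sparse solution recovers the signal far better than the zero-filled inverse FFT of the same undersampled data.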

One issue, though, is that truly random sampling isn't great from a practical point of view. Sampling patterns are constrained by other equipment considerations. There is also the issue of noise.
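One common compromise can be sketched in NumPy (all parameters here are made up for illustration): fully sample a central band of phase-encode lines, where most image energy lives, and draw the remaining lines from a density that falls off toward the edges of k-space. Cartesian scanners acquire whole readout lines at a time, which is one of the equipment constraints in play.

```python
import numpy as np

rng = np.random.default_rng(1)

ny, nx = 128, 128   # phase-encode lines x readout samples (assumed sizes)
center = 16         # width of the fully sampled central band

# Sampling probability per phase-encode line falls off toward the
# edges of k-space; the cubic exponent is an arbitrary choice that
# yields roughly 25% of lines on average.
dist = np.abs(np.arange(ny) - ny // 2) / (ny / 2)   # 0 at center, ~1 at edge
prob = np.clip((1 - dist) ** 3, 0.0, 1.0)

lines = rng.random(ny) < prob
lines[ny // 2 - center // 2 : ny // 2 + center // 2] = True  # force center

# Each chosen line is acquired in full along the readout direction,
# so the 2-D mask repeats the line pattern across that axis.
mask = np.tile(lines[:, None], (1, nx))

print(f"{lines.sum()} of {ny} phase-encode lines sampled")
```

Such variable-density masks are less incoherent than truly random sampling, but they are something the hardware can actually execute, and the densely sampled center also helps with the noise issue by anchoring the low-frequency content.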

Machine learning for MR (and CT, and PET/SPECT, and...) is an active area of research, e.g. https://arxiv.org/pdf/1705.06869.pdf


Having seen some "exploration of data based on mathematical models" done on logistics data, I'm somewhat uncomfortable with this approach.



