Hey! I actually handled the data coordination for the BraTS data sets. We used a combination of the best algorithms from prior years of the BraTS competition to pre-segment the data sets, and then we had experts (fully-trained neuroradiologists) make manual corrections, which were then further reviewed by my team before finalization.
The three tissue types of interest are fairly easy to identify in most cases. Edema is bright on the FLAIR sequence, enhancing tumor is bright on T1 post-contrast and dark on pre-contrast, and necrosis is relatively dark on T1 pre- and post-contrast while also being surrounded by enhancing tumor. These rules hold true in most cases, so it’s really just a matter of having the algorithm find these simple patterns. The challenge in doing this manually is the amount of time it takes to create a really high quality 3D segmentation. It’s painful and very tough to do with just a mouse and 3 orthogonal planes to work with.
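Those intensity rules can be sketched as a toy rule-based pass over co-registered, intensity-normalized volumes. This is purely illustrative: the function name, the percentile thresholds, and the skipped "surrounded by enhancing tumor" check are my assumptions, not the actual BraTS pre-segmentation pipeline (which used learned models, not fixed thresholds).

```python
import numpy as np

def rough_tissue_masks(flair, t1_pre, t1_post):
    """Toy rule-based tissue masks from three co-registered MR volumes.

    Thresholds below are illustrative placeholders, not values from any
    real pipeline; real inputs would need bias correction and
    normalization first.
    """
    # Edema: bright on FLAIR (here arbitrarily the top ~10% of voxels).
    edema = flair > np.percentile(flair, 90)

    # Enhancing tumor: bright on T1 post-contrast, dark on pre-contrast.
    enhancing = (t1_post > np.percentile(t1_post, 90)) & \
                (t1_pre < np.percentile(t1_pre, 50))

    # Necrosis: relatively dark on both T1 sequences. A real pipeline
    # would also require the region to be surrounded by enhancing tumor.
    necrosis = (t1_pre < np.percentile(t1_pre, 25)) & \
               (t1_post < np.percentile(t1_post, 25))

    return edema, enhancing, necrosis
```

The hard part the comment describes isn't these per-voxel rules; it's turning them into a clean 3D boundary, which is exactly what takes so long by hand.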
Oh wow, the joys of the HN community. Do you know what the neuroradiologists think of this type of modelling? Are the models from a challenge like this already usable for enhanced decision-making by the experts?
With the segmentations these models create, you can create reports that quantitatively describe the changes in different tumor tissues. That info can be useful for guiding chemotherapy and radiotherapy decisions.
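The quantitative part is straightforward once you have a labeled segmentation: count voxels per tissue label and scale by voxel size. A minimal sketch, assuming an integer-labeled volume and a label convention similar to (but not necessarily identical to) the one BraTS uses:

```python
import numpy as np

# Hypothetical label convention for this sketch; check the actual
# dataset's documentation before relying on specific integer labels.
LABELS = {"necrosis": 1, "edema": 2, "enhancing": 4}

def tissue_volumes_ml(seg, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Per-tissue volumes in mL from an integer-labeled segmentation.

    seg: 3D integer array of tissue labels.
    voxel_dims_mm: physical voxel size, normally read from the
    image header (e.g. via nibabel) rather than hard-coded.
    """
    voxel_ml = float(np.prod(voxel_dims_mm)) / 1000.0  # mm^3 -> mL
    return {name: float(np.sum(seg == label)) * voxel_ml
            for name, label in LABELS.items()}
```

Run on segmentations from two timepoints, the per-tissue differences give the kind of quantitative change report described above.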
Currently, the accepted practice is to report these changes qualitatively without using segmentations (the way it’s been done for years). While the segmentations created by the models are probably good enough to use in practice today, the logistical challenges of integrating the model with the clinical workflow impede its actual use.
Sure, you could manually export your brain MR to run the model, but that’s a pain to do when you’re reading ~25 brain MR cases/day.
Thanks Satyam! That's glass half full if I read it correctly. Working models that need to be integrated into a workflow. What kind of firms are we talking about that could do that?
(I know nothing of this tbh, except I once had a demo of a radiologist back when the gamma knife was introduced, have a colleague who became a radiotherapist and a friend who works in ML for Philips medical.)
It’s definitely possible to do, and several companies already do it (e.g. RapidAI). I’m not an expert in this specific problem either, but there are HIPAA/privacy/security concerns that need to be addressed with the radiology department and the IT team. Once those have been handled, there is usually some kind of API available for integrating the model.