In:
Neuro-Oncology, Oxford University Press (OUP), Vol. 21, No. Supplement_6 (2019-11-11), p. vi170-vi170
Abstract:
Skull-stripping is an essential pre-processing step in neuro-imaging that directly impacts subsequent analyses. Existing skull-stripping algorithms are typically developed and validated only on T1-weighted MRI scans without apparent gliomas, and hence may fail when applied to neuro-oncology scans. Furthermore, most algorithms have a large computational footprint and do not generalize across acquisition protocols, limiting their clinical use. We sought to identify a practical, generalizable, robust, and accurate solution that addresses all of these limitations.
METHODS We identified multi-institutional retrospective cohorts comprising pre-operative multi-parametric MRI modalities (T1, T1Gd, T2, T2-FLAIR) with distinct acquisition protocols (e.g., slice thickness, magnet strength), varying pre-applied image-based defacing techniques, and corresponding manually-delineated ground-truth brain masks. We developed a 3D fully convolutional deep learning architecture (3D-ResUNet). Following modality co-registration to a common anatomical template, the 3D-ResUNet was trained on 314 subjects from the University of Pennsylvania (UPenn), and evaluated on 91, 152, 25, and 29 unseen subjects from UPenn, Thomas Jefferson University (TJU), Washington University (WashU), and MD Anderson (MDACC), respectively. To achieve robustness against scanner/resolution variability and to utilize all modalities, we introduced a novel "modality-agnostic" training approach, which allows the trained model to be applied to any single modality, without requiring a pre-determined modality as input. We calculated the final brain mask for each test subject by applying our trained modality-agnostic 3D-ResUNet model to the modality with the highest resolution.
RESULTS The average (±stdDev) Dice similarity coefficients achieved by our novel modality-agnostic model were 97.81±0.8%, 95.59±2.0%, 91.61±1.9%, and 96.05±1.4% for the unseen data from UPenn, TJU, WashU, and MDACC, respectively.
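The two key ingredients above, the Dice similarity coefficient used for evaluation and the "modality-agnostic" idea of training on one randomly chosen co-registered modality per example, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions; the function names and implementation are not the authors' code.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary brain masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    total = pred.sum() + true.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

def sample_modality(modalities, rng=None):
    """Modality-agnostic training sketch: draw one co-registered
    modality (e.g., T1, T1Gd, T2, T2-FLAIR) at random per training
    example, so the resulting model accepts any single modality at
    inference time instead of requiring a fixed input channel."""
    rng = rng or np.random.default_rng()
    name = rng.choice(sorted(modalities))
    return name, modalities[name]
```

At inference, one would then pass whichever available modality has the highest resolution through the trained network and score the predicted mask against the manual ground truth with `dice_coefficient`.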
CONCLUSIONS Our novel modality-agnostic skull-stripping approach produces robust, near-human performance and generalizes across acquisition protocols and image-based defacing techniques, without requiring a pre-determined input modality or depending on the availability of any specific modality. Such an approach can facilitate tool standardization for harmonized pre-processing of neuro-oncology scans in multi-institutional collaborations, enabling further data sharing and computational analyses.
Type of Medium:
Online Resource
ISSN:
1522-8517, 1523-5866
DOI:
10.1093/neuonc/noz175.710
Language:
English
Publisher:
Oxford University Press (OUP)
Publication Date:
2019
ZDB ID:
2094060-9