Empowering Data Sharing in Neuroscience: A Deep Learning De-identification Method for Pediatric Brain MRIs

Ariana M Familiar, Neda Khalili, Nastaran Khalili, Cassidy Schuman, Evan Grove, Karthik Viswanathan, Jakob Seidlitz, Aaron Alexander-Bloch, Anna Zapaishchykova, Benjamin H Kann, Arastoo Vossough, Phillip B Storm, Adam C Resnick, Anahita Fathi Kazerooni, Ali Nabavizadeh

AJNR. American Journal of Neuroradiology, published 2024-11-12. DOI: 10.3174/ajnr.A8581 (https://doi.org/10.3174/ajnr.A8581)
Abstract
Background and purpose: Privacy concerns, such as identifiable facial features within brain scans, have hindered the availability of pediatric neuroimaging datasets for research. Consequently, pediatric neuroscience research lags behind its adult counterpart, particularly for rare diseases and under-represented populations. The removal of face regions (image defacing) can mitigate this; however, existing defacing tools often fail with pediatric cases and diverse image types, leaving a critical gap in data accessibility. Given recent NIH data sharing mandates, novel solutions are critically needed.
Materials and methods: To develop an AI-powered tool for automatic defacing of pediatric brain MRIs, deep learning methodologies (nnU-Net) were employed using a large, diverse, multi-institutional dataset of clinical radiology images. This included multi-parametric MRIs (T1w, T1w contrast-enhanced, T2w, T2w-FLAIR) comprising 976 total images from 208 brain tumor patients (Children's Brain Tumor Network, CBTN) and 36 clinical control patients (Scans with Limited Imaging Pathology, SLIP), ranging in age from 7 days to 21 years.
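The core defacing operation described above can be illustrated with a minimal sketch: given a binary mask of face/ear voxels predicted by a segmentation model (here, nnU-Net in the authors' pipeline), defacing amounts to zeroing out the masked voxels of the volume. The function and toy data below are hypothetical illustrations, not the authors' released code (see the pediatric-auto-defacer repository for that).

```python
import numpy as np

def deface(volume: np.ndarray, face_mask: np.ndarray) -> np.ndarray:
    """Return a copy of `volume` with voxels flagged by `face_mask` set to zero.

    `face_mask` is a binary array (same shape as `volume`) such as the output
    of a trained segmentation model labeling face/ear regions.
    """
    defaced = volume.copy()
    defaced[face_mask.astype(bool)] = 0
    return defaced

# Toy example: a synthetic 4x4x4 "volume" where the front slice plays the
# role of the face region.
vol = np.ones((4, 4, 4))
mask = np.zeros((4, 4, 4))
mask[:, :, 0] = 1  # pretend the anterior slice is "face"
out = deface(vol, mask)
```

In practice the volume and mask would be loaded from NIfTI or DICOM files, and the masked region would be removed before any data release.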
Results: Face and ear removal accuracy for withheld testing data was the primary measure of model performance. Potential influences of defacing on downstream research usage were evaluated with standard image processing and AI-based pipelines. Group-level statistical trends were compared between original (non-defaced) and defaced images. Across image types, the model had high accuracy for removing face regions (mean accuracy, 98%; N = 98 subjects/392 images), with lower performance for removal of ears (73%). Analysis of global and regional brain measures (SLIP cohort) showed minimal differences between original and defaced outputs (mean rS = 0.93, all p < 0.0001). AI-generated whole brain and tumor volumes (CBTN cohort) and temporalis muscle metrics (volume, cross-sectional area, centile scores; SLIP cohort) were not significantly affected by image defacing (all rS > 0.9, p < 0.0001).
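The downstream-impact check above compares a measure (e.g., a regional brain volume) computed from original versus defaced images using Spearman rank correlation. A minimal sketch of that comparison, with entirely synthetic "volumes" standing in for the real pipeline outputs, might look like this (the rank-correlation helper assumes distinct values, so no tie correction is needed):

```python
import numpy as np

def spearman_rho(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman correlation computed as the Pearson correlation of ranks
    (valid here because the synthetic values are all distinct, so ties
    need no special handling)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical data: per-subject volumes from original scans, and
# near-identical values from the defaced scans (small perturbation).
rng = np.random.default_rng(0)
orig = rng.uniform(800.0, 1200.0, size=30)       # e.g., volumes in mL
defaced = orig + rng.normal(0.0, 1.0, size=30)   # minimal defacing effect
rho = spearman_rho(orig, defaced)                # close to 1 when ranks agree
```

A rank-based correlation is a sensible choice here because it is insensitive to monotone scaling differences between pipelines and robust to outliers in volume estimates.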
Conclusions: The defacing model demonstrates efficacy in removing facial regions across multiple MRI types and exhibits minimal impact on downstream research usage. A software package with the trained model is freely provided for wider use and further development (pediatric-auto-defacer; https://github.com/d3b-center/pediatric-auto-defacer-public). By offering a solution tailored to pediatric cases and multiple MRI sequences, this defacing tool will expedite research efforts and promote broader adoption of data sharing practices within the neuroscience community.
Abbreviations: AI = artificial intelligence; CBTN = Children's Brain Tumor Network; CSA = cross-sectional area; SLIP = Scans with Limited Imaging Pathology; TMT = temporalis muscle thickness; NIH = National Institutes of Health; LH = left hemisphere; RH = right hemisphere.