Greater benefits of deep learning-based computer-aided detection systems for finding small signals in 3D volumetric medical images
Devi S Klein, Srijita Karmakar, Aditya Jonnalagadda, Craig K Abbey, Miguel P Eckstein
Journal of Medical Imaging, 11(4), 045501 (published online 2024-07-09)
DOI: 10.1117/1.JMI.11.4.045501
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11232702/pdf/
Citations: 0
Abstract
Purpose: Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors.
Approach: Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC).
Results: The CNN-CADe improved the 3D search for the small microcalcification signal (ΔAUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (ΔAUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (ΔΔAUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (ΔΔAUC = 0.033, p = 0.133).
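The two analyses above — a ΔAUC detection benefit and its correlation with eye-movement exploration — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; all data, means, and the `auc` helper below are made up for demonstration.

```python
import numpy as np

def auc(signal_scores, noise_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    P(signal score > noise score), with ties counted as 0.5."""
    s = np.asarray(signal_scores, dtype=float)[:, None]
    n = np.asarray(noise_scores, dtype=float)[None, :]
    return (s > n).mean() + 0.5 * (s == n).mean()

rng = np.random.default_rng(0)

# Hypothetical confidence ratings: 50 signal-absent trials, and 50
# signal-present trials searched without and with CADe marks.
noise = rng.normal(0.0, 1.0, 50)
unaided = rng.normal(1.0, 1.0, 50)   # search without CADe
aided = rng.normal(1.6, 1.0, 50)     # search with CNN-CADe bounding boxes

delta_auc = auc(aided, noise) - auc(unaided, noise)
print(f"dAUC = {delta_auc:.3f}")

# Individual-differences analysis: correlate each observer's search
# coverage (fraction of the volume explored with eye movements) with
# that observer's CADe benefit. Values here are synthetic, built so
# that low explorers benefit most, mirroring the reported negative r.
coverage = rng.uniform(0.2, 0.9, 16)                      # 16 observers
benefit = 0.15 - 0.12 * coverage + rng.normal(0, 0.01, 16)
r = np.corrcoef(coverage, benefit)[0, 1]
print(f"r = {r:.3f}")
```

With synthetic data constructed this way, ΔAUC comes out positive and r negative, qualitatively matching the pattern reported in the Results; the specific values depend on the random seed and carry no meaning.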
Conclusion: The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.
Journal Description
JMI covers fundamental and translational research, as well as applications, focused on medical imaging, which continue to yield physical and biomedical advancements in the early detection, diagnostics, and therapy of disease, as well as in the understanding of normal anatomy and physiology.

The scope of JMI includes:
- Imaging physics
- Tomographic reconstruction algorithms (such as those in CT and MRI)
- Image processing and deep learning
- Computer-aided diagnosis and quantitative image analysis
- Visualization and modeling
- Picture archiving and communication systems (PACS)
- Image perception and observer performance
- Technology assessment
- Ultrasonic imaging
- Image-guided procedures
- Digital pathology
- Biomedical applications of biomedical imaging

JMI allows for the peer-reviewed communication and archiving of scientific developments, translational and clinical applications, reviews, and recommendations for the field.