Pub Date: 2024-03-01 | Epub Date: 2024-03-19 | DOI: 10.1117/1.JMI.11.2.024502
Tal Zimbalist, Ronnie Rosen, Keren Peri-Hanania, Yaron Caspi, Bar Rinott, Carmel Zeltser-Dekel, Eyal Bercovich, Yonina C Eldar, Shai Bagon
Purpose: The diagnosis of primary bone tumors is challenging as the initial complaints are often non-specific. The early detection of bone cancer is crucial for a favorable prognosis. Lesions may be found incidentally on radiographs obtained for other reasons, but these early indications are often missed. We propose an automatic algorithm to detect bone lesions in conventional radiographs to facilitate early diagnosis. Detecting lesions in such radiographs is challenging. First, the prevalence of bone cancer is very low; any method must show high precision to avoid a prohibitive number of false alarms. Second, radiographs taken in health maintenance organizations (HMOs) or emergency departments (EDs) suffer from inherent diversity due to different X-ray machines, technicians, and imaging protocols. This diversity poses a major challenge to any automatic analysis method.
Approach: We propose training an off-the-shelf object detection algorithm to detect lesions in radiographs. The novelty of our approach stems from a dedicated preprocessing stage that directly addresses the diversity of the data. The preprocessing consists of self-supervised region-of-interest detection using vision transformer (ViT), and a foreground-based histogram equalization for contrast enhancement to relevant regions only.
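The foreground-based histogram equalization described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes an 8-bit radiograph and a boolean foreground mask (e.g., produced by the self-supervised ViT ROI stage) with at least two distinct foreground intensities:

```python
import numpy as np

def foreground_histogram_equalization(image, mask):
    """Equalize an 8-bit radiograph using the foreground histogram only.

    image: 2D uint8 array; mask: boolean array marking the foreground
    region. Background pixels are left untouched, so contrast is spent
    on the anatomically relevant region instead of air/collimator areas.
    """
    fg = image[mask]
    hist = np.bincount(fg, minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    # build a lookup table mapping foreground intensities to [0, 255]
    lut = np.round(
        np.clip(cdf - cdf_min, 0, None) / (cdf[-1] - cdf_min) * 255
    ).astype(np.uint8)
    out = image.copy()
    out[mask] = lut[fg]  # remap foreground; background unchanged
    return out
```

The key difference from plain histogram equalization is that the cumulative distribution is computed over foreground pixels only, so large uniform background regions cannot compress the dynamic range of the anatomy.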
Results: We evaluate our method via a retrospective study that analyzes bone tumors on radiographs acquired from January 2003 to December 2018 under diverse acquisition protocols. Our method obtains 82.43% sensitivity at a 1.5% false-positive rate and surpasses existing preprocessing methods. For lesion detection, our method achieves 82.5% accuracy and an IoU of 0.69.
Conclusions: The proposed preprocessing method enables effectively coping with the inherent diversity of radiographs acquired in HMOs and EDs.
Title: Detecting bone lesions in X-ray under diverse acquisition conditions. Journal of Medical Imaging, 11(2), 024502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10950029/pdf/
Pub Date: 2024-03-01 | DOI: 10.1117/1.JMI.11.2.026001
Abdullah Al-Hayali, Amin Komeili, Azar Azad, Paul Sathiadoss, Nicola Schieda, Eranga Ukwatta
Purpose: Diagnostic performance of prostate MRI depends on high-quality imaging. Prostate MRI quality is inversely proportional to the amount of rectal gas and distention. Early detection of poor-quality MRI may enable intervention to remove gas or exam rescheduling, saving time. We developed a machine-learning-based method that predicts the quality of yet-to-be-acquired MRI images solely from the MRI rapid localizer sequence, which can be acquired in a few seconds.
Approach: The dataset consists of 213 (147 for training and 64 for testing) prostate sagittal T2-weighted (T2W) MRI localizer images and rectal content, manually labeled by an expert radiologist. Each MRI localizer contains seven two-dimensional (2D) slices of the patient, accompanied by manual segmentations of the rectum for each slice. Cascaded and end-to-end deep learning models were used to predict the quality of yet-to-be-acquired T2W, DWI, and apparent diffusion coefficient (ADC) MRI images. Predictions were compared to quality scores determined by the experts using the area under the receiver operating characteristic curve and the intra-class correlation coefficient.
Results: In the test set of 64 patients, optimal versus suboptimal exams occurred in 95.3% (61/64) versus 4.7% (3/64) for T2W, 90.6% (58/64) versus 9.4% (6/64) for DWI, and 89.1% (57/64) versus 10.9% (7/64) for ADC. The best-performing segmentation model was a 2D U-Net with a ResNet-34 encoder and ImageNet weights. The best-performing classifier was the radiomics-based classifier.
Conclusions: A radiomics-based classifier applied to localizer images achieves accurate prediction of subsequent image quality for T2W, DWI, and ADC prostate MRI sequences.
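The area under the receiver operating characteristic curve used for evaluation can be computed directly via the Mann–Whitney formulation: the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch with made-up scores (not the study's data):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U formulation); ties count one half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # pairwise comparisons between every positive and every negative
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# example: two negatives, two positives
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```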
Title: Machine learning based prediction of image quality in prostate MRI using rapid localizer images. Journal of Medical Imaging, 11(2), 026001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10905647/pdf/
Pub Date: 2024-03-01 | Epub Date: 2024-04-24 | DOI: 10.1117/1.JMI.11.2.024013
Christiane Posselt, Mehmet Yigit Avci, Mehmet Yigitsoy, Patrick Schuenke, Christoph Kolbitsch, Tobias Schaeffter, Stefanie Remmele
Purpose: To provide a simulation framework for routine neuroimaging test data, which allows for "stress testing" of deep segmentation networks against acquisition shifts that commonly occur in clinical practice for T2 weighted (T2w) fluid-attenuated inversion recovery magnetic resonance imaging protocols.
Approach: The approach simulates "acquisition shift derivatives" of MR images based on MR signal equations. Experiments comprise validation of the simulated images against real MR scans and example stress tests on state-of-the-art multiple sclerosis lesion segmentation networks, exploring a generic model function that describes the F1 score as a function of the contrast-affecting sequence parameters echo time (TE) and inversion time (TI).
Results: The differences between real and simulated images range up to 19% in gray and white matter for extreme parameter settings. For the segmentation networks under test, the F1 score dependency on TE and TI can be well described by quadratic model functions (R² > 0.9). The coefficients of the model functions indicate that changes in TE have more influence on model performance than changes in TI.
Conclusions: We show that these deviations are within the range of values that may be caused by erroneous or individual differences in relaxation times, as described in the literature. The coefficients of the F1 model function allow for a quantitative comparison of the influences of TE and TI. Limitations arise mainly from tissues with a low baseline signal (like cerebrospinal fluid) and when the protocol contains contrast-affecting measures that cannot be modeled due to missing information in the DICOM header.
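A quadratic model function of the kind described above can be fitted to stress-test results by ordinary least squares. The sketch below uses a simple parameterization, F1 ≈ c0 + c1·TE + c2·TI + c3·TE² + c4·TI², on synthetic data; the authors' exact model form may differ:

```python
import numpy as np

def fit_quadratic_f1_model(te, ti, f1):
    """Least-squares fit of F1 ~ c0 + c1*TE + c2*TI + c3*TE^2 + c4*TI^2.
    Returns the coefficient vector and the R^2 of the fit."""
    X = np.column_stack([np.ones_like(te), te, ti, te**2, ti**2])
    coef, *_ = np.linalg.lstsq(X, f1, rcond=None)
    resid = f1 - X @ coef
    r2 = 1.0 - np.sum(resid**2) / np.sum((f1 - f1.mean())**2)
    return coef, r2

# synthetic F1 surface peaking at TE=100 ms, TI=2500 ms (illustrative values)
te, ti = np.meshgrid(np.linspace(80, 120, 5), np.linspace(2000, 3000, 5))
te, ti = te.ravel(), ti.ravel()
f1 = 0.9 - 1e-4 * (te - 100)**2 - 1e-8 * (ti - 2500)**2
coef, r2 = fit_quadratic_f1_model(te, ti, f1)
```

On real stress-test outputs, comparing the magnitudes of the TE and TI coefficients (on comparable parameter scales) gives the kind of quantitative sensitivity comparison the abstract describes.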
Title: Simulation of acquisition shifts in T2 weighted fluid-attenuated inversion recovery magnetic resonance images to stress test artificial intelligence segmentation networks. Journal of Medical Imaging, 11(2), 024013. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11042016/pdf/
Pub Date: 2024-03-01 | Epub Date: 2024-04-17 | DOI: 10.1117/1.JMI.11.2.024011
Michael E Kim, Chenyu Gao, Leon Y Cai, Qi Yang, Nancy R Newlin, Karthik Ramadass, Angela Jefferson, Derek Archer, Niranjana Shashikumar, Kimberly R Pechman, Katherine A Gifford, Timothy J Hohman, Lori L Beason-Held, Susan M Resnick, Stefan Winzeck, Kurt G Schilling, Panpan Zhang, Daniel Moyer, Bennett A Landman
Purpose: Diffusion tensor imaging (DTI) is a magnetic resonance imaging technique that provides unique information about white matter microstructure in the brain but is susceptible to confounding effects introduced by scanner or acquisition differences. ComBat is a leading approach for addressing these site biases. However, despite its frequent use for harmonization, ComBat's robustness to site dissimilarities and overall cohort size has not yet been evaluated for DTI.
Approach: As a baseline, we match N = 358 participants from two sites to create a "silver standard" that simulates a cohort for multi-site harmonization. Across sites, we harmonize mean fractional anisotropy and mean diffusivity, calculated using participant DTI data, for the regions of interest defined by the JHU EVE-Type III atlas. We bootstrap 10 iterations at 19 levels of total sample size, 10 levels of sample size imbalance between sites, and 6 levels of mean age difference between sites to quantify (i) β_AGE, the linear regression coefficient of the relationship between FA and age; (ii) γ̂*_sf, the ComBat-estimated site-shift; and (iii) δ̂*_sf, the ComBat-estimated site-scaling. We characterize the reliability of ComBat by evaluating the root mean squared error in these three metrics and examine if there is a correlation between the reliability of ComBat and a violation of assumptions.
Results: ComBat remains well behaved for β_AGE when N > 162 and when the mean age difference is less than 4 years. The assumptions of the ComBat model regarding the normality of residual distributions are not violated as the model becomes unstable.
Conclusion: Prior to harmonization of DTI data with ComBat, the input cohort should be examined for size and covariate distributions of each site. Direct assessment of residual distributions is less informative on stability than bootstrap analysis. We caution against using ComBat in situations that do not conform to the above thresholds.
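The site-shift (γ̂*) and site-scale (δ̂*) quantities ComBat estimates can be sketched with a stripped-down location/scale harmonization — regress out the biological covariate, then remove each site's residual shift and scale. This omits ComBat's empirical-Bayes shrinkage and standardization details; variable names and data are illustrative:

```python
import numpy as np

def harmonize_location_scale(values, sites, age):
    """Simplified per-site location/scale harmonization (ComBat without
    the empirical-Bayes shrinkage step). Preserves the age effect while
    removing each site's additive shift and multiplicative scale."""
    X = np.column_stack([np.ones_like(age), age])
    beta, *_ = np.linalg.lstsq(X, values, rcond=None)  # age model
    resid = values - X @ beta
    pooled_sd = resid.std(ddof=1)
    out = values.copy()
    for s in np.unique(sites):
        m = sites == s
        gamma = resid[m].mean()       # site-shift estimate
        delta = resid[m].std(ddof=1)  # site-scale estimate
        out[m] = X[m] @ beta + (resid[m] - gamma) / delta * pooled_sd
    return out

# two simulated sites with identical age ranges; site 1 shifted by +0.1
age = np.tile(np.linspace(20, 60, 50), 2)
sites = np.repeat([0, 1], 50)
values = 0.5 - 0.002 * age + 0.1 * sites + 0.01 * np.sin(np.arange(100))
harmonized = harmonize_location_scale(values, sites, age)
```

The cohort-size caution in the conclusion applies directly here: with few subjects per site, gamma and delta are noisy estimates, which is what full ComBat's empirical-Bayes pooling is designed to stabilize.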
Title: Empirical assessment of the assumptions of ComBat with diffusion tensor imaging. Journal of Medical Imaging, 11(2), 024011. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11034156/pdf/
Pub Date: 2024-03-01 | Epub Date: 2024-04-08 | DOI: 10.1117/1.JMI.11.2.024009
Rahul Pemmaraju, Gayoung Kim, Lina Mekki, Daniel Y Song, Junghoon Lee
Purpose: Segmentation of the prostate and surrounding organs at risk from computed tomography is required for radiation therapy treatment planning. We propose an automatic two-step deep learning-based segmentation pipeline that consists of an initial multi-organ segmentation network for organ localization followed by organ-specific fine segmentation.
Approach: Initial segmentation of all target organs is performed using a hybrid convolutional-transformer model, axial cross-attention UNet. The output from this model allows for region of interest computation and is used to crop tightly around individual organs for organ-specific fine segmentation. Information from this network is also propagated to the fine segmentation stage through an image enhancement module, highlighting regions of interest in the original image that might be difficult to segment. Organ-specific fine segmentation is performed on these cropped and enhanced images to produce the final output segmentation.
Results: We apply the proposed approach to segment the prostate, bladder, rectum, seminal vesicles, and femoral heads from male pelvic computed tomography (CT). When tested on a held-out test set of 30 images, our two-step pipeline outperformed other deep learning-based multi-organ segmentation algorithms, achieving average dice similarity coefficients (DSC) of 0.836 ± 0.071 (prostate), 0.947 ± 0.038 (bladder), 0.828 ± 0.057 (rectum), 0.724 ± 0.101 (seminal vesicles), and 0.933 ± 0.020 (femoral heads).
Conclusions: Our results demonstrate that a two-step segmentation pipeline with initial multi-organ segmentation and additional fine segmentation can delineate male pelvic CT organs well. The utility of this additional layer of fine segmentation is most noticeable in challenging cases, as our two-step pipeline produces noticeably more accurate and less erroneous results compared to other state-of-the-art methods on such images.
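The dice similarity coefficient used to report these results has a short standard definition, DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B; a minimal sketch:

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient for two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

DSC weights the overlap against the average mask size, which is why small structures such as the seminal vesicles tend to score lower than large ones like the bladder even at similar boundary accuracy.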
Title: Cascaded cross-attention transformers and convolutional neural networks for multi-organ segmentation in male pelvic computed tomography. Journal of Medical Imaging, 11(2), 024009. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11001270/pdf/
Pub Date: 2024-03-01 | Epub Date: 2024-04-03 | DOI: 10.1117/1.JMI.11.2.024504
Karen Drukker, Berkman Sahiner, Tingting Hu, Grace Hyun Kim, Heather M Whitney, Natalie Baughan, Kyle J Myers, Maryellen L Giger, Michael McNitt-Gray
Purpose: The Medical Imaging and Data Resource Center (MIDRC) was created to facilitate medical imaging machine learning (ML) research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the coronavirus disease 2019 pandemic and beyond. The purpose of this work was to create a publicly available metrology resource to assist researchers in evaluating the performance of their medical image analysis ML algorithms.
Approach: An interactive decision tree, called MIDRC-MetricTree, has been developed, organized by the type of task that the ML algorithm was trained to perform. The criteria for this decision tree were that (1) users can select information such as the type of task, the nature of the reference standard, and the type of the algorithm output and (2) based on the user input, recommendations are provided regarding appropriate performance evaluation approaches and metrics, including literature references and, when possible, links to publicly available software/code as well as short tutorial videos.
Results: Five types of tasks were identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event (TTE) analysis, and (e) estimation. As an example, the classification branch of the decision tree includes two-class (binary) and multiclass classification tasks and provides suggestions for methods, metrics, software/code recommendations, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output and for reference standards with negligible or non-negligible variability and unreliability.
Conclusions: The publicly available decision tree is a resource to assist researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, TTE, and estimation tasks.
Title: MIDRC-MetricTree: a decision tree-based tool for recommending performance metrics in artificial intelligence-assisted medical image analysis. Journal of Medical Imaging, 11(2), 024504. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10990563/pdf/
Pub Date: 2024-03-01. Epub Date: 2024-03-23. DOI: 10.1117/1.JMI.11.2.024005
Ahmad Qasem, Genggeng Qin, Zhiguo Zhou
Purpose: The objective of this study was to develop a fully automatic mass segmentation method called AMS-U-Net for digital breast tomosynthesis (DBT), a popular breast cancer screening imaging modality. The aim was to address the challenges posed by the increasing number of slices in DBT, which leads to higher mass contouring workload and decreased treatment efficiency.
Approach: The study used 50 slices from different DBT volumes for evaluation. The AMS-U-Net approach consisted of four stages: image pre-processing, AMS-U-Net training, image segmentation, and post-processing. The model performance was evaluated by calculating the true positive ratio (TPR), false positive ratio (FPR), F-score, intersection over union (IoU), and 95% Hausdorff distance (pixels) as they are appropriate for datasets with class imbalance.
Results: The model achieved a TPR of 0.911, an FPR of 0.003, an F-score of 0.911, an IoU of 0.900, and a 95% Hausdorff distance of 5.82 pixels.
Conclusions: The AMS-U-Net model demonstrated impressive visual and quantitative results, achieving high accuracy in mass segmentation without the need for human interaction. This capability has the potential to significantly increase clinical efficiency and workflow in DBT for breast cancer screening.
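The overlap metrics reported above can be computed directly from a pair of binary masks. The sketch below is not the authors' evaluation code; it assumes masks flattened to 0/1 sequences of equal length.

```python
# Minimal sketch of the class-imbalance-aware metrics reported for AMS-U-Net.
def segmentation_metrics(pred, truth):
    """TPR, FPR, F-score, and IoU for binary masks given as flat 0/1 sequences."""
    tp = sum(p and t for p, t in zip(pred, truth))          # predicted 1, truth 1
    fp = sum(p and not t for p, t in zip(pred, truth))      # predicted 1, truth 0
    fn = sum(not p and t for p, t in zip(pred, truth))      # predicted 0, truth 1
    tn = sum(not p and not t for p, t in zip(pred, truth))  # predicted 0, truth 0
    tpr = tp / (tp + fn)                 # sensitivity / recall
    fpr = fp / (fp + tn)
    f_score = 2 * tp / (2 * tp + fp + fn)  # equals the Dice coefficient
    iou = tp / (tp + fp + fn)
    return tpr, fpr, f_score, iou
```

On imbalanced masks (mostly background), TPR, F-score, and IoU stay informative while plain accuracy saturates, which is why the study reports this metric set.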
{"title":"AMS-U-Net: automatic mass segmentation in digital breast tomosynthesis via U-Net.","authors":"Ahmad Qasem, Genggeng Qin, Zhiguo Zhou","doi":"10.1117/1.JMI.11.2.024005","DOIUrl":"10.1117/1.JMI.11.2.024005","url":null,"abstract":"<p><strong>Purpose: </strong>The objective of this study was to develop a fully automatic mass segmentation method called AMS-U-Net for digital breast tomosynthesis (DBT), a popular breast cancer screening imaging modality. The aim was to address the challenges posed by the increasing number of slices in DBT, which leads to higher mass contouring workload and decreased treatment efficiency.</p><p><strong>Approach: </strong>The study used 50 slices from different DBT volumes for evaluation. The AMS-U-Net approach consisted of four stages: image pre-processing, AMS-U-Net training, image segmentation, and post-processing. The model performance was evaluated by calculating the true positive ratio (TPR), false positive ratio (FPR), F-score, intersection over union (IoU), and 95% Hausdorff distance (pixels) as they are appropriate for datasets with class imbalance.</p><p><strong>Results: </strong>The model achieved 0.911, 0.003, 0.911, 0.900, 5.82 for TPR, FPR, F-score, IoU, and 95% Hausdorff distance, respectively.</p><p><strong>Conclusions: </strong>The AMS-U-Net model demonstrated impressive visual and quantitative results, achieving high accuracy in mass segmentation without the need for human interaction. 
This capability has the potential to significantly increase clinical efficiency and workflow in DBT for breast cancer screening.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024005"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10960181/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140207950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-01. Epub Date: 2024-03-08. DOI: 10.1117/1.JMI.11.2.024002
Chengyue Wu, David A Hormuth, Ty Easley, Federico Pineda, Gregory S Karczmar, Thomas E Yankeelov
Purpose: Validation of quantitative imaging biomarkers is a challenging task, due to the difficulty in measuring the ground truth of the target biological process. A digital phantom-based framework is established to systematically validate the quantitative characterization of tumor-associated vascular morphology and hemodynamics based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).
Approach: A digital phantom is employed to provide a ground-truth vascular system within which 45 synthetic tumors are simulated. Morphological analysis is performed on high-spatial-resolution DCE-MRI data (spatial/temporal resolution = 30 to 300 μm / 60 s) to determine the accuracy of locating the arterial inputs of tumor-associated vessels (TAVs). Hemodynamic analysis is then performed on the combination of high-spatial-resolution and high-temporal-resolution (spatial/temporal resolution = 60 to 300 μm / 1 to 10 s) DCE-MRI data, determining the accuracy of estimating tumor-associated blood pressure, vascular extraction rate, interstitial pressure, and interstitial flow velocity.
Results: The observed effects of acquisition settings demonstrate that, when optimizing the DCE-MRI protocol for the morphological analysis, increasing the spatial resolution is helpful but not necessary, as the location and arterial input of TAVs can be recovered with high accuracy even with the lowest investigated spatial resolution. When optimizing the DCE-MRI protocol for hemodynamic analysis, increasing the spatial resolution of the images used for vessel segmentation is essential, and the spatial and temporal resolutions of the images used for the kinetic parameter fitting require simultaneous optimization.
Conclusion: An in silico validation framework was generated to systematically quantify the effects of image acquisition settings on the ability to accurately estimate tumor-associated characteristics.
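As a toy analogue (not the paper's phantom framework) of how coarsening spatial resolution erodes thin vascular structure, one can block-downsample a synthetic binary "vessel" mask by majority vote, expand it back, and measure overlap with the original; the grid, vote rule, and factors below are illustrative assumptions.

```python
# Toy resolution experiment: a thin diagonal "vessel" survives fine sampling
# but vanishes once the block size exceeds its width.
def downsample_upsample(mask, f):
    """Block-vote a square binary mask by factor f (ties count as foreground), then expand back."""
    n = len(mask)
    out = [[0] * n for _ in range(n)]
    for bi in range(0, n, f):
        for bj in range(0, n, f):
            block = [mask[i][j] for i in range(bi, bi + f) for j in range(bj, bj + f)]
            vote = 1 if 2 * sum(block) >= len(block) else 0
            for i in range(bi, bi + f):
                for j in range(bj, bj + f):
                    out[i][j] = vote
    return out

def iou(a, b):
    """Intersection over union of two binary masks (lists of lists)."""
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x or y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union

# One-pixel-wide diagonal vessel on an 8x8 grid.
vessel = [[1 if i == j else 0 for j in range(8)] for i in range(8)]
```

Running `iou(vessel, downsample_upsample(vessel, f))` for f = 1, 2, 4 shows the overlap collapsing as the effective voxel size grows past the vessel width, a crude analogue of the resolution effects the study quantifies.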
{"title":"Systematic evaluation of MRI-based characterization of tumor-associated vascular morphology and hemodynamics via a dynamic digital phantom.","authors":"Chengyue Wu, David A Hormuth, Ty Easley, Federico Pineda, Gregory S Karczmar, Thomas E Yankeelov","doi":"10.1117/1.JMI.11.2.024002","DOIUrl":"10.1117/1.JMI.11.2.024002","url":null,"abstract":"<p><strong>Purpose: </strong>Validation of quantitative imaging biomarkers is a challenging task, due to the difficulty in measuring the ground truth of the target biological process. A digital phantom-based framework is established to systematically validate the quantitative characterization of tumor-associated vascular morphology and hemodynamics based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).</p><p><strong>Approach: </strong>A digital phantom is employed to provide a ground-truth vascular system within which 45 synthetic tumors are simulated. Morphological analysis is performed on high-spatial resolution DCE-MRI data (spatial/temporal resolution = 30 to <math><mrow><mn>300</mn><mtext> </mtext><mi>μ</mi><mi>m</mi><mo>/</mo><mn>60</mn><mtext> </mtext><mi>s</mi></mrow></math>) to determine the accuracy of locating the arterial inputs of tumor-associated vessels (TAVs). 
Hemodynamic analysis is then performed on the combination of high-spatial resolution and high-temporal resolution (spatial/temporal resolution = 60 to <math><mrow><mn>300</mn><mtext> </mtext><mi>μ</mi><mi>m</mi><mo>/</mo><mn>1</mn></mrow></math> to 10 s) DCE-MRI data, determining the accuracy of estimating tumor-associated blood pressure, vascular extraction rate, interstitial pressure, and interstitial flow velocity.</p><p><strong>Results: </strong>The observed effects of acquisition settings demonstrate that, when optimizing the DCE-MRI protocol for the morphological analysis, increasing the spatial resolution is helpful but not necessary, as the location and arterial input of TAVs can be recovered with high accuracy even with the lowest investigated spatial resolution. When optimizing the DCE-MRI protocol for hemodynamic analysis, increasing the spatial resolution of the images used for vessel segmentation is essential, and the spatial and temporal resolutions of the images used for the kinetic parameter fitting require simultaneous optimization.</p><p><strong>Conclusion: </strong>An <i>in silico</i> validation framework was generated to systematically quantify the effects of image acquisition settings on the ability to accurately estimate tumor-associated characteristics.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024002"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10921778/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140094911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-01. Epub Date: 2024-04-24. DOI: 10.1117/1.JMI.11.2.024012
Haoli Yin, Rachel Eimen, Daniel Moyer, Audrey K Bowden
Purpose: Specular reflections (SRs) are highlight artifacts commonly found in endoscopy videos that can severely disrupt a surgeon's observation and judgment. Despite numerous attempts to restore SR, existing methods are inefficient and time consuming and can lead to false clinical interpretations. Therefore, we propose the first complete deep-learning solution, SpecReFlow, to detect and restore SR regions from endoscopy video with spatial and temporal coherence.
Approach: SpecReFlow consists of three stages: (1) an image preprocessing stage to enhance contrast, (2) a detection stage to indicate where the SR region is present, and (3) a restoration stage in which we replace SR pixels with an accurate underlying tissue structure. Our restoration approach uses optical flow to seamlessly propagate color and structure from other frames of the endoscopy video.
Results: Comprehensive quantitative and qualitative tests for each stage reveal that our SpecReFlow solution performs better than previous detection and restoration methods. Our detection stage achieves a Dice score of 82.8% and a sensitivity of 94.6%, and our restoration stage successfully incorporates temporal information with spatial information for more accurate restorations than existing techniques.
Conclusions: SpecReFlow is a first-of-its-kind solution that combines temporal and spatial information for effective detection and restoration of SR regions, surpassing previous methods relying on single-frame spatial information. Future work will focus on optimizing SpecReFlow for real-time applications. SpecReFlow is a software-only solution for restoring image content lost due to SR, making it readily deployable in existing clinical settings to improve endoscopy video quality for accurate diagnosis and treatment.
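The flow-guided propagation idea can be shown with a deliberately simplified sketch: a single known global column shift stands in for a dense optical-flow field, and masked "specular" pixels are filled from a neighboring frame. This is an assumption-laden toy, not SpecReFlow itself.

```python
# Toy flow-guided fill: restore masked pixels in `frame` from `ref_frame`,
# assuming frame[i][j] == ref_frame[i][j - shift] (wrap-around motion model).
def propagate_fill(frame, mask, ref_frame, shift):
    """Replace pixels where mask==1 with the ref_frame pixel displaced by `shift` columns."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]  # copy; unmasked pixels untouched
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                src = (j - shift) % w  # where this pixel originated in ref_frame
                out[i][j] = ref_frame[i][src]
    return out
```

In the real method the per-pixel displacement comes from an estimated optical-flow field rather than one global shift, and propagation runs over many frames, but the restoration step is the same: look up each corrupted pixel's color where the flow says it came from.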
{"title":"SpecReFlow: an algorithm for specular reflection restoration using flow-guided video completion.","authors":"Haoli Yin, Rachel Eimen, Daniel Moyer, Audrey K Bowden","doi":"10.1117/1.JMI.11.2.024012","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.024012","url":null,"abstract":"<p><strong>Purpose: </strong>Specular reflections (SRs) are highlight artifacts commonly found in endoscopy videos that can severely disrupt a surgeon's observation and judgment. Despite numerous attempts to restore SR, existing methods are inefficient and time consuming and can lead to false clinical interpretations. Therefore, we propose the first complete deep-learning solution, SpecReFlow, to detect and restore SR regions from endoscopy video with spatial and temporal coherence.</p><p><strong>Approach: </strong>SpecReFlow consists of three stages: (1) an image preprocessing stage to enhance contrast, (2) a detection stage to indicate where the SR region is present, and (3) a restoration stage in which we replace SR pixels with an accurate underlying tissue structure. Our restoration approach uses optical flow to seamlessly propagate color and structure from other frames of the endoscopy video.</p><p><strong>Results: </strong>Comprehensive quantitative and qualitative tests for each stage reveal that our SpecReFlow solution performs better than previous detection and restoration methods. Our detection stage achieves a Dice score of 82.8% and a sensitivity of 94.6%, and our restoration stage successfully incorporates temporal information with spatial information for more accurate restorations than existing techniques.</p><p><strong>Conclusions: </strong>SpecReFlow is a first-of-its-kind solution that combines temporal and spatial information for effective detection and restoration of SR regions, surpassing previous methods relying on single-frame spatial information. Future work will look to optimizing SpecReFlow for real-time applications. 
SpecReFlow is a software-only solution for restoring image content lost due to SR, making it readily deployable in existing clinical settings to improve endoscopy video quality for accurate diagnosis and treatment.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024012"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11042492/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140872009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Mobile infrared slit-light scanner for rapid eye disease screening."
Pub Date: 2024-03-01. DOI: 10.1117/1.JMI.11.2.026003
Neelam Kaushik, Parmanand Sharma, Noriko Himori, Takuro Matsumoto, Takehiro Miya, Toru Nakazawa
Purpose: Timely detection and treatment of visual impairments and age-related eye diseases are essential for maintaining a longer, healthier life. However, the shortage of appropriate medical equipment often impedes early detection. We have developed a portable self-imaging slit-light device utilizing NIR light and a scanning mirror. The objective of our study is to assess the accuracy and compare the performance of our device with conventional nonportable slit-lamp microscopes and anterior segment optical coherence tomography (AS-OCT) for screening and remotely diagnosing eye diseases, such as cataracts and glaucoma, outside of an eye clinic.
Approach: The NIR light provides an advantage as measurements are nonmydriatic and less traumatic for patients. A cross-sectional study involving Japanese adults was conducted. Cataract evaluation was performed using photographs captured by the device. Van Herick grading was assessed from the ratio of peripheral anterior chamber depth to peripheral corneal thickness, along with the iridocorneal angle, measured using ImageJ software.
Results: The correlation coefficient between values obtained by AS-OCT and our fabricated portable scanning slit-light device was notably high. The results indicate that our portable device is as reliable as the conventional nonportable slit-lamp microscope and AS-OCT for screening and evaluating eye diseases.
Conclusions: Our fabricated device matches the functionality of the traditional slit lamp, offering a cost-effective and portable solution. Ideal for remote locations, healthcare facilities, or areas affected by disasters, our scanning slit-light device can provide easy access to initial eye examinations and supports digital eye healthcare initiatives.
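The Van Herick ratio mentioned above can be expressed as a small helper. The function names are hypothetical, and the 0.25 default cutoff (flagging Van Herick grades 1 to 2, where the peripheral chamber depth is at most a quarter of the corneal thickness, as potentially occludable) follows the commonly cited scheme but should be checked against clinical references.

```python
# Hedged sketch of the Van Herick-style screening ratio; names and the
# threshold default are illustrative assumptions, not the authors' code.
def limbal_chamber_ratio(acd, ct):
    """Ratio of peripheral anterior chamber depth to peripheral corneal thickness."""
    return acd / ct

def narrow_angle_suspect(acd, ct, threshold=0.25):
    """Flag potentially occludable angles (Van Herick grades 1-2: ratio <= 1/4)."""
    return limbal_chamber_ratio(acd, ct) <= threshold
```

In the study both measurements were taken from the device's NIR photographs in ImageJ, so the same ratio can be computed directly from pixel distances without absolute calibration, since the units cancel.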
{"title":"Mobile infrared slit-light scanner for rapid eye disease screening.","authors":"Neelam Kaushik, Parmanand Sharma, Noriko Himori, Takuro Matsumoto, Takehiro Miya, Toru Nakazawa","doi":"10.1117/1.JMI.11.2.026003","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.026003","url":null,"abstract":"<p><strong>Purpose: </strong>Timely detection and treatment of visual impairments and age-related eye diseases are essential for maintaining a longer, healthier life. However, the shortage of appropriate medical equipment often impedes early detection. We have developed a portable self-imaging slit-light device utilizing NIR light and a scanning mirror. The objective of our study is to assess the accuracy and compare the performance of our device with conventional nonportable slit-lamp microscopes and anterior segment optical coherence tomography (AS-OCT) for screening and remotely diagnosing eye diseases, such as cataracts and glaucoma, outside of an eye clinic.</p><p><strong>Approach: </strong>The NIR light provides an advantage as measurements are nonmydriatic and less traumatic for patients. A cross-sectional study involving Japanese adults was conducted. Cataract evaluation was performed using photographs captured by the device. Van-Herick grading was assessed by the ratio of peripheral anterior chamber depth to peripheral corneal thickness, in addition to the iridocorneal angle using Image J software.</p><p><strong>Results: </strong>The correlation coefficient between values obtained by AS-OCT, and our fabricated portable scanning slit-light device was notably high. The results indicate that our portable device is equally reliable as the conventional nonportable slit-lamp microscope and AS-OCT for screening and evaluating eye diseases.</p><p><strong>Conclusions: </strong>Our fabricated device matches the functionality of the traditional slit lamp, offering a cost-effective and portable solution. 
Ideal for remote locations, healthcare facilities, or areas affected by disasters, our scanning slit-light device can provide easy access to initial eye examinations and supports digital eye healthcare initiatives.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"026003"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003872/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140870690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}