Pub Date: 2026-01-09; eCollection Date: 2026-01-01; DOI: 10.1155/ijbi/7436511
Rahma Saad Mohamed, Ahmed Sayed Abd El Bassset, Ahmed A G El-Shahawy
Background: Quantitative computed tomography (CT) plays a crucial role in assessing emphysema severity in chronic obstructive pulmonary disease (COPD). However, variations in CT reconstruction parameters, including slice thickness (ST), kernel selection, field of view (FOV), and reconstruction gaps, can affect emphysema index (EI) quantification, impacting diagnostic accuracy and study comparability.
Objective: This study examines how CT reconstruction parameters influence EI quantification using Hounsfield Unit (HU)-based measurements and the Goddard Score (GS) to refine imaging protocols for emphysema assessment.
Methods: Low-dose CT scans were performed on 31 subjects, with images reconstructed across ST values (0.6-10 mm), kernel settings (Br and Hr series), FOV ranges (250-370 mm), and reconstruction gaps (0.25-3 mm). EI was defined as the percentage of lung volume with attenuation values below -950 HU, while GS provided a semi-quantitative assessment of emphysema severity. Statistical analyses evaluated the effects of reconstruction parameters on EI and GS.
Results: Variations in FOV, kernel selection, and reconstruction gaps had negligible effects on the GS (p > 0.05), suggesting that these parameters do not introduce structural distortions in pulmonary imaging. However, ultra-thin slices (0.6 mm) enhanced the detection of subtle emphysematous changes, slightly increasing GS, though higher image noise may affect interpretation. Additionally, ST significantly influenced EI values due to partial volume effects, with thinner slices yielding lower attenuation values.
Conclusion: These findings confirm the reliability of CT-based emphysema quantification and highlight the importance of optimizing ST to balance sensitivity and image clarity. Standardized imaging protocols and AI-driven texture analysis could further enhance quantitative emphysema assessment, improving disease monitoring and therapeutic decision-making in COPD management.
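The EI definition used in this study (percentage of lung voxels with attenuation below -950 HU) reduces to a thresholded voxel count inside a lung mask. A minimal NumPy sketch, using a hypothetical toy HU array and mask rather than real CT data:

```python
import numpy as np

def emphysema_index(hu_volume, lung_mask, threshold=-950):
    """Percentage of lung voxels with attenuation below `threshold` HU."""
    lung_voxels = hu_volume[lung_mask]
    if lung_voxels.size == 0:
        return 0.0
    return 100.0 * np.count_nonzero(lung_voxels < threshold) / lung_voxels.size

# Toy example: three lung voxels, one of them emphysematous (-980 HU).
hu = np.array([[-980.0, -900.0], [-940.0, 50.0]])
mask = np.array([[True, True], [True, False]])
ei = emphysema_index(hu, mask)  # 1 of 3 lung voxels below -950 HU
```

Partial volume effects enter through `hu`: thicker slices average tissue and air within a voxel, shifting values across the -950 HU cutoff, which is why ST changes the EI.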
Title: The Impact of CT Reconstruction Parameters on Emphysema Index Quantification, HU-Based Measurements, and Goddard Score in COPD Assessment: A Prospective Study.
Journal: International Journal of Biomedical Imaging, vol. 2026, article 7436511
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12789179/pdf/
Breast cancer is a major public health problem and one of the deadliest cancers in women. Currently, the most effective way to combat it is the early detection of breast masses. Mammography is a breast x-ray examination that produces images of the inside of the breast, enabling the detection of possible abnormalities, and computer-aided diagnosis provides significant support in this direction. This work introduces a new computer-aided diagnosis system, MF-BIMFs, that automatically analyzes digital mammograms to discover areas of interest in breast images and offers experts a second opinion. The system combines two steps. The first is image preprocessing based on bidimensional empirical mode decomposition (BEMD), which decomposes each mammographic image into several BIMF modes and a residual. The second extracts features and irregularity properties from the multifractal spectrum of each BIMF and the residual; these spectra give a representation of each mode detailed enough to differentiate healthy from cancerous tissue, and the extracted properties serve as characteristic attributes for objectively distinguishing the two conditions. Classification rates were obtained with an SVM. The experimental results indicate that the BIMF1 mode provided the best classification rate, approximately 97.32%. The approach was applied to real mammographic image data from the Reference Center for Reproductive Health of Kenitra, Morocco (RCRHKM), which contains normal and pathological mammographic images.
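As a hedged illustration of the final classification stage only, the sketch below trains an RBF-kernel SVM with cross-validation on synthetic stand-ins for multifractal-spectrum features; the feature values, class separation, and sample counts are invented for the example and are not taken from the RCRHKM data or the paper's pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical 3-dimensional multifractal features (e.g. spectrum width,
# peak position, asymmetry) for 100 "healthy" and 100 "cancerous" images.
healthy = rng.normal(loc=[0.4, 2.0, 0.0], scale=0.1, size=(100, 3))
cancerous = rng.normal(loc=[0.7, 2.3, 0.2], scale=0.1, size=(100, 3))
X = np.vstack([healthy, cancerous])
y = np.array([0] * 100 + [1] * 100)

# RBF-kernel SVM, 5-fold cross-validation: one classification rate per
# feature set, mirroring how a rate per BIMF mode could be estimated.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
rate = scores.mean()
```

In the study's setting, `X` would hold the multifractal attributes computed on one BIMF (or the residual), and the per-mode rates would be compared to identify the most discriminative mode.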
Title: Classification of Mammography Images Based on Multifractal Analysis of BIMFs.
Authors: Fatima Ghazi, Khalil Ibrahimi, Fouad Ayoub, Aziza Benkuider, Mohamed Zraidi
Pub Date: 2025-12-29; DOI: 10.1155/ijbi/5940783
Journal: International Journal of Biomedical Imaging, vol. 2025, article 5940783
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12752853/pdf/
Pub Date: 2025-12-21; eCollection Date: 2025-01-01; DOI: 10.1155/ijbi/9065572
Mahmud Mohammed, Tulio Fernandez-Medina, Manjunath Rajashekhar, Stephanie Baker, Ernest Jennings
Objectives: To evaluate various segmentation methods used for cone-beam computed tomography (CBCT) images of alveolar bone, assessing their effectiveness and potential benefits in digital workflows for periodontal defect regeneration.
Data: This review adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) Checklist.
Source: A comprehensive literature search was conducted from May 2024 to June 2025 using MeSH terms on PubMed, Scopus, Web of Science, and Medline databases, with publication dates restricted to the last 5 years. The PRISMA guidelines were followed to ensure a systematic review process, and the review protocol was registered with PROSPERO. The QUADAS-2 checklist was used to evaluate the risk of bias in the included studies.
Study selection: The initial search yielded 834 articles, which were systematically filtered down to 23 eligible studies. Deep learning methods, particularly U-Net, were the most frequently employed segmentation techniques. Four studies utilized semi-automated methods, while the remaining studies relied on manual or other segmentation methods. The Dice similarity (DC) index, ranging from 76% to 98%, was the primary metric used to assess segmentation performance.
Conclusions: Significant differences were observed between the segmentation of healthy and defective alveolar bone, underscoring the need to enhance deep learning-based methods. Accurate segmentation of periodontal defects in DICOM images is a crucial first step in the scaffold workflow, as it enables precise assessment of defect morphology and volume. This information directly informs scaffold design, ensuring that the scaffold geometry is tailored to the patient-specific defect.
PROSPERO registration: CRD42024590957.
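The Dice similarity index used as the primary segmentation metric across the included studies is computed from a predicted and a ground-truth binary mask; a minimal NumPy sketch with toy masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity: twice the intersection over the sum of mask sizes."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2x3 masks: 3 voxels each, 2 voxels in common.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
dc = dice_coefficient(pred, truth)  # 2*2 / (3+3) = 0.666...
```

A DC of 76%-98%, as reported above, therefore measures volumetric overlap between an automated alveolar-bone segmentation and a reference (usually manual) one.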
Title: Alveolar Bone Segmentation Methods in Assessing the Effectiveness of Periodontal Defect Regeneration Through Machine Learning of CBCT Data: A Systematic Review.
Journal: International Journal of Biomedical Imaging, vol. 2025, article 9065572
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12752843/pdf/
Pub Date: 2025-12-05; eCollection Date: 2025-01-01; DOI: 10.1155/ijbi/1959442
Ummul Afia Shammi, Talissa Altes, Cody Thornburgh, John P Mugler, Craig H Meyer, Kun Qing, X Eduard E de Lange, Jaime Mata, Robert Thomen
Background: Hyperpolarized gas (HPG) magnetic resonance imaging, recently FDA-approved, offers an innovative approach to evaluating gas distribution and lung function in both adults and children.
Purpose: In this study, we present an algorithm for calculating maps of changes in regional ventilation in asthma, cystic fibrosis, and COPD patients before and after receiving treatment. We validate the results with a radiologist's evaluation for accuracy. Our hypothesis is that the change map would be in congruence with a radiologist's visual examination.
Assessment: Nine asthmatics, six cystic fibrosis patients, and five COPD patients underwent hyperpolarized 3He MRI. N4ITK bias correction, voxel smoothing, and normalization to the 95th-percentile voxel signal value were performed on all images. For calculating regional ventilation change maps, posttreatment images were registered to baseline images, and difference maps were created. Difference-map voxel values of > 60% of the baseline mean signal value were identified as improved, and those of < -60% were identified as worsened. In addition, short-term improvement (STI) was identified where voxels improved at Timepoint 2 but returned to baseline at Timepoint 3. A grading rubric was developed for radiologist scoring that had the following assessment categories: "level of volume discrepancy" and "discrepancy causes" for each ventilation change map.
Results: In 15 of the 20 cases (75% of the data), there was little to no volume disparity between the change map and the radiologists' visual evaluation. Of the remaining cases, two showed moderate volume differences and three showed large ones.
Conclusion: Our regional change maps demonstrated congruence with visual examination and may be a useful tool for clinicians evaluating ventilation changes longitudinally.
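The change-map pipeline in the Assessment section (95th-percentile normalization, difference map, classification at ±60% of the baseline mean signal) can be sketched as follows, assuming registration has already been done upstream and using a toy 1-D signal in place of a 3-D ventilation image:

```python
import numpy as np

def change_map(baseline, followup, threshold=0.60):
    """Label voxels improved (+1), worsened (-1), or unchanged (0).

    Each image is normalized to its own 95th-percentile signal value,
    then the difference is compared against +/- threshold times the
    normalized baseline mean (registration is assumed done upstream).
    """
    b = baseline / np.percentile(baseline, 95)
    f = followup / np.percentile(followup, 95)
    diff = f - b
    cutoff = threshold * b.mean()
    labels = np.zeros(b.shape, dtype=int)
    labels[diff > cutoff] = 1
    labels[diff < -cutoff] = -1
    return labels

# Toy 4-voxel lung: one region improves, one worsens, two are stable.
baseline = np.array([0.2, 0.5, 0.8, 1.0])
followup = np.array([0.9, 0.5, 0.1, 1.0])
labels = change_map(baseline, followup)
```

STI mapping as described above would then compare labels across three timepoints, flagging voxels that are +1 at Timepoint 2 but 0 at Timepoint 3.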
Title: Mapping Regional Changes in Multiple-Timepoint Hyperpolarized Gas Ventilation Images and Validation by Radiologist Score.
Journal: International Journal of Biomedical Imaging, vol. 2025, article 1959442
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12752820/pdf/
Background: Intravoxel incoherent motion (IVIM) magnetic resonance imaging (MRI) is a noncontrast technique estimating diffusion and perfusion parameters via multiple b-values, essential for oncology imaging. However, there is limited collective evidence regarding the efficacy of IVIM in oncology imaging compared to contrast-enhanced (CE) MRI perfusion techniques. This systematic review and meta-analysis compared IVIM's diagnostic accuracy and correlation with CE MRI perfusion techniques.
Methods: Following PRISMA guidelines (PROSPERO-registered), a literature search across five databases (PubMed, Scopus, Embase, Web of Science, and Cochrane Library) was conducted. Diagnostic metrics, including AUC, sensitivity, specificity, and correlation coefficients, were analyzed using a random-effects model, with heterogeneity and publication bias assessed via I² statistics and Egger's test.
Results: Eighteen studies on breast, rectal, and brain cancers were analyzed. For breast cancer, IVIM showed 83.50% sensitivity and 81.24% specificity compared to dynamic contrast-enhanced (DCE) MRI's 88.04% sensitivity and 65.98% specificity. In rectal cancer, IVIM achieved 70.9% sensitivity and 56.2% specificity, outperforming DCE MRI's 58.11% sensitivity and 72.49% specificity. For gliomas, IVIM demonstrated 92.27% sensitivity and 74.06% specificity compared to dynamic susceptibility contrast (DSC) MRI's 95.71% sensitivity and 92.91% specificity. Correlations between IVIM and CE parameters were weak to moderate.
Conclusion: IVIM demonstrated equal or superior diagnostic performance to CE MRI in breast cancer, rectal cancer, and gliomas, offering a noncontrast alternative. However, unclear parameter correlations warrant future studies focusing on IVIM protocol optimization based on perfusion regimes.
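The random-effects pooling and I² heterogeneity assessment named in the Methods can be sketched with a DerSimonian-Laird estimator; the effect sizes and variances below are hypothetical, not values from the included studies:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model.

    Returns the pooled effect, its standard error, and the I^2 statistic
    (percentage of total variation due to between-study heterogeneity).
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return pooled, se, i2

# Three hypothetical studies reporting logit-transformed sensitivities.
pooled, se, i2 = dersimonian_laird([1.6, 1.2, 2.0], [0.04, 0.05, 0.06])
```

In a meta-analysis like this one, the pooled logit estimates would be back-transformed to the summary sensitivities and specificities reported in the Results.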
Title: Diagnostic Efficacy and Correlation of Intravoxel Incoherent Motion (IVIM) and Contrast-Enhanced (CE) MRI Perfusion Parameters in Oncology Imaging: A Systematic Review and Meta-Analysis.
Authors: Abhijith S, Rajagopal Kadavigere, Priya P S, Dharmesh Singh, Priyanka, Tancia Pires, Dileep Kumar, Saikiran Pendem
Pub Date: 2025-11-18; DOI: 10.1155/ijbi/3621023
Journal: International Journal of Biomedical Imaging, vol. 2025, article 3621023
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12646736/pdf/
Pub Date: 2025-10-06; eCollection Date: 2025-01-01; DOI: 10.1155/ijbi/6154285
Ali Basim Mahdi, Zahraa A Mousa Al-Ibraheemi, Zahraa Fadhil Kadhim, Raffef Jabar Abbrahim, Yaqeen Sameer Dhayool, Ghasaq Mankhey Jabbar, Sajjad A Mohammed
Various retinal conditions, such as diabetic macular edema (DME) and choroidal neovascularization (CNV), pose significant risks of visual impairment and vision loss. Early detection through automated, accurate systems can greatly improve outcomes for patients and reduce the workload of medical staff. This study aimed to develop a deep learning-based model for the early detection of retinal diseases from OCT images. We used a publicly available retinal image dataset comprising DME, CNV, drusen, and normal cases. An Inception model was trained and validated, and performance metrics including accuracy, precision, recall, and F1 score were calculated. The proposed model achieved an accuracy of 94.2%, with precision, recall, and F1 scores exceeding 92% across all classes. Statistical analysis demonstrated the robustness of the model across folds. Our findings highlight the potential of AI-powered systems for improving early detection of retinal conditions, paving the way for integration into clinical workflows. Further work should make the model available offline on ophthalmologists' mobile devices to facilitate the diagnostic process and provide better service to patients.
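The reported metrics (accuracy plus per-class precision, recall, and F1) follow directly from predicted and true labels; a minimal NumPy sketch with a hypothetical six-image toy split over the four classes:

```python
import numpy as np

def per_class_metrics(y_true, y_pred, labels):
    """Overall accuracy plus (precision, recall, F1) for each class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    accuracy = np.mean(y_true == y_pred)
    metrics = {}
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[c] = (precision, recall, f1)
    return accuracy, metrics

# Hypothetical labels for six OCT images (not from the study's dataset).
y_true = ["CNV", "DME", "drusen", "normal", "CNV", "DME"]
y_pred = ["CNV", "DME", "normal", "normal", "CNV", "drusen"]
acc, m = per_class_metrics(y_true, y_pred, ["CNV", "DME", "drusen", "normal"])
```

Reporting these per class, as the study does, guards against a high overall accuracy hiding poor performance on a rare class such as drusen.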
Title: AI-Powered Early Detection of Retinal Conditions: A Deep Learning Approach for Diabetic Retinopathy and Beyond.
Journal: International Journal of Biomedical Imaging, vol. 2025, article 6154285
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12517976/pdf/
Tuberculosis (TB), caused by Mycobacterium tuberculosis, is a re-emerging disease that necessitates early and accurate detection. While Ziehl-Neelsen (ZN) staining is effective in highlighting bacterial morphology, automation significantly accelerates the diagnostic workflow. However, detecting TB bacilli, which are typically much smaller than white blood cells (WBCs), in stained images remains a considerable challenge. This study leverages the ZNSM-iDB dataset, which comprises approximately 2000 publicly available images captured using different staining methods. Notably, 800 images are fully stained with the ZN technique. We propose a novel two-stage pipeline where a RetinaNet-based object detection model functions as a preprocessing step to localize and isolate TB bacilli and WBCs from ZN-stained images. To address the challenges posed by low spatial resolution and background interference, the RetinaNet model is enhanced with dilated convolutional layers to improve fine-grained feature extraction. This approach not only facilitates accurate detection of small objects but also achieves an average precision (AP) of 0.94 for WBCs and 0.97 for TB bacilli. Following detection, a patch-based convolutional neural network (CNN) classifier is employed to classify the extracted regions. The proposed CNN model achieves a remarkable classification accuracy of 93%, outperforming other traditional CNN architectures. This framework demonstrates a robust and scalable solution for automated TB screening using ZN-stained microscopy images.
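The average precision (AP) figures quoted for the detector summarize a ranked list of detections against ground-truth objects. A minimal NumPy sketch of the "every point" AP computation; the confidences and match flags below are hypothetical, not values from the study:

```python
import numpy as np

def average_precision(scores, is_true_positive, n_ground_truth):
    """Average precision for one class from ranked detections.

    `scores` are detection confidences, `is_true_positive` flags whether
    each detection matched a ground-truth object (e.g. by IoU), and
    `n_ground_truth` is the total number of ground-truth objects.
    """
    order = np.argsort(scores)[::-1]                 # rank by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / np.arange(1, len(tp) + 1)
    # Sum precision at each recall step reached by a true positive.
    return float(np.sum(precision * tp) / n_ground_truth)

# Three detections of TB bacilli; two match the two ground-truth objects.
ap = average_precision([0.9, 0.8, 0.7], [1, 0, 1], n_ground_truth=2)
```

An AP near 0.97, as reported for bacilli, means nearly all ground-truth bacilli are recovered with few high-confidence false positives ahead of them in the ranking.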
"Optimizing TB Bacteria Detection Efficiency: Utilizing RetinaNet-Based Preprocessing Techniques for Small Image Patch Classification." Shwetha V, Barnini Banerjee, Vijaya Laxmi, Priya Kamath. International Journal of Biomedical Imaging, 2025:3559598. DOI: 10.1155/ijbi/3559598. Published 2025-10-05. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12515574/pdf/
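The dilated convolutions this paper adds to RetinaNet can be illustrated with a minimal NumPy sketch (this is an illustrative 1D example, not the authors' implementation): dilation inserts gaps between kernel taps, enlarging the receptive field with no extra parameters, which is what makes fine-grained features of small objects like bacilli cheaper to capture.

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1D convolution with a dilated kernel."""
    k = len(kernel)
    span = (k - 1) * dilation + 1              # effective receptive field
    out = np.empty(len(signal) - span + 1)
    for i in range(len(out)):
        # Sample the input with (dilation - 1)-sized gaps between taps.
        out[i] = sum(kernel[j] * signal[i + j * dilation] for j in range(k))
    return out

signal = np.arange(10, dtype=float)            # [0, 1, ..., 9]
kernel = np.array([1.0, 1.0, 1.0])

plain = dilated_conv1d(signal, kernel, dilation=1)    # receptive field 3
dilated = dilated_conv1d(signal, kernel, dilation=2)  # receptive field 5
```

With the same three-tap kernel, dilation=2 sees a five-sample window per output, so the output is shorter but each value summarizes a wider context.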
Pub Date: 2025-08-22. eCollection Date: 2025-01-01. DOI: 10.1155/ijbi/6656059
Qianqian Ye, Yuhu Shi, Shunjie Guo
Accurate brain tumor segmentation is essential for clinical decision-making, yet remains difficult to automate. Key obstacles include the small volume of lesions, their morphological diversity, poorly defined MRI boundaries, and nonuniform intensity profiles. Furthermore, while traditional segmentation approaches often focus on intralayer relevance, they frequently underutilize the rich semantic correlations between features extracted from adjacent network layers. Concurrently, classical attention mechanisms, while effective for highlighting salient regions, often lack explicit mechanisms for directing feature refinement along specific dimensions. To address these problems, this paper presents CAGs-Net, a novel network that progressively constructs semantic dependencies between neighboring layers in the UNet hierarchy, enabling effective integration of local and global contextual information. Channel attention gates are embedded within this adjacent-context network; they strategically fuse shallow appearance features with deep semantic information, leveraging channel-wise relationships to refine features by recalibrating voxel spatial responses. In addition, a hybrid loss combining generalized Dice loss and binary cross-entropy loss is employed to mitigate the severe class imbalance inherent in lesion segmentation. CAGs-Net thus uniquely combines adjacent-context modeling with channel attention gates to enhance feature refinement, and experimental results demonstrate that it outperforms traditional UNet-based methods and several state-of-the-art methods for brain tumor image segmentation.
"CAGs-Net: A Novel Adjacent-Context Network With Channel Attention Gate for 3D Brain Tumor Image Segmentation." Qianqian Ye, Yuhu Shi, Shunjie Guo. International Journal of Biomedical Imaging, 2025:6656059. DOI: 10.1155/ijbi/6656059. Published 2025-08-22. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12396912/pdf/
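The channel-recalibration idea behind the gates can be sketched in a few lines of NumPy. This is a parameter-free illustration only: the actual CAGs-Net gate presumably learns its weights, so the pool-then-sigmoid path below is an assumption, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention_gate(shallow, deep):
    """Fuse shallow appearance features with deep semantics via channel gating.

    Both inputs have shape (C, D, H, W). A per-channel descriptor is pooled
    from the deep features, squashed to (0, 1), and used to rescale the
    corresponding shallow channel.
    """
    desc = deep.mean(axis=(1, 2, 3))            # squeeze: one value per channel
    gates = sigmoid(desc)                       # per-channel weight in (0, 1)
    return shallow * gates[:, None, None, None] # recalibrate shallow channels

rng = np.random.default_rng(0)
shallow = rng.standard_normal((4, 2, 8, 8))     # (C, D, H, W) shallow features
deep = rng.standard_normal((4, 2, 8, 8))        # (C, D, H, W) deep features
gated = channel_attention_gate(shallow, deep)
```

Because each gate lies strictly between 0 and 1, the operation can only attenuate channels, never amplify them; a learned variant would add fully connected layers between the pooling and the sigmoid.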
Pub Date: 2025-08-11. eCollection Date: 2025-01-01. DOI: 10.1155/ijbi/3309822
Xu Wang, Patrice Monkam, Bonan Zhao, Shouliang Qi, He Ma, Long Huang, Wei Qian
Automated lesion segmentation in ultrasound (US) images based on deep learning (DL) approaches plays a crucial role in disease diagnosis and treatment. However, the successful implementation of these approaches depends on large-scale, diverse annotated datasets, which are tedious and expertise-intensive to obtain. Although methods like generative adversarial networks (GANs) can help address sample scarcity, they are often associated with complex training processes and high computational demands, which can limit their practicality, especially in resource-constrained scenarios. Therefore, this study explores new solutions to the challenge of limited annotated samples in automated lesion delineation in US images. Specifically, we propose five distinct mixed sample augmentation strategies and assess their effectiveness using four deep segmentation models for the delineation of two lesion types: breast and thyroid lesions. Extensive experimental analyses indicate that the effectiveness of these augmentation strategies is strongly influenced by both the lesion type and the model architecture. When appropriately selected, these strategies yield substantial performance improvements, with the Dice and Jaccard indices increasing by up to 37.95% and 36.32% for breast lesions and 14.59% and 13.01% for thyroid lesions, respectively. These improvements highlight the potential of the proposed strategies as a reliable way to address data scarcity in automated lesion segmentation tasks. Furthermore, the study emphasizes the critical importance of carefully selecting data augmentation approaches, offering valuable insights into how their strategic application can significantly enhance DL model performance.
"Enhancing Lesion Segmentation in Ultrasound Images: The Impact of Targeted Data Augmentation Strategies." Xu Wang, Patrice Monkam, Bonan Zhao, Shouliang Qi, He Ma, Long Huang, Wei Qian. International Journal of Biomedical Imaging, 2025:3309822. DOI: 10.1155/ijbi/3309822. Published 2025-08-11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12360876/pdf/
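The abstract does not name the five mixed sample augmentation strategies, but mixup-style blending is one common member of that family. Below is a hedged NumPy sketch of mixup adapted to image/mask pairs; treat the `alpha` parameter and the soft-mask choice as assumptions, not the paper's configuration.

```python
import numpy as np

def mixup_pair(img_a, mask_a, img_b, mask_b, alpha=0.4, rng=None):
    """Blend two image/mask training pairs with a Beta-sampled weight.

    Returns a convex combination of the images and (soft) masks,
    producing a new synthetic training sample from two real ones.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)                # mixing weight in (0, 1)
    img = lam * img_a + (1.0 - lam) * img_b
    mask = lam * mask_a + (1.0 - lam) * mask_b  # fractional labels
    return img, mask, lam

rng = np.random.default_rng(42)
img_a, mask_a = np.zeros((8, 8)), np.zeros((8, 8))
img_b, mask_b = np.ones((8, 8)), np.ones((8, 8))
img, mask, lam = mixup_pair(img_a, mask_a, img_b, mask_b, rng=rng)
```

The soft masks pair naturally with a Dice or cross-entropy loss that accepts fractional targets; alternatives such as CutMix would paste a rectangular region instead of blending globally.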
Pub Date: 2025-08-01. eCollection Date: 2025-01-01. DOI: 10.1155/ijbi/2149042
Rafiqul Islam, Sazzad Hossain
Brain tumors are complex clinical lesions with diverse morphological characteristics, making accurate segmentation from MRI scans a challenging task. Manual segmentation by radiologists is time-consuming and susceptible to human error. Consequently, automated approaches are expected to accurately delineate tumor boundaries and quantify tumor burden, addressing these challenges efficiently. The presented work integrates a convolutional block attention module (CBAM) into a deep learning architecture to enhance the accuracy of MRI-based brain tumor segmentation. The network is built upon a VGG19-based U-Net model, augmented with depthwise and pointwise convolutions to improve feature extraction and processing efficiency during brain tumor segmentation. Furthermore, the proposed framework enhances segmentation precision while simultaneously incorporating tumor area measurement, making it a comprehensive tool for early-stage tumor analysis. Several overlap-based assessments are used to evaluate the model's segmentation performance; these metrics measure the overlap between predicted tumor masks and ground-truth annotations, providing information on the segmentation algorithm's accuracy and reliability. Following segmentation, a new approach computes the extent of the segmented tumor areas in MRI scans: the pixels within each segmented tumor mask are counted, and the count is multiplied by the per-pixel area (or per-voxel volume). The computed tumor areas offer quantifiable data for further investigation and clinical interpretation. In general, the proposed methodology is projected to improve segmentation accuracy, efficiency, and clinical relevance compared to existing methods, leading to better diagnosis, treatment planning, and monitoring of patients with brain tumors.
"Enhanced Brain Tumor Segmentation Using CBAM-Integrated Deep Learning and Area Quantification." Rafiqul Islam, Sazzad Hossain. International Journal of Biomedical Imaging, 2025:2149042. DOI: 10.1155/ijbi/2149042. Published 2025-08-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12334286/pdf/
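The area quantification step described in the abstract, counting mask pixels and multiplying by per-pixel area, can be sketched directly. The `pixel_spacing_mm` values below stand in for spacing read from the image header and are purely illustrative.

```python
import numpy as np

def tumor_area_mm2(mask, pixel_spacing_mm=(1.0, 1.0)):
    """Physical area of a binary segmentation mask.

    mask: 2D 0/1 (or boolean) array for one MRI slice.
    pixel_spacing_mm: (row, col) spacing, normally taken from the header.
    Area = pixel count * per-pixel area.
    """
    per_pixel_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return int(mask.sum()) * per_pixel_area

mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1                                        # 4 tumor pixels
area = tumor_area_mm2(mask, pixel_spacing_mm=(0.5, 0.5))  # 4 * 0.25 mm^2
```

The volumetric analogue simply multiplies the voxel count by all three spacings (row, column, and slice thickness).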