
Latest publications in International Journal of Biomedical Imaging

The Impact of CT Reconstruction Parameters on Emphysema Index Quantification, HU-Based Measurements, and Goddard Score in COPD Assessment: A Prospective Study.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2026-01-09 eCollection Date: 2026-01-01 DOI: 10.1155/ijbi/7436511
Rahma Saad Mohamed, Ahmed Sayed Abd El Bassset, Ahmed A G El-Shahawy

Background: Quantitative computed tomography (CT) plays a crucial role in assessing emphysema severity in chronic obstructive pulmonary disease (COPD). However, variations in CT reconstruction parameters, such as slice thickness (ST), kernel selection, field of view (FOV), and reconstruction gaps, can affect emphysema index (EI) quantification, impacting diagnostic accuracy and study comparability.

Objective: This study examines how CT reconstruction parameters influence EI quantification using Hounsfield Unit (HU)-based measurements and the Goddard Score (GS) to refine imaging protocols for emphysema assessment.

Methods: Low-dose CT scans were performed on 31 subjects, with images reconstructed using ST (0.6-10 mm), kernel settings (Br and Hr series), FOV ranges (250-370 mm), and reconstruction gaps (0.25-3 mm). EI was defined as the percentage of lung volume with attenuation values below -950 HU, while GS provided a semi-quantitative assessment of emphysema severity. Statistical analyses evaluated the effects of reconstruction parameters on EI and GS.
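The EI definition above reduces to a simple threshold count over lung-mask voxels. A minimal sketch (the function name and sample HU values are illustrative, not from the study):

```python
# Minimal sketch of the emphysema index (EI): percentage of lung-mask
# voxels with attenuation below -950 HU. Illustrative helper, not the
# study's implementation.

def emphysema_index(lung_hu, threshold=-950):
    """EI: % of lung voxels with attenuation below `threshold` HU."""
    if not lung_hu:
        raise ValueError("empty lung mask")
    below = sum(1 for hu in lung_hu if hu < threshold)
    return 100.0 * below / len(lung_hu)

# 3 of these 6 lung voxels fall below -950 HU:
print(emphysema_index([-980, -960, -940, -900, -970, -850]))  # 50.0
```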

Results: Variations in FOV, kernel selection, and reconstruction gaps had negligible effects on the GS (p > 0.05), suggesting that these parameters do not introduce structural distortions in pulmonary imaging. However, ultra-thin slices (0.6 mm) enhanced the detection of subtle emphysematous changes, slightly increasing GS, though higher image noise may affect interpretation. Additionally, ST significantly influenced EI values due to partial volume effects, with thinner slices yielding lower attenuation values.

Conclusion: These findings confirm the reliability of CT-based emphysema quantification and highlight the importance of optimizing ST to balance sensitivity and image clarity. Standardized imaging protocols and AI-driven texture analysis could further enhance quantitative emphysema assessment, improving disease monitoring and therapeutic decision-making in COPD management.

Citations: 0
Classification of Mammography Images Based on Multifractal Analysis of BIMFs.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-29 eCollection Date: 2025-01-01 DOI: 10.1155/ijbi/5940783
Fatima Ghazi, Khalil Ibrahimi, Fouad Ayoub, Aziza Benkuider, Mohamed Zraidi

Breast cancer is a major public health problem and one of the deadliest cancers. At present, the most effective way to combat this scourge is the early detection of breast masses. Mammography, an x-ray examination of the breast, produces images of the internal breast tissue and can reveal possible abnormalities, and computer-aided diagnosis provides significant support in this direction. This work introduces MF-BIMFs, a new computer-aided diagnosis system that automatically analyzes digital mammograms to identify regions of interest in breast images and offers experts a second opinion. The system combines two steps. The first is image preprocessing based on the bidimensional empirical mode decomposition (BEMD) of breast mammographic images, which decomposes each image into several BIMF modes plus a residual. The second extracts features and irregularity properties of the preprocessed images from the multifractal spectrum of each BIMF and the residual, yielding a better representation of each mode and details capable of differentiating healthy from cancerous tissue; these properties serve as characteristic attributes for objectively evaluating the two conditions. Classification rates were obtained with an SVM. The experimental results indicate that the BIMF1 mode provided the best classification rate, approximately 97.32%. The new approach was applied to real mammographic image data from the Reference Center for Reproductive Health of Kenitra, Morocco (RCRHKM), which contains normal and pathological mammographic images.
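The multifractal step above rests on box-counting partition functions. As a heavily simplified, self-contained sketch (pure Python on a synthetic image; not the authors' BEMD/MF-BIMFs implementation), the mass exponent tau(q) can be estimated as the log-log slope of Z(q, s) over box sizes s:

```python
# Simplified multifractal box-counting sketch on synthetic data;
# NOT the paper's BEMD/MF-BIMFs pipeline.
import math

def partition_function(image, box, q):
    """Z(q, s): sum over non-empty boxes of side `box` of (box mass / total mass)^q."""
    rows, cols = len(image), len(image[0])
    total = float(sum(sum(r) for r in image))
    z = 0.0
    for i in range(0, rows, box):
        for j in range(0, cols, box):
            mass = sum(image[x][y]
                       for x in range(i, min(i + box, rows))
                       for y in range(j, min(j + box, cols)))
            if mass > 0:
                z += (mass / total) ** q
    return z

def mass_exponent(image, q, boxes=(1, 2, 4)):
    """tau(q): least-squares slope of log Z(q, s) versus log s."""
    xs = [math.log(b) for b in boxes]
    ys = [math.log(partition_function(image, b, q)) for b in boxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A flat image is monofractal: tau(q) = 2(q - 1), so D_q = tau(q)/(q - 1) = 2.
flat = [[1] * 4 for _ in range(4)]
print(mass_exponent(flat, 2))  # ≈ 2.0
```

Differences in the tau(q) (or D_q) curves between BIMF modes of healthy and cancerous images are the kind of attribute that can then be fed to an SVM classifier.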

Citations: 0
Alveolar Bone Segmentation Methods in Assessing the Effectiveness of Periodontal Defect Regeneration Through Machine Learning of CBCT Data: A Systematic Review.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-21 eCollection Date: 2025-01-01 DOI: 10.1155/ijbi/9065572
Mahmud Mohammed, Tulio Fernandez-Medina, Manjunath Rajashekhar, Stephanie Baker, Ernest Jennings

Objectives: To evaluate various segmentation methods used for cone-beam computed tomography (CBCT) images of alveolar bone, assessing their effectiveness and potential benefits in digital workflows for periodontal defect regeneration.

Data: This review adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) Checklist.

Source: A comprehensive literature search was conducted from May 2024 to June 2025 using MeSH terms on the PubMed, Scopus, Web of Science, and Medline databases, with publication dates restricted to the last 5 years. The PRISMA guidelines were followed to ensure a systematic review process, and the review protocol was registered with PROSPERO. The QUADAS-2 checklist was used to evaluate the risk of bias in the included studies.

Study selection: The initial search yielded 834 articles, which were systematically filtered down to 23 eligible studies. Deep learning methods, particularly U-Net, were the most frequently employed segmentation techniques. Four studies utilized semi-automated methods, while the remaining studies relied on manual or other segmentation methods. The Dice similarity (DC) index, ranging from 76% to 98%, was the primary metric used to assess segmentation performance.
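The Dice similarity index used as the primary metric above can be computed directly from two binary masks. A minimal sketch (illustrative masks, not study data):

```python
# Minimal Dice similarity sketch over two flat 0/1 masks
# (illustrative data, not from the review).

def dice_index(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for 0/1 masks of equal length."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice_index(pred, truth), 3))  # 0.667
```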

Conclusions: Significant differences were observed between the segmentation of healthy and defective alveolar bone, underscoring the need to enhance deep learning-based methods. Accurate segmentation of periodontal defects in DICOM images is a crucial first step in the scaffold workflow, as it enables precise assessment of defect morphology and volume. This information directly informs scaffold design, ensuring that the scaffold geometry is tailored to the patient-specific defect.

PROSPERO registration: CRD42024590957.

Citations: 0
Mapping Regional Changes in Multiple-Timepoint Hyperpolarized Gas Ventilation Images and Validation by Radiologist Score.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-05 eCollection Date: 2025-01-01 DOI: 10.1155/ijbi/1959442
Ummul Afia Shammi, Talissa Altes, Cody Thornburgh, John P Mugler, Craig H Meyer, Kun Qing, X Eduard E de Lange, Jaime Mata, Robert Thomen

Background: Hyperpolarized gas (HPG) magnetic resonance imaging, recently FDA-approved, offers an innovative approach to evaluating gas distribution and lung function in both adults and children.

Purpose: In this study, we present an algorithm for calculating maps of changes in regional ventilation in asthma, cystic fibrosis, and COPD patients before and after receiving treatment. We validate the results against a radiologist's evaluation for accuracy. Our hypothesis was that the change maps would be congruent with a radiologist's visual examination.

Assessment: Nine asthmatics, six cystic fibrosis patients, and five COPD patients underwent hyperpolarized 3He MRI. N4ITK bias correction, voxel smoothing, and normalization to the signal distribution's 95th percentile voxel signal value were performed on images. For calculating regional ventilation change maps, posttreatment images were registered to baseline images, and difference maps were created. Difference-map voxel values of > 60% of the baseline mean signal value were identified as improved, and those of < -60% were identified as worsened. In addition, short-term improvement (STI) was identified where voxels improved at Timepoint 2 but returned to baseline at Timepoint 3. A grading rubric was developed for radiologist scoring that had the following assessment categories: "level of volume discrepancy" and "discrepancy causes" for each ventilation change map.
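The ±60%-of-baseline-mean rule above can be expressed as a per-voxel labeling of the registered difference map. A minimal sketch (the function name and signal values are hypothetical):

```python
# Minimal sketch of the ±60%-of-baseline-mean voxel labeling rule
# (hypothetical helper and values, not the study's code).

def classify_change(baseline, followup, baseline_mean, frac=0.60):
    """Label each registered voxel as improved / worsened / unchanged."""
    labels = []
    for b, f in zip(baseline, followup):
        diff = f - b
        if diff > frac * baseline_mean:
            labels.append("improved")
        elif diff < -frac * baseline_mean:
            labels.append("worsened")
        else:
            labels.append("unchanged")
    return labels

print(classify_change([50, 200, 120], [130, 110, 125], baseline_mean=100))
# ['improved', 'worsened', 'unchanged']
```

Short-term improvement (STI) would then correspond to a voxel labeled "improved" at Timepoint 2 but "unchanged" relative to baseline at Timepoint 3.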

Results: In 15 of the 20 cases (75% of the data), there was little to no volume disparity between the change map and the radiologists' visual evaluation. Of the remaining five cases, two had moderate volume differences and three had large ones.

Conclusion: Our regional change maps demonstrated congruence with visual examination and may be a useful tool for clinicians evaluating ventilation changes longitudinally.

Citations: 0
Diagnostic Efficacy and Correlation of Intravoxel Incoherent Motion (IVIM) and Contrast-Enhanced (CE) MRI Perfusion Parameters in Oncology Imaging: A Systematic Review and Meta-Analysis.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2025-11-18 eCollection Date: 2025-01-01 DOI: 10.1155/ijbi/3621023
Abhijith S, Rajagopal Kadavigere, Priya P S, Dharmesh Singh, Priyanka, Tancia Pires, Dileep Kumar, Saikiran Pendem

Background: Intravoxel incoherent motion (IVIM) magnetic resonance imaging (MRI) is a noncontrast technique estimating diffusion and perfusion parameters via multiple b-values, essential for oncology imaging. However, there is limited collective evidence regarding the efficacy of IVIM in oncology imaging compared to contrast-enhanced (CE) MRI perfusion techniques. This systematic review and meta-analysis compared IVIM's diagnostic accuracy and correlation with CE MRI perfusion techniques.

Methods: Following PRISMA guidelines (PROSPERO-registered), a literature search across five databases (PubMed, Scopus, Embase, Web of Science, and Cochrane Library) was conducted. Diagnostic metrics, including AUC, sensitivity, specificity, and correlation coefficients, were analyzed using a random-effects model, with heterogeneity and publication bias assessed via I² statistics and Egger's test.

Results: Eighteen studies on breast, rectal, and brain cancers were analyzed. For breast cancer, IVIM showed 83.50% sensitivity and 81.24% specificity compared to dynamic contrast-enhanced (DCE) MRI's 88.04% sensitivity and 65.98% specificity. In rectal cancer, IVIM achieved 70.9% sensitivity and 56.2% specificity, outperforming DCE MRI's 58.11% sensitivity and 72.49% specificity. For gliomas, IVIM demonstrated 92.27% sensitivity and 74.06% specificity compared to dynamic susceptibility contrast (DSC) MRI's 95.71% sensitivity and 92.91% specificity. Correlations between IVIM and CE parameters were weak to moderate.
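The pooled sensitivity and specificity figures above are simple confusion-matrix ratios. A minimal sketch (the counts are illustrative, not the studies' data):

```python
# Sensitivity/specificity from confusion-matrix counts
# (illustrative numbers, not the meta-analysis data).

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=90, fn=10, tn=80, fp=20)
print(sens, spec)  # 0.9 0.8
```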

Conclusion: IVIM demonstrated equal or superior diagnostic performance to CE MRI in breast cancer, rectal cancer, and gliomas, offering a noncontrast alternative. However, unclear parameter correlations warrant future studies focusing on IVIM protocol optimization based on perfusion regimes.

Citations: 0
AI-Powered Early Detection of Retinal Conditions: A Deep Learning Approach for Diabetic Retinopathy and Beyond.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2025-10-06 eCollection Date: 2025-01-01 DOI: 10.1155/ijbi/6154285
Ali Basim Mahdi, Zahraa A Mousa Al-Ibraheemi, Zahraa Fadhil Kadhim, Raffef Jabar Abbrahim, Yaqeen Sameer Dhayool, Ghasaq Mankhey Jabbar, Sajjad A Mohammed

Various retinal conditions, such as diabetic macular edema (DME) and choroidal neovascularization (CNV), pose significant risks of visual impairment and vision loss. Early detection through automated, accurate, and advanced systems can greatly improve clinical outcomes for patients as well as for medical staff. This study aims to develop a deep learning-based model for the early detection of retinal diseases using OCT images. We utilized a publicly available retinal image dataset comprising images with DME, CNV, drusen, and normal cases. The Inception model was trained and validated using various evaluation metrics. Performance metrics, including accuracy, precision, recall, and F1 score, were calculated. The proposed model achieved an accuracy of 94.2%, with precision, recall, and F1 scores exceeding 92% across all classes. Statistical analysis demonstrated the robustness of the model across folds. Our findings highlight the potential of AI-powered systems in improving early detection of retinal conditions, paving the way for integration into clinical workflows. Further work should make the system available offline on ophthalmologists' mobile devices to facilitate the diagnosis process and provide better service to patients.
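The accuracy, precision, recall, and F1 metrics reported above all reduce to confusion-matrix counts. A minimal per-class sketch (illustrative numbers, not the study's results):

```python
# Precision, recall, and F1 from per-class confusion-matrix counts
# (illustrative numbers, not the study's results).

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)   # of predicted positives, fraction correct
    recall = tp / (tp + fn)      # of actual positives, fraction found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f = precision_recall_f1(tp=90, fp=6, fn=4)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.938 0.957 0.947
```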

Citations: 0
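The per-class figures above rest on standard one-vs-rest precision, recall, and F1 definitions. A minimal sketch of how such per-class scores are computed (using hypothetical labels, not the paper's data):

```python
def per_class_prf(y_true, y_pred, classes):
    """Compute one-vs-rest precision, recall, and F1 for each class."""
    scores = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[c] = (prec, rec, f1)
    return scores
```

A macro average of these tuples then gives a single summary score per metric.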
Optimizing TB Bacteria Detection Efficiency: Utilizing RetinaNet-Based Preprocessing Techniques for Small Image Patch Classification.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2025-10-05 eCollection Date: 2025-01-01 DOI: 10.1155/ijbi/3559598
Shwetha V, Barnini Banerjee, Vijaya Laxmi, Priya Kamath

Tuberculosis (TB), caused by Mycobacterium tuberculosis, is a re-emerging disease that necessitates early and accurate detection. While Ziehl-Neelsen (ZN) staining is effective in highlighting bacterial morphology, automation can significantly accelerate the diagnostic workflow. However, detecting TB bacilli, which are typically much smaller than white blood cells (WBCs), in stained images remains a considerable challenge. This study leverages the ZNSM-iDB dataset, which comprises approximately 2000 publicly available images captured using different staining methods; notably, 800 images are fully stained with the ZN technique. We propose a novel two-stage pipeline in which a RetinaNet-based object detection model serves as a preprocessing step to localize and isolate TB bacilli and WBCs in ZN-stained images. To address the challenges posed by low spatial resolution and background interference, the RetinaNet model is enhanced with dilated convolutional layers to improve fine-grained feature extraction. This approach not only facilitates accurate detection of small objects but also achieves an average precision (AP) of 0.94 for WBCs and 0.97 for TB bacilli. Following detection, a patch-based convolutional neural network (CNN) classifier is employed to classify the extracted regions. The proposed CNN model achieves a classification accuracy of 93%, outperforming other traditional CNN architectures. This framework demonstrates a robust and scalable solution for automated TB screening using ZN-stained microscopy images.

Citations: 0
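Dilated convolutions, which the authors add to RetinaNet for fine-grained feature extraction, grow the receptive field without adding parameters. As a back-of-the-envelope illustration (the layer configurations below are assumptions, not the paper's architecture), the effective receptive field of a convolution stack can be computed as:

```python
def receptive_field(kernel_sizes, dilations, strides=None):
    """Effective receptive field of a stack of (dilated) conv layers.
    Each layer widens the field by (k - 1) * d scaled by the cumulative
    stride ("jump") of all layers before it."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, d, s in zip(kernel_sizes, dilations, strides):
        rf += (k - 1) * d * jump
        jump *= s
    return rf
```

Three stride-1 3x3 layers with dilations 1, 2, 4 cover a 15-pixel field, versus 7 pixels without dilation, at identical parameter cost.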
CAGs-Net: A Novel Adjacent-Context Network With Channel Attention Gate for 3D Brain Tumor Image Segmentation.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2025-08-22 eCollection Date: 2025-01-01 DOI: 10.1155/ijbi/6656059
Qianqian Ye, Yuhu Shi, Shunjie Guo

Accurate brain tumor segmentation is essential for clinical decision-making, yet it remains difficult to automate. Key obstacles include the small volume of lesions, their morphological diversity, poorly defined MRI boundaries, and nonuniform intensity profiles. Furthermore, while traditional segmentation approaches often focus on intralayer relevance, they frequently underutilize the rich semantic correlations between features extracted from adjacent network layers. Concurrently, classical attention mechanisms, while effective at highlighting salient regions, often lack explicit mechanisms for directing feature refinement along specific dimensions. To address these problems, this paper presents CAGs-Net, a novel network that progressively constructs semantic dependencies between neighboring layers in the UNet hierarchy, enabling effective integration of local and global contextual information. Channel attention gates are embedded within this adjacent-context network; they strategically fuse shallow appearance features and deep semantic information, leveraging channel-wise relationships to refine features by recalibrating voxel spatial responses. In addition, a hybrid loss combining generalized Dice loss and binary cross-entropy loss is employed to counter the severe class imbalance inherent in lesion segmentation. CAGs-Net thus uniquely combines adjacent-context modeling with channel attention gates to enhance feature refinement, and experimental results demonstrate that it outperforms traditional UNet-based methods and several state-of-the-art methods for brain tumor image segmentation.

Citations: 0
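The hybrid loss described above weights a soft Dice term against binary cross-entropy. A simplified single-class sketch over flat probability lists (the 0.5 weighting and epsilon are illustrative assumptions; the paper's exact formulation may differ):

```python
import math

def hybrid_loss(pred, target, alpha=0.5, eps=1e-7):
    """alpha * soft Dice loss + (1 - alpha) * binary cross-entropy.
    pred: predicted foreground probabilities in [0, 1];
    target: ground-truth labels in {0, 1}."""
    inter = sum(p * t for p, t in zip(pred, target))
    dice = 2 * inter / (sum(pred) + sum(target) + eps)
    bce = -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
               for p, t in zip(pred, target)) / len(pred)
    return alpha * (1 - dice) + (1 - alpha) * bce
```

The Dice term rewards overlap regardless of how rare the foreground class is, while the BCE term keeps per-voxel gradients informative, which is why the combination helps under heavy class imbalance.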
Enhancing Lesion Segmentation in Ultrasound Images: The Impact of Targeted Data Augmentation Strategies.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2025-08-11 eCollection Date: 2025-01-01 DOI: 10.1155/ijbi/3309822
Xu Wang, Patrice Monkam, Bonan Zhao, Shouliang Qi, He Ma, Long Huang, Wei Qian

Automated lesion segmentation in ultrasound (US) images based on deep learning (DL) approaches plays a crucial role in disease diagnosis and treatment. However, the successful implementation of these approaches depends on large-scale, diverse annotated datasets whose acquisition is tedious and demands expertise. Although methods like generative adversarial networks (GANs) can help address sample scarcity, they are often associated with complex training processes and high computational demands, which can limit their practicality and feasibility, especially in resource-constrained scenarios. Therefore, this study explores new solutions to the challenge of limited annotated samples in automated lesion delineation in US images. Specifically, we propose five distinct mixed sample augmentation strategies and assess their effectiveness using four deep segmentation models for the delineation of two lesion types: breast and thyroid lesions. Extensive experimental analyses indicate that the effectiveness of these augmentation strategies is strongly influenced by both the lesion type and the model architecture. When appropriately selected, these strategies yield substantial performance improvements, with the Dice and Jaccard indices increasing by up to 37.95% and 36.32% for breast lesions and 14.59% and 13.01% for thyroid lesions, respectively. These improvements highlight the potential of the proposed strategies as a reliable solution to data scarcity in automated lesion segmentation tasks. Furthermore, the study emphasizes the importance of carefully selecting data augmentation approaches, offering valuable insights into how their strategic application can significantly enhance the performance of DL models.

Citations: 0
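The abstract does not spell out the five mixed sample augmentation strategies, but a representative member of this family is mixup-style blending of two image/mask pairs with a Beta-distributed weight. A generic sketch over flat pixel lists (the Beta parameter and pair selection are assumptions, not the paper's recipe):

```python
import random

def mixup_pair(img_a, img_b, mask_a, mask_b, alpha=0.4, rng=None):
    """Blend two image/mask pairs with weight lam ~ Beta(alpha, alpha),
    sampled via two Gamma draws (Beta(a, b) = G(a) / (G(a) + G(b)))."""
    rng = rng or random.Random()
    g1 = rng.gammavariate(alpha, 1.0)
    g2 = rng.gammavariate(alpha, 1.0)
    lam = g1 / (g1 + g2)
    img = [lam * a + (1 - lam) * b for a, b in zip(img_a, img_b)]
    mask = [lam * a + (1 - lam) * b for a, b in zip(mask_a, mask_b)]
    return img, mask, lam
```

Blending the masks with the same weight as the images keeps the soft labels consistent with the mixed appearance, so the segmentation loss can be applied to the blended mask directly.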
Enhanced Brain Tumor Segmentation Using CBAM-Integrated Deep Learning and Area Quantification.
IF 1.3 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2025-08-01 eCollection Date: 2025-01-01 DOI: 10.1155/ijbi/2149042
Rafiqul Islam, Sazzad Hossain

Brain tumors are complex clinical lesions with diverse morphological characteristics, making accurate segmentation from MRI scans a challenging task. Manual segmentation by radiologists is time-consuming and susceptible to human error. Consequently, automated approaches are expected to accurately delineate tumor boundaries and quantify tumor burden, addressing these challenges efficiently. The presented work integrates a convolutional block attention module (CBAM) into a deep learning architecture to enhance the accuracy of MRI-based brain tumor segmentation. The network is built upon a VGG19-based U-Net model, augmented with depthwise and pointwise convolutions to improve feature extraction and processing efficiency during segmentation. Furthermore, the proposed framework enhances segmentation precision while simultaneously incorporating tumor area measurement, making it a comprehensive tool for early-stage tumor analysis. Several evaluation metrics are used to assess the model's segmentation performance; these metrics analyze the overlap between predicted tumor masks and ground-truth annotations, providing information on the segmentation algorithm's accuracy and dependability. Following segmentation, a new approach is used to compute the extent of segmented tumor areas in MRI scans: the number of pixels within each segmented tumor mask is counted and multiplied by the per-pixel area (or per-voxel volume). The computed tumor areas offer quantifiable data for future investigation and clinical interpretation. Overall, the proposed methodology is projected to improve segmentation accuracy, efficiency, and clinical relevance compared to existing methods, resulting in better diagnosis, treatment planning, and monitoring of patients with brain tumors.

Citations: 0
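A CBAM-style channel attention gate squeezes each channel to a scalar by global average pooling, passes it through a small learned MLP, and rescales the channel with a sigmoid gate. The sketch below omits the learned MLP (identity in its place) purely to keep the illustration self-contained:

```python
import math

def channel_attention(feat):
    """Simplified channel attention gate: global-average-pool each channel,
    squash the pooled value with a sigmoid, and rescale the channel by it.
    A real CBAM gate inserts a learned bottleneck MLP before the sigmoid;
    it is omitted here (identity) for the sketch.
    feat: list of channels, each channel a flat list of activations."""
    pooled = [sum(ch) / len(ch) for ch in feat]           # squeeze
    gates = [1.0 / (1.0 + math.exp(-p)) for p in pooled]  # sigmoid gate
    return [[g * v for v in ch] for g, ch in zip(gates, feat)]
```

The paper's subsequent area measurement then reduces to counting the pixels in the thresholded output mask and multiplying by the per-pixel area from the scan's spacing metadata.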