
BMC Medical Imaging: Latest Publications

Radiograph-based rheumatoid arthritis diagnosis via convolutional neural network
IF 2.7 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-22 DOI: 10.1186/s12880-024-01362-w
Yong Peng, Xianqian Huang, Minzhi Gan, Keyue Zhang, Yong Chen
Rheumatoid arthritis (RA) is a severe and common autoimmune disease. Conventional diagnostic methods are often subjective, error-prone, and repetitive. There is an urgent need for a method to detect RA accurately. Therefore, this study aims to develop an automatic diagnostic system based on deep learning for recognizing and staging RA from radiographs to assist physicians in diagnosing RA quickly and accurately. We develop a CNN-based fully automated RA diagnostic model, exploring five popular CNN architectures on two clinical applications. The model is trained on a radiograph dataset containing 240 hand radiographs, of which 39 are normal and 201 are RA across five stages. For evaluation, we use 104 hand radiographs, of which 13 are normal and 91 are RA across five stages. The CNN model achieves good performance in RA diagnosis based on hand radiographs. For RA recognition, all models achieve an AUC above 90% with a sensitivity over 98%. In particular, the AUC of the GoogLeNet-based model is 97.80%, and the sensitivity is 100.0%. For RA staging, all models achieve over 77% AUC with a sensitivity over 80%. Specifically, the VGG16-based model achieves 83.36% AUC with 92.67% sensitivity. The GoogLeNet-based model and the VGG16-based model have the best AUC and sensitivity for RA recognition and staging, respectively. The experimental results demonstrate the feasibility and applicability of CNNs in radiograph-based RA diagnosis. Therefore, this model has important clinical significance, especially for resource-limited areas and inexperienced physicians.
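The abstract does not include code; the following is a minimal PyTorch sketch of the kind of transfer-learning classifier it describes (an ImageNet-pretrained GoogLeNet with a replaced classification head). The preprocessing pipeline and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Assumed setup: binary RA recognition (normal vs. RA) with an
# ImageNet-pretrained GoogLeNet backbone, as described in the abstract.
def build_ra_classifier(num_classes: int = 2) -> nn.Module:
    model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    # Replace the final fully connected layer for the RA task.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Typical preprocessing when feeding radiographs to an ImageNet backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # radiographs are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

if __name__ == "__main__":
    model = build_ra_classifier(num_classes=2).eval()
    dummy = torch.randn(1, 3, 224, 224)  # stands in for a preprocessed radiograph
    with torch.no_grad():
        logits = model(dummy)
    print(logits.shape)  # torch.Size([1, 2])
```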
Citations: 0
Non-contrast CT radiomics-clinical machine learning model for futile recanalization after endovascular treatment in anterior circulation acute ischemic stroke.
IF 2.9 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-19 DOI: 10.1186/s12880-024-01365-7
Tao Sun, Hai-Yun Yu, Chun-Hua Zhan, Han-Long Guo, Mu-Yun Luo

Objective: To establish a machine learning model based on radiomics and clinical features derived from non-contrast CT to predict futile recanalization (FR) in patients with anterior circulation acute ischemic stroke (AIS) undergoing endovascular treatment.

Methods: A retrospective analysis was conducted on 174 patients who underwent endovascular treatment for acute anterior circulation ischemic stroke between January 2020 and December 2023. FR was defined as successful recanalization but poor prognosis at 90 days (modified Rankin Scale, mRS 4-6). Radiomic features were extracted from non-contrast CT and selected using the least absolute shrinkage and selection operator (LASSO) regression method. A logistic regression (LR) model was used to build models based on radiomic and clinical features. A radiomics-clinical nomogram model was developed, and the predictive performance of the models was evaluated using the area under the curve (AUC), accuracy, sensitivity, and specificity.
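As a rough illustration of the LASSO-selection plus logistic-regression workflow described in the Methods, the sketch below uses scikit-learn on randomly generated stand-in data; the feature matrix, labels, and cross-validation settings are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

# Illustrative stand-ins for the extracted radiomic feature matrix and the
# futile-recanalization labels (mRS 4-6 at 90 days); values are random.
rng = np.random.default_rng(0)
X = rng.normal(size=(174, 2016))   # 174 patients, 2016 radiomic features
y = rng.integers(0, 2, size=174)   # 1 = futile recanalization

# Step 1: LASSO-based selection (keep features with non-zero coefficients).
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0)).fit(X, y)
selected = np.flatnonzero(lasso[-1].coef_)
if selected.size == 0:             # with purely random data LASSO may drop everything
    selected = np.arange(X.shape[1])
print(f"{selected.size} radiomic features retained")

# Step 2: logistic regression on the selected features to obtain a rad-score.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X[:, selected], y)
rad_score = clf.predict_proba(X[:, selected])[:, 1]
print("training AUC:", roc_auc_score(y, rad_score))
```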

Results: A total of 174 patients were included. A total of 2,016 radiomic features were extracted from non-contrast CT, and 9 features were selected to build the radiomics model. Univariate and stepwise multivariate analyses identified admission NIHSS score, hemorrhagic transformation, NLR, and admission blood glucose as independent factors for building the clinical model. The AUCs of the radiomics-clinical nomogram model in the training and testing cohorts were 0.860 (95% CI 0.801-0.919) and 0.775 (95% CI 0.605-0.945), respectively.

Conclusion: The radiomics-clinical nomogram model based on non-contrast CT demonstrated satisfactory performance in predicting futile recanalization in patients with anterior circulation acute ischemic stroke.

Citations: 0
Explainable lung cancer classification with ensemble transfer learning of VGG16, Resnet50 and InceptionV3 using grad-cam.
IF 2.9 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-19 DOI: 10.1186/s12880-024-01345-x
Yogesh Kumaran S, J Jospin Jeya, Mahesh T R, Surbhi Bhatia Khan, Saeed Alzahrani, Mohammed Alojail

Medical imaging stands as a critical component in diagnosing various diseases, where traditional methods often rely on manual interpretation and conventional machine learning techniques. These approaches, while effective, come with inherent limitations such as subjectivity in interpretation and constraints in handling complex image features. This research paper proposes an integrated deep learning approach utilizing the pre-trained models VGG16, ResNet50, and InceptionV3, combined within a unified framework to improve diagnostic accuracy in medical imaging. The method focuses on lung cancer detection using images resized and converted to a uniform format to optimize performance and ensure consistency across datasets. Our proposed model leverages the strengths of each pre-trained network, achieving a high degree of feature extraction and robustness by freezing the early convolutional layers and fine-tuning the deeper layers. Additionally, techniques like SMOTE and Gaussian blur are applied to address class imbalance, enhancing model training on underrepresented classes. The model's performance was validated on the IQ-OTH/NCCD lung cancer dataset, which was collected from the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases over a period of three months in fall 2019. The proposed model achieved an accuracy of 98.18%, with precision and recall rates notably high across all classes. This improvement highlights the potential of integrated deep learning systems in medical diagnostics, providing a more accurate, reliable, and efficient means of disease detection.
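To make the ensemble idea concrete, here is a minimal PyTorch sketch in which pooled features from frozen VGG16, ResNet50, and InceptionV3 backbones are concatenated and passed to a small classification head. It is only one plausible reading of "combined within a unified framework"; the head size, input resolution, and full-freeze strategy are assumptions (the paper fine-tunes deeper layers), and running it downloads the three pretrained weight sets.

```python
import torch
import torch.nn as nn
from torchvision import models

class EnsembleFeatureClassifier(nn.Module):
    """Illustrative ensemble: pooled features from three frozen ImageNet
    backbones are concatenated and fed to a small classification head."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        res = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        inc = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
        # Keep the convolutional trunks only; strip the original classifiers.
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))
        self.res_features = nn.Sequential(*list(res.children())[:-1])
        inc.fc = nn.Identity()
        inc.aux_logits = False
        self.inc_features = inc
        # Freeze the pre-trained trunks (defined so far); the head added below
        # remains trainable.
        for p in self.parameters():
            p.requires_grad = False
        self.head = nn.Linear(512 + 2048 + 2048, num_classes)

    def forward(self, x):
        f1 = torch.flatten(self.vgg_features(x), 1)   # (N, 512)
        f2 = torch.flatten(self.res_features(x), 1)   # (N, 2048)
        f3 = self.inc_features(x)                     # (N, 2048)
        return self.head(torch.cat([f1, f2, f3], dim=1))

model = EnsembleFeatureClassifier(num_classes=3).eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 299, 299))  # 299x299 satisfies InceptionV3's input requirement
print(out.shape)  # torch.Size([1, 3])
```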

Citations: 0
Computer-aided diagnosis system for grading brain tumor using histopathology images based on color and texture features.
IF 2.9 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-19 DOI: 10.1186/s12880-024-01355-9
Naira Elazab, Wael Gab Allah, Mohammed Elmogy

Background: Cancer pathology shows disease development and associated molecular features. It provides extensive phenotypic information that is cancer-predictive and has potential implications for planning treatment. Based on the exceptional performance of computational approaches in the field of digital pathology, the use of rich phenotypic information in digital pathology images has enabled us to distinguish low-grade gliomas (LGG) from high-grade gliomas (HGG). Because the differences between the textures are so slight, utilizing just one feature or a small number of features produces poor categorization results.

Methods: In this work, multiple feature extraction methods that can extract distinct features from the texture of histopathology image data are used to compare the classification outcomes. The successful feature extraction algorithms GLCM, LBP, multi-LBGLCM, GLRLM, color moment features, and RSHD have been chosen in this paper. LBP and GLCM algorithms are combined to create LBGLCM. The LBGLCM feature extraction approach is extended in this study to multiple scales using an image pyramid, which is defined by sampling the image both in space and scale. The preprocessing stage is first used to enhance the contrast of the images and remove noise and illumination effects. The feature extraction stage is then carried out to extract several important features (texture and color) from histopathology images. Third, the feature fusion and reduction step is put into practice to decrease the number of features that are processed, reducing the computation time of the suggested system. The classification stage is created at the end to categorize various brain cancer grades. We performed our analysis on the 821 whole-slide pathology images from glioma patients in the Cancer Genome Atlas (TCGA) dataset. Two types of brain cancer are included in the dataset: GBM and LGG (grades II and III). 506 GBM images and 315 LGG images are included in our analysis, guaranteeing representation of various tumor grades and histopathological features.
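For readers unfamiliar with the texture descriptors named in the Methods, the sketch below computes plain GLCM and LBP features for a single grayscale tile with scikit-image. It does not reproduce the paper's multi-scale LBGLCM, GLRLM, color-moment, or RSHD features; the distances, angles, and LBP parameters are illustrative choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

# Illustrative texture-feature extraction for one grayscale histopathology tile.
def texture_features(gray: np.ndarray) -> np.ndarray:
    gray = gray.astype(np.uint8)

    # GLCM statistics over four orientations at pixel distance 1.
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")]

    # Uniform LBP histogram (radius 1, 8 neighbours).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)

    return np.concatenate([glcm_feats, hist])

if __name__ == "__main__":
    tile = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
    feats = texture_features(tile)
    print(feats.shape)  # (14,) = 4 GLCM properties + 10-bin LBP histogram
```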

Results: The fusion of textural and color characteristics was validated in the glioma patients using the 10-fold cross-validation technique, with an accuracy of 95.8%, sensitivity of 96.4%, DSC of 96.7%, and specificity of 97.1%. The combination of the color and texture characteristics produced significantly better accuracy, which supports their synergistic significance in the predictive model. The result indicates that the textural characteristics can provide an objective, accurate, and comprehensive glioma prediction when paired with conventional imagery.

Conclusion: The results outperform current approaches for identifying LGG from HGG and provide competitive performance in classifying four categories of glioma in the literature. The proposed model can help stratify patients in clinical studies, choose patient

Citations: 0
STC-UNet: renal tumor segmentation based on enhanced feature extraction at different network levels.
IF 2.9 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-19 DOI: 10.1186/s12880-024-01359-5
Wei Hu, Shouyi Yang, Weifeng Guo, Na Xiao, Xiaopeng Yang, Xiangyang Ren

Renal tumors are one of the common diseases of urology, and precise segmentation of these tumors plays a crucial role in aiding physicians to improve diagnostic accuracy and treatment effectiveness. Nevertheless, owing to inherent challenges associated with renal tumors, such as indistinct boundaries, morphological variations, and uncertainties in size and location, segmenting renal tumors accurately remains a significant challenge in the field of medical image segmentation. With the development of deep learning, substantial achievements have been made in the domain of medical image segmentation. However, existing models lack specificity in extracting features of renal tumors across different network hierarchies, which results in insufficient extraction of renal tumor features and subsequently affects the accuracy of renal tumor segmentation. To address this issue, we propose the Selective Kernel, Vision Transformer, and Coordinate Attention Enhanced U-Net (STC-UNet). This model aims to enhance feature extraction, adapting to the distinctive characteristics of renal tumors across various network levels. Specifically, the Selective Kernel modules are introduced in the shallow layers of the U-Net, where detailed features are more abundant. By selectively employing convolutional kernels of different scales, the model enhances its capability to extract detailed features of renal tumors across multiple scales. Subsequently, in the deeper layers of the network, where feature maps are smaller yet contain rich semantic information, the Vision Transformer modules are integrated in a non-patch manner. These assist the model in capturing long-range contextual information globally. Their non-patch implementation facilitates the capture of fine-grained features, thereby achieving collaborative enhancement of global-local information and ultimately strengthening the model's extraction of semantic features of renal tumors. Finally, in the decoder segment, Coordinate Attention modules embedding positional information are introduced, aiming to enhance the model's feature recovery and tumor region localization capabilities. Our model is validated on the KiTS19 dataset, and experimental results indicate that compared to the baseline model, STC-UNet shows improvements of 1.60%, 2.02%, 2.27%, 1.18%, 1.52%, and 1.35% in IoU, Dice, Accuracy, Precision, Recall, and F1-score, respectively. Furthermore, the experimental results demonstrate that the proposed STC-UNet method surpasses other advanced algorithms in both visual effectiveness and objective evaluation metrics.
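The abstract describes a selective-kernel mechanism in the shallow encoder layers; the PyTorch sketch below shows one minimal version of such a block (two branches with different kernel sizes fused by channel-wise soft attention). It is written in the general spirit of SK convolutions, not copied from the STC-UNet implementation, and the channel and reduction settings are arbitrary.

```python
import torch
import torch.nn as nn

class SelectiveKernelBlock(nn.Module):
    """Minimal selective-kernel convolution: two parallel branches with
    different receptive fields, fused by channel-wise soft attention."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(channels, channels, 5, padding=2, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.select = nn.Linear(hidden, channels * 2)   # one score per branch per channel

    def forward(self, x):
        u1, u2 = self.branch3(x), self.branch5(x)
        s = (u1 + u2).mean(dim=(2, 3))                  # global average pooling
        z = self.fc(s)
        scores = self.select(z).view(-1, 2, u1.size(1)) # (N, branches, C)
        attn = torch.softmax(scores, dim=1)
        a = attn[:, 0].unsqueeze(-1).unsqueeze(-1)
        b = attn[:, 1].unsqueeze(-1).unsqueeze(-1)
        return a * u1 + b * u2

block = SelectiveKernelBlock(channels=32)
out = block(torch.randn(2, 32, 64, 64))
print(out.shape)  # torch.Size([2, 32, 64, 64])
```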

Citations: 0
Diagnostic performance of magnetic resonance imaging features to differentiate adrenal pheochromocytoma from adrenal tumors with positive biochemical testing results.
IF 2.9 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-18 DOI: 10.1186/s12880-024-01350-0
Rukun Huang, Tingsheng Lin, Mengxia Chen, Xiaogong Li, Hongqian Guo

Background: It is essential to accurately differentiate pheochromocytoma from adrenal incidentalomas (AIs) before operation, especially when biochemical tests are inconclusive. We aimed to evaluate the value of magnetic resonance imaging (MRI) features for differentiating pheochromocytomas from other adrenal tumors in patients whose biochemical screening tests for catecholamines and/or catecholamine metabolites are positive.

Methods: With institutional review board approval, this study retrospectively compared 35 pheochromocytoma (PHEO) patients with 27 non-pheochromocytoma (non-PHEO) patients between January 2022 and September 2023, all of whom had positive biochemical screening tests for catecholamines and/or catecholamine metabolites. The t-test was used for independent continuous data and the chi-square test for categorical variables. Univariate and multivariate logistic regression were applied to identify independent predictors for differentiating PHEO from non-PHEO, and ROC analysis was applied to evaluate the diagnostic value of the independent predictors.
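The ROC analysis of a single imaging feature, as described in the Methods, can be sketched in a few lines with scikit-learn; the signal-intensity values and labels below are made up purely to show the mechanics (AUC plus a Youden-index cutoff), not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Illustrative single-feature ROC analysis: the T2W nodule-to-muscle SI ratio
# as a predictor of pheochromocytoma (values are invented for the sketch).
si_ratio = np.array([3.1, 2.8, 3.5, 1.9, 1.4, 2.9, 1.2, 3.8, 1.7, 2.2])
is_pheo  = np.array([1,   1,   1,   0,   0,   1,   0,   1,   0,   0])

auc = roc_auc_score(is_pheo, si_ratio)
fpr, tpr, thresholds = roc_curve(is_pheo, si_ratio)

# Optimal threshold by the Youden index (sensitivity + specificity - 1).
youden = tpr - fpr
best = np.argmax(youden)
print(f"AUC = {auc:.3f}")
print(f"optimal cutoff = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```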

Results: We found that the T2-weighted (T2W) signal intensity in patients with pheochromocytoma was higher than in other adrenal tumors, a difference that was highly significant (p < 0.001). The T2W signal intensity ratio (T2W nodule-to-muscle SI ratio) was an independent risk factor for the differential diagnosis of adrenal PHEOs from non-PHEOs. This feature alone had 91.4% sensitivity and 81.5% specificity to rule out pheochromocytoma based on the optimal threshold, with an area under the receiver operating characteristic curve (AUC-ROC) of 0.910 (95% CI: 0.833-0.987).

Conclusion: Our study confirms that the T2W signal intensity ratio can differentiate PHEO from non-PHEO in patients whose biochemical screening tests for catecholamines and/or catecholamine metabolites are positive.

Citations: 0
Advancing medical imaging: detecting polypharmacy and adverse drug effects with Graph Convolutional Networks (GCN).
IF 2.9 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-15 DOI: 10.1186/s12880-024-01349-7
Omer Nabeel Dara, Abdullahi Abdu Ibrahim, Tareq Abed Mohammed

Polypharmacy involves an individual using many medications at the same time and is a frequent healthcare technique used to treat complex medical disorders. Nevertheless, it also presents substantial risks of negative medication responses and interactions. Identifying and addressing adverse effects caused by polypharmacy is crucial to ensure patient safety and improve healthcare results. This paper introduces a new method using Graph Convolutional Networks (GCN) to identify polypharmacy side effects. Our strategy involves developing a drug interaction graph in which nodes represent drugs and edges represent drug-drug interactions predicated on pharmacological properties. GCN is a deep learning technique well suited to graph-structured data. It can be used to learn informative representations of drug interactions and to predict the probability of adverse drug effects. To validate our strategy, tests were conducted on a large dataset of patients' pharmaceutical records annotated with observed adverse drug effects. The performance of the GCN model, which was trained on a subset of this dataset, was evaluated with a confusion matrix, which shows how precisely the model categorizes instances. Our findings demonstrate encouraging progress in the identification of adverse reactions related to polypharmacy. For cardiovascular system target drugs, the GCN technique achieved an accuracy of 94.12%, precision of 86.56%, F1-score of 88.56%, AUC of 89.74% and recall of 87.92%. For respiratory system target drugs, the GCN technique achieved an accuracy of 93.38%, precision of 85.64%, F1-score of 89.79%, AUC of 91.85% and recall of 86.35%. For nervous system target drugs, the GCN technique achieved an accuracy of 95.27%, precision of 88.36%, F1-score of 86.49%, AUC of 88.83% and recall of 84.73%. This research provides a significant contribution to pharmacovigilance by proposing a data-driven method to detect and reduce polypharmacy side effects, thereby increasing patient safety and improving healthcare decision-making.
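To make the graph-convolution idea concrete, here is a minimal PyTorch sketch of a single Kipf-and-Welling-style GCN layer applied to a toy drug-interaction graph, with an adverse-interaction score for one drug pair computed from the learned embeddings. The graph, feature sizes, and pair-scoring by dot product are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
        return torch.relu(self.linear(norm_adj @ h))

# Toy drug-interaction graph: 4 drugs (nodes), edges = known interactions.
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 0.],
                    [1., 0., 0., 0.]])
features = torch.randn(4, 16)                         # per-drug pharmacological features

layer1, layer2 = GCNLayer(16, 32), GCNLayer(32, 8)
embeddings = layer2(layer1(features, adj), adj)

# Score one candidate drug pair for an adverse interaction via a dot product.
score = torch.sigmoid((embeddings[0] * embeddings[2]).sum())
print(embeddings.shape, float(score))
```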

Citations: 0
Application of color doppler ultrasound and US shear wave elastography with connective tissue growth factor in the risk assessment of papillary thyroid carcinoma.
IF 2.9 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-12 DOI: 10.1186/s12880-024-01354-w
Xiaoling Leng, Jinhui Liu, Qiao Zou, Changchun Wang, Sen Yang

Background: This study aims to investigate the role of shear wave elastography (SWE) and connective tissue growth factor (CTGF) in the assessment of papillary thyroid carcinoma (PTC) prognosis.

Methods: CTGF expression was detected with immunohistochemistry. Clinical and pathological data were collected. Parameters of conventional ultrasound combined with SWE were also collected. The relationships among CTGF expression, ultrasound indicators, the elastic modulus, and the clinicopathological parameters were analyzed.

Results: Univariate analysis showed that patients at high risk of PTC were characterized by male sex, Uygur ethnicity, increased expression of CTGF, convex lesions, calcification, incomplete capsule, intranodular blood flow, posterior echo attenuation, cervical lymph node metastasis, lesions larger than 1 cm, psammoma bodies, advanced clinical stage, increased TSH, and high values of the shear modulus (P < 0.05). Multivariate analysis demonstrated that the risk factors for high expression of CTGF, in order of contribution, were irregular shape, aspect ratio ≥ 1, and increased TSH. The logistic regression model was Logit(P) = 1.153 + 1.055X1 + 0.926X2 + 1.190X3, and the area under the curve of the logistic regression was 0.850, with a 95% confidence interval of 0.817 to 0.883.
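For illustration, the reported logistic model can be turned into a predicted probability as sketched below. The mapping of X1, X2, and X3 to irregular shape, aspect ratio ≥ 1, and increased TSH (as binary indicators) is an assumption inferred from the sentence above, not stated explicitly in the abstract.

```python
import numpy as np

# Sketch of applying the reported logistic model
#   Logit(P) = 1.153 + 1.055*X1 + 0.926*X2 + 1.190*X3
# Assumption: X1, X2, X3 are binary indicators for irregular shape,
# aspect ratio >= 1, and increased TSH, respectively.
def predicted_probability(x1: int, x2: int, x3: int) -> float:
    logit = 1.153 + 1.055 * x1 + 0.926 * x2 + 1.190 * x3
    return 1.0 / (1.0 + np.exp(-logit))

# Example: a nodule with irregular shape and increased TSH but aspect ratio < 1.
print(f"P(high CTGF expression) = {predicted_probability(1, 0, 1):.3f}")
```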

Conclusion: SWE and CTGF are of great value in the risk assessment of PTC. The degree of fibrosis of PTC is closely related to prognosis. The stiffness of PTC lesions and the expression level of CTGF are correlated with the main conventional ultrasound indexes used to differentiate benign from malignant nodules. Irregular shape, aspect ratio ≥ 1, and increased TSH are independent predictors of high CTGF expression.

Citations: 0
YOLO-V5 based deep learning approach for tooth detection and segmentation on pediatric panoramic radiographs in mixed dentition
IF 2.7 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-11 DOI: 10.1186/s12880-024-01338-w
Busra Beser, Tugba Reis, Merve Nur Berber, Edanur Topaloglu, Esra Gungor, Münevver Coruh Kılıc, Sacide Duman, Özer Çelik, Alican Kuran, Ibrahim Sevki Bayrakdar
In the interpretation of panoramic radiographs (PRs), the identification and numbering of teeth is an important part of correct diagnosis. This study evaluates the effectiveness of YOLO-v5 in the automatic detection, segmentation, and numbering of deciduous and permanent teeth on PRs of pediatric patients with mixed dentition. A total of 3,854 PRs of pediatric patients with mixed dentition were labelled for deciduous and permanent teeth using the CranioCatch labeling program. The dataset was divided into three subsets: training (n = 3093, 80% of the total), validation (n = 387, 10% of the total) and test (n = 385, 10% of the total). An artificial intelligence (AI) algorithm using YOLO-v5 models was developed. The sensitivity, precision, F-1 score, and mean average precision at 0.5 (mAP-0.5) values were 0.99, 0.99, 0.99, and 0.98, respectively, for tooth detection. The sensitivity, precision, F-1 score, and mAP-0.5 values were 0.98, 0.98, 0.98, and 0.98, respectively, for tooth segmentation. YOLO-v5-based models have the potential to detect and enable accurate segmentation of deciduous and permanent teeth on PRs of pediatric patients with mixed dentition.
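As a rough sketch of how a trained YOLO-v5 detector of this kind is typically run, the snippet below loads custom weights through the ultralytics/yolov5 Torch Hub interface. The weights file "teeth_best.pt" and the input image path are hypothetical; the study's trained model is not publicly released, so this only illustrates the inference pattern.

```python
import torch

# Load a custom-trained YOLO-v5 model via the ultralytics/yolov5 Torch Hub entry.
# "teeth_best.pt" is a hypothetical weights file standing in for the paper's model.
model = torch.hub.load("ultralytics/yolov5", "custom", path="teeth_best.pt")
model.conf = 0.25  # confidence threshold for reported detections

results = model("panoramic_radiograph.png")   # path to a PR image (hypothetical)
detections = results.pandas().xyxy[0]         # one row per detected tooth
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```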
Citations: 0
Preoperative prediction of histopathological grading in patients with chondrosarcoma using MRI-based radiomics with semantic features
IF 2.7 CAS Tier 3 (Medicine) Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-11 DOI: 10.1186/s12880-024-01330-4
Xiaofen Li, Jingkun Zhang, Yinping Leng, Jiaqi Liu, Linlin Li, Tianyi Wan, Wentao Dong, Bing Fan, Lianggeng Gong
Distinguishing high-grade from low-grade chondrosarcoma is extremely vital not only for guiding the development of personalized surgical treatment but also for predicting the prognosis of patients. We aimed to establish and validate a magnetic resonance imaging (MRI)-based nomogram for predicting preoperative grading in patients with chondrosarcoma. A total of 114 patients (60 and 54 cases with high-grade and low-grade chondrosarcoma, respectively) were recruited for this retrospective study. All patients were treated surgically with histopathological confirmation, and they were randomly divided into training (n = 80) and validation (n = 34) sets at a ratio of 7:3. Next, radiomics features were extracted from two sequences using the least absolute shrinkage and selection operator (LASSO) algorithm. The rad-scores were calculated and then subjected to logistic regression to develop a radiomics model. A nomogram combining independent predictive semantic features with radiomics was established using multivariate logistic regression. The performance of each model was assessed by receiver operating characteristic (ROC) curve analysis and the area under the curve, while clinical efficacy was evaluated via decision curve analysis (DCA). Ultimately, six optimal radiomics signatures were extracted from T1-weighted imaging (T1WI) and T2-weighted imaging with fat suppression (T2WI-FS) sequences to develop the radiomics model. Tumour cartilage abundance, which emerged as an independent predictor, was significantly related to chondrosarcoma grading (p < 0.05). The AUC of the radiomics model was 0.85 (95% CI, 0.76 to 0.95) in the training set and 0.82 (95% CI, 0.65 to 0.98) in the validation set, far superior to the clinical model's AUC of 0.68 (95% CI, 0.58 to 0.79) in the training set and 0.72 (95% CI, 0.57 to 0.87) in the validation set. The nomogram demonstrated good performance in the preoperative distinction of chondrosarcoma. The DCA revealed that the nomogram model had a markedly higher clinical usefulness in predicting chondrosarcoma grading preoperatively than either the rad-score or clinical model alone. The nomogram based on MRI radiomics combined with optimal independent factors had better performance for the preoperative differentiation between low-grade and high-grade chondrosarcoma and has potential as a noninvasive preoperative tool for personalizing clinical plans.
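The decision curve analysis mentioned in the abstract reduces to a simple net-benefit calculation; the sketch below shows that calculation on randomly generated stand-in data (the probabilities, labels, and thresholds are illustrative, not the study's results).

```python
import numpy as np

def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, threshold: float) -> float:
    """Net benefit at a given threshold probability, as used in decision
    curve analysis: TP/n - FP/n * (pt / (1 - pt))."""
    n = len(y_true)
    pred_pos = y_prob >= threshold
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    return tp / n - fp / n * (threshold / (1 - threshold))

# Illustrative data standing in for nomogram-predicted probabilities of
# high-grade chondrosarcoma and the histopathological ground truth.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=34)
y_prob = np.clip(y_true * 0.3 + rng.uniform(0, 0.7, size=34), 0, 1)

for pt in (0.2, 0.4, 0.6):
    print(f"threshold {pt:.1f}: net benefit = {net_benefit(y_true, y_prob, pt):.3f}")
```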
Citations: 0