
Biomedical Signal Processing and Control: Latest Publications

Enhancing plantar pressure distribution reconstruction with conditional generative adversarial networks from multi-region foot pressure sensing
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-09 DOI: 10.1016/j.bspc.2024.107187
Hsiao-Lung Chan , Jing-Rong Liang , Ya-Ju Chang , Rou-Shayn Chen , Cheng-Chung Kuo , Wen-Yen Hsu , Meng-Tsan Tsai
Estimating foot pressure distribution and the center of pressure (COP) using a sparse sensor topology offers cost-effective benefits. While deep learning neural networks improve the prediction of information in areas with incomplete sensing, there are still gaps in foot pressure recordings due to limited sensor coverage in certain plantar regions. To address this, we used eleven larger sensors to increase coverage across critical foot areas, including the big toe, little toe, medial, middle, and lateral metatarsus, as well as the medial and lateral arches, foreheels, and heels. These regions are commonly used to study the effects of muscle fatigue during walking and jogging, as well as to predict ground reaction forces during walking. We employed a conditional generative adversarial network (GAN) to reconstruct high-resolution foot pressure distributions from the data collected by these sensors. This method operates on individual samples, eliminating the need for gait cycle segmentation and normalization. Compared to ground truth data from a 99-sensor array, the GAN approach significantly improved COP estimation over direct computation from the eleven sensors. The highest accuracy was achieved during level walking, with reduced performance during jogging and stair walking. In conclusion, the conditional GAN effectively reconstructed foot pressure distributions, and future research should explore reallocating sensor topology to improve resolution and coverage while balancing simplified instrumentation with improved plantar pressure distribution reconstruction.
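For readers new to the quantity being estimated, the COP is the pressure-weighted centroid of the plantar pressure map. The following minimal PyTorch sketch (not the authors' code; the sensor count, map resolution, noise dimension, and generator layers are placeholder assumptions) illustrates a conditional generator that maps eleven sensor readings to a dense map and how a COP can be read off the result:

```python
import torch
import torch.nn as nn

class SparseToDenseGenerator(nn.Module):
    """Toy conditional generator: 11 sensor pressures + noise -> a 20x10 dense map.
    Illustrative only; the paper's generator architecture and map resolution differ."""
    def __init__(self, n_sensors=11, noise_dim=16, out_shape=(20, 10)):
        super().__init__()
        self.out_shape = out_shape
        self.net = nn.Sequential(
            nn.Linear(n_sensors + noise_dim, 128), nn.ReLU(),
            nn.Linear(128, out_shape[0] * out_shape[1]),
            nn.Softplus())                      # pressures are non-negative

    def forward(self, sensors, noise):
        x = torch.cat([sensors, noise], dim=1)
        return self.net(x).view(-1, *self.out_shape)

def center_of_pressure(pressure_map):
    """Pressure-weighted centroid (row, col) of one 2-D pressure map."""
    p = pressure_map
    rows = torch.arange(p.shape[0], dtype=p.dtype).view(-1, 1)
    cols = torch.arange(p.shape[1], dtype=p.dtype).view(1, -1)
    total = p.sum()
    return (rows * p).sum() / total, (cols * p).sum() / total

gen = SparseToDenseGenerator()
dense = gen(torch.rand(1, 11), torch.randn(1, 16))[0]   # one reconstructed map
print(center_of_pressure(dense))
```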
Citations: 0
An efficient IoT enabled heart disease prediction model using Finch hunt optimization modified BiLSTM classifier
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-09 DOI: 10.1016/j.bspc.2024.107170
Yogesh Suresh Chichani , Smita L. Kasar
Accurate and timely prediction of cardiovascular disease (CVD) is crucial to ensure correct classification, which assists medical professionals in providing appropriate treatment to the patient. Recently, healthcare organizations have begun utilizing Internet of Things (IoT) technology to gather sensor information for the purpose of diagnosing and forecasting heart disease. Cloud computing solutions have been utilized to manage the enormous amount of data created by IoT devices in the medical domain. Heart disease prediction is a challenging undertaking that demands both sophisticated knowledge and expertise. Although a great deal of research has been done on the diagnosis of heart disease, the results are not very accurate. Furthermore, protecting the data against numerous general privacy concerns is a complex process. To address these limitations, this research utilizes the Finch hunt optimization modified BiLSTM classifier (FHO modified BiLSTM) to develop an IoT-enabled heart disease prediction model. Further, the incorporation of the smart IoT-based framework assists in monitoring heart disease patients and provides effective, timely, and quality healthcare services. Additionally, to improve mobility, privacy, security, low latency, and bandwidth, the biomedical data are stored in a cloud server equipped with a decentralized blockchain. The proposed approach exploits the Bi-LSTM model to improve prediction ability and extract intricate temporal patterns from patient data by combining predictive modeling. Specifically, the FHO integrates the characteristics of the honey badger and the sparrow to find the optimal solution for tuning the hyperparameters of the modified BiLSTM, which in turn enhances the prediction accuracy. To analyze the performance of the proposed method, the CACHET-CADB dataset with 1602 samples is utilized. The experimental results demonstrate that the proposed FHO-modified Bi-LSTM attains 95.17%, 96.52%, 93.86%, and 97.24% for F1-score, precision, recall, and accuracy, respectively, with an 80% training split, exceeding the other existing techniques.
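As a rough illustration of the classifier family the abstract describes, the sketch below builds a generic bidirectional LSTM over multichannel sensor sequences and uses random search as a stand-in for the Finch hunt optimization step, whose details are not given here; the feature dimension, layer sizes, and search space are assumptions:

```python
import random
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM over a sequence of sensor features with a 2-class head.
    A generic sketch; the paper's modified BiLSTM and the FHO tuner are not public."""
    def __init__(self, n_features, hidden_size, num_layers, dropout=0.2, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True,
                            bidirectional=True,
                            dropout=dropout if num_layers > 1 else 0.0)
        self.head = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x):               # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # logits from the last time step

# hidden_size / num_layers (and, in practice, the learning rate) are the kind of
# hyperparameters a metaheuristic such as FHO would search; random search stands in.
for _ in range(3):
    cfg = {"hidden_size": random.choice([32, 64, 128]),
           "num_layers": random.choice([1, 2])}
    model = BiLSTMClassifier(n_features=8, **cfg)
    logits = model(torch.randn(4, 50, 8))   # dummy batch: 4 sequences of 50 steps
    print(cfg, logits.shape)
```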
Citations: 0
UNeSt: A fast segmentation network for colorectal polyps based on MLP and deep separable convolution
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-09 DOI: 10.1016/j.bspc.2024.107165
Jian Li , Peng Ding , Fengwu Lin , Zhaomin Chen , Ali Asghar Heidari , Huiling Chen
In medical image segmentation, conventional UNet-based methods often focus on improving network performance while overlooking parameter count and computational complexity. Because of limited computing resources, such methods can hardly be applied in point-of-care (PoC) settings. This study presents UNeSt, a rapid segmentation network tailored for colorectal polyps. The architectural foundation of UNeSt hinges on the synergistic integration of the depth separable convolutional layer (DSC) and the multilayer perceptron (MLP). UNeSt achieves an innovative fusion of these components, resulting in a substantial reduction in model parameters and computational complexity together with a remarkable enhancement in inference speed. Specifically, UNeSt incorporates the convolutional block attention module (CBAM) within the convolutional encoder to extract channel and spatial information proficiently. Furthermore, we introduce an attention mechanism to address the positional information discrepancies introduced in the MLP stage. This comprehensive approach contributes significantly to the accuracy of colorectal polyp segmentation. Finally, UNeSt employs skip connections between various levels of encoders and decoders, thereby mitigating information loss. UNeSt was rigorously evaluated on a demanding polyp segmentation dataset. Relative to UNeXt, a widely used and extremely lightweight network model, the proposed model has 1.6x fewer parameters, 2.5x lower computational complexity (measured in GFLOPs), and 1.9x faster inference.
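The depthwise separable convolution that UNeSt builds on factorizes a standard convolution into a per-channel spatial filter and a 1x1 channel mixer; a generic PyTorch block follows (channel counts are illustrative, not UNeSt's actual configuration):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by a pointwise 1x1 conv.
    Generic DSC building block; UNeSt's exact configuration may differ."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 16, 64, 64)
block = DepthwiseSeparableConv(16, 32)
print(block(x).shape)            # torch.Size([1, 32, 64, 64])
# Weight count vs. a standard 3x3 conv (16*32*9 = 4608):
# depthwise 16*9 = 144 plus pointwise 16*32 = 512, i.e. 656 weights in total,
# which is the source of the parameter and FLOP savings.
```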
Citations: 0
PU-CDM: A pyramid UNet based conditional diffusion model for sparse-view reconstruction in EPRI
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-09 DOI: 10.1016/j.bspc.2024.107182
Peng Liu , Yanjun Zhang , Yarui Xi , Chenyun Fang , Zhiwei Qiao
Sparse-view reconstruction in electron paramagnetic resonance imaging (EPRI) aims to reduce scanning times, which is critical for tumor oxygen imaging, yet is often plagued by streak artifacts in filtered back-projection (FBP) reconstructions. To address this, we propose a pyramid UNet based conditional diffusion model (PU-CDM) to suppress these streak artifacts in EPRI images. PU-CDM uniquely introduces pyramid pooling and aggregation into the UNet architecture of the conditional diffusion model, while incorporating two advanced mechanisms—dense convolutions and self-attention—into the input module. By significantly improving the accuracy of the Gaussian noise prediction network in the conditional diffusion model, PU-CDM achieves superior performance in sparse-view reconstruction, generating high-quality images with only 5 sampling steps. Experimental results, both qualitative and quantitative, show that the images reconstructed by PU-CDM outperform those reconstructed by some existing representative deep learning models in terms of artifact removal and structural fidelity. PU-CDM can achieve accurate sparse-view reconstruction in EPRI, thus promoting EPRI towards fast scanning. In addition, PU-CDM can also be used for fast magnetic resonance imaging (MRI), low-dose computed tomography (LDCT) reconstruction, and natural image processing.
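Pyramid pooling, one ingredient named above, aggregates context by pooling the feature map at several scales, projecting each pooled map, and concatenating the upsampled results; the sketch below is a generic module of that kind (bin sizes and channel split are assumptions, and how PU-CDM wires pyramid pooling and aggregation into its UNet is specific to the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """PSP-style pyramid pooling: pool at several scales, project, upsample, concat."""
    def __init__(self, in_ch, bins=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, in_ch // len(bins), 1, bias=False))
            for b in bins])

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x] + [F.interpolate(branch(x), size=(h, w),
                                     mode="bilinear", align_corners=False)
                       for branch in self.branches]
        return torch.cat(feats, dim=1)   # output channels: 2 * in_ch

x = torch.randn(2, 64, 32, 32)
print(PyramidPooling(64)(x).shape)       # torch.Size([2, 128, 32, 32])
```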
Citations: 0
A collaborative multi-task model for immunohistochemical molecular sub-types of multi-modal breast cancer MRI images
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-08 DOI: 10.1016/j.bspc.2024.107137
Haozhen Xiang , Yuqi Xiong , Yingwei Shen , Jiaxin Li , Deshan Liu
Clinically, personalized treatment developed based on the immunohistochemical (IHC) molecular sub-types of breast cancer can enhance long-term survival rates. Nevertheless, IHC, as an invasive detection method, may pose some risk of tumor metastasis caused by puncture. This work proposes a collaborative multi-task model based on multi-modal data. First, a dual-stream learning network based on the Swin Transformer is employed to extract features from both DCE and T1WI images. Specifically, a Shared Representation (SR) module extracts shared representations, while an Enhancement of Unique features (EU) module enhances modality-specific features. Subsequently, a multi-path classification network is constructed, which comprehensively considers the MRI image features, lesion location, and morphological features. Comprehensive experiments using clinical MRI images show the proposed method outperforms the state of the art, with an accuracy of 85.1%, sensitivity of 84.0%, specificity of 95.1%, and an F1 score of 83.6%.
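A minimal sketch of the dual-stream idea follows, with small placeholder CNN encoders standing in for the Swin Transformer streams and the SR/EU modules, and with the number of sub-types and the extra-feature dimension chosen arbitrarily rather than taken from the paper:

```python
import torch
import torch.nn as nn

class DualStreamClassifier(nn.Module):
    """Two image encoders (DCE and T1WI) whose pooled features are concatenated
    with a small lesion-location/morphology vector for sub-type classification."""
    def __init__(self, n_extra=6, n_classes=4):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.enc_dce, self.enc_t1 = encoder(), encoder()
        self.head = nn.Linear(32 + 32 + n_extra, n_classes)

    def forward(self, dce, t1, extra):
        z = torch.cat([self.enc_dce(dce), self.enc_t1(t1), extra], dim=1)
        return self.head(z)

model = DualStreamClassifier()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64), torch.randn(2, 6))
print(logits.shape)        # torch.Size([2, 4])
```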
Citations: 0
CL-MRI: Self-Supervised contrastive learning to improve the accuracy of undersampled MRI reconstruction
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-08 DOI: 10.1016/j.bspc.2024.107185
Mevan Ekanayake , Zhifeng Chen , Mehrtash Harandi , Gary Egan , Zhaolin Chen
Deep learning (DL) methods have emerged as the state-of-the-art for Magnetic Resonance Imaging (MRI) reconstruction. DL methods typically involve training deep neural networks to take undersampled MRI images as input and transform them into high-quality MRI images through data-driven processes. However, deep learning models often fail with higher levels of undersampling due to the insufficient information in the input, which is crucial for producing high-quality MRI images. Thus, optimizing the information content at the input of a DL reconstruction model could significantly improve reconstruction accuracy. In this paper, we introduce a self-supervised pretraining procedure using contrastive learning to improve the accuracy of undersampled DL MRI reconstruction. We use contrastive learning to transform the MRI image representations into a latent space that maximizes mutual information among different undersampled representations and optimizes the information content at the input of the downstream DL reconstruction models. Our experiments demonstrate improved reconstruction accuracy across a range of acceleration factors and datasets, both quantitatively and qualitatively. Furthermore, our extended experiments validate the proposed framework’s robustness under adversarial conditions, such as measurement noise, different k-space sampling patterns, and pathological abnormalities, and also prove the transfer learning capabilities on MRI datasets with completely different anatomy. Additionally, we conducted experiments to visualize and analyze the properties of the proposed MRI contrastive learning latent space. Code available here.
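The contrastive objective typically used for this kind of pretraining is InfoNCE over paired embeddings of the same image under different undersampling; a standard formulation (not necessarily the paper's exact loss, temperature, or embedding size) is:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss between two batches of embeddings of the same images under
    different undersampled views; a standard formulation, not the paper's exact loss."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) cosine-similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # matching pairs (i, i) are positives; all other columns act as negatives
    return F.cross_entropy(logits, targets)

z_a = torch.randn(8, 128)   # embeddings of one undersampled view
z_b = torch.randn(8, 128)   # embeddings of another undersampled view
print(info_nce(z_a, z_b))
```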
Citations: 0
MAR-GAN: Multi attention residual generative adversarial network for tumor segmentation in breast ultrasounds
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-08 DOI: 10.1016/j.bspc.2024.107171
Imran Ul Haq , Haider Ali , Yuefeng Li , Zhe Liu

Introduction

Ultrasonography is among the most regularly used methods for early detection of breast cancer. Automatic and precise segmentation of breast masses in breast ultrasound (US) images is essential but remains a challenge due to several sources of uncertainty, such as the wide variety of tumor shapes and sizes, obscure tumor borders, very low SNR, and speckle noise.

Method

To deal with these uncertainties, this work presents an effective and automated GAN-based approach for tumor segmentation in breast US, named MAR-GAN, to extract rich, informative features from US images. In MAR-GAN, the capabilities of the traditional encoder-decoder generator were enhanced by multiple modifications. Multi-scale residual blocks were used to retrieve additional aspects of the tumor area for a more precise description. A novel boundary and foreground attention (BFA) module is proposed to increase attention to the tumor region and boundary curve. The squeeze and excitation (SE) and adaptive context selection (ACS) modules were added to increase representational capability on the encoder side and to facilitate better selection and aggregation of contextual information on the decoder side, respectively. The L1-norm and the structural similarity index metric (SSIM) were added to MAR-GAN's loss function to capture rich local context information from the tumors' surroundings (a minimal sketch of this combined loss is given after the Results below).

Results

Two breast US datasets were utilized to evaluate the effectiveness of the suggested approach. On the BUSI dataset, our network outperformed several state-of-the-art segmentation models in IoU and Dice metrics, scoring 89.27 % and 94.21 %, respectively. The suggested approach achieved encouraging results on the UDIAT dataset, with IoU and Dice scores of 82.75 % and 88.54 %, respectively.
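The L1-norm plus SSIM loss mentioned in the Method section can be sketched as follows; this uses a simplified single-window SSIM and an arbitrary weighting, whereas MAR-GAN's exact SSIM variant and weights are not given here:

```python
import torch

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-window SSIM over whole images scaled to [0, 1]
    (the standard formulation uses local Gaussian windows)."""
    mu_x = x.mean(dim=(-2, -1), keepdim=True)
    mu_y = y.mean(dim=(-2, -1), keepdim=True)
    var_x = ((x - mu_x) ** 2).mean(dim=(-2, -1))
    var_y = ((y - mu_y) ** 2).mean(dim=(-2, -1))
    cov = ((x - mu_x) * (y - mu_y)).mean(dim=(-2, -1))
    mu_x, mu_y = mu_x.squeeze(-1).squeeze(-1), mu_y.squeeze(-1).squeeze(-1)
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def l1_ssim_loss(pred, target, alpha=0.5):
    """Weighted sum of mean absolute error and (1 - SSIM); the weighting and the
    exact SSIM variant used in MAR-GAN are assumptions here."""
    l1 = (pred - target).abs().mean()
    return alpha * l1 + (1 - alpha) * (1 - global_ssim(pred, target).mean())

pred, target = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(l1_ssim_loss(pred, target))
```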
Citations: 0
A deep learning-based comprehensive robotic system for lower limb rehabilitation
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-06 DOI: 10.1016/j.bspc.2024.107178
Prithwijit Mukherjee, Anisha Halder Roy
In the modern era, a significant percentage of people around the world suffer from knee pain-related problems. ‘Knee pain’ can be alleviated by performing knee rehabilitation exercises in the correct posture on a regular basis. In our research, an attention mechanism-based CNN-TLSTM (Convolutional Neural Network-tanh Long Short-Term Memory) network has been proposed for assessing the knee pain level of a person. Here, electroencephalogram (EEG) signals of the frontal, parietal, and temporal lobes, electromyography (EMG) signals of the hamstring and quadriceps muscles, and knee bending angle have been used for knee pain detection. First, the CNN network has been utilized for automated feature extraction from the EEG, knee bending angle, and EMG data, and subsequently, the TLSTM network has been used as a classifier. The trained CNN-TLSTM model can classify the knee pain level of a person into five categories, namely no pain, low pain, medium pain, moderate pain, and high pain, with an overall accuracy of 95.88 %. In the hardware part, a prototype of an automated robotic knee rehabilitation system has been designed to help a person perform three rehabilitation exercises, i.e., sitting knee bending, straight leg raise, and active knee bending, according to his/her pain level, without the presence of any physiotherapist. The novelty of our research lies in (i) designing a novel deep learning-based classifier model for broadly classifying knee pain into five categories, (ii) introducing an attention mechanism into the TLSTM network to boost its classification performance, and (iii) developing a user-friendly rehabilitation device for knee rehabilitation.
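A generic CNN-plus-LSTM pain-level classifier of the kind described can be sketched as below; the channel count, window length, and layer sizes are assumptions, and the attention-based TLSTM itself is not reproduced:

```python
import torch
import torch.nn as nn

class CNNLSTMPainClassifier(nn.Module):
    """1-D CNN feature extractor over multi-channel EEG/EMG/angle signals,
    followed by an LSTM and a 5-class head (no pain ... high pain).
    A generic CNN-LSTM sketch; the paper's attention-based TLSTM differs."""
    def __init__(self, n_channels=8, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4))
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        feats = self.cnn(x)               # (batch, 64, time/16)
        feats = feats.transpose(1, 2)     # (batch, time/16, 64)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])

model = CNNLSTMPainClassifier()
print(model(torch.randn(4, 8, 1024)).shape)   # torch.Size([4, 5])
```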
Citations: 0
CFI-ViT: A coarse-to-fine inference based vision transformer for gastric cancer subtype detection using pathological images
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-06 DOI: 10.1016/j.bspc.2024.107160
Xinghang Wang , Haibo Tao , Bin Wang , Huaiping Jin , Zhenhui Li
Accurate detection of histopathological cancer subtypes is crucial for personalized treatment. Currently, deep learning methods based on histopathology images have become an effective solution to this problem. However, existing deep learning methods for histopathology image classification often suffer from high computational complexity, do not account for the variability of different regions, and fail to balance attention to local and global information effectively. To address these issues, we propose a coarse-to-fine inference based vision transformer (ViT) network (CFI-ViT) for pathological image detection of gastric cancer subtypes. CFI-ViT combines global attention with discriminative and differentiable modules to achieve two-stage inference. In the coarse inference stage, a ViT model with relative position embedding is employed to extract global information from the input images. If the critical information is not sufficiently identified, the differentiable module is adopted to extract discriminative local image regions for fine-grained screening in the fine inference stage. The effectiveness and superiority of the proposed CFI-ViT method have been validated on three pathological image datasets of gastric cancer, including one private dataset clinically collected from Yunnan Cancer Hospital in China and two publicly available datasets, HE-GHI-DS and TCGA-STAD. The experimental results demonstrate that CFI-ViT achieves superior recognition accuracy and generalization performance compared to traditional methods, while using only 80 % of the computational resources required by the ViT model.
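The coarse-to-fine gating can be illustrated with a confidence threshold on the coarse prediction; the models, crop rule, and threshold below are placeholders rather than CFI-ViT's actual components:

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_predict(coarse_model, fine_model, crop_fn, image, threshold=0.9):
    """Run the coarse model on the full image; if its softmax confidence is below
    `threshold`, crop a discriminative region (crop_fn) and let the fine model decide.
    The gating rule and threshold are illustrative, not taken from the paper."""
    with torch.no_grad():
        coarse_logits = coarse_model(image)
        probs = F.softmax(coarse_logits, dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:
            return pred.item(), "coarse"
        fine_logits = fine_model(crop_fn(image, coarse_logits))
        return fine_logits.argmax(dim=1).item(), "fine"

# Dummy stand-ins just to show the control flow
coarse = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 3))
fine = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 16 * 16, 3))
center_crop = lambda img, _logits: img[..., 8:24, 8:24]
print(coarse_to_fine_predict(coarse, fine, center_crop, torch.randn(1, 3, 32, 32)))
```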
Citations: 0
Detection of severe coronary artery disease based on clinical phonocardiogram and large kernel convolution interaction network
IF 4.9 CAS Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-06 DOI: 10.1016/j.bspc.2024.107186
Chongbo Yin , Jian Qin , Yan Shi , Yineng Zheng , Xingming Guo
Heart sound auscultation coupled with machine learning algorithms is a risk-free and low-cost method for coronary artery disease (CAD) detection. However, current studies mainly focus on CAD screening, namely classifying CAD versus non-CAD, due to limited clinical data and algorithm performance. This leaves a gap in investigating CAD severity from the phonocardiogram (PCG). To solve this issue, we first establish a clinical PCG dataset for CAD patients. The dataset includes 150 subjects: 80 severe CAD and 70 non-severe CAD patients. We then propose the large kernel convolution interaction network (LKCIN) to detect CAD severity. It integrates automatic feature extraction and pattern classification and simplifies PCG processing steps. The developed large kernel interaction block (LKIB) has three properties: long-distance dependency, local receptive field, and channel interaction, which efficiently improve the feature extraction capability of LKCIN. In addition, a separate downsampling block following the LKIBs is proposed to alleviate feature loss during forward propagation. Experiments were performed on the clinical PCG data, and LKCIN obtained good classification performance, with an accuracy of 85.97 %, sensitivity of 85.64 %, and specificity of 86.26 %. Our study goes beyond conventional CAD screening and provides a reliable option for CAD severity detection in clinical practice.
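The three LKIB properties map naturally onto a depthwise large-kernel convolution (long-range yet local receptive field) followed by a pointwise convolution (channel interaction); the block below is a generic sketch with an assumed kernel size, channel count, and residual wiring, not the paper's LKIB:

```python
import torch
import torch.nn as nn

class LargeKernelBlock(nn.Module):
    """Depthwise large-kernel 1-D conv followed by a pointwise conv for channel
    interaction, wrapped in a residual connection. Generic sketch only."""
    def __init__(self, channels, kernel_size=31):
        super().__init__()
        self.dw = nn.Conv1d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)
        self.pw = nn.Conv1d(channels, channels, kernel_size=1)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.GELU()

    def forward(self, x):                  # x: (batch, channels, time)
        return x + self.act(self.norm(self.pw(self.dw(x))))

x = torch.randn(2, 32, 2000)               # e.g. a 2000-sample PCG segment, 32 feature channels
print(LargeKernelBlock(32)(x).shape)        # torch.Size([2, 32, 2000])
```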
Citations: 0