
Latest publications in Biocybernetics and Biomedical Engineering

Multi-stage fully convolutional network for precise prostate segmentation in ultrasound images
IF 6.4 CAS Q2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2023-07-01 DOI: 10.1016/j.bbe.2023.08.002
Yujie Feng , Chukwuemeka Clinton Atabansi , Jing Nie , Haijun Liu , Hang Zhou , Huai Zhao , Ruixia Hong , Fang Li , Xichuan Zhou

Prostate cancer is one of the most commonly diagnosed non-cutaneous malignant tumors and the sixth leading cause of cancer-related death among men globally. Automatic segmentation of prostate regions has a wide range of applications in prostate cancer diagnosis and treatment. Extracting powerful spatial features for precise prostate segmentation is challenging because prostate size, shape, and histopathologic heterogeneity vary widely among patients. Most existing CNN-based architectures produce unsatisfactory results and inaccurate boundaries in prostate segmentation, caused by inadequate discriminative feature maps and limited spatial information. To address these issues, we propose a novel deep learning technique called the Multi-Stage FCN architecture for 2D prostate segmentation that captures more precise spatial information and more accurate prostate boundaries. In addition, a new prostate ultrasound image dataset, CCH-TRUSPS, was collected from Chongqing University Cancer Hospital, including prostate ultrasound images of various prostate cancer architectures. We evaluate our method on the CCH-TRUSPS dataset and the publicly available Multi-site T2-weighted MRI dataset using five metrics commonly used in medical image analysis. Compared with other CNN-based methods on the CCH-TRUSPS test set, our Multi-Stage FCN achieves the highest binary accuracy of 99.15%, a DSC of 94.90%, an IoU of 89.80%, a precision of 94.67%, and a recall of 96.49%. The statistical and visual results demonstrate that our approach outperforms previous CNN-based techniques across all metrics and can support the clinical diagnosis of prostate cancer.
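Two of the five metrics cited in this abstract, the Dice similarity coefficient (DSC) and intersection over union (IoU), are overlap measures on binary masks. A minimal numpy sketch of how they are computed (toy 4x4 masks, not the paper's data):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    # Dice similarity coefficient: 2|A n B| / (|A| + |B|)
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    # Intersection over union: |A n B| / |A u B|
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# toy binary masks standing in for a predicted and ground-truth prostate region
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(dice_score(pred, gt), 4))  # 2*3/(4+3) ~ 0.8571
print(round(iou_score(pred, gt), 4))   # 3/4 = 0.75
```

Dice weights the intersection twice, so it is always at least as large as IoU on the same pair of masks.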

Biocybernetics and Biomedical Engineering, vol. 43, no. 3, pp. 586-602.
Citations: 0
Attention-guided multiple instance learning for COPD identification: To combine the intensity and morphology
IF 6.4 CAS Q2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2023-07-01 DOI: 10.1016/j.bbe.2023.06.004
Yanan Wu , Shouliang Qi , Jie Feng , Runsheng Chang , Haowen Pang , Jie Hou , Mengqi Li , Yingxi Wang , Shuyue Xia , Wei Qian

Chronic obstructive pulmonary disease (COPD) is a complex and multi-component respiratory disease. Computed tomography (CT) images can characterize lesions in COPD patients, but the image intensity and morphology of lung components have not been fully exploited. Two datasets (Datasets 1 and 2) comprising a total of 561 subjects were obtained from two centers. A multiple instance learning (MIL) method is proposed for COPD identification. First, randomly selected slices (instances) from CT scans and multi-view 2D snapshots of the 3D airway tree and lung field extracted from CT images are acquired. Then, three attention-guided MIL models (slice-CT, snapshot-airway, and snapshot-lung-field models) are trained. In these models, a deep convolutional neural network (CNN) is utilized for feature extraction. Finally, the outputs of the above three MIL models are combined using logistic regression to produce the final prediction. For Dataset 1, the accuracy of the slice-CT MIL model with 20 instances was 88.1%. The VGG-16 backbone outperformed Alexnet, Resnet18, Resnet26, and Mobilenet_v2 in feature extraction. The snapshot-airway and snapshot-lung-field MIL models achieved accuracies of 89.4% and 90.0%, respectively. After the three models were combined, the accuracy reached 95.8%. The proposed model outperformed several state-of-the-art methods and afforded an accuracy of 83.1% on the external dataset (Dataset 2). The proposed weakly supervised MIL method is feasible for COPD identification. The effective CNN module and attention-guided MIL pooling module contribute to performance enhancement. The morphology information of the airway and lung field is beneficial for identifying COPD.
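The attention-guided MIL pooling mentioned above scores each instance embedding, normalizes the scores, and forms a weighted bag representation. A minimal numpy sketch of that pooling step, in the style of gated-attention MIL; the feature and projection sizes here are illustrative assumptions, not the paper's dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instance_feats, V, w):
    """Attention-guided MIL pooling: score each instance embedding,
    softmax the scores, and return the attention-weighted bag feature."""
    scores = np.tanh(instance_feats @ V) @ w   # one scalar per instance
    alpha = softmax(scores)                    # attention weights, sum to 1
    bag = alpha @ instance_feats               # weighted average of instances
    return bag, alpha

# 20 instances (e.g. CT slices per subject) with 64-dim CNN features (toy sizes)
feats = rng.standard_normal((20, 64))
V = rng.standard_normal((64, 32))  # attention projection matrix
w = rng.standard_normal(32)        # attention scoring vector
bag, alpha = attention_mil_pool(feats, V, w)
print(bag.shape, round(alpha.sum(), 6))  # (64,) 1.0
```

The bag-level feature would then be passed to a classifier; in the paper the three bag-level model outputs are further combined with logistic regression.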

Biocybernetics and Biomedical Engineering, vol. 43, no. 3, pp. 568-585.
Citations: 0
Efficient simultaneous segmentation and classification of brain tumors from MRI scans using deep learning
IF 6.4 CAS Q2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2023-07-01 DOI: 10.1016/j.bbe.2023.08.003
Akshya Kumar Sahoo , Priyadarsan Parida , K. Muralibabu , Sonali Dash

Brain tumors can be difficult to diagnose, as they may have similar radiographic characteristics, and a thorough examination may take a considerable amount of time. To address these challenges, we propose an intelligent system for the automatic extraction and identification of brain tumors from 2D contrast-enhanced (CE) MRI images. Our approach comprises two stages. In the first stage, we use an encoder-decoder based U-net with a residual network as the backbone to detect different types of brain tumors, including glioma, meningioma, and pituitary tumors. Our method achieved an accuracy of 99.60%, a sensitivity of 90.20%, a specificity of 99.80%, a Dice similarity coefficient of 90.11%, and a precision of 90.50% for tumor extraction. In the second stage, we employ a YOLO2 (You Only Look Once) based transfer learning approach to classify the extracted tumors, achieving a classification accuracy of 97%. Our proposed approach outperforms state-of-the-art methods found in the literature. The results demonstrate the potential of our method to aid in the diagnosis and treatment of brain tumors.
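This abstract reports accuracy, sensitivity, specificity, and precision for the segmentation stage. These all derive from the pixel-wise confusion counts of a binary tumor mask; a minimal numpy sketch with toy masks (not the paper's data):

```python
import numpy as np

def seg_metrics(pred, target):
    """Pixel-wise confusion counts for a binary segmentation mask,
    returning the four metrics quoted in the abstract."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)    # tumor pixels correctly found
    tn = np.sum(~pred & ~target)  # background correctly rejected
    fp = np.sum(pred & ~target)   # background wrongly marked as tumor
    fn = np.sum(~pred & target)   # tumor pixels missed
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # recall on tumor pixels
        "specificity": tn / (tn + fp),  # recall on background pixels
        "precision": tp / (tp + fp),
    }

pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
m = seg_metrics(pred, gt)
print(m["sensitivity"], m["specificity"])  # 0.75 1.0
```

Because tumor pixels are a small minority of the image, specificity and accuracy tend to be high even when sensitivity is modest, which is why all four numbers are worth reporting together.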

Biocybernetics and Biomedical Engineering, vol. 43, no. 3, pp. 616-633.
Citations: 1
Non-invasive waveform analysis for emergency triage via simulated hemorrhage: An experimental study using novel dynamic lower body negative pressure model
IF 6.4 CAS Q2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2023-07-01 DOI: 10.1016/j.bbe.2023.06.002
Naimahmed Nesaragi , Lars Øivind Høiseth , Hemin Ali Qadir , Leiv Arne Rosseland , Per Steinar Halvorsen , Ilangko Balasingham

The extent to which advanced waveform analysis of non-invasive physiological signals can diagnose levels of hypovolemia remains insufficiently explored. The present study explores the discriminative ability of a deep learning (DL) framework to classify levels of ongoing hypovolemia, simulated via a novel dynamic lower body negative pressure (LBNP) model among healthy volunteers. We used a dynamic LBNP protocol as opposed to the traditional model, where LBNP is applied in a predictable, step-wise, progressively descending manner. This dynamic LBNP version helps circumvent the problem of time dependency, as in real-life pre-hospital settings intravascular blood volume may fluctuate due to volume resuscitation. A supervised DL-based framework for ternary classification was realized by segmenting the underlying noninvasive signal and labeling segments with corresponding LBNP target levels. The proposed DL model with two inputs was trained with respective time-frequency representations extracted from waveform segments to classify each of them into a blood volume loss category: Class 1 (mild); Class 2 (moderate); or Class 3 (severe). The latent space derived at the end of the DL model via late fusion of both inputs assists in enhanced classification performance. When evaluated in a 3-fold cross-validation setup with stratified subjects, the experimental findings demonstrated PPG to be a potential surrogate for variations in blood volume, with average classification performance of AUROC: 0.8861, AUPRC: 0.8141, F1-score: 72.16%, sensitivity: 79.06%, and specificity: 89.21%. Our proposed DL algorithm on the PPG signal demonstrates the possibility of capturing the complex interplay in physiological responses related to both bleeding and fluid resuscitation using this challenging LBNP setup.
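The model above is fed time-frequency representations of waveform segments. The abstract does not specify which transform was used, so as a hedged illustration, here is a naive short-time Fourier transform (STFT) magnitude spectrogram in plain numpy, applied to a synthetic pulse-like signal; the sampling rate, window, and hop sizes are assumptions for the sketch:

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Naive STFT magnitude: Hann-windowed overlapping frames -> |rFFT|.
    Returns a (freq_bins, time_frames) array, the kind of 2-D
    time-frequency image a DL model could consume."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

fs = 100.0                       # hypothetical PPG sampling rate in Hz
t = np.arange(0, 10, 1 / fs)     # one 10-second waveform segment
ppg = np.sin(2 * np.pi * 1.2 * t)  # ~72 bpm pulse as a toy stand-in for PPG
S = spectrogram(ppg)
print(S.shape)  # (33, 30): 33 frequency bins x 30 time frames
```

In practice one spectrogram per segment would be stacked into a batch and labeled with its LBNP target level for supervised training.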

Biocybernetics and Biomedical Engineering, vol. 43, no. 3, pp. 551-567.
Citations: 0
BA-Net: Brightness prior guided attention network for colonic polyp segmentation
IF 6.4 CAS Q2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2023-07-01 DOI: 10.1016/j.bbe.2023.08.001
Haiying Xia , Yilin Qin , Yumei Tan , Shuxiang Song

Automatic polyp segmentation at colonoscopy plays an important role in the early diagnosis and surgery of colorectal cancer. However, the diversity of polyps across images greatly increases the difficulty of segmenting them accurately. Manual segmentation of polyps in colonoscopic images is time-consuming, and the rate of missed polyps remains high. In this paper, we propose a brightness prior guided attention network (BA-Net) for automatic polyp segmentation. Specifically, we first aggregate the high-level features of the last three encoder layers with an enhanced receptive field (ERF) module, which are then fed to the decoder to obtain the initial prediction maps. Then, we introduce a brightness prior fusion (BF) module that fuses the brightness prior information into the multi-scale side-out high-level semantic features. The BF module aims to induce the network to localize salient regions, which may be potential polyps, to obtain better segmentation results. Finally, we propose a global reverse attention (GRA) module that combines the output of the BF module and the initial prediction map to obtain long-range dependence and reverse refinement prediction results. With iterative refinement from higher-level to lower-level semantics, our BA-Net achieves more refined and accurate segmentation. Extensive experiments show that our BA-Net outperforms state-of-the-art methods on six common polyp datasets.
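The reverse-attention idea behind the GRA module can be sketched independently of the full network: the complement of the sigmoid-activated prediction map reweights the feature map, steering refinement toward regions the current prediction misses. A minimal numpy illustration with toy shapes (the paper's actual module also models long-range dependence, which is omitted here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features, pred_logits):
    """Reverse attention: weight features by 1 - sigmoid(prediction),
    emphasizing currently low-confidence (possibly missed) regions."""
    att = 1.0 - sigmoid(pred_logits)   # in (0, 1), large where prediction is low
    return features * att[None, :, :]  # broadcast the map over all channels

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))  # toy (channels, H, W) feature map
logits = rng.standard_normal((16, 16))    # toy initial prediction map
out = reverse_attention(feats, logits)
print(out.shape)  # (8, 16, 16)
```

Because the attention map lies strictly in (0, 1), the operation only attenuates features; confidently predicted regions are suppressed so later stages focus on the boundary and missed areas.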

Biocybernetics and Biomedical Engineering, vol. 43, no. 3, pp. 603-615.
Citations: 0
Transformer-based cross-modal multi-contrast network for ophthalmic diseases diagnosis
IF 6.4 CAS Q2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2023-07-01 DOI: 10.1016/j.bbe.2023.06.001
Yang Yu, Hongqing Zhu

Automatic diagnosis of various ophthalmic diseases from ocular medical images is vital to support clinical decisions. Most current methods employ a single imaging modality, especially 2D fundus images. Considering that the diagnosis of ophthalmic diseases can greatly benefit from multiple imaging modalities, this paper further improves diagnostic accuracy by effectively utilizing cross-modal data. We propose a Transformer-based cross-modal multi-contrast network that efficiently fuses the color fundus photograph (CFP) and optical coherence tomography (OCT) modalities to diagnose ophthalmic diseases. We design a multi-contrast learning strategy to extract discriminative features from cross-modal data for diagnosis. A channel fusion head then captures the semantically shared information across modalities and the similarity features between patients of the same category. Meanwhile, we use a class-balanced training strategy to cope with the class imbalance typical of medical datasets. Our method is evaluated on public benchmark datasets for cross-modal ophthalmic disease diagnosis. The experimental results demonstrate that our method outperforms other approaches. The codes and models are available at https://github.com/ecustyy/tcmn.
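The abstract mentions a class-balanced training strategy without specifying the scheme, so as a hedged illustration, here is one common choice: inverse-frequency class weights that scale each class's loss contribution so minority classes count equally (a sketch, not the paper's exact method):

```python
import numpy as np

def class_balanced_weights(labels, n_classes):
    """Inverse-frequency class weights: weight_c = N / (C * count_c),
    so the average weight is ~1 and rare classes are up-weighted."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

# imbalanced toy label set: 80 / 15 / 5 samples per class
labels = np.array([0] * 80 + [1] * 15 + [2] * 5)
w = class_balanced_weights(labels, 3)
print(np.round(w, 4))  # rare classes receive larger weights
```

These weights would typically be passed to a weighted cross-entropy loss during training.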

Biocybernetics and Biomedical Engineering, vol. 43, no. 3, pp. 507-527.
Citations: 0
Detection of various lung diseases including COVID-19 using extreme learning machine algorithm based on the features extracted from a lightweight CNN architecture
IF 6.4 CAS Q2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2023-07-01 DOI: 10.1016/j.bbe.2023.06.003
Md. Nahiduzzaman , Md Omaer Faruq Goni , Md. Robiul Islam , Abu Sayeed , Md. Shamim Anower , Mominul Ahsan , Julfikar Haider , Marcin Kowalski

Around the world, several lung diseases such as pneumonia, cardiomegaly, and tuberculosis (TB) contribute to severe illness, hospitalization, or even death, particularly for elderly and medically vulnerable patients. In the last few decades, several new types of lung-related diseases have taken the lives of millions of people, and COVID-19 alone has claimed almost 6.27 million lives. To fight lung diseases, timely and correct diagnosis with appropriate treatment is crucial in the current COVID-19 pandemic. In this study, an intelligent recognition system for seven lung diseases is proposed based on machine learning (ML) techniques to aid medical experts. Chest X-ray (CXR) images of lung diseases were collected from several publicly available databases. A lightweight convolutional neural network (CNN) extracts characteristic features from the raw pixel values of the CXR images. The best feature subset is identified using the Pearson correlation coefficient (PCC). Finally, an extreme learning machine (ELM) performs the classification task, enabling faster learning and reduced computational complexity. The proposed CNN-PCC-ELM model achieved an accuracy of 96.22% with an area under the curve (AUC) of 99.48% for eight-class classification. The proposed model outperformed existing state-of-the-art (SOTA) models for COVID-19, pneumonia, and tuberculosis detection in both binary and multiclass classifications. For eight-class classification, the proposed model achieved a precision of 100%, a recall of 99%, an F1-score of 100%, and an AUROC of 99.99% for COVID-19 detection, demonstrating its robustness. The proposed model therefore surpasses existing pioneering models in accurately differentiating COVID-19 from other lung diseases, which can assist physicians in treating patients effectively.
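The last two stages of the CNN-PCC-ELM pipeline, PCC-based feature selection followed by an extreme learning machine, are simple enough to sketch end to end in numpy. The toy data, feature dimensions, and hidden-layer size below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pcc_select(X, y, k):
    """Keep the k features with the highest |Pearson correlation| to the label."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(-np.abs(r))[:k]

def elm_fit(X, y_onehot, n_hidden=50):
    """Extreme learning machine: random fixed input weights,
    output weights solved analytically by least squares (pseudoinverse)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y_onehot     # no iterative backprop needed
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)

# toy 2-class data: feature 0 is informative, the other 9 are noise
X = rng.standard_normal((200, 10))
y = (X[:, 0] > 0).astype(int)
idx = pcc_select(X, y, k=3)       # PCC should rank feature 0 first
Xs = X[:, idx]
W, b, beta = elm_fit(Xs, np.eye(2)[y])
acc = (elm_predict(Xs, W, b, beta) == y).mean()
print(idx[0], round(acc, 2))
```

The one-shot least-squares solve for the output layer is what gives the ELM its speed advantage over backpropagation-trained classifiers, which is the motivation cited in the abstract.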

{"title":"Detection of various lung diseases including COVID-19 using extreme learning machine algorithm based on the features extracted from a lightweight CNN architecture","authors":"Md. Nahiduzzaman ,&nbsp;Md Omaer Faruq Goni ,&nbsp;Md. Robiul Islam ,&nbsp;Abu Sayeed ,&nbsp;Md. Shamim Anower ,&nbsp;Mominul Ahsan ,&nbsp;Julfikar Haider ,&nbsp;Marcin Kowalski","doi":"10.1016/j.bbe.2023.06.003","DOIUrl":"10.1016/j.bbe.2023.06.003","url":null,"abstract":"<div><p>Around the world, several lung diseases such as pneumonia, cardiomegaly, and tuberculosis (TB) contribute to severe illness, hospitalization or even death, particularly for elderly and medically vulnerable patients. In the last few decades, several new types of lung-related diseases have taken the lives of millions of people, and COVID-19 has taken almost 6.27 million lives. To fight against lung diseases, timely and correct diagnosis with appropriate treatment is crucial in the current COVID-19 pandemic. In this study, an intelligent recognition system for seven lung diseases has been proposed based on machine learning (ML) techniques to aid the medical experts. Chest X-ray (CXR) images of lung diseases were collected from several publicly available databases. A lightweight convolutional neural network (CNN) has been used to extract characteristic features from the raw pixel values of the CXR images. The best feature subset has been identified using the Pearson Correlation Coefficient (PCC). Finally, the extreme learning machine (ELM) has been used to perform the classification task to assist faster learning and reduced computational complexity. The proposed CNN-PCC-ELM model achieved an accuracy of 96.22% with an Area Under Curve (AUC) of 99.48% for eight class classification. The outcomes from the proposed model demonstrated better performance than the existing state-of-the-art (SOTA) models in the case of COVID-19, pneumonia, and tuberculosis detection in both binary and multiclass classifications. 
For eight class classification, the proposed model achieved precision, recall and fi-score and ROC are 100%, 99%, 100% and 99.99% respectively for COVID-19 detection demonstrating its robustness. Therefore, the proposed model has overshadowed the existing pioneering models to accurately differentiate COVID-19 from the other lung diseases that can assist the medical physicians in treating the patient effectively.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 3","pages":"Pages 528-550"},"PeriodicalIF":6.4,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42255709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MDCF_Net: A Multi-dimensional hybrid network for liver and tumor segmentation from CT
IF 6.4 · CAS Zone 2, Medicine · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2023-04-01 · DOI: 10.1016/j.bbe.2023.04.004
Jian Jiang , Yanjun Peng , Qingfan Hou , Jiao Wang

The segmentation of the liver and liver tumors is critical in the diagnosis of liver cancer, and the high mortality rate of liver cancer has made it one of the most active areas of segmentation research. Some deep learning segmentation methods outperform traditional methods in terms of segmentation results. However, they are unable to obtain satisfactory results due to blurred original image boundaries, the presence of noise, very small lesion sites, and other factors. In this paper, we propose MDCF_Net, which has dual encoding branches composed of a CNN and CnnFormer and can fully utilize multi-dimensional image features. First, it extracts both intra-slice and inter-slice information and improves the accuracy of the network output by symmetrically using multi-dimensional fusion layers. Meanwhile, we propose a novel feature map stacking approach that focuses on the correlation of adjacent channels of two feature maps, improving the network's ability to perceive 3D features. Furthermore, the two encoding branches collaborate to obtain both texture and edge features, further improving segmentation performance. Extensive experiments were carried out on the public LiTS dataset to determine the optimal slice thickness for this task. The superiority of the segmentation performance of the proposed MDCF_Net was confirmed by comparison with other leading methods on two public datasets, LiTS and 3DIRCADb.
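Segmentation quality on LiTS-style benchmarks is typically reported with the Dice coefficient and IoU. The following is a standard-metric sketch, not code from the paper, and the toy masks are hypothetical:

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

# Toy 8x8 masks whose overlap is 3 of 4 rows.
pred = np.zeros((8, 8)); pred[:4, :] = 1
target = np.zeros((8, 8)); target[1:5, :] = 1
dice, iou = dice_and_iou(pred, target)   # dice ≈ 0.75, iou ≈ 0.6
```

The small `eps` keeps both metrics defined when a slice contains no foreground pixels, a common edge case for tiny lesions.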

{"title":"MDCF_Net: A Multi-dimensional hybrid network for liver and tumor segmentation from CT","authors":"Jian Jiang ,&nbsp;Yanjun Peng ,&nbsp;Qingfan Hou ,&nbsp;Jiao Wang","doi":"10.1016/j.bbe.2023.04.004","DOIUrl":"10.1016/j.bbe.2023.04.004","url":null,"abstract":"<div><p><span><span>The segmentation of the liver and liver tumors is critical in the diagnosis of liver cancer, and the high mortality rate of liver cancer has made it one of the most popular areas for segmentation research. Some deep learning </span>segmentation methods outperformed traditional methods in terms of segmentation results. However, they are unable to obtain satisfactory segmentation results due to blurred original image boundaries, the presence of noise, very small lesion sites, and other factors. In this paper, we propose MDCF_Net, which has dual encoding branches composed of </span>CNN and CnnFormer and can fully utilize multi-dimensional image features. First, it extracts both intra-slice and inter-slice information and improves the accuracy of the network output by symmetrically using multi-dimensional fusion layers. In the meantime, we propose a novel feature map stacking approach that focuses on the correlation of adjacent channels of two feature maps, improving the network's ability to perceive 3D features. Furthermore, the two coding branches collaborate to obtain both texture and edge features, and the network segmentation performance is further improved. Extensive experiments were carried out on the public datasets LiTS to determine the optimal slice thickness for this task. 
The superiority of the segmentation performance of our proposed MDCF_Net was confirmed by comparison with other leading methods on two public datasets, the LiTS and the 3DIRCADb.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 2","pages":"Pages 494-506"},"PeriodicalIF":6.4,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47898926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Predicting muscle fatigue during dynamic contractions using wavelet analysis of surface electromyography signal
IF 6.4 · CAS Zone 2, Medicine · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2023-04-01 · DOI: 10.1016/j.bbe.2023.04.002
MohammadJavad Shariatzadeh , Ehsan Hadizadeh Hafshejani , Cameron J.Mitchell , Mu Chiao , Dana Grecov

Muscle fatigue is defined as a reduction in the capability of a muscle to exert force or power. Although surface electromyography (sEMG) signals recorded during exercise have been used to assess muscle fatigue, analyzing the sEMG signal during dynamic contractions is difficult because of many signal-distorting factors, such as electrode movement and variations in muscle tissue conductivity. Besides the non-deterministic and non-stationary nature of sEMG in dynamic contractions, no fatigue indicator is available to predict the ability of a muscle to apply force based on the sEMG signal properties.

In this study, we designed and manufactured a novel wearable sensor system with both sEMG electrodes and motion tracking sensors to monitor the dynamic muscle movements of human subjects. We detected the state of muscle fatigue using a new wavelet analysis method to predict the maximum isometric force the subject can apply during dynamic contraction.

Our signal processing method consists of four main steps: 1) segmenting sEMG signals using the motion tracking signals; 2) determining the most suitable mother wavelet for the discrete wavelet transform (DWT) based on cross-correlation between wavelets and signals; 3) denoising the sEMG using the DWT method; and 4) calculating the normalized energy at different decomposition levels to predict the maximal voluntary isometric contraction force as an indicator of muscle fatigue.
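Step 4 — normalized subband energy from a multi-level DWT — can be illustrated with a hand-rolled Haar transform. This is a dependency-free sketch, not the paper's code: the paper selects its mother wavelet by cross-correlation, whereas Haar is hard-coded here, and the toy signal is hypothetical:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]                    # drop an odd trailing sample
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def normalized_band_energies(x, levels=4):
    """Multi-level Haar decomposition; return each band's share of total energy,
    ordered [detail level 1, ..., detail level L, final approximation]."""
    bands = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        bands.append(np.sum(d ** 2))
    bands.append(np.sum(a ** 2))
    total = sum(bands)
    return np.array(bands) / total

# Toy "sEMG": slow drift plus high-frequency activity; fatigue studies track
# how this energy distribution shifts across bands over time.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
sig = 0.5 * np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 200 * t)
energies = normalized_band_energies(sig, levels=4)  # shares sum to 1
```

Each decomposition level halves the frequency band, so the detail energies give a coarse spectral profile whose drift toward lower bands is a classic sign of fatigue.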

The monitoring system was tested on healthy adults doing biceps curl exercises, and the results of the wavelet decomposition method were compared to well-known muscle fatigue indices in the literature.

{"title":"Predicting muscle fatigue during dynamic contractions using wavelet analysis of surface electromyography signal","authors":"MohammadJavad Shariatzadeh ,&nbsp;Ehsan Hadizadeh Hafshejani ,&nbsp;Cameron J.Mitchell ,&nbsp;Mu Chiao ,&nbsp;Dana Grecov","doi":"10.1016/j.bbe.2023.04.002","DOIUrl":"10.1016/j.bbe.2023.04.002","url":null,"abstract":"<div><p>Muscle fatigue is defined as a reduction in the capability of muscle to exert force or power. Although surface electromyography<span> (sEMG) signals during exercise have been used to assess muscle fatigue, analyzing the sEMG signal during dynamic contractions is difficult because of the many signal distorting factors such as electrode movements, and variations in muscle tissue conductivity. Besides the non-deterministic and non-stationary nature of sEMG in dynamic contractions, no fatigue indicator is available to predict the ability of a muscle to apply force based on the sEMG signal properties.</span></p><p>In this study, we designed and manufactured a novel wearable sensor<span><span> system with both sEMG electrodes and motion tracking sensors to monitor the dynamic muscle movements of human subjects. We detected the state of muscle fatigue using a new </span>wavelet analysis method to predict the maximum isometric force the subject can apply during dynamic contraction.</span></p><p>Our method of signal processing consists of four main steps. 1- Segmenting sEMG signals using motion tracking signals. 2- Determine the most suitable mother wavelet for discrete wavelet transformation (DWT) based on cross-correlation between wavelets and signals. 3- Deoinsing the sEMG using the DWT method. 
4- Calculation of normalized energy in different decomposition levels<span> to predict maximal voluntary isometric contraction force as an indicator of muscle fatigue.</span></p><p>The monitoring system was tested on healthy adults doing biceps curl exercises, and the results of the wavelet decomposition method were compared to well-known muscle fatigue indices in the literature.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 2","pages":"Pages 428-441"},"PeriodicalIF":6.4,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45374300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Wavelet-Hilbert transform based bidirectional least squares grey transform and modified binary grey wolf optimization for the identification of epileptic EEGs
IF 6.4 · CAS Zone 2, Medicine · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2023-04-01 · DOI: 10.1016/j.bbe.2023.04.003
Chang Liu , Wanzhong Chen , Tao Zhang

Wavelet-based seizure detection is an important topic for epilepsy diagnosis via electroencephalogram (EEG), but its performance is closely related to the choice of wavelet bases. To overcome this issue, a fusion method combining wavelet packet transformation (WPT), a Hilbert transform based bidirectional least squares grey transform (HTBiLSGT), modified binary grey wolf optimization (MBGWO), and the fuzzy K-Nearest Neighbor (FKNN) classifier is proposed. The HTBiLSGT is first proposed to model the envelope change of a signal; WPT-based HTBiLSGT is then developed for EEG feature extraction by performing HTBiLSGT on each subband at each wavelet level. To select discriminative features, MBGWO is further put forward and employed for feature selection, and the selected features are finally fed into the FKNN for classification. The Bonn and CHB-MIT EEG datasets were used to verify the effectiveness of the proposed technique. Experimental results indicate that the proposed combination of WPT-based HTBiLSGT, MBGWO, and FKNN leads to the highest accuracies of 100% and 98.60 ± 1.35% for the ternary and quinary classification cases of the Bonn dataset, respectively, and to an overall accuracy of 99.48 ± 0.61% for the CHB-MIT dataset; the proposal is also shown to be insensitive to the choice of wavelet bases.
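Of the fused components, the fuzzy K-Nearest Neighbor classifier is the simplest to sketch. The following is an illustrative implementation in the style of Keller et al.'s fuzzy k-NN — neighbor votes weighted by inverse distance raised to 2/(m−1) — not the authors' code, and the toy clusters standing in for EEG features are hypothetical:

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, X_test, k=3, m=2.0):
    """Fuzzy k-NN: each of the k nearest neighbors votes with weight
    1/d**(2/(m-1)); the class with the largest total membership wins."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]                      # k nearest neighbors
        w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + 1e-12)
        membership = [w[y_train[nn] == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(membership))])
    return np.array(preds)

# Toy two-class problem: two well-separated clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),     # class 0 near the origin
               rng.normal(2.0, 0.3, (20, 2))])    # class 1 near (2, 2)
y = np.array([0] * 20 + [1] * 20)
pred = fuzzy_knn_predict(X, y, np.array([[0.1, 0.0], [2.1, 1.9]]))
```

The fuzzifier `m` controls how sharply closer neighbors dominate; at m = 2 the weights reduce to inverse squared distance.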

{"title":"Wavelet-Hilbert transform based bidirectional least squares grey transform and modified binary grey wolf optimization for the identification of epileptic EEGs","authors":"Chang Liu ,&nbsp;Wanzhong Chen ,&nbsp;Tao Zhang","doi":"10.1016/j.bbe.2023.04.003","DOIUrl":"10.1016/j.bbe.2023.04.003","url":null,"abstract":"<div><p>Wavelet based seizure detection is an importance topic for epilepsy diagnosis via electroencephalogram (EEG), but its performance is closely related to the choice of wavelet bases. To overcome this issue, a fusion method of wavelet packet transformation (WPT), Hilbert transform based bidirectional least squares grey transform (HTBiLSGT), modified binary grey wolf optimization (MBGWO) and fuzzy K-Nearest Neighbor (FKNN) was proposed. The HTBiLSGT was first proposed to model the envelope change of a signal, then WPT based HTBiLSGT was developed for EEG feature extraction by performing HTBiLSGT for each subband of each wavelet level. To select discriminative features, MBGWO was further put forward and employed to conduct feature selection, and the selected features were finally fed into FKNN for classification. The Bonn and CHB-MIT EEG datasets were used to verify the effectiveness of the proposed technique. 
Experimental results indicate the proposed WPT based HTBiLSGT, MBGWO and FKNN can respectively lead to the highest accuracies of 100% and 98.60 ± 1.35% for the ternary and quinary classification cases of Bonn dataset, it also results in the overall accuracy of 99.48 ± 0.61 for the CHB-MIT dataset, and the proposal is proven to be insensitive to the choice of wavelet bases.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"43 2","pages":"Pages 442-462"},"PeriodicalIF":6.4,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49373643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1