
Latest Publications in Biomedical Signal Processing and Control

DEF-SwinE2NET: Dual enhanced features guided with multi-model fusion for brain tumor classification using preprocessing optimization
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-28 | DOI: 10.1016/j.bspc.2024.107079
Muhammad Ghulam Abbas Malik , Adnan Saeed , Khurram Shehzad , Muddesar Iqbal
Brain tumors exhibit significant variability in shape, size, and location, making consistent and accurate classification difficult and calling for advanced algorithms that can handle diverse tumor presentations. To address this, we propose a Dual-Enhanced Features Scheme (DEFS) with a Swin-Transformer model built on EfficientNetV2S to improve classification and parameter reuse. In DEFS, the dilated dense block uncovers hidden details and spatial relationships across varying scales that traditional convolutional layers typically obscure. This module is particularly crucial in medical imaging, where tumors and anomalies present in various sizes and shapes. Further, the dual-attention mechanism in the enhanced-features scheme improves the explainability and interpretability of the model by using spatial and channel-wise information. Additionally, the Swin-Transformer block improves the model's ability to capture global patterns in brain-tumor images, which is highly advantageous in medical imaging, where the location and extent of abnormalities such as tumors can vary significantly. To strengthen the proposed DEF-SwinE2NET, we used EfficientNetV2S as the baseline model because of its effectiveness and more accurate classification compared to its predecessors. We evaluated DEF-SwinE2NET on three benchmark datasets: two sourced from Kaggle and one from a Figshare repository. Several preprocessing steps were applied to enhance the MRI images before training, including image cropping, median-filter noise reduction, contrast-limited adaptive histogram equalization (CLAHE) for local-contrast enhancement, Laplacian edge enhancement to highlight critical features, and data augmentation to improve model robustness and generalization. The DEF-SwinE2NET model achieves remarkable results, with an accuracy of 99.43%, a sensitivity of 99.39%, and an F1-score of 99.41%.
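The preprocessing chain described in the abstract (median filtering, contrast equalization, Laplacian edge enhancement) can be sketched in NumPy alone. This is an illustrative sketch, not the authors' code: the kernel sizes are assumptions, and global histogram equalization stands in for the paper's tile-based CLAHE.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _windows(img: np.ndarray) -> np.ndarray:
    # 3x3 neighbourhoods with edge padding, shape (H, W, 3, 3).
    return sliding_window_view(np.pad(img, 1, mode="edge"), (3, 3))

def preprocess_mri(img: np.ndarray) -> np.ndarray:
    """Sketch of the preprocessing chain; parameters are assumptions."""
    # 1. Median filter for noise reduction.
    denoised = np.median(_windows(img), axis=(2, 3)).astype(np.uint8)
    # 2. Histogram equalization for contrast (the paper uses tile-based CLAHE).
    hist = np.bincount(denoised.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1.0)
    equalized = cdf[denoised].astype(np.int16)
    # 3. Laplacian edge enhancement: subtract the Laplacian to sharpen edges.
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
    lap = np.sum(_windows(equalized) * k, axis=(2, 3))
    return np.clip(equalized - lap, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in MRI slice
out = preprocess_mri(img)
```

In practice `cv2.medianBlur`, `cv2.createCLAHE`, and `cv2.Laplacian` would replace the hand-rolled steps; the NumPy version only makes each operation explicit.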
Biomedical Signal Processing and Control, Volume 100, Article 107079.
Citations: 0
Wireless capsule endoscopy anomaly classification via dynamic multi-task learning
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-25 | DOI: 10.1016/j.bspc.2024.107081
Xingcun Li , Qinghua Wu , Kun Wu
Wireless capsule endoscopy (WCE) provides a painless, non-invasive means of early gastrointestinal disease detection and cancer prevention. However, lesion images account for only about 5% of the tens of thousands of frames clinicians must review, highlighting the need for computer-assisted diagnostic methods that enhance efficiency and reduce the elevated misdiagnosis rates attributed to visual fatigue. Previous research relied heavily on module design, which is effective but tightly coupled to the baseline and incurs additional computational cost. This paper proposes a dynamic multi-task learning method that combines triplet loss and weighted cross-entropy loss to guide the model in learning compact fine-grained representations and establishing less biased decision boundaries, respectively, without incurring additional computational cost. Our method outperforms previous advanced methods on two publicly available datasets, achieving an F1 score of 96.47% on Kvasir-Capsule and an F1 score of 96.75% with an accuracy of 96.72% on CAD-CAP. Visualization of the representations and heatmaps confirms the model's precision in focusing on the lesion area. The prediction model has been uploaded to https://github.com/xli122/WCE_MTL.
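The two loss terms the paper combines can be sketched in NumPy as below. This is a minimal illustration: the margin, class weights, and toy batch are assumptions, and the paper's dynamic balancing between the two tasks is not reproduced.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull anchor-positive pairs together, push anchor-negative apart.
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

def weighted_cross_entropy(logits, labels, class_weights):
    # Softmax cross-entropy with per-class weights to counter class imbalance.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    w = class_weights[labels]
    return -np.sum(w * log_probs[np.arange(len(labels)), labels]) / np.sum(w)

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))          # toy embedding batch
logits = rng.normal(size=(4, 3))       # toy classifier outputs
labels = np.array([0, 1, 2, 1])
loss = triplet_loss(emb, emb + 0.01, rng.normal(size=(4, 8))) + \
       weighted_cross_entropy(logits, labels, np.array([1.0, 2.0, 2.0]))
```

The triplet term shapes the embedding space while the weighted cross-entropy term shapes the decision boundary, which is why the paper assigns them the two respective roles.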
Biomedical Signal Processing and Control, Volume 100, Article 107081.
Citations: 0
CheXDouble: Dual-Supervised interpretable disease diagnosis model
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-25 | DOI: 10.1016/j.bspc.2024.107026
Zhiwei Tang , You Yang
Chest X-ray imaging, commonly used for diagnosing cardiopulmonary diseases, typically requires radiologists to devote considerable effort to reading and interpreting the images. Moreover, diagnostic outcomes can vary with radiologists' experience. Deep learning for chest X-ray disease diagnosis holds great promise for enhancing diagnostic accuracy and reducing radiologists' workload. However, traditional deep learning models for medical image classification are often difficult to interpret. To address this, we introduce the Global Attention Alignment Module, which uses cardiopulmonary masks for supervised training. This provides the model with spatial location priors during training, thereby enhancing the interpretability of the saliency maps and the disease classification performance. Additionally, most chest X-ray datasets suffer from severe imbalances between positive and negative samples, leading to classification imbalance when training models. We therefore propose the Improved Focal Loss, which dynamically adjusts the weight of negative samples in the loss function based on sample statistics, effectively mitigating the imbalance in the dataset. Moreover, training deep learning models for medical image classification requires substantial data. We therefore conducted a quantitative analysis of the impact of five data augmentation methods on classification performance across various input image sizes, identifying the most effective augmentation strategy. Ultimately, through these methods, we developed the dual-supervised medical imaging disease diagnosis model CheXDouble, which surpasses previous state-of-the-art models with highly competitive disease classification performance.
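A focal loss whose negative-sample weight is derived from sample statistics can be sketched as follows. The abstract does not give the paper's exact weighting rule, so the proportional rule below is an assumption made for illustration.

```python
import numpy as np

def improved_focal_loss(probs, labels, pos_count, neg_count, gamma=2.0):
    # Down-weight the dominant negative class in proportion to its share of
    # the samples; the focal term (1 - p_t)^gamma emphasizes hard examples.
    alpha_neg = pos_count / (pos_count + neg_count)  # small when negatives dominate
    alpha = np.where(labels == 1, 1.0 - alpha_neg, alpha_neg)
    p_t = np.clip(np.where(labels == 1, probs, 1.0 - probs), 1e-7, 1.0)
    return np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t))

probs = np.array([0.9, 0.2, 0.8, 0.1])   # predicted P(disease present)
labels = np.array([1, 0, 1, 0])
# Dataset statistics: 100 positives vs 900 negatives (hypothetical counts).
loss = improved_focal_loss(probs, labels, pos_count=100, neg_count=900)
```

With 900 negatives against 100 positives, each negative sample contributes with weight 0.1 and each positive with weight 0.9, which is the imbalance-correction effect the paper targets.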
Biomedical Signal Processing and Control, Volume 100, Article 107026.
Citations: 0
Neighborhood transformer for sparse-view X-ray 3D foot reconstruction
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-25 | DOI: 10.1016/j.bspc.2024.107082
Wei Wang , Li An , Mingquan Zhou , Gengyin Han
In medical imaging, sparse-view X-ray 3D reconstruction is crucial for analyzing and diagnosing foot bone structures. However, existing methods face limitations when handling sparse view data and complex bone structures. To improve reconstruction accuracy and detail preservation, this paper proposes an innovative sparse-view X-ray 3D foot reconstruction technique based on a Neighborhood Transformer. A new Neighborhood Position Encoding strategy divides X-ray images into local regions using a window mechanism and precisely selects these regions through nearest-neighbor methods, thereby capturing detailed image features. Building on existing NeRF (Neural Radiance Fields) technology, the paper introduces the Neighborhood Transformer module, which significantly improves the model's ability to represent complex foot bone structures through depthwise separable convolutions and a dual-branch local–global Transformer network. Additionally, an adaptive weight learning strategy within the Transformer module enables the model to better capture long-distance dependencies, improving its ability to handle sparse view data.
Biomedical Signal Processing and Control, Volume 100, Article 107082.
Citations: 0
Machine learning-based pulse wave analysis for classification of circle of Willis topology: An in silico study with 30,618 virtual subjects
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-25 | DOI: 10.1016/j.bspc.2024.106999
Ahmet Sen , Miquel Aguirre , Peter H Charlton , Laurent Navarro , Stéphane Avril , Jordi Alastruey

Background and Objective

The topology of the circle of Willis (CoW) is crucial in cerebral circulation and significantly impacts patient management. Incomplete CoW structures increase stroke risk and post-stroke damage. Current detection methods using computed tomography and magnetic resonance scans are often invasive, time-consuming, and costly. This study investigated the use of machine learning (ML) to classify CoW topology through arterial blood flow velocity pulse waves (PWs), which can be noninvasively measured with Doppler ultrasound.

Methods

A database of in silico PWs from 30,618 virtual subjects, aged 25 to 75 years, with complete and incomplete CoW topologies was created and validated against in vivo data. Seven ML architectures were trained and tested using 45 combinations of carotid, vertebral and brachial artery PWs, with varying levels of artificial noise to mimic real-world measurement errors. SHapley Additive exPlanations (SHAP) were used to interpret the predictions made by the artificial neural network (ANN) models.

Results

A convolutional neural network achieved the highest accuracy (98%) for CoW topology classification using a combination of one vertebral and one common carotid velocity PW without noise. Under a 20% noise-to-signal ratio, a multi-layer perceptron model had the highest prediction rate (79%). All ML models performed best for topologies lacking posterior communicating arteries. Mean and peak systolic velocities were identified as key features influencing ANN predictions.

Conclusions

ML-based PW analysis shows significant potential for efficient, noninvasive CoW topology detection via Doppler ultrasound. The dataset, post-processing tools, and ML code are freely available to support further research.
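As a rough illustration of the study's setup, the sketch below trains a small scikit-learn MLP on synthetic stand-in "pulse waves" whose two classes differ in mean and peak systolic velocity, under the 20% noise-to-signal ratio mentioned in the Results. The waveform model, feature dimensions, and network size are all assumptions, not the paper's in silico database.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def make_pw(cls, n=100):
    # Synthetic stand-in velocity pulse waves: the two fake "topology"
    # classes differ in peak systolic velocity, one of the key features
    # the paper's SHAP analysis identified.
    t = np.linspace(0.0, 1.0, 64)
    base = (40.0 + 10.0 * cls) * np.exp(-((t - 0.2) ** 2) / 0.01) + 20.0
    return base + rng.normal(scale=2.0, size=(n, 64))

X = np.vstack([make_pw(0), make_pw(1)])
y = np.repeat([0, 1], 100)

# Add measurement noise at a 20% noise-to-signal ratio, mimicking the
# paper's robustness test.
X_noisy = X + 0.2 * np.abs(X).mean() * rng.normal(size=X.shape)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_noisy, y)
acc = clf.score(X_noisy, y)
```

A real replication would draw waveforms from the released dataset rather than this toy generator and evaluate on a held-out split.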
Biomedical Signal Processing and Control, Volume 100, Article 106999. Open Access.
Citations: 0
An edge association graph network conforming to embryonic morphology for automated grading of day 3 human embryos
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-25 | DOI: 10.1016/j.bspc.2024.107108
Shuailin You , Chi Dong , Bo Huang , Langyuan Fu , Yaqiao Zhang , Lihong Han , Xinmeng Rong , Ying Jin , Dongxu Yi , Huazhe Yang , Zhiying Tian , Wenyan Jiang

Purpose

Embryo grading is an essential component of assisted reproductive technologies and a crucial prerequisite for successful embryo transfer. An effective embryo grading method can help embryologists automatically evaluate embryo quality and select high-quality embryos.

Methods

This study enrolled 5836 embryonic images from 2880 couples who underwent assisted reproductive therapy at our hospital between September 2016 and March 2023. We propose an edge association graph (EAG) model comprising a two-stage network: (i) a first-stage edge segmentation network that quantifies the edges of embryo cells and fragments; and (ii) a second-stage network that uses this quantitative edge information to construct an edge relationship graph and extracts spatial topological information with a graph neural network (GNN) to accomplish embryo grading. Five embryologists with varying years of experience were invited to compare their embryo grading with the EAG on an independent test set.

Results and conclusions

Our EAG successfully achieved automatic four-category embryo grading and outperformed existing state-of-the-art methods on both microscopic (accuracy = 0.8696, recall = 0.8484, precision = 0.8883, F1-score = 0.8658) and time-lapse (accuracy = 0.7671, recall = 0.6843, precision = 0.7663, F1-score = 0.6918) embryo images. The EAG also outperformed the five embryologists' average, indicating its superiority for embryo grading and good potential for clinically assisted embryo reproduction applications.
Biomedical Signal Processing and Control, Volume 100, Article 107108.
Citations: 0
Image fusion by multiple features in the propagated filtering domain
IF 4.9 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-10-25 | DOI: 10.1016/j.bspc.2024.106990
Jiao Du , Weisheng Li , Yidong Peng , Qianjing Zong
Visual high-contrast information, such as texture and color, contained in the input biomedical imaging data should be preserved as much as possible in the fused image. To preserve the high-intensity textural and color information of the input images, this paper proposes an image fusion method that applies propagated filtering and multiple features to the two input modalities. The method comprises three steps. First, the inputs are decomposed by propagated filtering with different window sizes into multiscale coarse images containing edge information and multiscale detail images containing textural information. Second, an entropy-based rule combines the coarse images so that they retain more edge information, and a rule based on multiple features, including luminance, orientation, and phase, combines the detail images to preserve textural and color information with less distortion. Finally, the fused image is obtained by adding the fused coarse and fused detail images in the spatial domain. Experiments on the fusion of co-registered biomedical images show that the proposed method preserves high-intensity textural information and true color information.
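The entropy-based rule for combining the coarse images can be illustrated as follows. The histogram-entropy definition and the proportional weighting are plausible assumptions for the sketch, not necessarily the paper's exact rule.

```python
import numpy as np

def image_entropy(img, bins=32):
    # Shannon entropy of the intensity histogram (values assumed in [0, 1]).
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def entropy_fuse(coarse_a, coarse_b):
    # Weight each coarse layer by how much structural (edge) information,
    # measured as histogram entropy, it carries.
    ea, eb = image_entropy(coarse_a), image_entropy(coarse_b)
    wa = ea / (ea + eb + 1e-12)
    return wa * coarse_a + (1.0 - wa) * coarse_b

rng = np.random.default_rng(2)
a = rng.random((64, 64))          # stand-in coarse layer, modality A
b = rng.random((64, 64)) * 0.5    # stand-in coarse layer, modality B
fused = entropy_fuse(a, b)
```

The higher-entropy layer dominates the fused result, which is the sense in which the rule makes the combined coarse image "contain much more edge information."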
{"title":"Image fusion by multiple features in the propagated filtering domain","authors":"Jiao Du ,&nbsp;Weisheng Li ,&nbsp;Yidong Peng ,&nbsp;Qianjing Zong","doi":"10.1016/j.bspc.2024.106990","DOIUrl":"10.1016/j.bspc.2024.106990","url":null,"abstract":"<div><div>Visual high-contrast information, such as texture and color, contained in input biomedical imaging data should be preserved as much as possible in the fused image. To preserve the high-intensity textural and color information from input images, an image fusion method is proposed in this paper that utilizes propagated filtering and multiple features from the input images as two modalities. The method includes three steps. First, the inputs are decomposed into multiscale coarse images containing edge information and multiscale detail images containing textural information obtained by propagated filtering using different window sizes. Second, an entropy-based rule is used to combine the coarse images to contain much more edge information. A multiple features-based rule, including luminance, orientation and phase, is used to combine the detail images with the aim of preserving textural information and color information with less distortion. Finally, the fused image is obtained by adding the fused coarse and fused detail images in spatial-domain transformation. 
The experimental results on the fusion of co-registered biomedical image show that the proposed method preserves textural information with high-intensity and true color information.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"100 ","pages":"Article 106990"},"PeriodicalIF":4.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
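The three-step scheme (decompose, fuse per layer, recombine) can be sketched in a few lines of numpy. The sketch below is illustrative only: a box filter stands in for the propagated filter, global histogram entropy stands in for the paper's entropy rule, and a max-absolute rule replaces the luminance/orientation/phase features; images are assumed to be single-channel arrays scaled to [0, 1].

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box filter as a stand-in for propagated filtering."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def shannon_entropy(img, bins=64):
    """Shannon entropy of the grey-level histogram (values in [0, 1])."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse(img_a, img_b, k=5):
    # Step 1: decompose each input into coarse (edges) and detail (texture).
    coarse_a, coarse_b = box_blur(img_a, k), box_blur(img_b, k)
    detail_a, detail_b = img_a - coarse_a, img_b - coarse_b
    # Step 2a: entropy-weighted combination of the coarse layers.
    ea, eb = shannon_entropy(coarse_a), shannon_entropy(coarse_b)
    wa = ea / (ea + eb + 1e-12)
    coarse_f = wa * coarse_a + (1 - wa) * coarse_b
    # Step 2b: max-absolute selection keeps the stronger texture response.
    detail_f = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)
    # Step 3: recombine the fused layers in the spatial domain.
    return coarse_f + detail_f
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the decomposition and recombination are lossless.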
Citations: 0
Lung vessel segmentation and abnormality classification based on hybrid mobile-Lenet using CT image
IF 4.9 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-10-24 DOI: 10.1016/j.bspc.2024.107072
Sadish Sendil Murugaraj , Kalpana Vadivelu , Prabhu Thirugnana Sambandam , B. Santhosh Kumar
Studies have established that viral pneumonia affects the lung vessels; nevertheless, the diagnostic potential of chest Computed Tomography (CT) imaging parameters is rarely leveraged. This research introduces the Hybrid Mobile LeNet (HM-LeNet) for lung vessel segmentation and abnormality classification. First, the input CT image is obtained from the database and preprocessed with a Non-Local Means (NLM) filter. Then, lung lobe segmentation is carried out using K-Net, followed by pulmonary vessel segmentation. Finally, features are extracted and the lung abnormality is classified by the designed HM-LeNet, an integration of MobileNet and LeNet, into emphysema, nodules, or pulmonary embolism. The established HM-LeNet attained a maximum accuracy, True Positive Rate (TPR), and True Negative Rate (TNR) of 92.7%, 96.6%, and 94.7%, respectively.
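As a concrete illustration of the NLM preprocessing step, here is a minimal, unoptimized numpy implementation of Non-Local Means on a single-channel image. The patch size, search window, and smoothing parameter are illustrative, not those of the paper; production code would typically use a library routine such as scikit-image's `denoise_nl_means`.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=5, h=0.1):
    """Minimal Non-Local Means: each pixel becomes a weighted average of
    pixels in its search window, weighted by patch similarity."""
    half_p, half_s = patch // 2, search // 2
    padded = np.pad(img, half_p + half_s, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + half_p + half_s, j + half_p + half_s
            ref = padded[ci - half_p:ci + half_p + 1, cj - half_p:cj + half_p + 1]
            weights, values = [], []
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - half_p:ni + half_p + 1, nj - half_p:nj + half_p + 1]
                    d2 = float(((ref - cand) ** 2).mean())   # patch distance
                    weights.append(np.exp(-d2 / (h * h)))    # similar patches weigh more
                    values.append(padded[ni, nj])
            weights = np.array(weights)
            out[i, j] = (weights * np.array(values)).sum() / weights.sum()
    return out
```

On a noisy but otherwise flat region, the filter should reduce the pixel variance while preserving the image shape.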
Citations: 0
Identifying diagnostic biomarkers for Erythemato-Squamous diseases using explainable machine learning
IF 4.9 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-10-24 DOI: 10.1016/j.bspc.2024.107101
Zheng Wang , Li Chang , Tong Shi , Hui Hu , Chong Wang , Kaibin Lin , Jianglin Zhang
Erythemato-squamous diseases (ESD) are a heterogeneous group encompassing six clinically and histopathologically overlapping subtypes, representing a substantial diagnostic challenge within dermatology. The existing body of research reveals a notable void in detailed examinations that deconvolute the distinct features endemic to each ESD variant. To bridge this knowledge gap, our study applied Explainable Artificial Intelligence (XAI) techniques to systematically elucidate the intricate diagnostic biomarker profiles unique to each ESD category. Methodological rigor was fortified through the employment of stratified cross-validation, bolstering the robustness and generalizability of our diagnostic model. The CatBoost classifier emerged as a preeminent algorithm within our analytical framework, manifesting exemplary classification prowess with an accuracy of 99.07%, precision of 99.12%, recall of 98.89%, and an F1 score of 98.97%. Central to our inquiry was the deployment of Shapley Additive exPlanations (SHAP) values, which afforded granular insight into the contributory weight of individual diagnostic biomarkers for each ESD subtype. Our findings delineated pivotal diagnostic biomarkers including saw-tooth appearance of retes (STAR), melanin incontinence (MI), vacuolisation and damage of basal layer (VDBL), polygonal papules (PP), and band-like infiltrate (BLI) as instrumental in the identification of seborrheic dermatitis, while Psoriasis was characterized by fibrosis of the papillary dermis (FPD), thinning of the suprapapillary epidermis (TSE), elongation of the rete ridges (ERR), clubbing of the rete ridges (CRR), and notable psoriatic spongiosis. This integrative approach, leveraging the analytical acumen of Random Forest coupled with the interpretability afforded by SHAP, signifies a significant advancement in the nuanced diagnostic landscape of ESD.
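The stratified cross-validation used to fortify the evaluation can be illustrated with a minimal pure-Python sketch (the study itself would typically rely on a library implementation such as scikit-learn's `StratifiedKFold`). Each test fold preserves the class proportions of the six ESD subtypes by assigning each class's samples to folds round-robin.

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Yield (train_idx, test_idx) pairs whose test folds preserve the
    class proportions of `labels` (round-robin assignment per class)."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)   # spread each class evenly over folds
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test
```

With six balanced classes of ten samples each and k=5, every test fold contains exactly two samples per class, so each fold mirrors the overall class distribution.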
Citations: 0
A novel SLCA-UNet architecture for automatic MRI brain tumor segmentation
IF 4.9 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-10-24 DOI: 10.1016/j.bspc.2024.107047
P.S. Tejashwini , J. Thriveni , K.R. Venugopal
Few diseases weigh as heavily on life expectancy as brain tumors, which are among the main causes of death globally. Reducing related deaths therefore depends on their prompt identification and prediction. MRI remains the conventional imaging method, but manually segmenting MRI images is time-consuming, delaying diagnosis. Deep learning models based on the UNet architecture offer a promising answer for automating biomedical image analysis; traditional UNet models, however, struggle with accuracy and with processing contextual information. We therefore present the Scleral Residue Class Attention UNet (SLCA-UNet), an improved UNet that incorporates, among other components, residual dense blocks, layered attention, and channel attention modules, enabling it to capture both wide and thin features more efficiently than before. Experiments on the Brain Tumor Segmentation Dataset 2020 show that SLCA-UNet performs well across distinct metrics, demonstrating its usefulness for automatic brain tumor segmentation. This is a step beyond prior approaches, offering better precision and faster tumor detection than previously available.
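The channel-attention idea can be sketched in plain numpy. The abstract does not detail the SLCA-UNet's exact module design, so the snippet below assumes the common squeeze-and-excitation form (global average pooling, a two-layer bottleneck, and a sigmoid gate), with hypothetical weight matrices `w1` and `w2`.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention on a (C, H, W) feature map."""
    squeeze = feat.mean(axis=(1, 2))              # squeeze: one descriptor per channel
    hidden = np.maximum(0.0, w1 @ squeeze)        # excitation, layer 1 (ReLU bottleneck)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # excitation, layer 2 (sigmoid), in (0, 1)
    return feat * gate[:, None, None]             # rescale each channel by its gate

# Example with hypothetical weights: 8 channels, bottleneck of 2.
rng = np.random.default_rng(0)
feat = rng.random((8, 4, 4))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
```

Because the gate is a per-channel scalar in (0, 1), the module can only reweight channels, never reshape them, which keeps it cheap enough to insert throughout an encoder-decoder network.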
Citations: 0