
Biomedical Signal Processing and Control — Latest Articles

CSA-Net: A lightweight channel split attention network with residual feature fusion for retinal vessel segmentation
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2026-02-13 · DOI: 10.1016/j.bspc.2026.109834
MinShan Jiang, Hongkai Liu, Shuai Huang, Jihui Mao, Yongfei Zhu, Xuedian Zhang
Automatic retinal vessel segmentation is vital for clinical assessment and therapeutic intervention. Extracting global and local features from fundus images remains a significant challenge for current methods. To address this, we propose a lightweight channel split attention network (CSA-Net), which integrates channel split attention and residual feature fusion and can effectively capture both global context information and fine-grained vascular details. In our model, we first design a channel split attention (CSA) module to facilitate multiscale feature aggregation and the acquisition of global information. Then, we introduce a residual feature fusion (RFF) module to reduce information loss by incorporating residuals and enhancing feature maps during the multiscale fusion process. In addition, we set up a lightweight design using adaptive inverted residual encoders with varied kernel sizes to increase computational efficiency. Five publicly available fundus datasets (DRIVE, CHASEDB1, STARE, HRF, LES-AV) were used to test our model. Experimental results demonstrate that CSA-Net achieves state-of-the-art performance, with ACC up to 0.9830 and AUC up to 0.9948 using only 2.39 M parameters. Ablation studies validate the effectiveness of the individual modules. The proposed CSA-Net achieves a good balance between segmentation accuracy and model complexity. Across multiple retinal vessel segmentation benchmarks, it achieves competitive or better performance with fewer parameters.
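The split-then-attend idea behind the CSA module can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's module: the real CSA uses learned multiscale convolutions, whereas here each channel group is simply reweighted by a sigmoid gate over its global average.

```python
import numpy as np

def channel_split_attention(x, n_groups=2):
    """Illustrative channel-split attention: split channels into groups,
    squeeze each group with a global average pool, and reweight channels
    by a sigmoid gate before concatenating the groups back together."""
    c, h, w = x.shape
    assert c % n_groups == 0, "channels must divide evenly into groups"
    out = []
    for g in np.split(x, n_groups, axis=0):      # channel split
        squeeze = g.mean(axis=(1, 2))            # global average pool -> (c/n,)
        gate = 1.0 / (1.0 + np.exp(-squeeze))    # sigmoid attention weights in (0, 1)
        out.append(g * gate[:, None, None])      # channel-wise reweighting
    return np.concatenate(out, axis=0)

feat = np.random.default_rng(0).standard_normal((4, 8, 8))
out = channel_split_attention(feat)
```

Because every gate lies in (0, 1), the output preserves the input's shape while attenuating each channel according to its pooled response.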
Citations: 0
Dynamic regulation of brain network and muscle activity in upper limb force generation among older adults: A temporal dynamic graph Fourier transform approach
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2026-02-13 · DOI: 10.1016/j.bspc.2026.109829
Mingxia Zhang, Huijing Hu, Di Ao, Li Yan, Qinghua Huang, Zhengxiang Zhang, Le Li
This study investigates the neural mechanisms underlying age-related declines in motor control by proposing a novel Temporal Dynamic Graph Fourier Transform (TDGFT) method. TDGFT integrates graph signal processing with dynamic brain network analysis to characterize time-varying corticomuscular interactions in the spectral domain, thereby linking global and local brain connectivity patterns to motor behavior. Integrating functional near-infrared spectroscopy (fNIRS) and electromyography (EMG), we systematically examine the dynamic regulation of brain network and muscle activity in older and younger adults during elbow flexion tasks at 30% and 70% of maximum voluntary contraction (MVC). Sixteen older adults and sixteen younger adults were recruited for the study. Our findings reveal that older adults exhibit weaker dynamic regulation of brain regions during high-load tasks, accompanied by significantly increased constraints of structural brain networks on functional activity, reflecting a decline in cognitive control. Additionally, older adults rely on multi-regional brain coordination for motor control during low-intensity tasks, while reducing cognitive load to enhance motor efficiency during high-intensity tasks. By providing an interpretable spectral representation of corticomuscular dynamics, TDGFT advances the understanding of how aging reshapes motor-related brain connectivity. These findings may help identify changes associated with age-related motor decline and facilitate the design of individualized motor rehabilitation strategies for older adults.
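The classical graph Fourier transform at the core of this approach can be sketched in a few lines of NumPy: eigendecompose the combinatorial Laplacian of a weighted network and project the node signal onto the eigenvectors. Only the static GFT is shown here; the paper's TDGFT additionally tracks time-varying networks, which this sketch does not attempt.

```python
import numpy as np

def graph_fourier_transform(W, signal):
    """Graph Fourier transform: eigendecompose L = D - W and project the
    node signal onto the eigenvectors (graph 'frequencies')."""
    L = np.diag(W.sum(axis=1)) - W           # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)     # ascending eigenvalues
    return eigvals, eigvecs.T @ signal       # spectral coefficients

# 4-node path graph; a constant signal has all its energy at eigenvalue 0.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
eigvals, coeffs = graph_fourier_transform(W, np.ones(4))
```

For the constant signal, only the zero-eigenvalue (DC) coefficient is nonzero, mirroring how smooth brain-wide activity concentrates at low graph frequencies.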
Citations: 0
SEDP-SegResnet for human eyeball and lens segmentation
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2026-02-13 · DOI: 10.1016/j.bspc.2026.109838
Li Ning, Yepei Qin, Wendong Zhao, Yangjiarui Yu, Qingcheng Yang, Chenxi Guo, Xuedian Zhang, Hui Chen, Yinghong Ji, Pei Ma
Accurate segmentation and quantification of the eyeball and lens from MRI images are crucial for clinical diagnosis and treatment planning of ocular diseases. Traditional methods for analyzing eye structures in MRI have drawbacks including low segmentation accuracy and reliance on laborious, time-consuming manual processes. To solve these problems, we propose SEDP-SegResnet, a model for segmenting eyeball and lens structures from 3D MRI images. The framework takes SegResnet as its backbone network and incorporates a 3D-SE layer to handle deep features from the decoder; the 3D-SE layer assigns different weights to the feature-map channels through a squeeze-and-excitation mechanism. Moreover, the skip connections of the U-shaped architecture are replaced with Dynamic Deep Feature Prefusion (DDFP) modules. DDFP achieves in-depth fusion of encoder and decoder features based on global information, thereby enhancing the model's comprehension of 3D image context. The performance of SEDP-SegResnet is evaluated through a series of experiments on a proprietary dataset of orbital MRI scans. The results show that SEDP-SegResnet outperforms current mainstream 3D deep-learning-based segmentation models across multiple evaluation metrics, including the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). The model achieves robust performance in segmenting eyeball margins and blurred-edge lenses. SEDP-SegResnet achieves a DSC of 96.81% for eyeball segmentation and 90.57% for lens segmentation, outperforming a variety of commonly used segmentation models. It provides a more accurate, automated, and robust method for the segmentation and quantification of the eyeball and lens in MRI, offering an advanced computer-aided diagnosis tool.
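The two evaluation metrics reported here, DSC and IoU, are straightforward to compute from binary masks; a minimal NumPy sketch (the masks below are made-up toy data, not from the paper):

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-8):
    """Dice Similarity Coefficient and Intersection-over-Union for binary
    segmentation masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dsc = 2.0 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (np.logical_or(pred, target).sum() + eps)
    return dsc, iou

pred   = np.array([[1, 1, 0], [0, 1, 0]])   # predicted mask
target = np.array([[1, 0, 0], [0, 1, 1]])   # ground-truth mask
dsc, iou = dice_and_iou(pred, target)
```

The two metrics are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why papers often report both but they rank models identically.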
Citations: 0
An enhanced deep learning model for breast cancer histopathological grading based on Selective Kernel network
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2026-02-13 · DOI: 10.1016/j.bspc.2026.109833
Yuandi Sun
Background: Breast cancer is one of the most common malignant tumors in women worldwide. Its early detection and accurate grading are crucial for developing individualized treatment plans and improving patient prognosis. Pathological image grading is a key step in breast cancer diagnosis, but owing to the high heterogeneity of tumor cell and tissue morphology, traditional manual reading is subjective and inefficient. Developing an automated and accurate grading model for breast cancer pathology images therefore has important clinical value for improving diagnostic efficiency and accuracy.
Methods: This study proposed SKDenseNet, a deep learning model combining DenseNet with the Selective Kernel Block (SKBlock), for the automatic grading of breast cancer pathology images. DenseNet enhances feature reuse and gradient propagation efficiency through its dense connection mechanism, while SKBlock dynamically extracts and fuses pathological features at different scales through multi-scale convolution and channel attention. The model was trained on the TCGA dataset and independently tested on the CHTN dataset to evaluate its generalization ability and stability in cross-center tasks. Model parameters were optimized; classification performance was evaluated by accuracy (ACC), precision (PRE), recall (REC), and F1 score (F1), and the discriminability and interpretability of the model were analyzed using confusion matrices and activation heat maps.
Results: On the test set (CHTN), SKDenseNet significantly outperformed the baseline model on all key classification indicators. Its average accuracy was 86.58%, average precision 87.91%, average recall 87.97%, and F1 score 86.71%, exceeding DenseNet121 by 7.84, 3.18, 6.24, and 7.32 percentage points, respectively. The confusion matrix showed good discrimination and stability in classifying high-grade breast cancer, low-grade breast cancer, and stromal tissue. SKDenseNet also achieved the highest AUC, 0.9693. Activation heat maps generated by Grad-CAM further verified that the regions the model attends to in pathological images are highly consistent with actual pathological features (such as nuclear morphology and glandular duct structure), enhancing the interpretability of the model.
Conclusion: The proposed SKDenseNet combines the global feature expression ability of DenseNet with the dynamic receptive-field adjustment mechanism of SKBlock, and shows excellent classification performance and cross-center adaptability in breast cancer pathology image grading. The model can reduce misdiagnosis and missed-diagnosis rates while maintaining high accuracy, and shows good generalization ability and clinical application potential.
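The ACC/PRE/REC/F1 evaluation described in the Methods can be computed directly from a confusion matrix; a NumPy sketch with a made-up 3-class matrix (the numbers are illustrative, not the paper's results):

```python
import numpy as np

def macro_prf1(cm):
    """Accuracy plus macro-averaged precision, recall, and F1 from a
    confusion matrix (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    prec = tp / np.maximum(cm.sum(axis=0), 1e-12)   # per-class precision
    rec  = tp / np.maximum(cm.sum(axis=1), 1e-12)   # per-class recall
    f1   = 2 * prec * rec / np.maximum(prec + rec, 1e-12)
    acc  = tp.sum() / cm.sum()
    return acc, prec.mean(), rec.mean(), f1.mean()

# Hypothetical 3-class example: high-grade, low-grade, stroma
cm = [[50, 5, 0],
      [4, 40, 6],
      [1, 2, 42]]
acc, pre, rec, f1 = macro_prf1(cm)
```

Macro averaging weights every class equally, which matters when one tissue class (e.g. stroma) is under-represented.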
Citations: 0
RePrediction of chemical characteristics of individuals with HIV infection using an image processing-centric optimized deep learning-based CNN model
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2026-02-13 · DOI: 10.1016/j.bspc.2026.109816
Yusuf Alaca, Nursel Karaoğlan, Erdal Başaran, Yüksel Çelik
Accurate prediction of HIV-related chemical properties is of critical importance for computational drug discovery and bioinformatics applications. In this study, an image processing–centric deep learning framework is proposed to predict HIV chemical activity using molecular images automatically generated from SMILES representations. Multi-scale deep features are extracted using two complementary convolutional neural network architectures, namely Xception and ResNet50, and subsequently fused to capture both low-level structural patterns and high-level molecular representations. The main novelty of this work lies in the integration of CNN-based molecular image feature extraction with the Manta Ray Foraging Optimization (MRFO) algorithm. The MRFO algorithm is employed to perform optimization-driven feature selection and classifier hyperparameter tuning, aiming to improve both predictive accuracy and generalization capability. The optimized feature set is finally classified using a support vector machine (SVM), enabling robust discrimination between active and inactive HIV-related compounds. Experimental evaluations conducted on the benchmark HIV SMILES dataset demonstrate that the proposed framework achieves superior and stable performance, reaching an accuracy of 81.16% and a ROC-AUC of 0.87, outperforming several state-of-the-art machine learning and deep learning approaches reported in the literature. These results confirm that combining molecular image representations with optimization-guided deep learning provides an effective and reliable strategy for HIV chemical property prediction.
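The optimization-driven search that MRFO performs can be sketched in miniature. The code below is a heavily simplified, hypothetical stand-in: it keeps only a chain-foraging-style move toward the current best with random perturbation, omitting the cyclone and somersault phases of the full algorithm, and minimizes a toy sphere function rather than tuning a real SVM.

```python
import numpy as np

def simplified_mrfo(fitness, dim=5, pop=20, iters=200, seed=0):
    """Simplified MRFO-style search: each candidate drifts toward the
    best-so-far solution (chain foraging) plus Gaussian noise; the best
    position is updated greedily."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))          # initial population
    scores = np.array([fitness(x) for x in X])
    best = X[scores.argmin()].copy()
    for _ in range(iters):
        r = rng.random((pop, dim))
        X = X + r * (best - X) + 0.1 * rng.standard_normal((pop, dim))
        scores = np.array([fitness(x) for x in X])
        if scores.min() < fitness(best):
            best = X[scores.argmin()].copy()
    return best, fitness(best)

# Toy objective: sphere function, global minimum 0 at the origin.
best, val = simplified_mrfo(lambda x: float(np.sum(x * x)))
```

In the paper's setting the fitness would instead score a candidate feature subset / hyperparameter vector by classifier validation accuracy.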
Citations: 0
Ensemble-MIL: deep learning-based ensemble framework for biomarker prediction from histopathological images in colorectal cancer
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2026-02-13 · DOI: 10.1016/j.bspc.2026.109759
Geng-Yun Tien, Yu-Chia Chen, Liang-Chuan Lai, Tzu-Pin Lu, Mong-Hsun Tsai, Eric Y. Chuang, Hsiang-Han Chen
Recent studies have explored histopathological whole slide images (WSIs) for predicting colorectal cancer (CRC) biomarkers, aiming to create cost-effective and efficient diagnostic tools. However, achieving strong predictive performance and generalizability across datasets remains a challenge. Here, we introduce a deep learning-based ensemble framework, Ensemble-MIL, designed to robustly predict key CRC biomarkers, including BRAF V600E, KRAS mutations, and MSI-H status, with improved cross-dataset performance. We employed two independent CRC datasets: TCGA-COAD for model training and internal evaluation, and CPTAC-COAD as an external test set to assess generalizability. All WSIs were preprocessed and divided into small image patches. A tumor detection model was applied to identify tumor regions, and patch-level features were extracted via SimCLR, a contrastive learning method. These features were utilized to train three multiple instance learning (MIL) models: Att-MIL, Tran-MIL, and GNN-MIL. The models were then integrated into the final Ensemble-MIL framework. In internal testing with TCGA-COAD, the proposed method achieved area under the curve (AUC) scores of 0.90, 0.87, and 0.64 for MSI-H, BRAF, and KRAS, respectively. In external testing on CPTAC-COAD, it achieved AUCs of 0.78, 0.76, and 0.61 for MSI-H, BRAF, and KRAS, respectively, outperforming previous results. This framework offers a scalable and effective solution for image-based biomarker screening and demonstrates strong potential for clinical application, particularly in resource-limited settings. The code is available at https://github.com/chenh2lab/Ensemble-MIL.
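The aggregation step at the heart of attention-based MIL (as in Att-MIL) can be illustrated in NumPy: score each patch embedding, softmax the scores into attention weights, and return the weighted slide-level embedding. The scoring vector `w` below is a fixed illustrative stand-in for parameters that would be learned end-to-end.

```python
import numpy as np

def attention_mil_pool(instances, w):
    """Attention-based MIL pooling: one score per patch embedding,
    softmax into weights, weighted sum gives the bag (slide) embedding."""
    scores = instances @ w                    # (n_patches,)
    a = np.exp(scores - scores.max())         # numerically stable softmax
    a /= a.sum()
    return a, a @ instances                   # weights, bag embedding

rng = np.random.default_rng(1)
patches = rng.standard_normal((6, 4))         # 6 patch embeddings, dim 4
weights, bag = attention_mil_pool(patches, w=np.ones(4))
```

Because the weights sum to one, the bag embedding is a convex combination of patch embeddings, so the most "biomarker-relevant" patches dominate the slide-level prediction.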
Citations: 0
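The abstract says the three MIL heads (Att-MIL, Tran-MIL, GNN-MIL) are "integrated into the final Ensemble-MIL framework" without naming the combination rule. A minimal sketch assuming simple soft voting (probability averaging — an assumption for illustration, not the paper's confirmed method):

```python
import numpy as np

def ensemble_soft_vote(prob_lists):
    """Average slide-level probabilities from several MIL models.

    prob_lists: one 1-D array per model, each holding the predicted
    probability of the positive class (e.g. MSI-H) for every slide.
    """
    stacked = np.vstack(prob_lists)   # shape (n_models, n_slides)
    return stacked.mean(axis=0)       # fused probability per slide

# Toy scores standing in for the three MIL heads (values invented).
att_mil  = np.array([0.9, 0.2, 0.7])
tran_mil = np.array([0.8, 0.3, 0.6])
gnn_mil  = np.array([0.7, 0.1, 0.8])

fused = ensemble_soft_vote([att_mil, tran_mil, gnn_mil])
```

The fused scores can then be thresholded or passed to an AUC computation exactly like any single model's output.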
Assessing equivalence in raw accelerometer outputs across different brands using shaker table validation: a comparative analysis with filtering and linear regression techniques
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2026-02-12 DOI: 10.1016/j.bspc.2026.109627
Hannah J. Coyle-Asbil , Bernadette Murphy , Lori Ann Vallis
This study had two major aims, first to determine whether the raw acceleration data from ActiGraph versus non-ActiGraph accelerometers were equivalent and, second to apply and compare different equivalency approaches: 1) linear regression, 2) lowpass 15 Hz, 3) lowpass 20 Hz, and 4) lowpass 30 Hz filters. ActiGraph GT3X+ (n = 8), ActiGraph wGT3X-BT (n = 10), ActiGraph GT9X (n = 8; primary and GT9X2), OPAL (n = 6) and GENEActiv (n = 5) accelerometers were affixed to a multi-axis shaker table and sinusoidal oscillations were introduced, spanning the entire dynamic range (±0.005 G to ±8 G). The averages of the trials were compared according to the different techniques. Linear regression models were fitted to align the non-ActiGraph to the ActiGraph data, and lowpass filters were applied to the non-Actigraph data. Equivalency was assessed using two one-sided t-tests of equivalence. The results indicated either statistically insignificant or negligible mean differences at the lower frequency oscillations; however, at higher oscillations the recordings were significantly distinct and vastly varied across the different techniques. For example, the mean difference between the wGT3X-BT − GA at trial 28 unprocessed was −2788.09 mg, −394.93 mg for the linear regression, 1358.56 mg for the 15 Hz filter, and 182.94 mg for both the 20 Hz and 30 Hz filters. In conclusion, there are minor differences between ActiGraph and non-ActiGraph accelerometers at low frequencies; however, at high frequencies, 20 Hz low-pass filters are effective in improving equivalency. Our findings provide insight into device equivalency and important guidance for researchers to consider when harmonizing data across accelerometer devices.
{"title":"Assessing equivalence in raw accelerometer outputs across different brands using shaker table validation: a comparative analysis with filtering and linear regression techniques","authors":"Hannah J. Coyle-Asbil ,&nbsp;Bernadette Murphy ,&nbsp;Lori Ann Vallis","doi":"10.1016/j.bspc.2026.109627","DOIUrl":"10.1016/j.bspc.2026.109627","url":null,"abstract":"<div><div>This study had two major aims, first to determine whether the raw acceleration data from ActiGraph versus non-ActiGraph accelerometers were equivalent and, second to apply and compare different equivalency approaches: 1) linear regression, 2) lowpass 15 Hz, 3) lowpass 20 Hz, and 4) lowpass 30 Hz filters. ActiGraph GT3X+ (n = 8), ActiGraph wGT3X-BT (n = 10), ActiGraph GT9X (n = 8; primary and GT9X2), OPAL (n = 6) and GENEActiv (n = 5) accelerometers were affixed to a multi-axis shaker table and sinusoidal oscillations were introduced, spanning the entire dynamic range (±0.005 G to ±8 G). The averages of the trials were compared according to the different techniques. Linear regression models were fitted to align the non-ActiGraph to the ActiGraph data, and lowpass filters were applied to the non-Actigraph data. Equivalency was assessed using two one-sided t-tests of equivalence. The results indicated either statistically insignificant or negligible mean differences at the lower frequency oscillations; however, at higher oscillations the recordings were significantly distinct and vastly varied across the different techniques. For example, the mean difference between the wGT3X-BT − GA at trial 28 unprocessed was −2788.09 m<em>g</em>, −394.93 m<em>g</em> for the linear regression, 1358.56 m<em>g</em> for the 15 Hz filter, and 182.94 m<em>g</em> for both the 20 Hz and 30 Hz filters. In conclusion, there are minor differences between ActiGraph and non-ActiGraph accelerometers at low frequencies; however, at high frequencies, 20 Hz low-pass filters are effective in improving equivalency. 
Our findings provide insight into device equivalency and important guidance for researchers to consider when harmonizing data across accelerometer devices.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"119 ","pages":"Article 109627"},"PeriodicalIF":4.9,"publicationDate":"2026-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146193094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
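Equivalence in the study above is judged with two one-sided t-tests (TOST). A runnable sketch using SciPy, where the simulated paired differences, the ±10 mg margin, and the alpha level are illustrative values, not the study's:

```python
import numpy as np
from scipy import stats

def tost_equivalence(diff, margin, alpha=0.05):
    """Two one-sided t-tests (TOST) on paired device differences.

    Declares equivalence when the mean of `diff` is shown to lie
    inside (-margin, +margin): one t-test against the lower bound
    (alternative: mean > -margin) and one against the upper bound
    (alternative: mean < +margin). The overall p-value is the larger
    of the two one-sided p-values.
    """
    p_lower = stats.ttest_1samp(diff, -margin, alternative="greater").pvalue
    p_upper = stats.ttest_1samp(diff, margin, alternative="less").pvalue
    p = max(p_lower, p_upper)
    return p, p < alpha

# Simulated paired differences between two accelerometers, in mg.
rng = np.random.default_rng(0)
diff = rng.normal(loc=2.0, scale=5.0, size=50)   # small true offset
p, equivalent = tost_equivalence(diff, margin=10.0)
```

Note the asymmetry with ordinary t-tests: here a *small* p-value supports equivalence, since both one-sided nulls (mean outside the margin) are rejected.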
PEEG-HAR: A novel pain-evoked EEG extraction method guided by the adaptive localization of high-activation-rate pain-related EEG sources
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2026-02-12 DOI: 10.1016/j.bspc.2026.109725
Wenjia Gao, Dan Liu, Qisong Wang, Yongping Zhao, Jinwei Sun
Laser-evoked potentials (LEPs) are widely recognized as optimal for pain assessment, but raw EEG signals are often contaminated by noise and background activity, making LEPs extraction challenging. Existing wavelet-based methods construct templates using all trials. However, since LEPs exhibit variations across different pain intensities, the use of all trials may result in the loss of discriminative features essential for distinguishing between varying levels of pain intensity. In this study, EEG source signals are obtained using electrophysiological source imaging, and high-activation-rate pain-related EEG sources are identified based on their activation characteristics. A within-subject pain intensity model based on a support vector machine is then developed to link recorded EEG signals with historical EEG samples. The model guides the selection of suitable EEG segments to construct a pain-specific template, which is subsequently used to reconstruct the recorded EEG signals by exploiting the time–frequency distribution of pain-related wavelet coefficients, thereby facilitating more effective extraction of LEPs. Experiments on real EEG recordings confirm that the proposed method can extract signals that more closely reflect genuine pain-evoked EEG activity, thereby enhancing the representation of pain and significantly improving subsequent classification performance, with binary accuracy increasing from 59.61% to 81.13% and three-class accuracy from 39.22% to 64.23%. The method addresses the challenge of insufficient pain expression in raw signals and provides a data foundation for developing objective pain biomarkers.
{"title":"PEEG-HAR: A novel pain-evoked EEG extraction method guided by the adaptive localization of high-activation-rate pain-related EEG sources","authors":"Wenjia Gao,&nbsp;Dan Liu,&nbsp;Qisong Wang,&nbsp;Yongping Zhao,&nbsp;Jinwei Sun","doi":"10.1016/j.bspc.2026.109725","DOIUrl":"10.1016/j.bspc.2026.109725","url":null,"abstract":"<div><div>Laser-evoked potentials (LEPs) are widely recognized as optimal for pain assessment, but raw EEG signals are often contaminated by noise and background activity, making LEPs extraction challenging. Existing wavelet-based methods construct templates using all trials. However, since LEPs exhibit variations across different pain intensities, the use of all trials may result in the loss of discriminative features essential for distinguishing between varying levels of pain intensity. In this study, EEG source signals are obtained using electrophysiological source imaging, and high-activation-rate pain-related EEG sources are identified based on their activation characteristics. A within-subject pain intensity model based on a support vector machine is then developed to link recorded EEG signals with historical EEG samples. The model guides the selection of suitable EEG segments to construct a pain-specific template, which is subsequently used to reconstruct the recorded EEG signals by exploiting the time–frequency distribution of pain-related wavelet coefficients, thereby facilitating more effective extraction of LEPs. Experiments on real EEG recordings confirm that the proposed method can extract signals that more closely reflect genuine pain-evoked EEG activity, thereby enhancing the representation of pain and significantly improving subsequent classification performance, with binary accuracy increasing from 59.61% to 81.13% and three-class accuracy from 39.22% to 64.23%. 
The method addresses the challenge of insufficient pain expression in raw signals and provides a data foundation for developing objective pain biomarkers.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"119 ","pages":"Article 109725"},"PeriodicalIF":4.9,"publicationDate":"2026-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146192618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
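The reconstruction step above — keeping only coefficients where the pain-specific template is active — can be illustrated with a simplified frequency-domain analogue. NumPy's `rfft` stands in for the paper's wavelet transform, and the toy signal, sampling rate, and keep-ratio below are invented for illustration:

```python
import numpy as np

def template_masked_reconstruction(x, template, keep_ratio=0.2):
    """Keep coefficients of x only where the template carries energy.

    A mask is built from the template's spectral magnitude (top
    `keep_ratio` of bins); all other coefficients of the recorded
    signal are zeroed before the inverse transform. This mirrors, in
    the Fourier domain, the paper's masking of pain-related wavelet
    coefficients.
    """
    X = np.fft.rfft(x)
    T = np.abs(np.fft.rfft(template))
    thresh = np.quantile(T, 1.0 - keep_ratio)
    return np.fft.irfft(X * (T >= thresh), n=len(x))

fs = 250                                  # assumed sampling rate, Hz
t = np.arange(fs) / fs
template = np.sin(2 * np.pi * 7 * t)      # toy pain-evoked component
noise = 0.5 * np.random.default_rng(1).standard_normal(fs)
recorded = template + noise
clean = template_masked_reconstruction(recorded, template)
```

Because background activity is spread across all bins while the template is concentrated in a few, masking removes most of the noise power while leaving the evoked component intact.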
Enhancing deep learning reliability in MRI images of Alzheimer’s disease using pairwise probability differential reliability quantification
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2026-02-12 DOI: 10.1016/j.bspc.2026.109717
Junaidul Islam , Isack Farady , Chia-Chen Kuo , Fu-Yu Lin , Chih-Yang Lin
In domains where safety is crucial, such as medical imaging, achieving a high degree of classification accuracy is inadequate unless it is complemented by predictions that are both reliable and interpretable. Traditional measures of uncertainty frequently yield global assessments that obscure localized fluctuations in model confidence, a factor that can be pivotal when differentiating between closely related clinical conditions. This paper presents Pairwise Probability Differential Reliability Quantification (PPDRQ), an innovative framework that assesses the reliability of predictions made by deep neural networks through the evaluation of pairwise discrepancies among class probability estimates. PPDRQ is incorporated into a custom loss function, enhanced with a triplet loss, to refine the feature space. The proposed approach incentivizes the network to generate outputs that are both more distinct and trustworthy. Comprehensive experiments conducted on MRI-based Alzheimer’s disease diagnosis indicate that models with PPDRQ exhibit enhanced reliability, improved interpretability, and superior performance, thereby offering significant insights for clinical decision-making.
{"title":"Enhancing deep learning reliability in MRI images of Alzheimer’s disease using pairwise probability differential reliability quantification","authors":"Junaidul Islam ,&nbsp;Isack Farady ,&nbsp;Chia-Chen Kuo ,&nbsp;Fu-Yu Lin ,&nbsp;Chih-Yang Lin","doi":"10.1016/j.bspc.2026.109717","DOIUrl":"10.1016/j.bspc.2026.109717","url":null,"abstract":"<div><div>In domains where safety is crucial, such as medical imaging, achieving a high degree of classification accuracy is inadequate unless it is complemented by predictions that are both reliable and interpretable. Traditional measures of uncertainty frequently yield global assessments that obscure localized fluctuations in model confidence, a factor that can be pivotal when differentiating between closely related clinical conditions. This paper presents Pairwise Probability Differential Reliability Quantification (PPDRQ), an innovative framework that assesses the reliability of predictions made by deep neural networks through the evaluation of pairwise discrepancies among class probability estimates. PPDRQ is incorporated into a custom loss function, enhanced with a triplet loss, to refine the feature space. The proposed approach incentivizes the network to generate outputs that are both more distinct and trustworthy. 
Comprehensive experiments conducted on MRI-based Alzheimer’s disease diagnosis indicate that models with PPDRQ exhibit enhanced reliability, improved interpretability, and superior performance, thereby offering significant insights for clinical decision-making.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"119 ","pages":"Article 109717"},"PeriodicalIF":4.9,"publicationDate":"2026-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146192775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
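The abstract describes reliability as "pairwise discrepancies among class probability estimates" without giving the exact formula. One simple instantiation — an assumption for illustration, not the paper's definition — scores a softmax output by the gap between its two largest class probabilities:

```python
import numpy as np

def pairwise_differential_score(probs):
    """Gap between the two largest class probabilities.

    A small gap means the winning class is barely separated from its
    closest rival, flagging the prediction as unreliable even when
    the argmax happens to be correct.
    """
    top2 = np.sort(np.asarray(probs))[-2:]   # [second-best, best]
    return top2[1] - top2[0]

confident = [0.90, 0.06, 0.04]   # e.g. AD vs MCI vs normal (toy values)
ambiguous = [0.45, 0.40, 0.15]   # winner barely ahead of its rival
```

A score like this can be folded into the training loss (as the paper does with its custom loss plus triplet term) or used post hoc to defer low-gap cases to a clinician.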
Medical image fusion for enhanced edge adaptive Level Set
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2026-02-12 DOI: 10.1016/j.bspc.2026.109525
Jiao Du , Xiaoyu Yu , Chengxin Su , Qun Zhao
Brain tumors are a serious disease, and lesion areas detected by medical imaging typically exhibit distinct edge and contrast information. The aim of a medical image fusion method is to synthesize multiple-image information. However, existing methods, while effective in preserving rich information, often struggle to enhance edge and contrast, resulting in artifacts or noise that influence image quality. In this paper, the input images are first enhanced, and then, based on the model of level set segmentation, an additive bias correction (ABC) level set method is used to adaptively decompose the image into a base layer, a strong edge layer, and a weak edge layer. The sum of the eigenvalues of the covariance matrices (COV) in different directions is used to obtain the weight map for the strong edge layer, while the covariance matrix between image channels is utilized to evaluate the correlation among channels, thereby calculating the weight for fusing the weak edge layer. The experimental results demonstrate that the proposed method achieves an approximately 5% increase in the objective evaluation metrics of Standard Deviation (SD) and Visual Information Fidelity (VIF), helping doctors better observe the characteristics of lesions.
{"title":"Medical image fusion for enhanced edge adaptive Level Set","authors":"Jiao Du ,&nbsp;Xiaoyu Yu ,&nbsp;Chengxin Su ,&nbsp;Qun Zhao","doi":"10.1016/j.bspc.2026.109525","DOIUrl":"10.1016/j.bspc.2026.109525","url":null,"abstract":"<div><div>Brain tumors are a serious disease, and lesion areas detected by medical imaging typically exhibit distinct edge and contrast information. The aim of a medical image fusion method is to synthesize multiple-image information. However, existing methods, while effective in preserving rich information, often struggle to enhance edge and contrast, resulting in artifacts or noise that influence image quality. In this paper, the input images are first enhanced, and then, based on the model of level set segmentation, an additive bias correction (ABC) level set method is used to adaptively decompose the image into a base layer, a strong edge layer, and a weak edge layer. The sum of the eigenvalues of the covariance matrices (COV) in different directions is used to obtain the weight map for the strong edge layer, while the covariance matrix between image channels is utilized to evaluate the correlation among channels, thereby calculating the weight for fusing the weak edge layer. 
The experimental results demonstrate that the proposed method achieves an approximately 5% increase in the objective evaluation metrics of Standard Deviation (SD) and Visual Information Fidelity (VIF), helping doctors better observe the characteristics of lesions.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"119 ","pages":"Article 109525"},"PeriodicalIF":4.9,"publicationDate":"2026-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146192793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
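The strong-edge weight map above uses the sum of the eigenvalues of directional covariance matrices. Since the trace of a 2×2 structure tensor equals its eigenvalue sum, the map can be sketched without an explicit eigendecomposition; window smoothing and the paper's other details are omitted in this minimal version:

```python
import numpy as np

def strong_edge_weight(img, eps=1e-8):
    """Normalized eigenvalue sum (trace) of the gradient covariance.

    For a 2x2 structure tensor, lambda1 + lambda2 equals the trace,
    i.e. the sum of squared directional gradients, so summing them
    per pixel gives the eigenvalue sum directly.
    """
    gy, gx = np.gradient(img.astype(float))  # d/drow, d/dcolumn
    trace = gx ** 2 + gy ** 2                # eigenvalue sum per pixel
    return trace / (trace.max() + eps)       # normalize to [0, 1]

img = np.zeros((8, 8))
img[:, 4:] = 1.0                  # vertical step edge between cols 3 and 4
weights = strong_edge_weight(img)
```

Pixels on the step edge receive weight near 1 while flat regions receive 0, which is the behavior a strong-edge fusion weight map needs.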