Pub Date: 2026-06-15 | Epub Date: 2026-02-11 | DOI: 10.1016/j.bspc.2026.109765
Dalin Wang, Xuemei Wu, Chao He
Low-dose CT reconstruction remains challenging because dose reduction amplifies noise and streak artifacts, while strong priors risk removing subtle anatomical details. Score-based diffusion models provide a flexible way to model CT image distributions, yet pixel-domain diffusion couples low-frequency structure and high-frequency texture within a single score function and the sampling process can be slow when data consistency is enforced with generic updates. We present a component-wise score diffusion model that performs diffusion on wavelet subbands and interleaves reverse sampling with a momentum-accelerated OS-SART projection step. This design decouples structural and textural priors in the wavelet domain and enforces projection fidelity throughout sampling. Experiments on the AAPM-Mayo dataset show consistent improvements over competitive baselines in both low-dose full-view and sparse-view settings, achieving 41.03 dB PSNR and 0.965 SSIM at 10 percent dose and 39.26 dB PSNR at 96 views while reducing inference time relative to other score-based methods.
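As a rough illustration of the two ingredients the abstract names, a one-level wavelet split and a momentum-accelerated algebraic consistency update, here is a minimal NumPy sketch. The Haar transform and the SART-like step are generic textbook versions run on a tiny dense system, not the authors' reconstruction code, and all function names are hypothetical.

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar split of an even-sized image into LL/LH/HL/HH subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-frequency structure
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0  # high-frequency texture subbands
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def momentum_sart_step(x, A, b, v, lr=0.05, beta=0.9):
    """One momentum-accelerated SART-like data-consistency step for A @ x = b."""
    resid = A @ x - b
    grad = (A.T @ resid) / np.maximum(A.sum(axis=0), 1e-8)  # column-sum normalization
    v = beta * v + grad                                     # heavy-ball momentum
    return x - lr * v, v
```

In the paper's setting `A` would be the system matrix of an OS-SART subset and the step would be interleaved with reverse diffusion on the subbands; here it is only exercised on a toy linear system.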
Title: Component-wise score diffusion model with momentum-accelerated updates for low-dose CT reconstruction
Journal: Biomedical Signal Processing and Control, Volume 119, Article 109765
Pub Date: 2026-06-15 | Epub Date: 2026-02-10 | DOI: 10.1016/j.bspc.2026.109710
Pei-Chun Su, Chao-Yi Chen, Chia-Hao Kuo, Wei-Chung Tsai, Hau-Tieng Wu
Objective
The widely used bandpass filter (BPF)-based algorithm for recovering sympathetic nerve activity (SNA) from the skin sympathetic nerve activity (SKNA-I) signal, recorded via electrocardiogram electrodes, or from the subcutaneous sympathetic nerve activity (SCNA-I) signal in a lead I setup, has limitations. It excludes spectral information outside the BPF range and may retain artifacts, such as cardiac activity or pacemaker interference, in the recovered SNA (rSNA) signal. This study aims to develop an algorithm that recovers the full spectral SNA information as comprehensively as possible for evaluating the autonomic nervous system (ANS).
Methods
We propose a novel algorithm, S3 (SNA from Shrink and Subtraction), which integrates the optimal shrinkage algorithm (eOptShrink) with the template subtraction (TS) method, and we make the MATLAB code publicly available. The performance of S3 was evaluated against other algorithms using semi-real simulated SKNA-I data, a human SKNA-I database including subjects with pacemakers or atrial fibrillation (Af), and a mouse SCNA-I database.
Results
The S3 algorithm demonstrated numerical efficiency and outperformed existing approaches, including traditional TS and BPF, in both the time and frequency domains. Notably, in addition to the traditional 500–1000 Hz spectral band, S3 effectively recovers spectral information across the 50–300 Hz and 300–500 Hz frequency bands. All quantitative results are supported by rSNA tracings for visual inspection.
Conclusion
S3 overcomes key limitations of existing methods and accurately recovers full-spectrum SNA from human SKNA-I, including cases with pacemakers and Af, as well as from mouse SCNA-I, with both theoretical justification and numerical validation. Since S3 can recover spectral information across the 50–300 Hz and 300–500 Hz frequency bands, and ECG signals in homecare environments are typically sampled at 1–2 kHz, S3 is potentially suitable for home-based ANS evaluation.
Significance
S3 enables exploration of the entire SNA spectrum and shows strong potential for ANS evaluation in homecare settings.
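The template-subtraction (TS) half of the pipeline is a standard operation: average the signal around detected R peaks to form a beat template, then subtract the template at every beat. A minimal NumPy sketch (the eOptShrink denoising stage is omitted, and R-peak positions are assumed given; the function name is hypothetical):

```python
import numpy as np

def template_subtract(sig, r_peaks, half_width):
    """Remove the cardiac artifact by subtracting the mean beat template at each R peak."""
    sig = np.asarray(sig, dtype=float).copy()
    segs = [sig[p - half_width:p + half_width]
            for p in r_peaks
            if p - half_width >= 0 and p + half_width <= len(sig)]
    template = np.mean(segs, axis=0)          # average beat morphology
    for p in r_peaks:
        if p - half_width >= 0 and p + half_width <= len(sig):
            sig[p - half_width:p + half_width] -= template
    return sig
```

If every beat has identical morphology, subtraction cancels the cardiac component exactly; in practice beat-to-beat variation leaves residuals, which is the limitation S3 targets.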
Title: Sympathetic nerve activity recovery from the skin recording using the modern optimal shrinkage technique
Journal: Biomedical Signal Processing and Control, Volume 119, Article 109710
Pub Date: 2026-06-15 | Epub Date: 2026-02-09 | DOI: 10.1016/j.bspc.2026.109635
Shisheng Chen, Wenjing Pan, Tongyao Chen, Xinxin Xie, Yi Zhang
Melanoma is a highly aggressive skin malignancy, and its early detection is critical for reducing mortality. With the rapid progress of deep learning in medical imaging, convolutional neural networks (CNNs) have become powerful tools for automated pathological image analysis. This study aimed to systematically evaluate the performance, interpretability, and potential clinical utility of different deep learning models in classifying melanoma on H&E-stained pathology images. A total of 312 clinical H&E whole-slide images (210 normal skin and 102 melanoma) were acquired and preprocessed through resizing, normalization, and data augmentation. Four CNN architectures—ResNet, VGG, MobileNetV2, and DenseNet121—were constructed for classification, and five-fold cross-validation was used for performance evaluation based on accuracy, sensitivity, specificity, F1-score, and AUC. Grad-CAM was further applied for model interpretability, with pathological verification by experienced dermatopathologists. All four models successfully differentiated melanoma from normal skin tissue, with ResNet achieving the highest mean accuracy (96.10%) and the best F1-score and AUC. VGG exhibited strong stability, while MobileNetV2 and DenseNet121 provided higher computational efficiency but slightly lower diagnostic performance. Statistical analysis confirmed that ResNet outperformed the other models significantly (p < 0.05). Grad-CAM visualization demonstrated that the highlighted regions corresponded closely to key histopathological features of melanoma, indicating that the model’s decision-making process is pathologically plausible.
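Five-fold cross-validation over the 312 slides reduces to splitting the sample indices into disjoint validation folds and averaging the per-fold metric. A minimal index-splitting sketch (hypothetical helpers, not the authors' pipeline):

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal validation folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cv_mean_accuracy(y_true, y_pred, folds):
    """Average per-fold accuracy over the validation folds."""
    accs = [np.mean(y_true[f] == y_pred[f]) for f in folds]
    return float(np.mean(accs))
```

With class imbalance as severe as 210 vs. 102, a stratified split (equal class proportions per fold) would typically be preferred; the sketch above shows only the plain variant.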
Title: Accuracy enhancement in melanoma diagnosis: A comparative study of residual networks and visual geometry group architectures
Journal: Biomedical Signal Processing and Control, Volume 119, Article 109635
Pub Date: 2026-06-15 | Epub Date: 2026-02-11 | DOI: 10.1016/j.bspc.2026.109778
Beaulah Jeyavathana R, Kalaivani Chellappan, M. Sai Ganeshan
Tuberculosis (TB) is an airborne disease that primarily affects the lungs and causes a high death rate globally. Early detection of TB remains a challenge owing to the lack of screening facilities. The availability of public datasets and advances in artificial intelligence (AI) and computerized systems have enabled the automatic diagnosis of tuberculosis from chest X-rays. Existing AI algorithms use complicated architectures, making the procedure time-consuming and costly. To overcome these limitations, this research proposes a novel lightweight ensemble deep learning (DL) model for detecting and localizing TB in chest X-ray (CXR) images. The CXR images were gathered from the Kaggle repository and standardized before being input to the DL models to ensure enhanced learning and stability; they are also resized to suit the input dimensions of the DL models. The lung regions are segmented from the pre-processed images using a spatial-attention-based residual U-Net model (SA-Res-UNet) to ensure accurate detection of TB. Finally, the CXR images are classified as normal or TB using an ensemble of a custom convolutional neural network (CNN), MobileNetV2, and Swin Transformer (ST) models, whose individual predictions are combined by majority voting. The classified images are explained and interpreted through visualizations from self-attention-based class activation mapping (SA-CAM). The experiments are conducted in Python. The proposed model attained 99% accuracy in detecting TB.
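The majority-voting step that fuses the per-model predictions can be written in a few lines. A generic NumPy sketch with hypothetical names (ties are resolved toward the lower class label, which is one arbitrary but deterministic convention):

```python
import numpy as np

def majority_vote(pred_lists):
    """Combine per-model class predictions by majority vote (ties -> lowest label)."""
    preds = np.asarray(pred_lists)          # shape: (n_models, n_samples)
    n_classes = preds.max() + 1
    votes = np.apply_along_axis(             # per-sample vote counts
        lambda col: np.bincount(col, minlength=n_classes), 0, preds)
    return votes.argmax(axis=0)              # winning class per sample
```

With three models, as here, a two-vote majority always exists for binary normal/TB labels, so no ties arise in practice.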
Title: Attention-assisted ensemble CNN–MobileNetV2–transformer architecture for automated TB diagnosis
Journal: Biomedical Signal Processing and Control, Volume 119, Article 109778
Pub Date: 2026-06-15 | Epub Date: 2026-02-09 | DOI: 10.1016/j.bspc.2026.109741
Ziad Elshaer, Ahmed Jamal, Essam A. Rashed
Assessment of tumor-infiltrating lymphocytes (TILs) in melanoma histopathology is critical for predicting immunotherapy response and improving patient outcomes, yet current automated segmentation methods are severely constrained by limited datasets and pronounced class imbalance. We present a novel dual-generator adversarial framework that decomposes the complex synthesis problem into two specialized sequential tasks: controllable mask generation with user-specified class distributions, followed by high-fidelity histopathology image synthesis. This approach enables precise dataset augmentation with any desired number of tissue classes per image, directly addressing the scarcity of balanced training data. Leveraging the PUMA Grand Challenge dataset, we systematically generated two complementary datasets and evaluated them using a custom U-Net architecture that integrates a MedSAM encoder with a specialized decoder optimized for fine-grained tissue segmentation. Our dual-GAN framework generates photorealistic histopathology images while maintaining precise control over tissue class distributions and spatial relationships. The proposed architecture achieved an F1 score of 0.91 on the PUMA dataset and on new data from the three-class-per-image dataset, advancing the state of the art in melanoma tissue segmentation. This scalable framework supports robust TIL assessment and enhanced clinical decision-making in melanoma management.
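In its crudest form, "controllable mask generation with a user-specified class distribution" means producing a label map whose class proportions match a requested vector. The toy below does this by independent per-pixel sampling, which ignores the spatial coherence a trained mask generator provides; it is purely illustrative and the name is hypothetical:

```python
import numpy as np

def sample_class_mask(shape, class_probs, seed=0):
    """Sample a per-pixel label mask whose classes follow user-given proportions."""
    rng = np.random.default_rng(seed)
    classes = np.arange(len(class_probs))
    return rng.choice(classes, size=shape, p=class_probs)
```

The point of a learned mask generator is precisely to keep this controllability while producing anatomically plausible, spatially contiguous regions instead of i.i.d. pixel noise.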
Title: Synthetic histopathology with controllable class distribution: A dual-GAN framework for melanoma segmentation
Journal: Biomedical Signal Processing and Control, Volume 119, Article 109741
Pub Date: 2026-06-15 | Epub Date: 2026-02-07 | DOI: 10.1016/j.bspc.2026.109754
Qingfeng Tang, Huihui Hu, Chao Tao, Pengcheng Ding, Guowei Dai, Guangjun Wang, Xiaojuan Hu, Benyue Su, Jiatuo Xu, Hui An
Although concatenating knowledge features (KF) and data features (DF) of photoplethysmography (PPG) can improve the predictive performance of blood pressure monitoring models, this approach inevitably increases the dimensionality of the feature space. To address this limitation, we propose an innovative feature extraction method that deeply integrates KF and DF rather than simply concatenating them.
Our method employs functional data analysis to extract DF by treating the PPG signal as a continuous functional curve. Subsequently, the distribution patterns of KF are thoroughly analyzed to construct a KF-based constrained space, which serves as a guide during DF extraction, yielding novel data-knowledge features (DKF).
The experimental results on blood pressure prediction showed that, without the need for additional dimensions, 9-dimensional DKF delivered superior predictive performance compared to both 9-dimensional DF and 8-dimensional KF. Specifically, for systolic blood pressure prediction, DKF reduces the mean absolute error (MAE) to 11.41, outperforming KF (MAE=12.11) and DF (MAE=13.24). Similarly, for diastolic blood pressure, DKF achieves an MAE of 7.27, lower than that of KF (7.41) and DF (7.84).
The proposed feature extraction method effectively overcomes the drawbacks of feature concatenation, offering a novel and effective approach to extracting low-dimensional, highly discriminative features from PPG for accurate blood pressure estimation.
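Treating a PPG beat as a functional curve means representing it by the coefficients of a smooth basis expansion, and a handful of coefficients then serve as low-dimensional data features. A minimal least-squares sketch with a truncated Fourier basis; the paper's exact basis and the KF-constrained space are not reproduced here, and the function name is hypothetical:

```python
import numpy as np

def fourier_features(beat, n_harmonics=4):
    """Project one PPG beat onto a truncated Fourier basis -> 2*n_harmonics+1 coefficients."""
    t = np.linspace(0.0, 1.0, len(beat))
    cols = [np.ones_like(t)]                        # DC term
    for k in range(1, n_harmonics + 1):             # harmonics of the beat period
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    B = np.stack(cols, axis=1)                      # (len(beat), 2n+1) design matrix
    coef, *_ = np.linalg.lstsq(B, beat, rcond=None)
    return coef, B @ coef                           # features, smoothed reconstruction
```

With `n_harmonics=4` this yields a 9-dimensional feature vector per beat, matching the dimensionality the abstract quotes for DF/DKF, though the correspondence is only illustrative.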
Title: Data-knowledge feature fusion for PPG-based blood pressure prediction: Low-dimensional extraction via functional data analysis and knowledge constraint
Journal: Biomedical Signal Processing and Control, Volume 119, Article 109754
Pub Date: 2026-06-15 | Epub Date: 2026-02-06 | DOI: 10.1016/j.bspc.2026.109733
Chao Zhang, Lei Yang, Sai Zhang, Hongliang Duan, Jingjing Guo
Deep learning has made remarkable progress across various domains, particularly in medical image segmentation. However, a persistent challenge remains in balancing accuracy and computational efficiency, as current state-of-the-art models often sacrifice one aspect to enhance the other. Here, we propose RA2M-UNet, a novel network that addresses this trade-off through key innovations: (1) a feature fusion module that integrates multi-scale dilated convolutions with 2D selective scan module (2D-SSM); (2) an enhanced 2D-SSM for better spatial and semantic dependency capture; (3) parameter-efficient structural re-parameterization; (4) multi-output supervision for further refined segmentation. Comprehensive experiments demonstrate that our approach outperforms existing methods while maintaining parameter efficiency, effectively resolving the accuracy-efficiency dilemma in medical image segmentation.
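Structural re-parameterization relies on the linearity of convolution: parallel 3x3, 1x1, and identity branches used during training can be folded into a single 3x3 kernel for inference, cutting parameters and latency without changing the function computed. A single-channel NumPy sketch of the folding and the equivalence it relies on (generic RepVGG-style folding, not this paper's exact module):

```python
import numpy as np

def conv2d_same(img, k):
    """'Same'-padded 3x3 cross-correlation of a single-channel image."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def reparameterize(k3, k1, w_id):
    """Fold parallel 3x3, 1x1 and identity branches into one 3x3 kernel."""
    merged = k3.astype(float).copy()
    merged[1, 1] += k1 + w_id   # 1x1 conv and identity both act at the centre tap
    return merged
```

In a full network the same folding also absorbs each branch's BatchNorm scale and bias into the merged kernel before summing.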
Title: RA2M-UNet: Efficient medical image segmentation via reparameterized convolution, dual-domain attention and 2D state–space modeling
Journal: Biomedical Signal Processing and Control, Volume 119, Article 109733
Brain–computer interfaces (BCIs) play a pivotal role in facilitating human–machine interaction and elucidating brain mechanisms, with motor imagery (MI) being one of the most widely studied paradigms due to its substantial potential. However, inherent inter-subject variability in physiological structures often constrains the accuracy of MI decoding models. To address this challenge, we construct a streamlined graph convolutional network (GCN) and develop an MI decoding model, termed GCN-multiDA. Specifically, the model employs a GCN to capture spatial dependencies in EEG signals and incorporates a graph pruning strategy based on the task-frequency index (TF), region-of-interest index (ROI), and topological index (Topo) to streamline the network. This design preserves neurophysiological relevance while enhancing decoding accuracy and reducing model complexity. Furthermore, drawing inspiration from multi-source personalized domain adaptation, we introduce a domain bias assessment measurement (DBAM) to align cross-domain feature distributions and mitigate inter-domain discrepancies, along with a classifier alignment module to enforce prediction consistency across domains, thereby enabling robust MI classification. Comprehensive experiments conducted on four datasets, including BCI competition IV 2a and 2b, OpenBMI, and PhysioNet, demonstrate that GCN-multiDA consistently outperforms baseline models, improving mean accuracy by 2.66%, 2.53%, 1.32%, and 3.55%, respectively, and achieving the best performance in terms of Kappa and rRMSE metrics. Ablation and sensitivity analyses further confirm that the pruning algorithm contributes substantially to performance improvements across all datasets.
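The backbone operation, a graph convolution over EEG channels followed by pruning of weak edges, can be sketched as follows. The TF/ROI/Topo scoring and the domain-adaptation modules are not reproduced; the per-edge relevance score here is a placeholder, and both function names are hypothetical:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 @ H @ W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))         # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def prune_graph(A, scores, keep_ratio=0.5):
    """Zero out the weakest edges, ranked by a per-edge relevance score."""
    threshold = np.quantile(scores[A > 0], 1.0 - keep_ratio)
    return A * (scores >= threshold)
```

Here `H` holds per-channel EEG features and `A` the inter-channel connectivity; the paper's contribution is in how `scores` is built from neurophysiologically motivated indices rather than in the propagation rule itself.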
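The GCN backbone described above propagates information over an electrode graph. A generic symmetrically normalized graph-convolution layer can be sketched as follows; the toy 4-channel chain adjacency and the shapes are assumptions for illustration, not GCN-multiDA itself:

```python
import numpy as np

def gcn_layer(X, A, W):
    # one GCN layer: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# toy electrode graph: 4 EEG channels in a chain, 3 features per channel
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))  # node (channel) features
W = rng.standard_normal((3, 2))  # learnable projection
H = gcn_layer(X, A, W)           # (4, 2) channel embeddings
```

Graph pruning in the paper's sense would zero out or delete entries of `A` whose TF/ROI/Topo scores fall below a threshold before this propagation step.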
{"title":"GCN-multiDA: A multi-source personalized domain adaptation model based on a novel streamlined GCN for motor imagery classification","authors":"Zhenxi Zhao , Yingyu Cao , Hongbin Yu , Huixian Yu , Junfen Huang","doi":"10.1016/j.bspc.2026.109773","DOIUrl":"10.1016/j.bspc.2026.109773","url":null,"abstract":"<div><div>Brain–computer interfaces (BCIs) play a pivotal role in facilitating human–machine interaction and elucidating brain mechanisms, with motor imagery (MI) being one of the most widely studied paradigms due to its substantial potential. However, inherent inter-subject variability in physiological structures often constrains the accuracy of MI decoding models. To address this challenge, we construct a streamlined graph convolutional network (GCN) and develop an MI decoding model, termed GCN-multiDA. Specifically, the model employs a GCN to capture spatial dependencies in EEG signals and incorporates a graph pruning strategy based on the task-frequency index (TF), region-of-interest index (ROI), and topological index (Topo) to streamline the network. This design preserves neurophysiological relevance while enhancing decoding accuracy and reducing model complexity. Furthermore, drawing inspiration from multi-source personalized domain adaptation, we introduce a domain bias assessment measurement (DBAM) to align cross-domain feature distributions and mitigate inter-domain discrepancies, along with a classifier alignment module to enforce prediction consistency across domains, thereby enabling robust MI classification. Comprehensive experiments conducted on four datasets, including BCI competition IV 2a and 2b, OpenBMI, and PhysioNet, demonstrate that GCN-multiDA consistently outperforms baseline models, improving mean accuracy by 2.66%, 2.53%, 1.32%, and 3.55%, respectively, and achieving the best performance in terms of Kappa and rRMSE metrics. 
Ablation and sensitivity analyses further confirm that the pruning algorithm contributes substantially to performance improvements across all datasets.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"119 ","pages":"Article 109773"},"PeriodicalIF":4.9,"publicationDate":"2026-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146193099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-06-15Epub Date: 2026-02-13DOI: 10.1016/j.bspc.2026.109838
Li Ning , Yepei Qin , Wendong Zhao , Yangjiarui Yu , Qingcheng Yang , Chenxi Guo , Xuedian Zhang , Hui Chen , Yinghong Ji , Pei Ma
Accurate segmentation and quantification of the eyeball and lens from MRI images are crucial for clinical diagnosis and treatment planning of ocular diseases. Traditional methods for analyzing eye structures in MRI have drawbacks, including low segmentation accuracy and reliance on laborious, time-consuming manual processes. To solve these problems, we propose a SEDP-SegResnet model for segmentation of the eyeball and lens structures from 3D MRI images. The framework takes SegResnet as its backbone network and incorporates a 3D-SE layer to handle deep features from the decoder; the 3D-SE layer assigns different weights to the feature-map channels through a squeeze-and-excitation mechanism. Moreover, skip connections in the U-shaped architecture are replaced with Dynamic Deep Feature Prefusion (DDFP) modules. The DDFP achieves in-depth fusion of encoder and decoder features based on global information, thereby enhancing the model's comprehension of 3D image context. The performance of SEDP-SegResnet is evaluated through a series of experiments on a proprietary dataset of orbital MRI scans. The results show that SEDP-SegResnet outperforms current mainstream 3D deep-learning-based segmentation models across multiple evaluation metrics, including the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). The model achieves robust performance in segmenting eyeball margins and blurred-edge lenses. SEDP-SegResnet achieves a DSC of 96.81% for eyeball segmentation and 90.57% for lens segmentation, superior to a variety of commonly used segmentation models. It provides a more accurate, automated and robust method for the segmentation and quantification of the eyeball and lens in MRI, offering an advanced computer-aided diagnosis tool.
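The 3D-SE layer mentioned above follows the standard squeeze-and-excitation recipe: pool each channel to a scalar, run a small bottleneck MLP, and rescale the channels with a sigmoid gate. A NumPy sketch under assumed shapes (one sample, channels first; the weights are random stand-ins, not the trained model):

```python
import numpy as np

def se_block_3d(x, w1, b1, w2, b2):
    # squeeze: global average pool over D, H, W -> one descriptor per channel
    z = x.mean(axis=(1, 2, 3))                    # (C,)
    # excitation: bottleneck MLP, then sigmoid gate in (0, 1)
    h = np.maximum(z @ w1 + b1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))      # (C,)
    # scale: reweight each channel's 3D feature map
    return x * s[:, None, None, None]

rng = np.random.default_rng(2)
C, r = 8, 2                                        # channels, reduction ratio
x = rng.standard_normal((C, 4, 4, 4))
w1 = rng.standard_normal((C, C // r)); b1 = np.zeros(C // r)
w2 = rng.standard_normal((C // r, C)); b2 = np.zeros(C)
y = se_block_3d(x, w1, b1, w2, b2)                 # same shape as x
```

Because the gate is strictly between 0 and 1, the block can only attenuate channels, which is what lets it emphasize informative decoder features.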
{"title":"SEDP-SegResnet for human eyeball and lens segmentation","authors":"Li Ning , Yepei Qin , Wendong Zhao , Yangjiarui Yu , Qingcheng Yang , Chenxi Guo , Xuedian Zhang , Hui Chen , Yinghong Ji , Pei Ma","doi":"10.1016/j.bspc.2026.109838","DOIUrl":"10.1016/j.bspc.2026.109838","url":null,"abstract":"<div><div>Accurate segmentation and quantification of the eyeball and lens from MRI images are crucial for clinical diagnosis and treatment planning of ocular diseases. Traditional methods for analyzing eye structures in MRI have drawbacks, including low segmentation accuracy and reliance on laborious, time-consuming manual processes. To solve these problems, we propose a SEDP-SegResnet model for segmentation of the eyeball and lens structures from 3D MRI images. The framework takes SegResnet as its backbone network and incorporates a 3D-SE layer to handle deep features from the decoder; the 3D-SE layer assigns different weights to the feature-map channels through a squeeze-and-excitation mechanism. Moreover, skip connections in the U-shaped architecture are replaced with Dynamic Deep Feature Prefusion (DDFP) modules. The DDFP achieves in-depth fusion of encoder and decoder features based on global information, thereby enhancing the model's comprehension of 3D image context. The performance of SEDP-SegResnet is evaluated through a series of experiments on a proprietary dataset of orbital MRI scans. The results show that SEDP-SegResnet outperforms current mainstream 3D deep-learning-based segmentation models across multiple evaluation metrics, including the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). The model achieves robust performance in segmenting eyeball margins and blurred-edge lenses. SEDP-SegResnet achieves a DSC of 96.81% for eyeball segmentation and 90.57% for lens segmentation, superior to a variety of commonly used segmentation models. 
It provides a more accurate, automated and robust method for the segmentation and quantification of eyeball and lens in MRI, offering an advanced computer-aided diagnosis tool.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"119 ","pages":"Article 109838"},"PeriodicalIF":4.9,"publicationDate":"2026-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146192610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-06-15Epub Date: 2026-02-13DOI: 10.1016/j.bspc.2026.109833
Yuandi Sun
<div><h3>Background</h3><div>Breast cancer is one of the most common malignant tumors in women worldwide. Its early detection and accurate grading are crucial for developing individualized treatment plans and improving patient prognosis. Pathological image grading is a key step in breast cancer diagnosis, but due to the high heterogeneity of tumor cell and tissue morphology, traditional manual image reading methods suffer from subjective bias and low efficiency. Therefore, developing an automated and accurate breast cancer pathological image grading model has important clinical value for improving diagnostic efficiency and accuracy.</div></div><div><h3>Methods</h3><div>This study proposed a deep learning model that combines DenseNet with the Selective Kernel Block (SKBlock), termed SKDenseNet, for the automatic grading of breast cancer pathology images. DenseNet enhances feature reuse and gradient propagation efficiency through its dense connection mechanism, while SKBlock realizes dynamic extraction and fusion of pathological features at different scales through multi-scale convolution operations and a channel attention mechanism. The model was trained on the TCGA dataset and independently tested on the CHTN dataset to evaluate its generalization ability and stability in cross-center tasks. The model parameters were optimized; classification performance was evaluated by accuracy (ACC), precision (PRE), recall (REC) and F1 score (F1), and the discriminability and interpretability of the model were analyzed with confusion matrices and activation heat maps.</div></div><div><h3>Results</h3><div>The experimental results on the test set (CHTN dataset) showed that SKDenseNet significantly outperformed the baseline model in all key classification indicators. The average accuracy of SKDenseNet was 86.58%, the average precision was 87.91%, the average recall was 87.97%, and the F1 score was 86.71%, which were 7.84, 3.18, 6.24, and 7.32 percentage points higher than DenseNet121, respectively. The confusion matrix showed that SKDenseNet exhibited good discrimination and stability in the classification tasks of high- and low-grade breast cancer and stromal tissue. In addition, SKDenseNet achieved the highest AUC, reaching 0.9693. The activation heat maps generated by Grad-CAM further verified that the key areas of the model’s attention in the pathological images were highly consistent with the actual pathological features (such as nuclear morphology and glandular duct structure), which enhanced the interpretability of the model.</div></div><div><h3>Conclusion</h3><div>The SKDenseNet model proposed in this study combines the global feature expression ability of DenseNet with the dynamic receptive field adjustment mechanism of SKBlock, and shows excellent classification performance and cross-center adaptability in the task of breast cancer pathology image grading. The model can reduce the misdiagnosis rate and missed diagnosis rate while maintaining high accuracy.</div></div>
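The SKBlock's "multi-scale convolution plus channel attention" described in the Methods amounts to softmax-selecting, per channel, among branches with different receptive fields. A NumPy sketch with random feature maps standing in for the convolution branches (shapes and attention weights are illustrative assumptions):

```python
import numpy as np

def softmax(z, axis=0):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sk_fuse(branches, w_attn):
    # branches: list of (C, H, W) maps from convs with different kernel sizes
    u = sum(branches)                            # fuse by summation
    z = u.mean(axis=(1, 2))                      # squeeze to a (C,) descriptor
    logits = np.stack([z @ w for w in w_attn])   # per-branch channel logits
    a = softmax(logits, axis=0)                  # selection weights sum to 1
    return sum(a_b[:, None, None] * f for a_b, f in zip(a, branches))

rng = np.random.default_rng(3)
b3 = rng.standard_normal((4, 5, 5))   # stand-in for a 3x3-conv branch
b5 = rng.standard_normal((4, 5, 5))   # stand-in for a 5x5-conv branch
w_attn = [rng.standard_normal((4, 4)) for _ in range(2)]
y = sk_fuse([b3, b5], w_attn)         # per-channel convex mix of branches
```

Since the softmax weights are a convex combination per channel, the fused map always lies between the two branch responses, which is the "dynamic receptive field adjustment" the Conclusion refers to.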
{"title":"An enhanced deep learning model for breast cancer histopathological grading based on Selective Kernel network","authors":"Yuandi Sun","doi":"10.1016/j.bspc.2026.109833","DOIUrl":"10.1016/j.bspc.2026.109833","url":null,"abstract":"<div><h3>Background</h3><div>Breast cancer is one of the most common malignant tumors in women worldwide. Its early detection and accurate grading are crucial for developing individualized treatment plans and improving patient prognosis. Pathological image grading is a key step in breast cancer diagnosis, but due to the high heterogeneity of tumor cell and tissue morphology, traditional manual image reading methods have subjective bias and low efficiency. Therefore, developing an automated and accurate breast cancer pathological image grading model has important clinical value for improving diagnostic efficiency and accuracy.</div></div><div><h3>Methods</h3><div>This study proposed a deep learning model based on the combination of DenseNet and Selective Kernel Block (SKBlock) − SKDenseNet for the automatic grading of breast cancer pathology images. DenseNet enhances feature reuse and gradient propagation efficiency through dense connection mechanism, while SKBlock realizes dynamic extraction and fusion of pathological features of different scales through multi-scale convolution operation and channel attention mechanism. The model was trained on the TCGA dataset and independently tested on the CHTN dataset to evaluate the generalization ability and stability of the model in cross-center tasks. 
The model parameters were optimized; classification performance was evaluated by accuracy (ACC), precision (PRE), recall (REC) and F1 score (F1), and the discriminability and interpretability of the model were analyzed with confusion matrices and activation heat maps.</div></div><div><h3>Results</h3><div>The experimental results on the test set (CHTN dataset) showed that SKDenseNet significantly outperformed the baseline model in all key classification indicators. The average accuracy of SKDenseNet was 86.58%, the average precision was 87.91%, the average recall was 87.97%, and the F1 score was 86.71%, which were 7.84, 3.18, 6.24, and 7.32 percentage points higher than DenseNet121, respectively. The confusion matrix showed that SKDenseNet exhibited good discrimination and stability in the classification tasks of high- and low-grade breast cancer and stromal tissue. In addition, SKDenseNet achieved the highest AUC, reaching 0.9693. The activation heat maps generated by Grad-CAM further verified that the key areas of the model’s attention in the pathological images were highly consistent with the actual pathological features (such as nuclear morphology and glandular duct structure), which enhanced the interpretability of the model.</div></div><div><h3>Conclusion</h3><div>The SKDenseNet model proposed in this study combines the global feature expression ability of DenseNet with the dynamic receptive field adjustment mechanism of SKBlock, and shows excellent classification performance and cross-center adaptability in the task of breast cancer pathology image grading. 
The model can reduce the misdiagnosis rate and missed diagnosis rate while maintaining high accuracy.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"119 ","pages":"Article 109833"},"PeriodicalIF":4.9,"publicationDate":"2026-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146192615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
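For reference, the dense connection mechanism that SKDenseNet inherits from DenseNet feeds every layer the channel-wise concatenation of all earlier outputs, which is what drives the feature reuse credited in the Methods. A NumPy sketch with 1×1 convolutions standing in for the dense layers (growth rate and shapes are illustrative assumptions):

```python
import numpy as np

def conv1x1(w):
    # pointwise conv as a channel-mixing matmul, followed by ReLU
    return lambda f: np.maximum(np.einsum('oc,chw->ohw', w, f), 0.0)

def dense_block(x, layers):
    # each layer consumes the concatenation of all preceding feature maps
    features = [x]
    for layer in layers:
        features.append(layer(np.concatenate(features, axis=0)))
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(4)
x = rng.standard_normal((3, 4, 4))      # 3 input channels
growth = 2                               # channels each layer adds
ws = [rng.standard_normal((growth, 3 + i * growth)) for i in range(3)]
out = dense_block(x, [conv1x1(w) for w in ws])   # 3 + 3*growth = 9 channels
```

Every intermediate map survives to the block output, so gradients reach early layers directly through the concatenations rather than only through the final layer.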