
Biomedical Signal Processing and Control: Latest publications

YOLO-MFDS: Medical small object detection algorithm based on multi-feature fusion
IF 4.9 Tier 2 Medicine Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-27 DOI: 10.1016/j.bspc.2025.109410
Tianjiao Feng , Yuanbo Shi , Xiaofeng Wang , Haoran Zhao , Weipeng Chao
Multiclass cell detection plays a crucial role in various biomedical applications, particularly in cell biology. Although the development of YOLO object detection models has advanced real-time detection capabilities, challenges such as heterogeneous staining protocols, device variability, and object occlusion continue to hinder performance in medical imaging. To address these issues, we present YOLO-MFDS, a lightweight detector for multiclass cell detection, which is built upon YOLOv11n. To handle staining heterogeneity, device variability, and occlusion, YOLO-MFDS combines four complementary components: DLK-SF for dynamic large-kernel perception with saliency-guided fusion, CSA-Rep for short cross-stage aggregation paths through re-parameterisation, CARAFE for content-aware upsampling that preserves fine boundaries, and PSA-iEMA for low-cost channel–spatial reweighting with stabilised statistics. We also used CWDLoss distillation to align the channel-wise responses in dense and overlapping regions. On BCCD, YOLO-MFDS improves YOLOv11n by 5.5% mAP at IoU 0.5 and by 6.8% mAP at IoU 0.5 to 0.95, and on Br35h, by 3.9% and 8.2%, respectively. Cross-dataset validation between BCCD and Br35h indicated good generalisation. The method is designed as a clinician-in-the-loop decision-support tool and can adapt, with modest domain adaptation, to additional leukaemia subtypes and tumour cytology. The source code and dataset splits are available at: https://github.com/Fengtj123/YOLO-MFDS.git.
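The CWDLoss distillation mentioned above aligns per-channel spatial response distributions between teacher and student feature maps. The paper's exact formulation is not reproduced here; the following is a minimal numpy sketch of channel-wise distillation in its usual form (temperature-scaled softmax over spatial locations per channel, then KL divergence), with all names (`channelwise_distillation`, `tau`) illustrative.

```python
import numpy as np

def channelwise_distillation(teacher, student, tau=4.0):
    """KL divergence between per-channel spatial distributions.

    teacher, student: arrays of shape (C, H, W) (feature maps).
    Each channel's activation map is turned into a distribution over
    the H*W locations with a temperature-scaled softmax; the loss is
    the mean KL divergence from student to teacher, scaled by tau**2.
    """
    C = teacher.shape[0]
    t = teacher.reshape(C, -1) / tau
    s = student.reshape(C, -1) / tau
    # numerically stable softmax over spatial positions
    t = np.exp(t - t.max(axis=1, keepdims=True))
    t /= t.sum(axis=1, keepdims=True)
    s = np.exp(s - s.max(axis=1, keepdims=True))
    s /= s.sum(axis=1, keepdims=True)
    kl = (t * (np.log(t + 1e-12) - np.log(s + 1e-12))).sum(axis=1)
    return tau ** 2 * kl.mean()
```

The loss is zero when student and teacher maps match and grows as their per-channel spatial distributions diverge, which is what makes it useful in dense, overlapping regions.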
Volume 115, Article 109410.
Citations: 0
Spatial domain recognition for multi-slice spatial transcriptomics based on self encoder adversarial training
IF 4.9 Tier 2 Medicine Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-27 DOI: 10.1016/j.bspc.2025.109414
Xueqin Zhang , Xuemei Peng , Huitong Zhu , Weihong Ding , Yunlan Zhou , Zhichao Wu
Recent advances in spatial transcriptomics technology have facilitated the generation of increasingly diverse datasets, offering enhanced opportunities to explore organizational structure and function in a spatial context. However, the effective integration and analysis of such data remain challenging. To effectively integrate multi-slice information, we propose STBCGAE, an adversarial autoencoder-based framework for spatial domain identification in multi-slice spatial transcriptomics data. STBCGAE employs mutual nearest neighbor and nearest spot iterative algorithms to align the spatial positions across multiple slices, simultaneously establishing more precise cross-slice spatial relationships through the construction of a 3D neighbor map. To generate more effective feature embeddings, STBCGAE integrates batch information, gene expression and spatial information using a graph neural network-based autoencoder so that the model can effectively differentiate between technical variants and biological signals. Moreover, to eliminate batch effects, we introduce a batch classifier to train against the encoder. Finally, spatial clustering is performed using the Mclust method to identify spatial domains with expression profiles. By performing extensive experiments on multiple datasets, we demonstrate the capability of STBCGAE to effectively integrate multiple batches of samples in a variety of scenarios, significantly improving the accuracy of multi-slice spatial domain recognition.
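The mutual-nearest-neighbor alignment step can be illustrated with a small sketch: a spot in one slice is paired with a spot in another slice only if each is the other's nearest neighbor. This is generic MNN pairing in numpy, not the authors' STBCGAE code; the function name and the brute-force distance computation are illustrative.

```python
import numpy as np

def mutual_nearest_neighbors(X, Y):
    """Return index pairs (i, j) where spot i in slice X and spot j in
    slice Y are each other's nearest neighbor (Euclidean distance).

    X: (n, d) spot coordinates of slice 1; Y: (m, d) of slice 2.
    """
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # (n, m)
    nn_xy = d.argmin(axis=1)   # nearest Y-spot for each X-spot
    nn_yx = d.argmin(axis=0)   # nearest X-spot for each Y-spot
    return [(i, j) for i, j in enumerate(nn_xy) if nn_yx[j] == i]
```

Restricting alignment to mutual pairs discards one-sided matches, which is what makes the resulting cross-slice correspondences more reliable than plain nearest-neighbor assignment.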
Volume 115, Article 109414.
Citations: 0
Multimodal MRI–EEG fusion for brain–computer interface applications using a lightweight CNN and attention in offline Parkinson’s disease diagnosis
IF 4.9 Tier 2 Medicine Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-27 DOI: 10.1016/j.bspc.2025.109432
Hongbo Guo , Shuotian Li , Guojun Long , Qiqi Liang , Yiran Wang
Parkinson’s disease (PD) diagnosis lacks objective biomarkers, leading to subjectivity and delayed treatment. This work aims to improve diagnostic accuracy through a multimodal neuroimaging–EEG framework. Method: We designed a compact CNN-based pipeline that integrates structural MRI (sMRI), functional MRI (fMRI), and electroencephalography (EEG). Modality-specific encoders were fused with a lightweight attention head and optimized using Bayesian methods. Experiments used PPMI MRI/clinical data and OpenNeuro PD EEG datasets with subject-wise train/validation/test splits. Results: The model achieved an accuracy of 0.85 and an F1-score of 0.86, outperforming single-modality baselines and traditional machine-learning methods. Frequency-domain attention enhanced β-band features, while branch masking enabled robust handling of missing modalities. Conclusion: The framework provided interpretable EEG and MRI markers with efficient offline inference. The proposed multimodal CNN demonstrates offline feasibility for PD diagnosis, improving robustness, interpretability, and diagnostic efficiency compared to conventional methods. Significance: This study introduces a scalable, lightweight neuroimaging–EEG fusion strategy compatible with brain–computer interface (BCI) pipelines. It not only enhances PD diagnostics but also provides a methodological foundation for personalized care and future applications in other neurological diseases.
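A lightweight attention head fusing modality-specific encoders can be sketched as a softmax-weighted sum over modality embeddings. The abstract does not specify the architecture, so the numpy sketch below assumes a single linear scoring head; `attention_fuse`, `w`, and `b` are illustrative names, not the paper's API.

```python
import numpy as np

def attention_fuse(embeddings, w, b=0.0):
    """Fuse modality embeddings with a softmax attention head.

    embeddings: (M, d) array, one d-dim embedding per modality
                (e.g. sMRI, fMRI and EEG branch outputs).
    w: (d,) weights and b: scalar bias of a linear scoring head.
    Returns the attention-weighted sum (d,) and the weights (M,).
    """
    scores = embeddings @ w + b              # (M,) raw scores
    scores = scores - scores.max()           # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ embeddings, alpha
```

Branch masking for a missing modality would amount to setting that modality's score to negative infinity before the softmax, so its weight collapses to zero.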
Volume 115, Article 109432.
Citations: 0
ISPSeg: Unsupervised domain adaptive fundus image segmentation via learnable image signal processing and progressive teacher
IF 4.9 Tier 2 Medicine Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-26 DOI: 10.1016/j.bspc.2025.109425
Qi Sun , Minfeng Wu , Aorui Gou , Yibo Fan
Cross-domain joint segmentation of Optic Disc (OD) and Optic Cup (OC) on fundus images is crucial for early glaucoma detection and treatment. However, domain shifts caused by differences in imaging devices and acquisition protocols across medical centers severely degrade the generalization ability of segmentation models, and re-annotation and re-training are labor-intensive and time-consuming. Unsupervised Domain Adaptation (UDA) addresses label scarcity by leveraging source domain labels, but existing methods often overlook the fundamental image signal processing (ISP) pipeline that is a primary source of domain gaps. This paper proposes a two-stage UDA method: In the warm-up phase, we introduce a multi-level alignment strategy including a learnable ISP module that aligns cross-domain style discrepancies by simulating the imaging process. Feature- and output-level alignments further promote semantics-aware learning of domain-invariant features. In the fine-tuning phase, we adopt a progressive mean-teacher strategy combined with a confidence-guided bidirectional CutMix augmentation, which facilitates consistency learning from pseudo-labels while mitigating the impact of noisy supervision, thereby improving cross-domain generalization. Experiments on two public fundus datasets show that our ISPSeg achieves an average improvement of 2.3% in Dice scores for optic disc (OD) and optic cup (OC) segmentation compared to state-of-the-art UDA methods, which demonstrates the clinical potential of ISPSeg for glaucoma diagnosis.
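A learnable ISP module of the kind described typically exposes a few differentiable camera-pipeline parameters, such as per-channel white-balance gains and a gamma exponent, that can be optimized to pull source- and target-domain styles together. The sketch below applies such a transform with fixed parameters for illustration; the specific parameter set is an assumption about the module's form, not the paper's implementation.

```python
import numpy as np

def isp_transform(img, gains, gamma):
    """A tiny ISP-style transform: white balance then gamma.

    img: (H, W, 3) float image in [0, 1]; gains: (3,) per-channel
    white-balance gains; gamma: scalar gamma-correction exponent.
    In an ISPSeg-like pipeline these parameters would be learned by
    backpropagation; here they are plain inputs.
    """
    out = np.clip(img * gains, 0.0, 1.0)   # white balance, clipped
    return out ** gamma                    # gamma correction
```

Because every operation is differentiable (away from the clip boundaries), gradients can flow from the segmentation loss back into the gains and gamma during the warm-up phase.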
Volume 115, Article 109425.
Citations: 0
Abdominal multi-organ lesion recognition via organ-specific feature perception and regionally enhanced feature learning
IF 4.9 Tier 2 Medicine Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-26 DOI: 10.1016/j.bspc.2025.109436
Juanfei Li , Pazilya Yusantay , Kunru Wang , Huiyu Zhou , Shuiping Gou , Gen Li
Medical image classification technology holds significant clinical implications for the early screening, diagnosis, and treatment of diseases. However, most existing medical image classification models focus on single-organ analysis, which presents limitations in task generalization. Their scalability and broader applicability remain underexplored. This study aims to develop a generalizable multi-organ lesion classification framework to overcome the challenges of small lesion-to-background ratios, indistinct morphological boundaries, and heterogeneous lesion manifestations. This enables robust screening of pathological abnormalities across multiple abdominal organs.
We present OSRE-MLC, an innovative framework integrating two key components: (1) an organ-specific feature perception module that dynamically adapts to anatomical variations while preventing feature degradation, and (2) a region-specific enhancement module that optimizes discriminative lesion representation prior to classification. The framework integrates multi-organ abdominal CT images, including liver, kidney, and pancreas, from five separate datasets. The architecture uniquely combines multi-organ segmentation with attention-based feature refinement, enabling simultaneous organ localization and pathology characterization through an end-to-end trainable network.
Comprehensive evaluation on abdominal CT datasets has demonstrated OSRE-MLC’s superior performance, achieving 95.0% accuracy, 94.44% F1-score, 95.0% precision, and 93.96% recall in liver, kidney, and pancreas lesion screening, significantly outperforming existing methods. The proposed framework establishes a new paradigm for multi-organ pathological analysis by effectively addressing feature degradation and inter-organ variability. Its clinically interpretable architecture and robust performance demonstrate significant potential for improving diagnostic accuracy in complex abdominal imaging, offering promising applications in precision medicine and computer-aided diagnosis systems.
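The region-specific enhancement idea can be illustrated by gating feature maps with a soft region mask, so responses inside the predicted organ or lesion region are up-weighted while the background keeps its original response. This gating form is an assumption for illustration, not the OSRE-MLC module itself.

```python
import numpy as np

def region_enhance(features, region_logits):
    """Reweight feature maps with a soft region-of-interest mask.

    features: (C, H, W) feature maps; region_logits: (H, W)
    unnormalised scores for the region of interest (e.g. from a
    segmentation branch). Inside the region, features are scaled
    by up to 2x; outside, they are left essentially unchanged.
    """
    gate = 1.0 / (1.0 + np.exp(-region_logits))   # sigmoid in [0, 1]
    return features * (1.0 + gate)[None, :, :]
```

The additive `1 + gate` form prevents feature degradation: even a zero mask cannot suppress information, it can only fail to amplify it.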
Volume 115, Article 109436.
Citations: 0
sEMG-based gesture recognition using multi-domain feature fusion with a lightweight Vanilla network
IF 4.9 Tier 2 Medicine Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-24 DOI: 10.1016/j.bspc.2025.109430
Yazhou Li , Kairu Li , Xiaoxin Wang , Yixuan Sheng
Gesture recognition based on surface electromyography (sEMG) has emerged as a promising approach for human–machine interaction systems, particularly in applications such as prosthetic hand control. Nevertheless, achieving an optimal balance between computational complexity and classification accuracy remains a persistent challenge for recognition networks. Thus, this paper proposes a multi-domain feature fusion (MDFF) methodology coupled with a lightweight Vanilla network (LVNet) to reduce computational demands whilst maintaining satisfactory classification performance, thereby enabling its direct deployment on terminal devices with limited computing resources, such as laptops or intelligent prosthetic hands. The proposed MDFF-LVNet model establishes an end-to-end fully convolutional classification architecture: the MDFF extracts and fuses time-domain and time–frequency domain features, and the LVNet improves inference speed and recognition capability by using dynamic convolution and activation functions to train dual convolutional operations. Experimental results demonstrate that the MDFF-LVNet model achieves classification accuracies of 95.78%, 92.77%, floating-point operations of 4.52 GFLOPs and 8.05 GFLOPs, and inference time of 18.11 ms and 45.69 ms on public gesture datasets NinaPro DB2 and DB5, respectively. To evaluate its online recognition performance, experiments conducted on a bionic prosthetic hand using a self-constructed sEMG dataset of 6 gestures achieved an offline accuracy of 99.57%.
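The time-domain half of an MDFF-style front end usually computes classic sEMG descriptors per analysis window. The sketch below computes four standard ones (mean absolute value, RMS, waveform length, zero crossings); the exact feature set fused by MDFF-LVNet is not specified in the abstract, so take this as a generic example.

```python
import numpy as np

def td_features(x):
    """Classic time-domain sEMG features for one analysis window.

    x: 1-D array of sEMG samples. Returns:
      MAV - mean absolute value
      RMS - root mean square amplitude
      WL  - waveform length (sum of absolute sample-to-sample steps)
      ZC  - zero-crossing count (sign changes between samples)
    """
    mav = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    wl = np.sum(np.abs(np.diff(x)))
    zc = int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))
    return mav, rms, wl, zc
```

These scalars are cheap to compute per channel and window, which is consistent with the paper's goal of deployment on terminals with limited computing resources.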
Volume 115, Article 109430.
Citations: 0
Emotion recognition based on spatio-temporal connectivity of prefrontal EEG signals
IF 4.9 Tier 2 Medicine Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-24 DOI: 10.1016/j.bspc.2025.109369
Feng Wu , Enhao Wang , Binqiang Xue , Yinhua Liu
Emotion recognition based on regional electroencephalography (EEG) signals can effectively mitigate practical deployment barriers, but limited data from sparse channels constrains the comprehensive representation of emotional states. This paper proposes an emotion recognition method based on functional connectivity of local brain regions. By simulating dynamic interactions in brain networks under emotional states, the method effectively enhances the representation of local information, which in turn improves recognition performance. Firstly, we introduce a cross-temporal connectivity feature modeling method based on spatial brain connectivity, and feature boosting maps are constructed by computing linear and nonlinear dependencies to improve connectivity representation. Secondly, Spatio-temporal Attention Transformer and Convolutional Neural Network are employed for encoding global and local temporal features, respectively. A feature fusion block is designed to integrate features from two encoders, fully leveraging their complementarity. Finally, the fused features are passed through a fully connected layer and subsequently fed into a softmax classifier for emotion classification. We conducted various experiments on two public EEG emotion datasets, SEED and DEAP. The results demonstrate that our method effectively captures emotional information from local brain regions, achieving significant recognition performance.
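The linear and nonlinear dependencies behind the feature-boosting maps can be approximated with a Pearson correlation matrix (linear) and a Spearman rank-correlation matrix (a simple proxy for monotone, nonlinear dependence). The paper's exact nonlinear measure is not stated in the abstract, so the rank-based choice here is an assumption.

```python
import numpy as np

def connectivity(eeg):
    """Channel-by-channel connectivity matrices for one EEG window.

    eeg: (n_channels, n_samples). Returns the Pearson correlation
    matrix (linear dependence) and the Spearman correlation matrix
    (rank-based dependence; computed as Pearson correlation of the
    per-channel sample ranks, ignoring ties).
    """
    linear = np.corrcoef(eeg)
    ranks = eeg.argsort(axis=1).argsort(axis=1).astype(float)
    nonlinear = np.corrcoef(ranks)
    return linear, nonlinear
```

A feature-boosting map could then be built by combining the two matrices, e.g. stacking them as input channels for the downstream encoders.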
Volume 115, Article 109369.
Citations: 0
Power efficient signal conversion and quality signal compression using LDS-ADC and hybrid DCT for biomedical signals
IF 4.9 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-24 DOI: 10.1016/j.bspc.2025.109389
M. Radhika , P. Sivakumar , D. Somasundaram , T. Sivakami
This paper presents power-efficient signal conversion and compression for Electroencephalogram (EEG) and Electrocardiogram (ECG) signals using a low-power dual-slope analog-to-digital converter (LDS-ADC) and a hybrid discrete cosine transform with improved emperor penguin optimization (hybrid DCT-IEPO). Initially, ECG and EEG signals are acquired and digitized by the power-efficient LDS-ADC. After this energy-efficient conversion, the digital signals are compressed: the hybrid discrete cosine transform (hybrid DCT) is applied to yield a set of coefficients, from which the emperor penguin optimization algorithm selects an optimal subset. Finally, the inverse hybrid DCT reconstructs the signal from the selected coefficients, producing a power-efficient output suitable for various applications. The proposed work, implemented in MATLAB, is compared against prevailing methods on compression ratio. Experimental results show that the proposed technique achieves better performance in compression ratio, percentage root mean square error difference (PRD), quality score (QS), and mean square error (MSE).
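As a rough sketch of the DCT stage, the following numpy-only example builds an orthonormal DCT-II, keeps the largest-magnitude coefficients (a simple stand-in for the emperor-penguin-optimized selection, which is not reproduced here), reconstructs with the inverse transform, and reports the compression ratio and the PRD metric named in the abstract. The synthetic signal and all parameter values are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix (rows are basis vectors)
    k = np.arange(n)[:, None]
    m = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def compress(signal, keep):
    # forward DCT, then zero all but the `keep` largest-magnitude coefficients
    d = dct_matrix(signal.size)
    coeffs = d @ signal
    top = np.argsort(np.abs(coeffs))[::-1][:keep]  # stand-in for EPO coefficient selection
    sparse = np.zeros_like(coeffs)
    sparse[top] = coeffs[top]
    return sparse, d

def prd(x, x_rec):
    # percentage root-mean-square difference between original and reconstruction
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 12 * t)  # toy ECG-like trace
sparse, d = compress(signal, keep=16)
recon = d.T @ sparse   # inverse of an orthonormal transform is its transpose
cr = signal.size / 16  # compression ratio: samples stored vs. coefficients kept
print(f"CR={cr:.0f}, PRD={prd(signal, recon):.2f}%")
```

Keeping more coefficients lowers PRD at the cost of compression ratio; the paper's optimizer searches that trade-off instead of simply ranking by magnitude.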
Biomedical Signal Processing and Control, Volume 115, Article 109389.
Citations: 0
Enhanced security of medical images through fractal box encryption and CNN-driven data hiding method
IF 4.9 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-23 DOI: 10.1016/j.bspc.2025.109427
Mostafa M. Abdel-Aziz , Nabil A. Lashin , Hanaa M. Hamza , Khalid M. Hosny
This study introduces a new data-hiding method that combines fractal box encryption (FBE), deep learning-driven feature extraction, and a modified embedding strategy utilizing the spread spectrum method. This approach enables the secure incorporation and extraction of hidden data within medical images. The procedure commences with preliminary preprocessing of the host image, followed by morphological restoration to augment its structural attributes. A pre-trained ResNet-50 model is utilized to extract sophisticated image features, which are then encrypted with fractal box encryption to augment security. The secret image is then integrated into the encrypted feature vector using a spread spectrum embedding technique, ensuring that the concealed data remains robust against typical image processing attacks. During decryption, the obscured image is effectively retrieved by associating the watermarked characteristics with a noise pattern. This method ensures secure data concealment through fractal encryption while preserving the integrity of the concealed image, rendering it tamper-resistant. The proposed method offers a robust and efficient solution for data concealment and validation in medical imaging, where the protection of integrity and confidentiality is paramount.
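The spread-spectrum embedding step can be illustrated with a minimal sketch. This is not the authors' method: the fractal box encryption and ResNet-50 feature extraction are replaced by a random vector, the feature length `FEAT_DIM` is an assumption, and extraction here is informed (the receiver holds the original features), whereas the paper describes recovery by associating watermarked characteristics with a noise pattern.

```python
import numpy as np

FEAT_DIM = 2048  # assumed feature length, stand-in for pooled ResNet-50 features

def embed(features, bits, alpha=0.05, seed=42):
    """Spread-spectrum embedding: add one seeded pseudo-noise pattern per secret bit."""
    rng = np.random.default_rng(seed)
    marked = features.astype(float).copy()
    for b in bits:
        pattern = rng.standard_normal(features.size)
        marked += alpha * (1.0 if b else -1.0) * pattern
    return marked

def extract(marked, features, n_bits, seed=42):
    """Informed extraction: correlate the residual with each pattern; the sign gives the bit."""
    rng = np.random.default_rng(seed)
    residual = marked - features
    return [int(residual @ rng.standard_normal(features.size) > 0) for _ in range(n_bits)]

rng = np.random.default_rng(0)
features = rng.standard_normal(FEAT_DIM)  # stand-in for encrypted feature vector
secret = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(features, secret)
recovered = extract(marked, features, len(secret))
print(recovered)  # → [1, 0, 1, 1, 0, 0, 1, 0]
```

Because the patterns are near-orthogonal in high dimension, the correlation for bit i is dominated by alpha times that pattern's energy, which is what makes the scheme robust to moderate distortion.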
Biomedical Signal Processing and Control, Volume 115, Article 109427.
Citations: 0
Generalised oedema monitoring utilising a NIR hyperspectral camera in critically ill neonates: A feasibility study
IF 4.9 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-12-22 DOI: 10.1016/j.bspc.2025.109444
Mariana Castro-Montano , Andy Petros , Ling Li , Enayetur Rahman , Simon Hannam , Grant Clow , Panayiotis A Kyriacou , Jim McLaughlin , Meha Qassem
Generalised oedema is common in neonatal intensive care units (NICUs), particularly in preterm and low-birth-weight infants. Characterised by tissue swelling from excess water accumulation, it can reflect systemic illness such as congestive heart failure, hepatic cirrhosis, nephrotic syndrome, sepsis, and acute kidney injury. Current clinical assessment methods, including formulas based on weight and fluid input/output and visual skin observation, lack accuracy and sensitivity, especially in critically ill infants. Techniques such as bioimpedance and ultrasound have been explored but are unsuitable for neonates and do not provide direct water content measurements. Spectroscopy, a non-invasive optical method, offers a promising solution by measuring tissue water content through light interactions in the Near Infrared (NIR) spectrum. This study investigates oedema in neonates using an NIR hyperspectral system in the NICU. Data were collected from 20 neonates, both with and without oedema, over the course of three consecutive days. Spectral analysis revealed significant differences, notably at water absorption peaks around 1200 nm (p = 0.012). A Partial Least Squares Discriminant Analysis (PLS-DA) model effectively differentiated between oedematous and non-oedematous infants using spectral and standard clinical features, achieving 85.56 % recall and 100 % precision in testing.
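As an illustration of the PLS-DA step, here is a minimal two-component PLS1 classifier in plain numpy, fit on synthetic "spectra". This is a sketch under stated assumptions (toy data, a simulated absorption band, a 0.5 decision threshold), not the authors' pipeline, which would in practice use an established implementation such as scikit-learn's `PLSRegression` together with the standard clinical features described above.

```python
import numpy as np

def plsda_fit(X, y, n_comp=2):
    # PLS1 (NIPALS): extract latent components, then form regression coefficients
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        p = Xc.T @ t / (t @ t)
        W.append(w); P.append(p); q.append(yc @ t / (t @ t))
        Xc -= np.outer(t, p)   # deflate X
        yc -= q[-1] * t        # deflate y
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.inv(P.T @ W) @ np.array(q)
    return B, X.mean(axis=0), y.mean()

def plsda_predict(X, model, threshold=0.5):
    B, x_mean, y_mean = model
    return ((X - x_mean) @ B + y_mean > threshold).astype(int)

rng = np.random.default_rng(1)
n, p = 40, 50                     # 40 spectra per class, 50 wavelength bins (toy)
X0 = rng.standard_normal((n, p))  # non-oedematous class
X1 = rng.standard_normal((n, p))
X1[:, 20:25] += 2.0               # simulated absorption shift (e.g. a water band)
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])
model = plsda_fit(X, y)
pred = plsda_predict(X, model)
recall = pred[y == 1].mean()
precision = y[pred == 1].mean()
print(round(float(recall), 2), round(float(precision), 2))
```

The study's 85.56 % recall and 100 % precision were obtained on held-out neonatal data; the toy example above evaluates on its training set and is only meant to show the mechanics.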
Biomedical Signal Processing and Control, Volume 115, Article 109444.
Citations: 0