
Latest Publications in Interdisciplinary Sciences: Computational Life Sciences

Automated Multi-grade Brain Tumor Classification Using Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network in MRI Images.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-06-18 DOI: 10.1007/s12539-025-00708-4
T Thanya, T Jeslin

Brain tumor classification using Magnetic Resonance Imaging (MRI) images is an important and emerging field of medical imaging and artificial intelligence. With advancements in technology, particularly in deep learning and machine learning, researchers and clinicians are leveraging these tools to create complex models that can reliably detect and classify brain tumors from MRI data. However, the task presents several challenges, including the intricacy of tumor types and grades, intensity variations in MRI data, and tumors of varying severity. This paper proposes a Multi-Grade Hierarchical Classification Network Model (MGHCN) for the hierarchical classification of tumor grades in MRI images. The model's distinctive feature lies in its ability to categorize tumors into multiple grades, thereby capturing the hierarchical nature of tumor severity. To address variations in intensity levels across different MRI samples, an Improved Adaptive Intensity Normalization (IAIN) pre-processing step is employed. This step standardizes intensity values, effectively mitigating the impact of intensity variations and ensuring more consistent analyses. The model utilizes the Dual Tree Complex Wavelet Transform with Enhanced Trigonometric Features (DTCWT-ETF) for efficient feature extraction. DTCWT-ETF captures both spatial and frequency characteristics, allowing the model to distinguish between different tumor types more effectively. In the classification stage, the framework introduces the Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network (AHOHH-BiLSTM). This multi-grade classification model is designed with a comprehensive architecture, including distinct layers that enhance the learning process and adaptively refine parameters. The purpose of this study is to improve the precision of distinguishing different grades of tumors in MRI images. To evaluate the proposed MGHCN framework, a set of evaluation metrics is incorporated, including precision, recall, and the F1-score. The framework employs the BraTS Challenge 2021, Br35H, and BraTS Challenge 2023 datasets, a combination that ensures comprehensive training and evaluation. The MGHCN framework aims to enhance brain tumor classification in MRI images by utilizing these datasets along with a comprehensive set of evaluation metrics, providing a more thorough and sophisticated understanding of its capabilities and performance.
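The abstract does not spell out the IAIN algorithm, so the following is only a minimal sketch of what an adaptive, per-volume intensity normalization step can look like (percentile clipping followed by z-scoring); the function name, percentile bounds, and epsilon are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an adaptive intensity normalization step for MRI volumes.
# The actual IAIN procedure is not described in the abstract; the percentile bounds
# and z-scoring scheme here are assumptions for illustration only.
import numpy as np

def adaptive_intensity_normalize(volume: np.ndarray,
                                 lower_pct: float = 1.0,
                                 upper_pct: float = 99.0) -> np.ndarray:
    """Clip intensity outliers and standardize one MRI volume to zero mean, unit variance."""
    lo, hi = np.percentile(volume, [lower_pct, upper_pct])
    clipped = np.clip(volume, lo, hi)          # suppress scanner-dependent outliers
    mean, std = clipped.mean(), clipped.std()
    return (clipped - mean) / (std + 1e-8)     # per-volume statistics adapt to each scan
```

Because the statistics are computed per volume, the same function applies across scans with different intensity ranges, which is the role the abstract assigns to the IAIN step.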

Citations: 0
Interpretable Cancer Survival Prediction by Fusing Semantic Labelling of Cell Types and Whole Slide Images.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-09-26 DOI: 10.1007/s12539-025-00744-0
Jinchao Chen, Pei Liu, Chen Chen, Ying Su, Jiajia Wang, Cheng Chen, Xiantao Ai, Xiaoyi Lv

Survival prediction involves multiple factors, such as histopathological image data and omics data, making it a typical multimodal task. In this work, we introduce semantic annotations for genes in different cell types based on cell biology knowledge, enabling the model to achieve interpretability at the cellular level. Since these cell type annotations are derived from the unique sites of origin for each cancer type, they can be more closely aligned with morphological features in whole slide images (WSIs) and address the issue of genomic annotation ambiguity. We then propose a multimodal fusion model, SurvTransformer, with multi-layer attention to fuse cell type tags (CTTs) and WSIs for survival prediction. Finally, through attention and integrated gradient attribution, the model provides biologically meaningful interpretable analysis at three different levels: cell type, gene, and histopathology image. Comparative experiments show that SurvTransformer achieves the highest consistency index across four cancer datasets. The survival curves generated are also statistically significant. Ablation experiments show that SurvTransformer outperforms models based on different labeling methods and attention representations. In terms of interpretability, case studies validate the effectiveness of SurvTransformer at three levels: cell type, gene, and histopathological image.
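The abstract describes multi-layer attention that fuses cell type tag (CTT) embeddings with WSI features but gives no implementation details; the block below is a hedged sketch of one cross-attention fusion layer in PyTorch, with embedding sizes, head counts, and tensor shapes chosen purely for illustration rather than taken from SurvTransformer.

```python
# Hypothetical cross-attention fusion layer: CTT embeddings query WSI patch embeddings.
# Dimensions and head count are illustrative assumptions, not SurvTransformer's settings.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, ctt_emb: torch.Tensor, wsi_patches: torch.Tensor) -> torch.Tensor:
        # ctt_emb: (B, n_cell_types, dim); wsi_patches: (B, n_patches, dim)
        fused, _ = self.attn(query=ctt_emb, key=wsi_patches, value=wsi_patches)
        return self.norm(ctt_emb + fused)      # residual keeps the cell-type identity

fused = CrossModalFusion()(torch.randn(1, 8, 256), torch.randn(1, 100, 256))
```

Stacking several such layers would give the multi-layer attention fusion the abstract refers to; attention weights from these layers are also one natural source for the cell-type-level interpretability analysis.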

Citations: 0
scRDiT: Generating Single-cell RNA-seq Data by Diffusion Transformers and Accelerating Sampling.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-02-21 DOI: 10.1007/s12539-025-00688-5
Shengze Dong, Zhuorui Cui, Ding Liu, Jinzhi Lei

Single-cell RNA sequencing (scRNA-seq) is a groundbreaking technology extensively utilized in biological research, facilitating the examination of gene expression at the individual cell level within a given tissue sample. While numerous tools have been developed for scRNA-seq data analysis, the challenge persists in capturing the distinct features of such data and replicating virtual datasets that share analogous statistical properties. Our study introduces a generative approach termed scRNA-seq Diffusion Transformer (scRDiT). This method generates virtual scRNA-seq data by leveraging a real dataset. The method is a neural network constructed based on Denoising Diffusion Probabilistic Models (DDPMs) and Diffusion Transformers (DiTs). It gradually adds Gaussian noise to the real dataset through iterative noise-adding steps and ultimately reverses this process, restoring the noise into scRNA-seq samples. This scheme allows us to learn data features from actual scRNA-seq samples during model training. Our experiments, conducted on two distinct scRNA-seq datasets, demonstrate superior performance. Additionally, the model sampling process is expedited by incorporating Denoising Diffusion Implicit Models (DDIMs). scRDiT presents a unified methodology empowering users to train neural network models with their unique scRNA-seq datasets, enabling the generation of numerous high-quality scRNA-seq samples.
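The forward noising and DDIM-based accelerated sampling the abstract mentions are standard diffusion-model operations; the sketch below shows them in generic form (a linear beta schedule and a deterministic DDIM step with eta = 0) applied to expression vectors, making no claim about scRDiT's actual network or noise schedule.

```python
# Generic DDPM forward noising and a deterministic DDIM reverse step (eta = 0).
# The noise schedule and step count are illustrative; scRDiT's settings may differ.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)        # cumulative signal-retention factor

def forward_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0) by mixing the clean data with Gaussian noise."""
    eps = torch.randn_like(x0)
    xt = alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * eps
    return xt, eps                                    # eps is the training target

def ddim_step(xt, eps_pred, t, t_prev):
    """One deterministic DDIM update from step t to an earlier step t_prev."""
    x0_pred = (xt - (1 - alpha_bar[t]).sqrt() * eps_pred) / alpha_bar[t].sqrt()
    return alpha_bar[t_prev].sqrt() * x0_pred + (1 - alpha_bar[t_prev]).sqrt() * eps_pred
```

Because the DDIM step is deterministic, sampling can skip from t to a much earlier t_prev, which is what makes the accelerated sampling possible with far fewer network evaluations.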

Citations: 0
SPCF-YOLO: An Efficient Feature Optimization Model for Real-Time Lung Nodule Detection.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-06-02 DOI: 10.1007/s12539-025-00720-8
Yawen Ren, Chenyang Shi, Donglin Zhu, Changjun Zhou

Accurate pulmonary nodule detection in CT imaging remains challenging due to fragmented feature integration in conventional deep learning models. This paper proposes SPCF-YOLO, a real-time detection framework that synergizes hierarchical feature fusion with anatomical context modeling. First, the space-to-depth convolution (SPDConv) module preserves fine-grained features in low-resolution images through spatial dimension reorganization. Second, the shared feature pyramid convolution (SFPConv) module is designed to dynamically extract multi-scale contextual information using multi-dilation-rate convolutional layers. A small-object detection layer is incorporated to improve sensitivity to small nodules, in combination with the improved pyramid squeeze attention (PSA) module and the improved contextual transformer (CoTB) module, which enhance global channel dependencies and reduce feature loss. The model achieves 82.8% mean average precision (mAP) and 82.9% F1 score on LUNA16 at 151 frames per second (representing improvements of 17.5% and 82.9% over YOLOv8, respectively), demonstrating real-time clinical viability. Cross-modality validation on SIIM-COVID-19 shows a 1.5% improvement, confirming robust generalization.
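Space-to-depth convolution is described only at a high level; the sketch below shows the general idea, folding each 2x2 spatial block into channels with F.pixel_unshuffle before a convolution so fine detail is not discarded by striding. The channel counts and kernel size are assumptions, not the paper's configuration.

```python
# Illustrative space-to-depth convolution: spatial detail is rearranged into channels
# rather than lost to a strided convolution. Hyperparameters are assumed for the sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpaceToDepthConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, block: int = 2):
        super().__init__()
        self.block = block
        self.conv = nn.Conv2d(in_ch * block * block, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pixel_unshuffle(x, self.block)   # (B, C*block^2, H/block, W/block)
        return self.conv(x)

y = SpaceToDepthConv(3, 64)(torch.randn(1, 3, 64, 64))   # -> (1, 64, 32, 32)
```

The output resolution halves while every input pixel remains available to the convolution, which is why this kind of block is attractive for small, low-contrast nodules.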

Citations: 0
Semi-supervised Medical Image Segmentation Using Heterogeneous Complementary Correction Network and Confidence Contrastive Learning.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-07-11 DOI: 10.1007/s12539-025-00727-1
Lei Li, Miaosen Xue, Songyang Li, Zhuoli Dong, Tianli Liao, Peng Li

Semi-supervised medical image segmentation techniques have demonstrated significant potential and effectiveness in clinical diagnosis. The prevailing approaches using the mean-teacher (MT) framework achieve promising image segmentation results. However, due to the unreliability of the pseudo labels generated by the teacher model, existing methods still have some inherent limitations that must be considered and addressed. In this paper, we propose an innovative semi-supervised method for medical image segmentation that combines a heterogeneous complementary correction network with confidence contrastive learning (HC-CCL). Specifically, we develop a triple-branch framework by integrating a heterogeneous complementary correction (HCC) network into the MT framework. HCC serves as an auxiliary branch that corrects prediction errors in the student model and provides complementary information. To improve the capacity for feature learning in our proposed model, we introduce a confidence contrastive learning (CCL) approach with a novel sampling strategy. Furthermore, we develop a momentum style transfer (MST) method to narrow the gap between labeled and unlabeled data distributions. In addition, we introduce a Cutout-style augmentation for unsupervised learning to enhance performance. Three medical image datasets (the left atrial (LA) dataset, the NIH pancreas dataset, and the BraTS 2019 dataset) were employed to rigorously evaluate HC-CCL. Quantitative results demonstrate significant performance advantages over existing approaches, achieving state-of-the-art performance across all metrics. The implementation will be released at https://github.com/xxmmss/HC-CCL .
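In mean-teacher frameworks such as the one this method builds on, the teacher is typically maintained as an exponential moving average (EMA) of the student; the snippet below is a generic sketch of that update, with the momentum value chosen arbitrarily and no claim about HC-CCL's exact schedule.

```python
# Generic EMA teacher update used by mean-teacher style semi-supervised training.
# The momentum value is an assumption; HC-CCL's schedule is not given in the abstract.
import torch

@torch.no_grad()
def update_teacher(student: torch.nn.Module, teacher: torch.nn.Module,
                   momentum: float = 0.99) -> None:
    """Move each teacher parameter a small step toward the current student parameter."""
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
```

The EMA smooths the teacher over many student updates, which is what makes its pseudo labels more stable than the student's own predictions; the HCC branch and confidence contrastive learning described above then address the cases where those pseudo labels are still unreliable.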

Citations: 0
CPE-Pro: A Structure-Sensitive Deep Learning Method for Protein Representation and Origin Evaluation.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-06-08 DOI: 10.1007/s12539-025-00732-4
Wenrui Gou, Wenhui Ge, Yang Tan, Mingchen Li, Guisheng Fan, Huiqun Yu

Protein structures are fundamental to understanding their functions and interactions. With the continuous advancement of protein structure prediction methods, structure databases are rapidly expanding. Identifying the origin of protein structures is crucial for assessing the reliability of experimental resolution and computational prediction methods, as well as for guiding downstream biological research. Existing protein representation approaches often fail to capture subtle yet critical structural differences, posing challenges for precise structural traceability. To address this, we propose a structure-sensitive supervised deep learning model, Crystal vs Predicted Evaluator for Protein Structure (CPE-Pro), for the representation and origin evaluation of protein structures. CPE-Pro integrates a pre-trained protein Structural Sequence Language Model (SSLM) and Geometric Vector Perceptron-Graph Neural Network (GVP-GNN) to learn structure-aware protein representations and capture structural differences, enabling accurate classification across four origins of structural data. Preliminary results indicate that, compared to large-scale protein language models trained on extensive amino acid sequences, structural sequences enriched with local structural features enable the model to capture more informative protein characteristics, thereby enhancing and refining protein representations. Future research directions include extending the architecture to additional protein structure paradigms and developing evaluation methodologies for low-pLDDT predicted structures, providing more effective tools for protein structure analysis. The code, model weights, and all relevant materials are available at https://github.com/wr1102/CPE-Pro .
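The abstract names a pre-trained structural sequence language model and a GVP-GNN but gives no interface details, so the sketch below only illustrates the final fusion step: concatenating one fixed-size embedding from each branch and classifying into four structure origins. The embedding dimensions and head layout are assumptions, not CPE-Pro's architecture.

```python
# Hypothetical origin-classification head: concatenate a sequence-branch embedding and a
# structure-branch embedding, then predict one of four structure origins. Sizes assumed.
import torch
import torch.nn as nn

class OriginClassifier(nn.Module):
    def __init__(self, seq_dim: int = 512, struct_dim: int = 128, n_origins: int = 4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(seq_dim + struct_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_origins),
        )

    def forward(self, seq_emb: torch.Tensor, struct_emb: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([seq_emb, struct_emb], dim=-1))   # logits over origins

logits = OriginClassifier()(torch.randn(2, 512), torch.randn(2, 128))
```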

Citations: 0
NeuroPpred-MSN: A Neuropeptide Prediction Model Based on Multi-feature Fusion and Siamese Networks.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-06-03 DOI: 10.1007/s12539-025-00730-6
Jian Wen, Minyu Chen, Yongqi Shen, Honghong Wang, Zhuoyu Wei, Lichuan Gu, Xiaolei Zhu

The discovery of neuropeptides offers numerous opportunities for identifying novel drugs and targets to treat a variety of diseases. While various computational methods have been proposed, there remains potential for further performance improvement. In this work, we introduce NeuroPpred-MSN, an innovative and efficient neuropeptide prediction model that leverages multi-feature fusion and Siamese networks. To comprehensively represent the information of neuropeptides, the peptide sequences are encoded by four encoding schemes (token embedding, word2vec embedding, protein language embedding, and handcrafted features). Then, the token embedding and word2vec embedding are fed to a Siamese network channel. In the other channel of the model, peptide sequences and their secondary structure sequences are fed into the ProtT5-XL-UniRef50 model to generate embedding features, while handcrafted encoding techniques are used to extract the physicochemical information. The two kinds of features are then fused and fed into a bidirectional gated recurrent unit (Bi-GRU) network for further processing. Ultimately, the outputs of the two channels are integrated into a fully connected layer, thereby facilitating the generation of the final prediction. The results on the independent test set indicate that NeuroPpred-MSN exhibits superior predictive performance, with an area under the receiver operating characteristic curve (AUROC) of 98.3%, exceeding the performance of other state-of-the-art predictors. Specifically, compared to other optimal results, this model exhibits improvements of 1.52% in accuracy (ACC), 1.52% in F1 score (F1), 3.2% in Matthews correlation coefficient (MCC), and 1.55% in AUROC. The model was further evaluated on imbalanced datasets, where it achieved the highest values in AUROC, ACC, MCC, sensitivity (SN), and F1, further demonstrating its robustness and generalization. The model can be accessed at the following GitHub repository: https://github.com/wenjean/NeuroPpred-MSN .
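The second channel passes fused per-residue features through a Bi-GRU; the block below sketches only that Bi-GRU pass over a fused feature sequence, with the feature dimension, hidden size, and mean pooling all being illustrative assumptions rather than the published configuration.

```python
# Illustrative Bi-GRU channel over fused per-residue features (all dimensions assumed).
import torch
import torch.nn as nn

class BiGRUChannel(nn.Module):
    def __init__(self, feat_dim: int = 1044, hidden: int = 128):
        super().__init__()
        # bidirectional GRU reads the peptide left-to-right and right-to-left
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, fused_seq: torch.Tensor) -> torch.Tensor:
        # fused_seq: (B, L, feat_dim), e.g. ProtT5 embeddings concatenated with
        # handcrafted physicochemical features per residue
        out, _ = self.gru(fused_seq)        # (B, L, 2*hidden)
        return out.mean(dim=1)              # simple pooled peptide-level representation

rep = BiGRUChannel()(torch.randn(4, 50, 1044))
```

The pooled representation from this channel would then be concatenated with the Siamese-channel output before the final fully connected layer, mirroring the two-channel integration described above.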

Citations: 0
AMFCL: Predicting miRNA-Disease Associations Through Adaptive Multi-source Modality Fusion and Contrastive Learning.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-06-02 DOI: 10.1007/s12539-025-00724-4
Yanfang Yang, Shuang Wang, Wenyue Kang, Cuina Jiao, Yinglian Gao, Jinxing Liu

Dysregulation of microRNAs (miRNAs) is a cause of progression in numerous diseases. Uncovering miRNA-disease associations (MDAs) is essential for discovering new biomarkers. Nonetheless, in contrast to conventional biological approaches, advanced computational approaches are typically more rapid and cost-effective. However, most computational methods still face several challenges: (i) integrating multi-source information (MSI); (ii) optimizing feature fusion; (iii) mitigating over-smoothing in graph-based models. This paper introduces a novel model, AMFCL. To encapsulate the miRNA-disease relationships, three types of networks are first constructed. After that, the node representations are learned via multi-layer graph sample and aggregate (GraphSAGE). An adaptive fusion mechanism (AFM) dynamically assigns weights to feature representations to optimize the fusion process. Additionally, a residual connection is used to combat the over-smoothing effect that occurs in graph-based models. The robustness of miRNA and disease embeddings is improved by contrastive learning (CL). Lastly, a multi-layer perceptron (MLP) has all feature embeddings fed into it for the computation of MDA scores. The corresponding experimental results show remarkable improvements in AMFCL compared to advanced models. Moreover, relevant case studies systematically validate the approach's effectiveness in identifying unknown MDAs.
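The adaptive fusion mechanism (AFM) is described only as dynamically weighting feature representations; a common way to realize that is with input-conditioned, softmax-normalized weights over the per-view embeddings, which the sketch below shows under assumed dimensions. It is not AMFCL's actual module.

```python
# Hypothetical adaptive fusion: input-conditioned softmax weights over three
# modality/view embeddings of equal dimension. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)     # one relevance score per view embedding

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (B, n_views, dim), e.g. embeddings from the three constructed networks
        weights = torch.softmax(self.scorer(views), dim=1)   # (B, n_views, 1)
        return (weights * views).sum(dim=1)                  # weighted fused embedding

fused = AdaptiveFusion()(torch.randn(8, 3, 128))
```

Because the weights are produced from the embeddings themselves, the fusion can emphasize whichever information source is most informative for a given miRNA or disease node, which is the behavior the abstract attributes to the AFM.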

Citations: 0
Sensing Compound Substructures Combined with Molecular Fingerprinting to Predict Drug-Target Interactions.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-04-03 DOI: 10.1007/s12539-025-00698-3
Wanhua Huang, Xuecong Tian, Ying Su, Sizhe Zhang, Chen Chen, Cheng Chen

Identification of drug-target interactions (DTIs) is critical for drug discovery and drug repositioning. However, most DTI methods that extract features from drug molecules and protein entities neglect specific substructure information of pharmacological responses, which leads to poor predictive performance. Moreover, most existing methods are based on molecular graphs or molecular descriptors to obtain abstract representations of molecules, but combining the two feature learning methods for DTI prediction remains unexplored. Therefore, a new ASCS-DTI framework for DTI prediction is proposed, which utilizes a substructure attention mechanism to flexibly capture substructures of compounds at different grain sizes, allowing the important substructure information of each molecule to be learned. Additionally, the framework combines three different molecular fingerprinting information to comprehensively characterize molecular representations. A stacked convolutional coding module processes the sequence information of target proteins in a multi-scale and multi-level view. Finally, multi-modal fusion of molecular graph features and molecular fingerprint features, along with multi-modal information encoding of DTIs, is performed by the feature fusion module. The method outperforms six advanced baseline models on different benchmark datasets: Biosnap, BindingDB, and Human, with a significant improvement in performance, particularly in maintaining strong results across different experimental settings.
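The substructure attention mechanism is not specified beyond flexibly capturing substructures at different grain sizes; the sketch below shows one generic form of attention pooling over a set of substructure embeddings, with the embedding size and the learned context vector being assumptions made for illustration.

```python
# Generic attention pooling over substructure embeddings: a learned context vector
# scores each substructure, and the molecule embedding is their weighted sum.
import torch
import torch.nn as nn

class SubstructureAttentionPool(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.context = nn.Parameter(torch.randn(dim))   # learned "importance" direction

    def forward(self, substructs: torch.Tensor) -> torch.Tensor:
        # substructs: (B, n_substructures, dim)
        scores = substructs @ self.context                   # (B, n_substructures)
        weights = torch.softmax(scores, dim=1).unsqueeze(-1)
        return (weights * substructs).sum(dim=1)             # molecule-level embedding

mol_emb = SubstructureAttentionPool()(torch.randn(2, 12, 256))
```

The attention weights indicate which substructures dominate the molecular representation, and this pooled embedding could then be concatenated with the fingerprint features before the fusion module described above.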

Citations: 0
A Multi-Task Deep Learning Approach for Simultaneous Sleep Staging and Apnea Detection for Elderly People.
IF 3.9 CAS Zone 2 (Biology) Q1 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2026-03-01 Epub Date: 2025-06-05 DOI: 10.1007/s12539-025-00721-7
Lei Shi, Ranran Gui, Li Wang, Peng Li, Qunfeng Niu
Citations: 0