Pub Date: 2026-03-23 | DOI: 10.1007/s12539-026-00824-9
Inayatul Haq, Haomin Liang, Zheng Gong, Zehong Xia, Wei Zhang, Rashid Khan, Faizan Ahmad, Yan Kang, Bingding Huang
Glomerular crescent lesions are critical indicators of severe kidney injury and are closely associated with disease progression. However, their automated identification remains challenging due to limited annotated data, class imbalance, and subtle morphological variations. This study proposes a comprehensive deep learning (DL) framework for segmentation and classification of glomerular crescent lesions in histopathology images, with emphasis on robustness under limited data conditions. The ISICDM2024 Challenge dataset is used for evaluation. For segmentation, several baseline models are first evaluated, including DeepLabV3, U-Net, Transformer-based U-Net, and a feature pyramid network (FPN) with a ResNet-34 backbone. Similarly, for classification, multiple baseline models are evaluated, including EfficientNetV2-B0, ResNet-50, DenseNet-121, hybrid CNNs, CTransPath, and RetCCL. Motivated by the strong performance of FPN with ResNet-34 and DenseNet-121, two customized models are developed, namely CrescentSegNet for segmentation and CrescentDenseNet for classification. Comprehensive ablation studies are conducted, and interpretability and reliability are assessed using Grad-CAM, saliency mapping, uncertainty estimation, calibration analysis, and t-SNE. Cross-dataset evaluation on SICAPv2 and BreaKHis 400× confirms strong generalization and robustness. The proposed framework achieves competitive performance while maintaining efficiency and interpretability.
Title: Exploring Deep Learning Models for Small Histopathology Datasets: Segmentation and Classification of Glomerular Crescent Lesions with Ablation, Interpretability, and Calibration Analyses
Journal: Interdisciplinary Sciences: Computational Life Sciences
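The abstract above mentions calibration analysis among its reliability assessments. As a generic, hedged illustration of what such an analysis computes (not the authors' code), the expected calibration error (ECE) bins predictions by confidence and averages the gap between per-bin confidence and accuracy:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: mean |accuracy - confidence| over equal-width confidence bins."""
    confidences = probs.max(axis=1)        # predicted confidence per sample
    predictions = probs.argmax(axis=1)     # predicted class per sample
    accuracies = (predictions == labels).astype(float)
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # weight each bin's gap by the fraction of samples it holds
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return ece

# Perfectly calibrated toy case: full confidence and always correct -> ECE 0
probs = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
ece = expected_calibration_error(probs, labels)  # 0.0
```

A model that is 90% confident but only 50% accurate would instead score an ECE of 0.4 on this metric.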
Viruses are the most abundant biological entities on Earth, playing essential roles in shaping microbial communities, driving evolution, and maintaining ecosystem functions. Metagenomic sequencing has unveiled a vast landscape of uncharacterized viral "dark matter", comprising highly divergent sequences that elude traditional taxonomic approaches. Here, we develop PhaGCN_Cluster, a next-generation viral classification tool built upon a graph convolutional neural network (GCN) framework. By integrating protein-level sequence similarity and contig-level genomic features, PhaGCN_Cluster establishes a scalable knowledge graph-based analytical system. The optimized algorithm yields significant gains in computational efficiency, supporting accurate taxonomic assignment of up to 300,000 contigs per run. Compared with existing methods, PhaGCN_Cluster demonstrates superior classification accuracy and F1-scores, particularly under conditions of low sequence similarity, and exhibits strong robustness in detecting evolutionarily distant viruses. Notably, PhaGCN_Cluster incorporates an updated logic for assigning "_like" taxa, which enhances its capacity to accommodate novel viral groups while preserving high precision, though at the cost of a slight reduction in recall. By generating high-fidelity network graphs, PhaGCN_Cluster uncovers previously unrecognized clades and bridges evolutionary gaps between reference viruses and novel sequences, thereby providing critical insights into viral diversity and evolution. PhaGCN_Cluster represents an interpretable, efficient, and scalable solution for automated virus classification. The source code of PhaGCN_Cluster is available at https://github.com/xiahaolong/PhaGCN_Cluster.
Title: PhaGCN_Cluster: A Scalable and Robust Framework for Automated Classification and Discovery of Viral Dark Matter from Metagenomes
Authors: Hao-Long Xia, Pei-Yu Liang, Wen-Guang Yuan, Xu-Dong Cao, Yanni Sun, Jing-Zhe Jiang, Li-Hong Yuan
Pub Date: 2026-03-17 | DOI: 10.1007/s12539-026-00820-z
Journal: Interdisciplinary Sciences: Computational Life Sciences
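PhaGCN_Cluster is built on a graph convolutional network. The sketch below shows the standard GCN propagation rule with symmetric normalization (a generic textbook illustration, not the tool's actual code); the toy adjacency matrix stands in for a contig similarity graph:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# Toy graph: nodes 0 and 1 connected, node 2 isolated
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
H = np.eye(3)   # one-hot node features
W = np.eye(3)   # identity weights, for illustration only
out = gcn_layer(A, H, W)
```

Connected nodes mix their features (rows 0 and 1 become averages of each other), while the isolated node keeps its own feature, which is how label information propagates from reference viruses to nearby unknown contigs.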
Pub Date: 2026-03-13 | DOI: 10.1007/s12539-026-00819-6
Jian Zhang, Pengli Lu, Fentang Gao
As a product of cellular metabolic activity, changes in metabolite levels are closely related to the occurrence and development of diseases, making the prediction of metabolite-disease associations a key issue in biomedical research. Traditional methods face the challenges of insufficient long-range dependency modeling and poor interpretability. To address these challenges, we propose GMC-DMA, a dual-path dynamic contrastive learning framework for metabolite-disease association prediction that integrates graph neural networks (GNN) and Mamba architectures, enhanced by fast Kolmogorov-Arnold networks (FastKAN). First, we construct a multi-source heterogeneous network that contains similarity and known association information. Next, a residual graph convolutional network (ResGCN) is designed to capture local topological features, and the Mamba architecture is introduced to establish a selective state space model (SSM), which handles global dependencies with linear time complexity and alleviates the over-smoothing problem of message passing. The InfoNCE loss function then implements cross-modal contrastive learning, and the sample imbalance problem is addressed by a dynamic negative sampling strategy. Finally, a bilinear decoder enhanced by FastKAN outputs the association probability. Extensive experimental results show that the comprehensive performance of GMC-DMA is significantly better than that of the baseline methods, demonstrating its effectiveness in predicting disease-related metabolites. In addition, case studies confirm that GMC-DMA is reliable in discovering potential metabolites.
Title: GMC-DMA: GNN-Mamba Co-Contrastive Optimization for Disease-Metabolite Association Prediction
Journal: Interdisciplinary Sciences: Computational Life Sciences
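The framework above uses the InfoNCE loss for cross-modal contrastive learning. A minimal NumPy sketch of the standard InfoNCE objective, where paired rows of two embedding matrices are treated as positives and all other rows as negatives (a generic illustration, not the GMC-DMA implementation):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE: each row of z1 should match the same row of z2 against all others."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # L2-normalize embeddings
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                              # temperature-scaled similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # positives sit on the diagonal

# Identical, well-separated views give a near-zero loss
z = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = info_nce(z, z)
```

Lowering the temperature `tau` sharpens the softmax, penalizing hard negatives more strongly, which is the usual knob in contrastive training.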
Pub Date: 2026-03-10 | DOI: 10.1007/s12539-026-00822-x
Xiang Chen, Wenfeng He, Junnan Yu, Zhaoyu Fang
The analysis of single-cell RNA sequencing (scRNA-seq) data is beset by formidable hurdles, including a large feature space, widespread sparsity, noise contamination, and inter-batch variability, which collectively compromise the accuracy of cell clustering and subsequent downstream analyses. To overcome these obstacles, we present scCMA, a novel computational framework that synergistically combines a discriminative representation learning scheme with a masked reconstruction autoencoder architecture to generate stable and biologically meaningful cell embeddings. The contrastive module sharpens the distinction between cell types by maximizing similarities within types while minimizing them across types, thereby implicitly mitigating batch effects without requiring prior dataset information. Concurrently, the masked autoencoder learns to reconstruct randomly masked gene expression profiles, enabling the model to capture global transcriptional dependencies and identify rare biological features while diminishing the influence of noise and sparsity. Comprehensive evaluations on a diverse array of public datasets reveal that scCMA achieves superior clustering precision, effectively corrects for batch differences without sacrificing biological variance, and excels at recognizing rare cellular subsets. Moreover, the embeddings generated by scCMA accurately reflect the temporal progression of cell development, facilitating the faithful modeling of cellular lineage progression.
Title: scCMA: A Contrastive Masked Autoencoder Framework for Robust Representation Learning of scRNA-seq Data
Journal: Interdisciplinary Sciences: Computational Life Sciences
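scCMA's masked autoencoder reconstructs randomly masked gene expression entries. The sketch below illustrates the generic masked-reconstruction objective, scoring error only on the hidden positions; the `reconstruct` callable is a hypothetical stand-in for the paper's (unspecified) decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_loss(X, reconstruct, mask_rate=0.3):
    """Hide random entries of an expression matrix, then score reconstruction
    only on the masked positions, as a masked-autoencoder objective does."""
    mask = rng.random(X.shape) < mask_rate        # positions to hide
    X_masked = np.where(mask, 0.0, X)             # zero out masked entries
    X_hat = reconstruct(X_masked)
    if not mask.any():
        return 0.0
    return float(((X_hat - X)[mask] ** 2).mean())  # MSE on masked entries only

# With an oracle that returns the true matrix, the loss is exactly zero
X = rng.random((5, 8))
loss = masked_reconstruction_loss(X, lambda _: X)
```

Restricting the loss to masked positions forces the model to infer hidden values from the visible context rather than copy its input, which is what lets it learn cross-gene dependencies.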
Traditional spatial transcriptomics methods typically rely on the direct relationship between spatial location and gene expression data, but they often fail to capture the intricate structures embedded in spatial data. To address this limitation, we introduce SpatioFreq, an innovative approach that tackles two fundamental tasks: spatial domain identification and cell type deconvolution. In the spatial domain identification task, the goal is to identify biologically meaningful functional regions through spatial clustering, thereby revealing the spatial organization of cells within tissues. For this task, SpatioFreq utilizes the Laplacian matrix to extract frequency-domain features, enabling detection of subtle structures and dynamic patterns within spatial data and thereby enhancing the accuracy of spatial clustering. Additionally, by incorporating graph self-supervised contrastive learning, SpatioFreq optimizes long-range dependencies within the spatial data, further improving spatial structure modeling. In cell type deconvolution, contrastive learning refines the relationship between spatial position and single-cell embeddings, enhancing the accuracy of the inferred cell type distributions. The dual-task design of SpatioFreq enables information sharing between tasks and has been validated across various datasets. Comparative analysis with current mainstream methods demonstrates that SpatioFreq significantly improves both the accuracy and efficiency of spatial transcriptomics analysis. Notably, in the DCIS breast cancer dataset, SpatioFreq's spatial heterogeneity analysis uncovers complex interactions between tumor cells and their microenvironment. These findings provide new insights into potential therapeutic targets and offer valuable guidance for precision oncology.
Title: SpatioFreq: A Deep Learning Framework for Decoding Cellular and Tissue Landscapes Across Organisms Using Spatial Transcriptomics
Authors: Zhenghui Wang, Ruoyan Dai, Mengqiu Wang, Zhiwei Zhang, Lixin Lei, Zhenxing Li, Kaitai Han, Zijun Wang, Qianjin Guo
Pub Date: 2026-03-06 | DOI: 10.1007/s12539-025-00811-6
Journal: Interdisciplinary Sciences: Computational Life Sciences
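SpatioFreq extracts frequency-domain features via the Laplacian matrix. As a generic illustration of graph-frequency analysis (not the paper's pipeline), the eigenvectors of the Laplacian L = D - A form a graph Fourier basis whose low-frequency (small-eigenvalue) modes encode smooth spatial domains:

```python
import numpy as np

def laplacian_frequency_basis(A, k=2):
    """Graph Fourier basis: eigenvectors of L = D - A, ordered by eigenvalue
    (frequency). The lowest modes vary slowly over the graph, so they pick
    out large, smooth spatial domains."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
    return eigvals[:k], eigvecs[:, :k]

# Two disconnected 2-spot "spatial domains": the Laplacian has one zero
# eigenvalue per connected component, i.e. two zero-frequency modes here
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
freqs, modes = laplacian_frequency_basis(A, k=2)
```

Clustering spots on these low-frequency coordinates is a classic route to spatial domain detection (spectral clustering); higher modes capture progressively finer structure.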
Existing drug-target binding affinity (DTA) models still face two major challenges. First, current multimodal approaches often rely on fixed fusion strategies or single model architectures, which limits their ability to adaptively capture the complex and heterogeneous relationships between drugs and targets. Second, heavy dependence on a single learning algorithm reduces model robustness and generalization, resulting in persistently large prediction errors. We propose MLDTA, a multimodal framework for DTA prediction that integrates dynamic feature fusion and ensemble-inspired modeling principles. MLDTA jointly exploits structural information, Geary autocorrelation descriptors, and tripeptide composition to construct complementary drug and target representations. Instead of relying on a single predictor, five representative DTA models from the literature are incorporated as auxiliary predictive modules (APMs), enabling affinity prediction from multiple algorithmic perspectives. These APMs are integrated with the learned drug and target representations through a dynamic fusion mechanism based on attention modules, which adaptively learns the relative importance of different features and predictive signals, thereby enhancing cross-modal interaction and reducing dependence on any individual model. Evaluation on standard datasets indicates that our model surpasses existing methods. Case studies further highlight MLDTA's effectiveness in drug screening.
Title: MLDTA: An Ensemble-Driven Multimodal Model with Dynamic Fusion for Drug-Target Affinity Prediction
Authors: Xiaohan Mao, Peng Zhang, Xinyu Xu, Xinzhuang Zhang, Liang Cao, Min He, Zhenzhong Wang, Zhipeng Ke, Wei Xiao
Pub Date: 2026-03-05 | DOI: 10.1007/s12539-026-00813-y
Journal: Interdisciplinary Sciences: Computational Life Sciences
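MLDTA's dynamic fusion weighs feature representations and auxiliary modules with attention. A toy sketch of attention-style fusion, where a softmax over relevance scores yields adaptive combination weights; in practice the scores would be learned, here they are fixed hypothetical values for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def dynamic_fusion(features, scores):
    """Attention-style fusion: softmax over relevance scores gives adaptive
    weights for combining per-module feature vectors into one representation."""
    weights = softmax(np.asarray(scores, dtype=float))
    stacked = np.stack(features)            # (n_modules, dim)
    return weights @ stacked, weights       # weighted sum, plus the weights used

# Three hypothetical predictive modules; the third receives a much higher score
f = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
fused, w = dynamic_fusion(f, scores=[0.0, 0.0, 5.0])
```

Because the weights sum to one and respond to the scores, the fused vector tracks whichever module the attention deems most relevant for a given drug-target pair.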
Magnetic resonance imaging (MRI) is an essential diagnostic tool in medicine, providing clinicians with non-invasive images of brain structures and pathological conditions. Brain tumor detection is a vital application that demands accurate and efficient approaches for both diagnosis and treatment planning. Manual examination of MRI scans is challenging because tumor features are inconsistent, with heterogeneous appearance and irregular dimensions, which can lead to inaccurate assessments of tumor size. To address these challenges, this paper proposes an Automated Classification and Grading Diagnosis Model (ACGDM) using MRI images. Unlike conventional methods, ACGDM introduces a Multi-Scale Graph Neural Network (MSGNN), which dynamically captures hierarchical and multi-scale dependencies in MRI data, enabling more accurate feature representation and contextual analysis. Additionally, the Spatio-Temporal Transformer Attention Mechanism (STTAM) effectively models both spatial MRI patterns and temporal evolution by incorporating cross-frame dependencies, enhancing the model's sensitivity to subtle disease progression. By analyzing multi-modal MRI sequences, ACGDM dynamically adjusts its focus across spatial and temporal dimensions, enabling precise identification of salient features. Simulations are conducted using Python and standard libraries to evaluate the model on the BRATS 2018, 2019, and 2020 datasets and the Br235H dataset, encompassing diverse MRI scans with expert annotations. Extensive experimentation demonstrates 99.8% accuracy in detecting various tumor types, showcasing its potential to revolutionize diagnostic practices and improve patient outcomes.
Title: Automated Brain Tumor Classification and Grading Using Multi-scale Graph Neural Network with Spatio-Temporal Transformer Attention Through MRI Scans
Authors: Somya Srivastava, Parita Jain, Sanjay Kr Pandey, Gaurav Dubey, Nripendra Narayan Das
Pub Date: 2026-03-01 | DOI: 10.1007/s12539-025-00718-2
Journal: Interdisciplinary Sciences: Computational Life Sciences, pages 122-150
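The STTAM component applies transformer attention across spatial and temporal dimensions. Its core building block is standard scaled dot-product attention, sketched generically below (not the paper's code); queries, keys, and values here are toy vectors standing in for patch or frame embeddings:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V, the core operation a
    spatio-temporal transformer applies over positions and frames."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # scaled similarity scores
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V, weights

# One query attending over two keys; the matching key dominates the output
Q = np.array([[10.0, 0.0]])
K = np.array([[10.0, 0.0], [0.0, 10.0]])
V = np.array([[1.0, 0.0], [0.0, 1.0]])
out, w = scaled_dot_product_attention(Q, K, V)
```

The sqrt(d) scaling keeps the pre-softmax scores from growing with dimension, so gradients stay usable; cross-frame attention simply lets the key/value set span multiple time points.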
Protein-protein interactions (PPIs) are essential therapeutic targets, yet their large and relatively flat interfaces hinder the development of small-molecule inhibitors. Traditional computational approaches rely heavily on existing chemical libraries or expert heuristics, restricting exploration of novel chemical space. To address these challenges, we present Hot2Mol, a generative deep learning framework for the de novo design of target-specific and drug-like PPI inhibitors. Hot2Mol captures crucial pharmacophoric features from hot-spot residues, allowing precise targeting of PPI interfaces while eliminating the need for known bioactive ligands. The framework integrates three main components: a conditional transformer for pharmacophore-guided, property-constrained molecular generation; an E(n)-equivariant graph neural network to ensure accurate spatial alignment with PPI hot-spot pharmacophores; and a variational autoencoder to sample novel and diverse molecular structures. Comprehensive assessments demonstrate that Hot2Mol outperforms state-of-the-art models in binding affinity, drug-likeness, synthetic accessibility, novelty, and uniqueness. Molecular dynamics simulations further confirm the strong binding stability of generated compounds. Case studies underscore Hot2Mol's ability to design high-affinity and selective PPI inhibitors, highlighting its potential to accelerate rational PPI-targeted drug discovery.
{"title":"Hot-Spot-Guided Generative Deep Learning for Drug-Like PPI Inhibitor Design.","authors":"Heqi Sun, Jiayi Li, Yufang Zhang, Shenggeng Lin, Junwei Chen, Hong Tan, Ruixuan Wang, Xueying Mao, Jianwei Zhao, Rongpei Li, Dong-Qing Wei","doi":"10.1007/s12539-025-00756-w","journal":"Interdisciplinary Sciences: Computational Life Sciences","pages":"180-194","PeriodicalIF":3.9,"publicationDate":"2026-03-01"}
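The E(n)-equivariant component of the framework above can be illustrated with a minimal sketch: one EGNN-style coordinate update in which each point is shifted along its difference vectors to the others, weighted by a function of pairwise distance and neighbour features. The fixed Gaussian weight, scalar features, and 2-D toy coordinates are assumptions for illustration, not Hot2Mol's learned network; the point demonstrated is equivariance, i.e. rotating the input rotates the output identically.

```python
import math

def egnn_layer(coords, feats):
    """One E(n)-equivariant coordinate update (EGNN-style sketch).

    coords: list of [x, y] points; feats: list of scalar node features.
    Each point i is shifted by sum_j w_ij * (x_i - x_j), where the
    weight w_ij depends only on the neighbour feature and the squared
    distance -- hence the update commutes with rotations/translations.
    The Gaussian weight is a stand-in for a learned message network.
    """
    n = len(coords)
    new_coords = []
    for i in range(n):
        shift = [0.0, 0.0]
        for j in range(n):
            if i == j:
                continue
            diff = [coords[i][k] - coords[j][k] for k in range(2)]
            d2 = sum(v * v for v in diff)            # invariant: squared distance
            w = feats[j] * math.exp(-d2)             # invariant message weight
            shift = [shift[k] + w * diff[k] for k in range(2)]
        new_coords.append([coords[i][k] + shift[k] for k in range(2)])
    return new_coords
```

Because every geometric quantity entering the weight is rotation-invariant and the shift is built from difference vectors, rotating all inputs by any angle and then applying the layer gives the same result as applying the layer first and rotating the output.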
Pub Date: 2026-03-01 | Epub Date: 2025-07-30 | DOI: 10.1007/s12539-025-00743-1
Hüseyin Fırat, Hüseyin Üzen
Brain tumors (BT) can cause fatal outcomes by affecting body functions, making precise early detection via magnetic resonance imaging (MRI) examinations critical. The complex variations found in BT cells may pose challenges in identifying the tumor type and selecting the most suitable treatment strategy, potentially resulting in differing assessments by doctors. As a result, in recent years, AI-powered diagnostic systems have been developed to accurately and efficiently identify different types of BT from MRI images. Notably, state-of-the-art deep learning architectures, which have demonstrated efficacy in diverse domains, are now being employed effectively for classifying brain MRI images. This research presents a hybrid model that integrates a spatial attention mechanism (SAM) with ConvNeXt to classify three types of BT: meningioma, pituitary, and glioma. The hybrid model uses ConvNeXt to enhance the receptive field, capturing information from a broader spatial context, which is crucial for recognizing tumor patterns spanning multiple pixels. SAM is applied after ConvNeXt, enabling the network to selectively focus on informative regions, thereby improving the model's ability to distinguish BT types and capture complex spatial relationships. Tested on the BSF and Figshare datasets, the proposed model achieves a remarkable accuracy of 99.39% and 98.86%, respectively, outperforming recent studies while reaching these results in fewer training epochs.
This hybrid model marks a major step forward in the automatic classification of BT, demonstrating superior accuracy with efficient training.
{"title":"Classification of Brain Tumors in MRI Images with Brain-CNXSAMNet: Integrating Hybrid ConvNeXt and Spatial Attention Module Networks.","authors":"Hüseyin Fırat, Hüseyin Üzen","doi":"10.1007/s12539-025-00743-1","journal":"Interdisciplinary Sciences: Computational Life Sciences","pages":"1-21","PeriodicalIF":3.9,"publicationDate":"2026-03-01"}
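The spatial attention mechanism described in the abstract above can be sketched in a few lines: channel-wise average and max pooling produce a per-location statistic, a sigmoid turns it into a mask in (0, 1), and the mask reweights every channel at that location. The fixed avg + max combination is an assumption standing in for the small learned convolution that CBAM-style attention modules use; the paper's actual SAM layer may differ.

```python
import math

def spatial_attention(feature_map):
    """Minimal spatial attention sketch over a [channels][h][w] nested list.

    At each spatial position, the channel-wise average and maximum are
    combined and squashed through a sigmoid to form an attention mask;
    every channel at that position is then scaled by the mask, so the
    network emphasises informative regions and suppresses the rest.
    """
    c = len(feature_map)
    h, w = len(feature_map[0]), len(feature_map[0][0])
    out = [[[0.0] * w for _ in range(h)] for _ in range(c)]
    for i in range(h):
        for j in range(w):
            vals = [feature_map[k][i][j] for k in range(c)]
            avg, mx = sum(vals) / c, max(vals)
            # a learned 7x7 conv usually mixes the pooled maps; a plain
            # sum is used here purely for illustration
            mask = 1.0 / (1.0 + math.exp(-(avg + mx)))
            for k in range(c):
                out[k][i][j] = feature_map[k][i][j] * mask
    return out
```

Placing such a mask after a ConvNeXt backbone, as the abstract describes, lets the large-receptive-field features decide *where* to look while the attention map rescales *how much* each location contributes to the final classifier.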