Down to one network for computing crystalline materials
Yubing Qian, Ji Chen
Pub Date: 2025-10-22 | DOI: 10.1038/s43588-025-00877-8 | Nature Computational Science 5(12), 1098–1099
A recent study proposes using a single neural network to model and compute a wide range of solid-state materials, demonstrating exceptional transferability and substantially reduced computational costs — a breakthrough that could accelerate the design of next-generation materials in applications from efficient solar cells to room-temperature superconductors.
Interpolating perturbations across contexts
Han Chen, Christina V. Theodoris
Pub Date: 2025-10-15 | DOI: 10.1038/s43588-025-00830-9 | Nature Computational Science 5(11), 992–993
The Large Perturbation Model (LPM) is a computational deep learning framework that predicts gene expression responses to chemical and genetic perturbations across diverse contexts. By modeling perturbation, readout, and context jointly, LPM enables in silico hypothesis generation and drug repurposing.
In silico biological discovery with large perturbation models
Djordje Miladinovic, Tobias Höppe, Mathieu Chevalley, Andreas Georgiou, Lachlan Stuart, Arash Mehrjou, Marcus Bantscheff, Bernhard Schölkopf, Patrick Schwab
Pub Date: 2025-10-15 | DOI: 10.1038/s43588-025-00870-1 | Nature Computational Science 5(11), 1029–1040 | Open access PDF: https://www.nature.com/articles/s43588-025-00870-1.pdf
Data generated in perturbation experiments link perturbations to the changes they elicit and therefore contain information relevant to numerous biological discovery tasks—from understanding the relationships between biological entities to developing therapeutics. However, these data encompass diverse perturbations and readouts, and the complex dependence of experimental outcomes on their biological context makes it challenging to integrate insights across experiments. Here we present the large perturbation model (LPM), a deep-learning model that integrates multiple, heterogeneous perturbation experiments by representing perturbation, readout and context as disentangled dimensions. LPM outperforms existing methods across multiple biological discovery tasks, including in predicting post-perturbation transcriptomes of unseen experiments, identifying shared molecular mechanisms of action between chemical and genetic perturbations, and facilitating the inference of gene–gene interaction networks. LPM learns meaningful joint representations of perturbations, readouts and contexts, enables the study of biological relationships in silico and could considerably accelerate the derivation of insights from pooled perturbation experiments.
A large perturbation model that integrates diverse laboratory experiments is presented to predict biological responses to chemical or genetic perturbations and support various biological discovery tasks.
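The abstract's central idea, representing perturbation, readout and context as separate learned dimensions so that unseen experiment combinations can be predicted, can be illustrated with a toy factorized model. The sketch below is not the authors' LPM (a deep network trained on real perturbation data); it is a minimal numpy stand-in that fits a trilinear embedding model to synthetic outcomes by alternating least squares and then predicts a held-out perturbation–context pair. All sizes, variable names and the data-generating process are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 6 perturbations, 12 readouts
# (e.g. genes), 3 contexts (e.g. cell lines), rank-4 embeddings.
P, R, C, D = 6, 12, 3, 4

# Synthetic ground-truth factors, used only to simulate experimental outcomes.
Tp, Tr, Tc = (rng.normal(size=(n, D)) for n in (P, R, C))

def predict(Ep, Er, Ec, trips):
    # Trilinear model: elementwise product of the three embeddings, summed.
    return np.array([np.sum(Ep[p] * Er[r] * Ec[c]) for p, r, c in trips])

# Observe every (perturbation, readout, context) triple except the
# (perturbation 0, context 0) pair, held out to mimic an unseen experiment.
train = [(p, r, c) for p in range(P) for r in range(R) for c in range(C)
         if not (p == 0 and c == 0)]
y = predict(Tp, Tr, Tc, train)

# Random initial embeddings, refined by alternating least squares: with two
# factor sets fixed, fitting the third is an ordinary linear regression whose
# features are the elementwise product of the other two embeddings.
Ep, Er, Ec = (rng.normal(size=(n, D)) for n in (P, R, C))

def als_step(E, axis):
    for i in range(E.shape[0]):
        rows = [k for k, t in enumerate(train) if t[axis] == i]
        if not rows:
            continue
        feats = []
        for k in rows:
            p, r, c = train[k]
            others = [Ep[p], Er[r], Ec[c]]
            del others[axis]
            feats.append(others[0] * others[1])
        E[i] = np.linalg.lstsq(np.array(feats), y[rows], rcond=None)[0]

init_loss = np.mean((predict(Ep, Er, Ec, train) - y) ** 2)
for _ in range(10):
    for E, ax in ((Ep, 0), (Er, 1), (Ec, 2)):
        als_step(E, ax)
final_loss = np.mean((predict(Ep, Er, Ec, train) - y) ** 2)

# Disentangled factors let us predict the held-out experiment: perturbation 0
# and context 0 each appear in other combinations, so their embeddings are
# constrained by data and the unseen pair can be interpolated.
held = [(0, r, 0) for r in range(R)]
held_pred = predict(Ep, Er, Ec, held)
```

The held-out prediction works precisely because the model shares embeddings across experiments, which is the sense in which a disentangled representation transfers to unseen perturbation–context combinations.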
ECloudGen: leveraging electron clouds as a latent variable to scale up structure-based molecular design
Odin Zhang, Jieyu Jin, Zhenxing Wu, Jintu Zhang, Po Yuan, Yuntao Yu, Haitao Lin, Haiyang Zhong, Xujun Zhang, Chenqing Hua, Weibo Zhao, Zhengshuo Zhang, Kejun Ying, Yufei Huang, Huifeng Zhao, Yu Kang, Peichen Pan, Jike Wang, Dong Guo, Shuangjia Zheng, Chang-Yu Hsieh, Tingjun Hou
Pub Date: 2025-10-15 | DOI: 10.1038/s43588-025-00886-7 | Nature Computational Science 5(11), 1017–1028
Structure-based molecule generation represents a notable advancement in artificial intelligence-driven drug design. However, progress in this field is constrained by the scarcity of structural data on protein–ligand complexes. Here we propose a latent variable approach that bridges the gap between ligand-only data and protein–ligand complexes, enabling target-aware generative models to explore a broader chemical space, thereby enhancing the quality of molecular generation. Inspired by quantum molecular simulations, we introduce ECloudGen, a generative model that leverages electron clouds as meaningful latent variables. ECloudGen incorporates techniques such as latent diffusion models, Llama architectures and a contrastive learning task, which organizes the chemical space into a structured and highly interpretable latent representation. Benchmark studies demonstrate that ECloudGen outperforms state-of-the-art methods by generating more potent binders with superior physicochemical properties and by covering a broader chemical space. The incorporation of electron clouds as latent variables not only improves generative performance but also introduces model-level interpretability, as illustrated in our case studies.
This study presents ECloudGen, which uses latent diffusion to generate electron clouds from protein pockets and decodes them into molecules. The adopted two-stage training expands the chemical space accessible to generative drug design.
How neural rhythms can guide word recognition
Sophie Slaats
Pub Date: 2025-10-10 | DOI: 10.1038/s43588-025-00888-5 | Nature Computational Science 5(10), 848–849
The recent computational model ‘BRyBI’ proposes that gamma, theta, and delta neural oscillations can guide the process of word recognition by providing temporal windows for the integration of bottom-up input with top-down information.
Computational and ethical considerations for using large language models in psychotherapy
Renwen Zhang, Han Meng, Marion Neubronner, Yi-Chieh Lee
Pub Date: 2025-10-10 | DOI: 10.1038/s43588-025-00874-x | Nature Computational Science 5(10), 854–862
Large language models (LLMs) hold great potential for augmenting psychotherapy by enhancing accessibility, personalization and engagement. However, a systematic understanding of the roles that LLMs can play in psychotherapy remains underexplored. In this Perspective, we propose a taxonomy of LLM roles in psychotherapy that delineates six specific roles of LLMs across two key dimensions: artificial intelligence autonomy and emotional engagement. We discuss key computational and ethical challenges, such as emotion recognition, memory retention, privacy and emotional dependency, and offer recommendations to address these challenges.
Large language models (LLMs) offer promising ways to enhance psychotherapy through greater accessibility, personalization and engagement. This Perspective introduces a typology that categorizes the roles of LLMs in psychotherapy along two critical dimensions: autonomy and emotional engagement.
Developing mental health AI tools that improve care across different groups and contexts
Nicole Martinez-Martin
Pub Date: 2025-10-10 | DOI: 10.1038/s43588-025-00882-x | Nature Computational Science 5(10), 839–840
To realize the potential of mental health AI applications to deliver improved care, a multipronged approach is needed, including representative AI datasets, research practices that reflect and anticipate potential sources of bias, stakeholder engagement, and equitable design practices.
Implicit neural image field for biological microscopy image compression
Gaole Dai, Rongyu Zhang, Qingpo Wuwu, Cheng-Ching Tseng, Yu Zhou, Shaokang Wang, Siyuan Qian, Ming Lu, Ali Ata Tuz, Matthias Gunzer, Tiejun Huang, Jianxu Chen, Shanghang Zhang
Pub Date: 2025-10-10 | DOI: 10.1038/s43588-025-00889-4 | Nature Computational Science 5(11), 1041–1050 | Open access PDF: https://www.nature.com/articles/s43588-025-00889-4.pdf
The rapid pace of innovation in biological microscopy has produced increasingly large images, putting pressure on data storage and impeding efficient data sharing, management and visualization. This trend necessitates new, efficient compression solutions, as traditional coder–decoder methods often struggle with the diversity of bioimages, leading to suboptimal results. Here we show an adaptive compression workflow based on implicit neural representation that addresses these challenges. Our approach enables application-specific compression, supports images of varying dimensionality and allows arbitrary pixel-wise decompression. On a wide range of real-world microscopy images, we demonstrate that our workflow achieves high, controllable compression ratios while preserving the critical details necessary for downstream scientific analysis.
This study presents a flexible AI-based method for compressing microscopy images, achieving high compression while preserving details critical for analysis, with support for task-specific optimization and arbitrary-resolution decompression.
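The core trick behind this family of methods, compressing an image into a coordinate-to-intensity function and then decompressing at arbitrary pixel positions, can be sketched compactly. The code below is not the paper's workflow: to keep the example short and deterministic, the trained implicit neural network is replaced by random Fourier features plus a closed-form ridge regression, and the image, sizes and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 32x32 "microscopy" image: a smooth intensity field standing in
# for real data (illustration only).
H = W = 32
yy, xx = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing="ij")
image = (0.5 + 0.3 * np.sin(6 * xx) * np.cos(4 * yy)
         + 0.2 * np.exp(-((xx - 0.3) ** 2 + (yy - 0.7) ** 2) / 0.05))

# Coordinate encoder: random Fourier features, a common ingredient of
# implicit neural representations. A trained MLP is swapped here for a
# closed-form ridge regression on these features, to keep the sketch short.
F = 128
B = rng.normal(scale=4.0, size=(2, F))

def encode(coords):
    proj = 2 * np.pi * coords @ B
    return np.concatenate(
        [np.sin(proj), np.cos(proj), np.ones((len(coords), 1))], axis=1)

coords = np.stack([yy.ravel(), xx.ravel()], axis=1)
X = encode(coords)

# Ridge fit: the weight vector *is* the compressed representation.
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ image.ravel())

recon = (X @ w).reshape(H, W)
mse = np.mean((recon - image) ** 2)

# Arbitrary pixel-wise decompression: query any coordinate, e.g. a 2x grid,
# without ever reconstructing the original raster first.
fyy, fxx = np.meshgrid(np.linspace(0, 1, 2 * H), np.linspace(0, 1, 2 * W),
                       indexing="ij")
fine = (encode(np.stack([fyy.ravel(), fxx.ravel()], axis=1)) @ w
        ).reshape(2 * H, 2 * W)

# Nominal compression ratio: stored weights vs. raw pixels.
ratio = image.size / w.size
```

Storing only `w` (plus the seed for `B`) in place of the raster is what makes the representation a compression scheme; the real method additionally tunes the network and bit budget per application, which this sketch does not attempt.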
Trials for computational psychiatry
Quentin J. M. Huys, Michael Browning
Pub Date: 2025-10-10 | DOI: 10.1038/s43588-025-00879-6 | Nature Computational Science 5(10), 841–843
Computational psychiatry is increasingly delivering causal evidence by focusing on interventions research and clinical trials. Causal evidence could improve patient outcomes through improved precision, repurposing, novel interventions, scaling of psychotherapy and better translation to the clinic.
Rethinking mental illness through a computational lens
Pub Date: 2025-10-10 | DOI: 10.1038/s43588-025-00894-7 | Nature Computational Science 5(10), 837–838 | Open access PDF: https://www.nature.com/articles/s43588-025-00894-7.pdf
Nature Computational Science presents a Focus that explores the field of computational psychiatry and its key challenges, from privacy concerns to the ethical use of artificial intelligence, offering new insights into the future of mental health care.