Pub Date : 2025-11-24  DOI: 10.1038/s43588-025-00937-z
Title: Publisher Correction: On the compatibility of generative AI and generative linguistics
Authors: Eva Portelance, Masoud Jasbi
Journal: Nature Computational Science 6(1), p. 109 (open access PDF: https://www.nature.com/articles/s43588-025-00937-z.pdf)
Pub Date : 2025-11-21  DOI: 10.1038/s43588-025-00913-7
Title: Viability of using LLMs as models of human language processing
Authors: Alex Murphy
Journal: Nature Computational Science (advance online publication)
Pub Date : 2025-11-21  DOI: 10.1038/s43588-025-00904-8
Title: Discovering physical laws with parallel symbolic enumeration
Authors: Kai Ruan, Yilong Xu, Ze-Feng Gao, Yang Liu, Yike Guo, Ji-Rong Wen, Hao Sun
Journal: Nature Computational Science 6(1), pp. 53-66 (open access PDF: https://www.nature.com/articles/s43588-025-00904-8.pdf)
Abstract: Symbolic regression plays a crucial role in modern scientific research owing to its ability to discover concise and interpretable mathematical expressions from data. A key challenge lies in searching an infinite space for parsimonious and generalizable mathematical formulas that still fit the training data. For over a decade, existing algorithms have faced a critical bottleneck in accuracy and efficiency when handling complex problems, which hinders the application of symbolic regression to scientific exploration across interdisciplinary domains. To this end, we introduce parallel symbolic enumeration (PSE) to efficiently distill generic mathematical expressions from limited data. Experiments show that PSE achieves higher accuracy and faster computation than state-of-the-art baseline algorithms across over 200 synthetic and experimental problem sets (for example, improving recovery accuracy by up to 99% and reducing runtime by an order of magnitude). PSE represents an advance in accurate and efficient data-driven discovery of symbolic, interpretable models (for example, underlying physical laws) and improves the scalability of symbolic learning.
Editor's summary: In this work, the authors introduce parallel symbolic enumeration (PSE), a method that discovers physical laws from data with improved accuracy and speed. By evaluating millions of expressions in parallel and reusing computations, PSE outperforms state-of-the-art methods.
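The paper's PSE pipeline is not reproduced here, but the core idea named in the summary — enumerating candidate expressions, scoring each against the data, and reusing computed values across candidates — can be sketched in Python. The tiny operator set, the function names, and the depth-one search space are illustrative assumptions, not the paper's actual algorithm:

```python
import itertools
import numpy as np

# Candidate building blocks; a real system enumerates a far larger space
# in parallel. Leaf values are computed once and reused by every candidate,
# mirroring the "reusing computations" idea.
UNARY = {"sin": np.sin, "cos": np.cos, "exp": np.exp}
BINARY = {"+": np.add, "-": np.subtract, "*": np.multiply}

def enumerate_expressions(x):
    """Yield (description, values) for small expression trees over x."""
    leaves = [("x", x), ("x^2", x * x)]   # evaluated once, shared below
    for name, vals in leaves:
        yield name, vals
    for op, f in UNARY.items():
        for name, vals in leaves:
            yield f"{op}({name})", f(vals)
    for op, f in BINARY.items():
        for (n1, v1), (n2, v2) in itertools.product(leaves, repeat=2):
            yield f"({n1} {op} {n2})", f(v1, v2)

def best_fit(x, y):
    """Return the enumerated expression minimizing mean squared error."""
    return min(enumerate_expressions(x),
               key=lambda item: float(np.mean((item[1] - y) ** 2)))[0]
```

Vectorizing each candidate's evaluation over all data points (and, in a real system, batching candidates across workers or GPU threads) is what makes exhaustive enumeration tractable.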
Pub Date : 2025-11-21  DOI: 10.1038/s43588-025-00931-5
Title: How to respond to reviewers
Journal: Nature Computational Science 5(11), p. 975 (open access PDF: https://www.nature.com/articles/s43588-025-00931-5.pdf)
Abstract: We provide recommendations on how to write an effective point-by-point response document.
Pub Date : 2025-11-21  DOI: 10.1038/s43588-025-00921-7
Title: Efficient methods for facilitating topological photonics and acoustics computation
Authors: Zhaohui Dong, Luqi Yuan
Journal: Nature Computational Science 5(12), pp. 1106-1107
Abstract: A recent study proposes efficient numerical algorithms to reduce the computational resources required for solving the edge states in large-scale photonic or acoustic structures.
Pub Date : 2025-11-20  DOI: 10.1038/s43588-025-00903-9
Title: Periodicity-aware deep learning for polymers
Authors: Yuhui Wu, Cong Wang, Xintian Shen, Tianyi Zhang, Peng Zhang, Jian Ji
Journal: Nature Computational Science 5(12), pp. 1214-1226
Abstract: Deep learning has revolutionized chemical research by accelerating the discovery and understanding of complex chemical systems. However, polymer chemistry lacks a unified deep learning framework owing to the complexity of polymer structures. Existing self-supervised learning methods simplify polymers into repeating units and neglect their inherent periodicity, thereby limiting the models' ability to generalize across tasks. To address this, we propose a periodicity-aware deep learning framework for polymers, PerioGT. In pre-training, a chemical knowledge-driven periodicity prior is constructed and incorporated into the model through contrastive learning. Then, periodicity prompts are learned in fine-tuning based on the prior. Additionally, a graph augmentation strategy is employed, which integrates additional conditions via virtual nodes to model complex chemical interactions. PerioGT achieves state-of-the-art performance on 16 downstream tasks. Wet-lab experiments highlight PerioGT's potential in the real world, identifying two polymers with potent antimicrobial properties. Our results demonstrate that introducing the periodicity prior effectively enhances model performance.
Editor's summary: PerioGT is a self-supervised learning framework for polymer property prediction, integrating periodicity priors and additional conditions to enhance generalization under data scarcity and enable broad applicability.
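The abstract says the periodicity prior enters the model through contrastive learning but does not give the objective. As a hedged illustration only, a standard InfoNCE contrastive loss over paired embeddings (for example, a polymer graph and a periodicity-augmented view of the same polymer) can be written in NumPy — the function name, batch shapes, and temperature are assumptions, not PerioGT's actual loss:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor embedding should match its own positive
    (row i of `positives`) more strongly than every other row in the batch.

    anchors, positives: arrays of shape (batch, dim).
    Returns the mean negative log-probability of the correct pairing.
    """
    # Cosine similarity via L2-normalized embeddings
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

Minimizing this loss pulls each polymer's two views together while pushing apart views of different polymers, which is how a periodicity prior encoded in the augmentation can shape the learned representation.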
Pub Date : 2025-11-18  DOI: 10.1038/s43588-025-00900-y
Title: Aligning brains into a shared space improves their alignment with large language models
Authors: Arnab Bhattacharjee, Zaid Zada, Haocheng Wang, Bobbi Aubrey, Werner Doyle, Patricia Dugan, Daniel Friedman, Orrin Devinsky, Adeen Flinker, Peter J Ramadge, Uri Hasson, Ariel Goldstein, Samuel A Nastase
Journal: Nature Computational Science (advance online publication)
Abstract: Recent research demonstrates that large language models can predict neural activity recorded via electrocorticography during natural language processing. To predict word-by-word neural activity, most prior work evaluates encoding models within individual electrodes and participants, limiting generalizability. Here we analyze electrocorticography data from eight participants listening to the same 30-min podcast. Using a shared response model, we estimate a common information space across participants. This shared space substantially enhances large language model-based encoding performance and enables denoising of individual brain responses by projecting back into participant-specific electrode spaces, yielding a 37% average improvement in encoding accuracy (from r = 0.188 to r = 0.257). The greatest gains occur in brain areas specialized for language comprehension, particularly the superior temporal gyrus and inferior frontal gyrus. Our findings highlight that estimating a shared space allows us to construct encoding models that better generalize across individuals.
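The paper's shared response model is described in its methods, which are not reproduced here. A minimal deterministic-SRM-style sketch conveys the round-trip denoising idea: alternate orthogonal Procrustes updates for each participant's electrode-to-shared map with an averaging update for the shared responses, then project the shared estimate back into one participant's electrode space. Function names and toy dimensions are illustrative assumptions:

```python
import numpy as np

def fit_srm(datasets, k, n_iter=30, seed=0):
    """Fit a simplified shared response model.

    datasets: list of (T, E_i) arrays -- same T timepoints, per-participant
    electrode counts E_i. Returns shared responses S (T, k) and maps W_i
    (E_i, k) with orthonormal columns such that X_i is approx. S @ W_i.T.
    """
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((datasets[0].shape[0], k))
    Ws = [None] * len(datasets)
    for _ in range(n_iter):
        for i, X in enumerate(datasets):
            # Orthogonal Procrustes: argmin_W ||X - S W^T||, W^T W = I
            U, _, Vt = np.linalg.svd(X.T @ S, full_matrices=False)
            Ws[i] = U @ Vt
        # Least-squares shared responses, averaged over participants
        S = np.mean([X @ W for X, W in zip(datasets, Ws)], axis=0)
    return S, Ws

def denoise(S, W):
    """Project shared responses back into one participant's electrode space."""
    return S @ W.T
```

Because the shared responses pool evidence across participants, the round trip into and out of the shared space suppresses participant-specific noise, which is the mechanism behind the encoding-accuracy gains the abstract reports.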