Pub Date: 2024-10-30 | DOI: 10.1038/s42256-024-00912-9
Results from the autoPET challenge on fully automated lesion segmentation in oncologic PET/CT imaging
Sergios Gatidis, Marcel Früh, Matthias P. Fabritius, Sijing Gu, Konstantin Nikolaou, Christian La Fougère, Jin Ye, Junjun He, Yige Peng, Lei Bi, Jun Ma, Bo Wang, Jia Zhang, Yukun Huang, Lars Heiliger, Zdravko Marinov, Rainer Stiefelhagen, Jan Egger, Jens Kleesiek, Ludovic Sibille, Lei Xiang, Simone Bendazzoli, Mehdi Astaraki, Michael Ingrisch, Clemens C. Cyran, Thomas Küstner
Automated detection of tumour lesions on positron emission tomography–computed tomography (PET/CT) image data is a clinically relevant but highly challenging task. Progress in this field has been hampered in the past owing to the lack of publicly available annotated data and limited availability of platforms for inter-institutional collaboration. Here we describe the results of the autoPET challenge, a biomedical image analysis challenge aimed to motivate research in the field of automated PET/CT image analysis. The challenge task was the automated segmentation of metabolically active tumour lesions on whole-body 18F-fluorodeoxyglucose PET/CT. Challenge participants had access to a large publicly available annotated PET/CT dataset for algorithm training. All algorithms submitted to the final challenge phase were based on deep learning methods, mostly using three-dimensional U-Net architectures. Submitted algorithms were evaluated on a private test set composed of 150 PET/CT studies from two institutions. An ensemble model of the highest-ranking algorithms achieved favourable performance compared with individual algorithms. Algorithm performance was dependent on the quality and quantity of data and on algorithm design choices, such as tailored post-processing of predicted segmentations. Future iterations of this challenge will focus on generalization and clinical translation.
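The ensemble of top-ranked submissions mentioned above essentially fuses several models' voxel-wise lesion predictions. Below is a minimal sketch of one common fusion scheme (probability averaging followed by thresholding, with a Dice score for evaluation); it is illustrative only and not the challenge's actual ensembling code.

```python
# Minimal sketch (not the autoPET challenge code): combine several models'
# predicted lesion-probability volumes by voxel-wise averaging, then threshold.
# The noisy "probability maps" below stand in for real 3D U-Net outputs.
import numpy as np

def ensemble_segmentation(prob_maps, threshold=0.5):
    """Average per-voxel foreground probabilities from several models and binarize."""
    stacked = np.stack(prob_maps, axis=0)          # (n_models, D, H, W)
    mean_prob = stacked.mean(axis=0)               # voxel-wise average
    return (mean_prob >= threshold).astype(np.uint8)

def dice_score(pred, ref, eps=1e-8):
    """Dice overlap between a predicted and a reference binary mask."""
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((8, 64, 64)) > 0.7       # toy ground-truth lesion mask
    # Three imperfect "models": reference probabilities corrupted by noise.
    prob_maps = [np.clip(reference + 0.3 * rng.standard_normal(reference.shape), 0, 1)
                 for _ in range(3)]
    fused = ensemble_segmentation(prob_maps)
    print("ensemble Dice:", round(float(dice_score(fused, reference)), 3))
```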
{"title":"Results from the autoPET challenge on fully automated lesion segmentation in oncologic PET/CT imaging","authors":"Sergios Gatidis, Marcel Früh, Matthias P. Fabritius, Sijing Gu, Konstantin Nikolaou, Christian La Fougère, Jin Ye, Junjun He, Yige Peng, Lei Bi, Jun Ma, Bo Wang, Jia Zhang, Yukun Huang, Lars Heiliger, Zdravko Marinov, Rainer Stiefelhagen, Jan Egger, Jens Kleesiek, Ludovic Sibille, Lei Xiang, Simone Bendazzoli, Mehdi Astaraki, Michael Ingrisch, Clemens C. Cyran, Thomas Küstner","doi":"10.1038/s42256-024-00912-9","DOIUrl":"https://doi.org/10.1038/s42256-024-00912-9","url":null,"abstract":"<p>Automated detection of tumour lesions on positron emission tomography–computed tomography (PET/CT) image data is a clinically relevant but highly challenging task. Progress in this field has been hampered in the past owing to the lack of publicly available annotated data and limited availability of platforms for inter-institutional collaboration. Here we describe the results of the autoPET challenge, a biomedical image analysis challenge aimed to motivate research in the field of automated PET/CT image analysis. The challenge task was the automated segmentation of metabolically active tumour lesions on whole-body <sup>18</sup>F-fluorodeoxyglucose PET/CT. Challenge participants had access to a large publicly available annotated PET/CT dataset for algorithm training. All algorithms submitted to the final challenge phase were based on deep learning methods, mostly using three-dimensional U-Net architectures. Submitted algorithms were evaluated on a private test set composed of 150 PET/CT studies from two institutions. An ensemble model of the highest-ranking algorithms achieved favourable performance compared with individual algorithms. Algorithm performance was dependent on the quality and quantity of data and on algorithm design choices, such as tailored post-processing of predicted segmentations. Future iterations of this challenge will focus on generalization and clinical translation.</p>","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"68 1","pages":""},"PeriodicalIF":23.8,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142541712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-23 | DOI: 10.1038/s42256-024-00915-6
Deep learning prediction of ribosome profiling with Translatomer reveals translational regulation and interprets disease variants
Jialin He, Lei Xiong, Shaohui Shi, Chengyu Li, Kexuan Chen, Qianchen Fang, Jiuhong Nan, Ke Ding, Yuanhui Mao, Carles A. Boix, Xinyang Hu, Manolis Kellis, Jingyun Li, Xushen Xiong
Gene expression involves transcription and translation. Despite large datasets and increasingly powerful methods devoted to calculating genetic variants’ effects on transcription, discrepancy between messenger RNA and protein levels hinders the systematic interpretation of the regulatory effects of disease-associated variants. Accurate models of the sequence determinants of translation are needed to close this gap and to interpret disease-associated variants that act on translation. Here we present Translatomer, a multimodal transformer framework that predicts cell-type-specific translation from messenger RNA expression and gene sequence. We train the Translatomer on 33 tissues and cell lines, and show that the inclusion of sequence improves the prediction of ribosome profiling signal, indicating that the Translatomer captures sequence-dependent translational regulatory information. The Translatomer achieves accuracies of 0.72 to 0.80 for the de novo prediction of cell-type-specific ribosome profiling. We develop an in silico mutagenesis tool to estimate mutational effects on translation and demonstrate that variants associated with translation regulation are evolutionarily constrained, both in the human population and across species. In particular, we identify cell-type-specific translational regulatory mechanisms independent of the expression quantitative trait loci for 3,041 non-coding and synonymous variants associated with complex diseases, including Alzheimer’s disease, schizophrenia and congenital heart disease. The Translatomer accurately models the genetic underpinnings of translation, bridging the gap between messenger RNA and protein levels as well as providing valuable mechanistic insights for uninterpreted disease variants.
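As a rough illustration of the multimodal idea (not the published Translatomer architecture), the sketch below fuses an embedded gene sequence with a per-position mRNA expression track in a small transformer encoder and regresses a ribosome-profiling-like signal per position; all dimensions and the additive fusion are assumptions.

```python
# Toy multimodal sketch: embed sequence tokens, project an mRNA expression track,
# add the two, encode with a transformer, and predict a per-position signal.
# This is NOT the published Translatomer; sizes and fusion-by-addition are illustrative.
import torch
import torch.nn as nn

class ToyTranslationModel(nn.Module):
    def __init__(self, vocab_size=5, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.seq_embed = nn.Embedding(vocab_size, d_model)      # A/C/G/T/N tokens
        self.expr_proj = nn.Linear(1, d_model)                  # mRNA expression channel
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)                       # per-position output

    def forward(self, tokens, expression):
        # tokens: (batch, length) int64; expression: (batch, length) float
        x = self.seq_embed(tokens) + self.expr_proj(expression.unsqueeze(-1))
        h = self.encoder(x)
        return self.head(h).squeeze(-1)                         # (batch, length)

if __name__ == "__main__":
    model = ToyTranslationModel()
    tokens = torch.randint(0, 5, (2, 128))
    expression = torch.rand(2, 128)
    pred = model(tokens, expression)
    print(pred.shape)  # torch.Size([2, 128])
```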
{"title":"Deep learning prediction of ribosome profiling with Translatomer reveals translational regulation and interprets disease variants","authors":"Jialin He, Lei Xiong, Shaohui Shi, Chengyu Li, Kexuan Chen, Qianchen Fang, Jiuhong Nan, Ke Ding, Yuanhui Mao, Carles A. Boix, Xinyang Hu, Manolis Kellis, Jingyun Li, Xushen Xiong","doi":"10.1038/s42256-024-00915-6","DOIUrl":"https://doi.org/10.1038/s42256-024-00915-6","url":null,"abstract":"<p>Gene expression involves transcription and translation. Despite large datasets and increasingly powerful methods devoted to calculating genetic variants’ effects on transcription, discrepancy between messenger RNA and protein levels hinders the systematic interpretation of the regulatory effects of disease-associated variants. Accurate models of the sequence determinants of translation are needed to close this gap and to interpret disease-associated variants that act on translation. Here we present Translatomer, a multimodal transformer framework that predicts cell-type-specific translation from messenger RNA expression and gene sequence. We train the Translatomer on 33 tissues and cell lines, and show that the inclusion of sequence improves the prediction of ribosome profiling signal, indicating that the Translatomer captures sequence-dependent translational regulatory information. The Translatomer achieves accuracies of 0.72 to 0.80 for the de novo prediction of cell-type-specific ribosome profiling. We develop an in silico mutagenesis tool to estimate mutational effects on translation and demonstrate that variants associated with translation regulation are evolutionarily constrained, both in the human population and across species. In particular, we identify cell-type-specific translational regulatory mechanisms independent of the expression quantitative trait loci for 3,041 non-coding and synonymous variants associated with complex diseases, including Alzheimer’s disease, schizophrenia and congenital heart disease. The Translatomer accurately models the genetic underpinnings of translation, bridging the gap between messenger RNA and protein levels as well as providing valuable mechanistic insights for uninterpreted disease variants.</p>","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"235 1","pages":""},"PeriodicalIF":23.8,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142488349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-22 | DOI: 10.1038/s42256-024-00913-8
Epitope-anchored contrastive transfer learning for paired CD8+ T cell receptor–antigen recognition
Yumeng Zhang, Zhikang Wang, Yunzhe Jiang, Dene R. Littler, Mark Gerstein, Anthony W. Purcell, Jamie Rossjohn, Hong-Yu Ou, Jiangning Song
Understanding the mechanisms of T cell antigen recognition that underpin adaptive immune responses is critical for developing vaccines, immunotherapies and treatments against autoimmune diseases. Despite extensive research efforts, accurate prediction of T cell receptor (TCR)–antigen binding pairs remains a great challenge due to the vast diversity and cross-reactivity of TCRs. Here we propose a deep-learning-based framework termed epitope-anchored contrastive transfer learning (EPACT) tailored to paired human CD8+ TCRs. Harnessing the pretrained representations and co-embeddings of peptide–major histocompatibility complex (pMHC) and TCR, EPACT demonstrated generalizability in predicting binding specificity for unseen epitopes and distinct TCR repertoires. Contrastive learning enabled highly precise predictions for immunodominant epitopes and interpretable analysis of epitope-specific T cells. We applied EPACT to SARS-CoV-2-responsive T cells, and the predicted binding strength aligned well with the surge in spike-specific immune responses after vaccination. We further fine-tuned EPACT on structural data to decipher the residue-level interactions involved in TCR–antigen recognition. EPACT was capable of quantifying interchain distance matrices and identifying contact residues, corroborating the presence of TCR cross-reactivity across multiple tumour-associated antigens. Together, EPACT can serve as a useful artificial intelligence approach with important potential in practical applications and contribute towards the development of TCR-based immunotherapies.
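The core contrastive-matching step can be pictured as a symmetric InfoNCE objective over paired pMHC and TCR embeddings. The sketch below is a generic version of such a loss, with random vectors standing in for EPACT's pretrained encoder outputs; it is not the authors' implementation.

```python
# Illustrative contrastive matching: pull embeddings of true pMHC–TCR pairs together
# and push mismatched pairs apart with a symmetric InfoNCE loss.
import torch
import torch.nn.functional as F

def paired_contrastive_loss(pmhc_emb, tcr_emb, temperature=0.1):
    """Symmetric InfoNCE over a batch where row i of each matrix is a true pair."""
    pmhc = F.normalize(pmhc_emb, dim=-1)
    tcr = F.normalize(tcr_emb, dim=-1)
    logits = pmhc @ tcr.t() / temperature            # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0))           # diagonal entries are positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    batch, dim = 8, 128
    pmhc_emb = torch.randn(batch, dim, requires_grad=True)   # stand-in encoder outputs
    tcr_emb = torch.randn(batch, dim, requires_grad=True)
    loss = paired_contrastive_loss(pmhc_emb, tcr_emb)
    loss.backward()                                  # gradients flow to both encoders
    print(float(loss))
```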
{"title":"Epitope-anchored contrastive transfer learning for paired CD8+ T cell receptor–antigen recognition","authors":"Yumeng Zhang, Zhikang Wang, Yunzhe Jiang, Dene R. Littler, Mark Gerstein, Anthony W. Purcell, Jamie Rossjohn, Hong-Yu Ou, Jiangning Song","doi":"10.1038/s42256-024-00913-8","DOIUrl":"https://doi.org/10.1038/s42256-024-00913-8","url":null,"abstract":"<p>Understanding the mechanisms of T cell antigen recognition that underpin adaptive immune responses is critical for developing vaccines, immunotherapies and treatments against autoimmune diseases. Despite extensive research efforts, accurate prediction of T cell receptor (TCR)–antigen binding pairs remains a great challenge due to the vast diversity and cross-reactivity of TCRs. Here we propose a deep-learning-based framework termed epitope-anchored contrastive transfer learning (EPACT) tailored to paired human CD8<sup>+</sup> TCRs. Harnessing the pretrained representations and co-embeddings of peptide–major histocompatibility complex (pMHC) and TCR, EPACT demonstrated generalizability in predicting binding specificity for unseen epitopes and distinct TCR repertoires. Contrastive learning enabled highly precise predictions for immunodominant epitopes and interpretable analysis of epitope-specific T cells. We applied EPACT to SARS-CoV-2-responsive T cells, and the predicted binding strength aligned well with the surge in spike-specific immune responses after vaccination. We further fine-tuned EPACT on structural data to decipher the residue-level interactions involved in TCR–antigen recognition. EPACT was capable of quantifying interchain distance matrices and identifying contact residues, corroborating the presence of TCR cross-reactivity across multiple tumour-associated antigens. Together, EPACT can serve as a useful artificial intelligence approach with important potential in practical applications and contribute towards the development of TCR-based immunotherapies.</p>","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"27 1","pages":""},"PeriodicalIF":23.8,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142486862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-21 | DOI: 10.1038/s42256-024-00921-8
Pick your AI poison
Distinguishing between real and fabricated facts has long been a societal challenge. As the Internet becomes increasingly littered with AI-generated content, the need for curation and safeguarding of high-quality data and information is more crucial than ever.
{"title":"Pick your AI poison","authors":"","doi":"10.1038/s42256-024-00921-8","DOIUrl":"10.1038/s42256-024-00921-8","url":null,"abstract":"Distinguishing between real and fabricated facts has long been a societal challenge. As the Internet becomes increasingly littered with AI-generated content, the need for curation and safeguarding of high-quality data and information is more crucial than ever.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 10","pages":"1119-1119"},"PeriodicalIF":18.8,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s42256-024-00921-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142486863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-21 | DOI: 10.1038/s42256-024-00916-5
Leveraging language model for advanced multiproperty molecular optimization via prompt engineering
Zhenxing Wu, Odin Zhang, Xiaorui Wang, Li Fu, Huifeng Zhao, Jike Wang, Hongyan Du, Dejun Jiang, Yafeng Deng, Dongsheng Cao, Chang-Yu Hsieh, Tingjun Hou
Optimizing a candidate molecule’s physiochemical and functional properties has been a critical task in drug and material design. Although the non-trivial task of balancing multiple (potentially conflicting) optimization objectives is considered ideal for artificial intelligence, several technical challenges such as the scarcity of multiproperty-labelled training data have hindered the development of a satisfactory AI solution for a long time. Prompt-MolOpt is a tool for molecular optimization; it makes use of prompt-based embeddings, as used in large language models, to improve the transformer’s ability to optimize molecules for specific property adjustments. Notably, Prompt-MolOpt excels in working with limited multiproperty data (even under the zero-shot setting) by effectively generalizing causal relationships learned from single-property datasets. In comparative evaluations against established models such as JTNN, hierG2G and Modof, Prompt-MolOpt achieves over a 15% relative improvement in multiproperty optimization success rates compared with the leading Modof model. Furthermore, a variant of Prompt-MolOpt, named Prompt-MolOptP, can preserve the pharmacophores or any user-specified fragments under the structural transformation, further broadening its application scope. By constructing tailored optimization datasets, with the protocol introduced in this work, Prompt-MolOpt steers molecular optimization towards domain-relevant chemical spaces, enhancing the quality of the optimized molecules. Real-world tests, such as those involving blood–brain barrier permeability optimization, underscore its practical relevance. Prompt-MolOpt offers a versatile approach for multiproperty and multi-site molecular optimizations, suggesting its potential utility in chemistry research and drug and material discovery.
Pub Date: 2024-10-17 | DOI: 10.1038/s42256-024-00906-7
Estimation of causal effects of genes on complex traits using a Bayesian-network-based framework applied to GWAS data
Liangying Yin, Yaning Feng, Yujia Shi, Alexandria Lau, Jinghong Qiu, Pak-Chung Sham, Hon-Cheong So
Deciphering the relationships between genes and complex traits can enhance our understanding of phenotypic variations and disease mechanisms. However, determining the specific roles of individual genes and quantifying their direct and indirect causal effects on complex traits remains a significant challenge. Here we present a framework (called Bayesian network genome-wide association studies (BN-GWAS)) to decipher the total and direct causal effects of individual genes. BN-GWAS leverages imputed expression profiles from GWAS and raw expression data from a reference dataset to construct a directed gene–gene–phenotype causal network. It allows gene expression and disease traits to be evaluated in different samples, significantly improving the flexibility and applicability of the approach. It can be extended to decipher the joint causal network of two or more traits, and exhibits high specificity and precision (positive predictive value), making it particularly useful for selecting genes for follow-up studies. We verified the feasibility and validity of BN-GWAS by extensive simulations and applications to 52 traits across 14 tissues in the UK Biobank, revealing insights into their genetic architectures, including the relative contributions of direct, indirect and mediating causal genes. The identified (direct) causal genes were significantly enriched for genes highlighted in the Open Targets database. Overall, BN-GWAS provides a flexible and powerful framework for elucidating the genetic basis of complex traits through a systems-level, causal inference approach.

Genome-wide association studies generate extensive data, but interpreting these data remains challenging. A Bayesian-network-based method is presented that uses imputed and raw gene expression data to decipher the causal effects of individual genes.
Pub Date: 2024-10-17 | DOI: 10.1038/s42256-024-00910-x
Blending neural operators and relaxation methods in PDE numerical solvers
Enrui Zhang, Adar Kahana, Alena Kopaničáková, Eli Turkel, Rishikesh Ranade, Jay Pathak, George Em Karniadakis
Neural networks suffer from spectral bias and have difficulty representing the high-frequency components of a function, whereas relaxation methods can resolve high frequencies efficiently but stall at moderate to low frequencies. We exploit the weaknesses of the two approaches by combining them synergistically to develop a fast numerical solver of partial differential equations (PDEs) at scale. Specifically, we propose HINTS, a hybrid, iterative, numerical and transferable solver by integrating a Deep Operator Network (DeepONet) with standard relaxation methods, leading to parallel efficiency and algorithmic scalability for a wide class of PDEs, not tractable with existing monolithic solvers. HINTS balances the convergence behaviour across the spectrum of eigenmodes by utilizing the spectral bias of DeepONet, resulting in a uniform convergence rate and hence exceptional performance of the hybrid solver overall. Moreover, HINTS applies to large-scale, multidimensional systems; it is flexible with regards to discretizations, computational domain and boundary conditions; and it can also be used to precondition Krylov methods.
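The division of labour described above can be demonstrated on a 1D Poisson problem: damped Jacobi sweeps remove high-frequency error, and every few sweeps a low-frequency correction is applied in its place. In the sketch below an exact correction on the lowest sine modes stands in for the DeepONet; it conveys the hybrid iteration pattern rather than the paper's implementation.

```python
# Conceptual HINTS-style hybrid iteration on 1D Poisson (not the paper's code):
# alternate damped Jacobi sweeps with a low-frequency correction. Here the
# correction solves the residual equation exactly on the lowest sine modes,
# standing in for the spectrally biased DeepONet.
import numpy as np

N = 127                                     # interior grid points
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
main = 2.0 * np.ones(N) / h**2
A = (np.diag(main) - np.diag(np.ones(N - 1), 1) / h**2
     - np.diag(np.ones(N - 1), -1) / h**2)  # 1D Laplacian, Dirichlet boundaries
f = np.sin(np.pi * x) + 0.5 * np.sin(20 * np.pi * x)

modes = np.array([np.sin(k * np.pi * x) * np.sqrt(2 * h) for k in range(1, 9)])
eigs = (2 - 2 * np.cos(np.arange(1, 9) * np.pi * h)) / h**2   # eigenvalues of A

def jacobi_sweep(u, omega=2.0 / 3.0):
    """Damped Jacobi: efficient on high-frequency error, slow on smooth error."""
    return u + omega * (f - A @ u) / main

def low_frequency_correction(u):
    """Stand-in for the DeepONet: remove the residual's lowest-mode content."""
    r = f - A @ u
    coeffs = modes @ r
    return u + modes.T @ (coeffs / eigs)

u = np.zeros(N)
print("initial residual norm:", np.linalg.norm(f))
for it in range(1, 61):
    u = low_frequency_correction(u) if it % 10 == 0 else jacobi_sweep(u)
print("final residual norm:  ", np.linalg.norm(f - A @ u))
```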
{"title":"Blending neural operators and relaxation methods in PDE numerical solvers","authors":"Enrui Zhang, Adar Kahana, Alena Kopaničáková, Eli Turkel, Rishikesh Ranade, Jay Pathak, George Em Karniadakis","doi":"10.1038/s42256-024-00910-x","DOIUrl":"https://doi.org/10.1038/s42256-024-00910-x","url":null,"abstract":"<p>Neural networks suffer from spectral bias and have difficulty representing the high-frequency components of a function, whereas relaxation methods can resolve high frequencies efficiently but stall at moderate to low frequencies. We exploit the weaknesses of the two approaches by combining them synergistically to develop a fast numerical solver of partial differential equations (PDEs) at scale. Specifically, we propose HINTS, a hybrid, iterative, numerical and transferable solver by integrating a Deep Operator Network (DeepONet) with standard relaxation methods, leading to parallel efficiency and algorithmic scalability for a wide class of PDEs, not tractable with existing monolithic solvers. HINTS balances the convergence behaviour across the spectrum of eigenmodes by utilizing the spectral bias of DeepONet, resulting in a uniform convergence rate and hence exceptional performance of the hybrid solver overall. Moreover, HINTS applies to large-scale, multidimensional systems; it is flexible with regards to discretizations, computational domain and boundary conditions; and it can also be used to precondition Krylov methods.</p>","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"73 1","pages":""},"PeriodicalIF":23.8,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142443828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-07 | DOI: 10.1038/s42256-024-00908-5
A multi-modal deep language model for contaminant removal from metagenome-assembled genomes
Bohao Zou, Jingjing Wang, Yi Ding, Zhenmiao Zhang, Yufen Huang, Xiaodong Fang, Ka Chun Cheung, Simon See, Lu Zhang
Metagenome-assembled genomes (MAGs) offer valuable insights into the exploration of microbial dark matter using metagenomic sequencing data. However, there is growing concern that contamination in MAGs may substantially affect the results of downstream analysis. Current MAG decontamination tools primarily rely on marker genes and do not fully use the contextual information of genomic sequences. To overcome this limitation, we introduce Deepurify for MAG decontamination. Deepurify uses a multi-modal deep language model with contrastive learning to match microbial genomic sequences with their taxonomic lineages. It allocates contigs within a MAG to a MAG-separated tree and applies a tree traversal algorithm to partition MAGs into sub-MAGs, with the goal of maximizing the number of high- and medium-quality sub-MAGs. Here we show that Deepurify outperformed MDMclearer and MAGpurify on simulated data, CAMI datasets and real-world datasets with varying complexities. Deepurify increased the number of high-quality MAGs by 20.0% in soil, 45.1% in ocean, 45.5% in plants, 33.8% in freshwater and 28.5% in human faecal metagenomic sequencing datasets.

Metagenome-assembled genomes (MAGs) provide insights into microbial dark matter, but contamination remains a concern for downstream analysis. Zou et al. develop a multi-modal deep language model that leverages microbial sequences to remove ‘unexpected’ contigs from MAGs. This approach is compatible with any contig binning tools and increases the number of high-quality bins.
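A heavily simplified picture of the decontamination step: if a model supplies embeddings for contigs and candidate taxa, each contig can be assigned to its closest taxon and contigs that disagree with the bin's consensus taxon can be flagged. The sketch below uses random vectors as stand-ins for learned embeddings and is not Deepurify itself.

```python
# Simplified contamination flagging (not Deepurify): assign each contig to its most
# similar taxon embedding and flag contigs that disagree with the bin's majority taxon.
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
taxa = ["g__Bacteroides", "g__Escherichia", "g__Prevotella"]   # hypothetical labels
taxon_emb = rng.standard_normal((3, 32))

# A toy bin: 18 contigs from taxon 0 plus 2 contaminants from taxon 2.
labels = [0] * 18 + [2] * 2
contig_emb = np.array([taxon_emb[t] + 0.3 * rng.standard_normal(32) for t in labels])

def cosine(a, b):
    return a @ b.T / (np.linalg.norm(a, axis=-1, keepdims=True) * np.linalg.norm(b, axis=-1))

assigned = cosine(contig_emb, taxon_emb).argmax(axis=1)        # best taxon per contig
majority = Counter(assigned.tolist()).most_common(1)[0][0]     # the bin's consensus taxon
kept = [i for i, t in enumerate(assigned) if t == majority]
flagged = [i for i, t in enumerate(assigned) if t != majority]

print("bin consensus:", taxa[majority])
print("kept contigs:", len(kept), "| flagged as contamination:", flagged)
```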
{"title":"A multi-modal deep language model for contaminant removal from metagenome-assembled genomes","authors":"Bohao Zou, Jingjing Wang, Yi Ding, Zhenmiao Zhang, Yufen Huang, Xiaodong Fang, Ka Chun Cheung, Simon See, Lu Zhang","doi":"10.1038/s42256-024-00908-5","DOIUrl":"10.1038/s42256-024-00908-5","url":null,"abstract":"Metagenome-assembled genomes (MAGs) offer valuable insights into the exploration of microbial dark matter using metagenomic sequencing data. However, there is growing concern that contamination in MAGs may substantially affect the results of downstream analysis. Current MAG decontamination tools primarily rely on marker genes and do not fully use the contextual information of genomic sequences. To overcome this limitation, we introduce Deepurify for MAG decontamination. Deepurify uses a multi-modal deep language model with contrastive learning to match microbial genomic sequences with their taxonomic lineages. It allocates contigs within a MAG to a MAG-separated tree and applies a tree traversal algorithm to partition MAGs into sub-MAGs, with the goal of maximizing the number of high- and medium-quality sub-MAGs. Here we show that Deepurify outperformed MDMclearer and MAGpurify on simulated data, CAMI datasets and real-world datasets with varying complexities. Deepurify increased the number of high-quality MAGs by 20.0% in soil, 45.1% in ocean, 45.5% in plants, 33.8% in freshwater and 28.5% in human faecal metagenomic sequencing datasets. Metagenome-assembled genomes (MAGs) provide insights into microbial dark matter, but contamination remains a concern for downstream analysis. Zou et al. develop a multi-modal deep language model that leverages microbial sequences to remove ‘unexpected’ contigs from MAGs. This approach is compatible with any contig binning tools and increases the number of high-quality bins.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 10","pages":"1245-1255"},"PeriodicalIF":18.8,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142383814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-04 | DOI: 10.1038/s42256-024-00911-w
A call for an industry-led initiative to critically assess machine learning for real-world drug discovery
Cas Wognum, Jeremy R. Ash, Matteo Aldeghi, Raquel Rodríguez-Pérez, Cheng Fang, Alan C. Cheng, Daniel J. Price, Djork-Arné Clevert, Ola Engkvist, W. Patrick Walters
{"title":"A call for an industry-led initiative to critically assess machine learning for real-world drug discovery","authors":"Cas Wognum, Jeremy R. Ash, Matteo Aldeghi, Raquel Rodríguez-Pérez, Cheng Fang, Alan C. Cheng, Daniel J. Price, Djork-Arné Clevert, Ola Engkvist, W. Patrick Walters","doi":"10.1038/s42256-024-00911-w","DOIUrl":"10.1038/s42256-024-00911-w","url":null,"abstract":"","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 10","pages":"1120-1121"},"PeriodicalIF":18.8,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142374186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-03 | DOI: 10.1038/s42256-024-00902-x
Engineering flexible machine learning systems by traversing functionally invariant paths
Guruprasad Raghavan, Bahey Tharwat, Surya Narayanan Hari, Dhruvil Satani, Rex Liu, Matt Thomson
Contemporary machine learning algorithms train artificial neural networks by setting network weights to a single optimized configuration through gradient descent on task-specific training data. The resulting networks can achieve human-level performance on natural language processing, image analysis and agent-based tasks, but lack the flexibility and robustness characteristic of human intelligence. Here we introduce a differential geometry framework—functionally invariant paths—that provides flexible and continuous adaptation of trained neural networks so that secondary tasks can be achieved beyond the main machine learning goal, including increased network sparsification and adversarial robustness. We formulate the weight space of a neural network as a curved Riemannian manifold equipped with a metric tensor whose spectrum defines low-rank subspaces in weight space that accommodate network adaptation without loss of prior knowledge. We formalize adaptation as movement along a geodesic path in weight space while searching for networks that accommodate secondary objectives. With modest computational resources, the functionally invariant path algorithm achieves performance comparable with or exceeding state-of-the-art methods including low-rank adaptation on continual learning, sparsification and adversarial robustness tasks for large language models (bidirectional encoder representations from transformers), vision transformers (ViT and DeIT) and convolutional neural networks.

Machine learning often includes secondary objectives, such as sparsity or robustness. To reach these objectives efficiently, the training of a neural network has been interpreted as the exploration of functionally invariant paths in the parameter space.
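The flavour of this adaptation scheme can be conveyed with a much simpler surrogate: starting from a trained network, take small weight updates that approximately preserve the outputs on reference inputs while pursuing a secondary objective such as sparsity. The sketch below replaces the metric-tensor and geodesic machinery with a plain penalty trade-off and is not the paper's algorithm.

```python
# Hedged surrogate for functionally-invariant adaptation: keep outputs on reference
# inputs (approximately) unchanged while pushing weights towards a secondary
# objective (L1 sparsity). Not the paper's algorithm.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
base = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))   # "trained" network
adapted = copy.deepcopy(base)                                           # moves along the path
for p in base.parameters():
    p.requires_grad_(False)

x_ref = torch.randn(256, 16)                 # reference inputs defining the function to preserve
y_ref = base(x_ref)                          # outputs to be (approximately) preserved
opt = torch.optim.SGD(adapted.parameters(), lr=1e-2)
lam = 1e-2                                   # weight of the secondary (sparsity) objective

for step in range(500):
    opt.zero_grad()
    invariance = ((adapted(x_ref) - y_ref) ** 2).mean()          # stay close in output space
    sparsity = sum(p.abs().sum() for p in adapted.parameters())  # secondary objective
    (invariance + lam * sparsity).backward()
    opt.step()

with torch.no_grad():
    drift = ((adapted(x_ref) - y_ref) ** 2).mean().sqrt().item()
    near_zero = sum((p.abs() < 1e-3).sum().item() for p in adapted.parameters())
print(f"output drift (RMSE): {drift:.4f} | near-zero weights: {near_zero}")
```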
{"title":"Engineering flexible machine learning systems by traversing functionally invariant paths","authors":"Guruprasad Raghavan, Bahey Tharwat, Surya Narayanan Hari, Dhruvil Satani, Rex Liu, Matt Thomson","doi":"10.1038/s42256-024-00902-x","DOIUrl":"10.1038/s42256-024-00902-x","url":null,"abstract":"Contemporary machine learning algorithms train artificial neural networks by setting network weights to a single optimized configuration through gradient descent on task-specific training data. The resulting networks can achieve human-level performance on natural language processing, image analysis and agent-based tasks, but lack the flexibility and robustness characteristic of human intelligence. Here we introduce a differential geometry framework—functionally invariant paths—that provides flexible and continuous adaptation of trained neural networks so that secondary tasks can be achieved beyond the main machine learning goal, including increased network sparsification and adversarial robustness. We formulate the weight space of a neural network as a curved Riemannian manifold equipped with a metric tensor whose spectrum defines low-rank subspaces in weight space that accommodate network adaptation without loss of prior knowledge. We formalize adaptation as movement along a geodesic path in weight space while searching for networks that accommodate secondary objectives. With modest computational resources, the functionally invariant path algorithm achieves performance comparable with or exceeding state-of-the-art methods including low-rank adaptation on continual learning, sparsification and adversarial robustness tasks for large language models (bidirectional encoder representations from transformers), vision transformers (ViT and DeIT) and convolutional neural networks. Machine learning often includes secondary objectives, such as sparsity or robustness. To reach these objectives efficiently, the training of a neural network has been interpreted as the exploration of functionally invariant paths in the parameter space.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 10","pages":"1179-1196"},"PeriodicalIF":18.8,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s42256-024-00902-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142369346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}