Pub Date: 2026-02-13 | DOI: 10.1038/s42256-025-01176-7
Joseph Szymborski, Amin Emad
With the growing pervasiveness of pretrained protein language models (pLMs), pLM-based methods are increasingly being put forward for the protein–protein interaction (PPI) inference task. Here we identify and confirm that existing pretrained pLMs are a source of data leakage for the downstream PPI task. We characterize the extent of the data leakage problem by training and comparing small and efficient pLMs on a dataset that controls for data leakage (strict) with one that does not (non-strict). Although data leakage from pretrained pLMs causes a measurable inflation of testing scores, we find that this does not necessarily extend to other, non-paired biological tasks such as protein keyword annotation. Further, we find no connection between the context lengths of pLMs and the performance of pLM-based PPI inference methods on proteins with sequence lengths that surpass it. Furthermore, we show that pLM-based and non-pLM-based models fail to generalize in tasks such as prediction of human–SARS-CoV-2 PPIs or of the effect of point mutations on binding affinities. This study demonstrates the importance of extending existing protocols for the evaluation of pLM-based models applied to paired biological datasets and identifies areas of weakness of current pLMs. The usage of pretrained protein language models (pLMs) is rapidly growing. However, Szymborski and Emad find that pretrained pLMs can be a source of data leakage in the task of protein–protein interaction inference, leading to inflated performance scores.
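The strict versus non-strict distinction above comes down to how PPI pairs are split. As a minimal illustration (not the authors' code; function and variable names are hypothetical), a strict split holds out a set of proteins entirely, so that every protein appearing in a test pair is unseen during training, and pairs mixing seen and unseen proteins are discarded:

```python
import random

def strict_split(pairs, test_frac=0.2, seed=0):
    """Split PPI pairs so that every protein in a test pair is unseen
    during training (the 'strict' setting). Pairs that mix held-out and
    training proteins are discarded rather than assigned to either side."""
    rng = random.Random(seed)
    proteins = sorted({p for pair in pairs for p in pair})
    rng.shuffle(proteins)
    n_test = int(len(proteins) * test_frac)
    test_proteins = set(proteins[:n_test])
    # Training pairs: neither protein is held out.
    train = [p for p in pairs
             if p[0] not in test_proteins and p[1] not in test_proteins]
    # Test pairs: both proteins are held out.
    test = [p for p in pairs
            if p[0] in test_proteins and p[1] in test_proteins]
    return train, test
```

A non-strict split, by contrast, randomizes over pairs directly, so individual proteins (and their pretrained embeddings) recur across the train/test boundary.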
Title: "A flaw in using pretrained protein language models in protein–protein interaction inference models". Nature Machine Intelligence 8(2), 197–208.
Pub Date: 2026-02-12 | DOI: 10.1038/s42256-026-01187-y
Caitlin M. Butt, Allison S. Walker
Deep learning foundation models are becoming increasingly popular for bioactivity prediction. Recently, Feng et al. developed ActFound, a bioactivity foundation model that jointly uses pairwise learning and meta-learning. These techniques allow the model to be fine-tuned for a more specific bioactivity task with only a small amount of new data. Here, to investigate the generalizability of the model, we sought to fine-tune the foundation model on an antibacterial natural products (NPs) dataset. Large, labelled NPs datasets, which are needed to train traditional deep learning methods, are scarce. Therefore, the bioactivity prediction of NPs is an ideal task for foundation models. We studied the performance of ActFound on the NPs dataset using a range of few-shot settings. Additionally, we compared ActFound’s performance with those of other state-of-the-art models in the field. We found that ActFound was unable to reach the same level of accuracy on the antibacterial NPs dataset as it did on other cross-domain tasks reported in the original publication. However, ActFound performed comparably to or better than the other models studied, especially in the low-shot settings. Our results establish ActFound as a useful foundation model for bioactivity prediction on tasks with limited data, particularly for datasets that contain the bioactivities of similar compounds. This Reusability Report tests the ability of a foundation model, ActFound, to predict the antibacterial activity of plant natural products. We found that although all models performed poorly on this task, ActFound performed better than similar models.
Title: "Reusability Report: Evaluating the performance of a meta-learning foundation model on predicting the antibacterial activity of natural products". Nature Machine Intelligence 8(2), 270–275 (open access).
Pub Date: 2026-02-11 | DOI: 10.1038/s42256-025-01168-7
Xinghang Li, Peiyan Li, Long Qian, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Xinlong Wang, Di Guo, Tao Kong, Hanbo Zhang, Huaping Liu
To utilize foundation vision–language models (VLMs) for robotic tasks and motion planning, the community has proposed different methods for injecting action components into VLMs to build vision–language–action models (VLAs). Here we identify the key factors that significantly influence the performance of VLAs on robot manipulation problems, focusing on three essential design choices: which backbone to select, how to formulate the VLA architecture and when to add cross-embodiment data. The results give us firm grounds for preferring VLAs and lead us to develop a new family of VLAs, RoboVLMs, which requires very few manual design choices and achieves new state-of-the-art performance in three simulation tasks and real-world experiments. Through our extensive experiments, which include over 8 VLM backbones, 4 policy architectures and over 600 distinct designed experiments, we provide a detailed guidebook for the future design of VLAs. In addition, the highly flexible RoboVLMs framework, which supports easy integration of new VLMs and free combination of various design choices, is made public to facilitate future research. We open-source all details, including code, models, datasets and toolkits, along with detailed training and evaluation recipes, at robovlms.github.io. Vision–language–action models recently emerged as a tool for robotics. Here Li and colleagues compare vision–language–action models and highlight what makes a model useful.
Title: "What matters in building vision–language–action models for generalist robots". Nature Machine Intelligence 8(2), 158–172.
Pub Date: 2026-02-11 | DOI: 10.1038/s42256-025-01169-6
Aakriti Kumar, Nalin Poungpeth, Diyi Yang, Erina Farrell, Bruce L. Lambert, Matthew Groh
Large language models (LLMs) excel at generating empathic responses in text-based conversations. But how reliably do they judge the nuances of empathic communication? Here we investigate this question by comparing how experts, crowdworkers and LLMs annotate empathic communication across four evaluative frameworks drawn from psychology, natural language processing and communications, applied to 200 real-world conversations in which one speaker shares a personal problem and the other offers support. Drawing on 3,150 expert annotations, 2,844 crowd annotations and 3,150 LLM annotations, we assess interrater reliability between these three annotator groups. We find that expert agreement is high but varies across the frameworks’ subcomponents depending on their clarity, complexity and subjectivity. We show that expert agreement offers a more informative benchmark for contextualizing LLM performance than standard classification metrics. Across all four frameworks, LLMs consistently approach this expert-level benchmark and exceed the reliability of crowdworkers. These results demonstrate how LLMs, when validated on specific tasks with appropriate benchmarks, can support transparency and oversight in emotionally sensitive applications, including their use as conversational companions. Kumar et al. show that large language models (LLMs) nearly match expert reliability and outperform laypeople when assessing empathic communication across multiple frameworks. The performance of both LLMs and experts depends on clear and specific evaluation criteria.
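Interrater reliability of the kind assessed here is typically measured with chance-corrected agreement statistics. As a generic illustration (not the paper's pipeline, which compares three annotator groups across multiple frameworks), Cohen's kappa for two raters over the same items can be sketched as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters labelling the same
    items: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is agreement expected from each rater's marginal
    label frequencies."""
    assert len(rater_a) == len(rater_b), "raters must label the same items"
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum(freq_a[l] * freq_b[l] for l in labels) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it gives a more informative floor than raw accuracy when label distributions are skewed.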
Title: "When large language models are reliable for judging empathic communication". Nature Machine Intelligence 8(2), 173–185 (open access).
Pub Date: 2026-02-10 | DOI: 10.1038/s42256-026-01184-1
Liang Zhang, Juan Zhang, Rui Huang, Yiwen Wang, Linjing Liu, Yanyong Zhang, Kong Chen, Jun Jiang, Yuen Wu
Optimizing molecular resource utilization for molecular discovery requires collaborative efforts across research institutions and organizations to accelerate progress. However, given the high research value of both successful and unsuccessful molecules produced by each institution (or organization), these findings are typically kept highly private and confidential until formal publication or commercialization, with even failed molecules rarely disclosed. This confidentiality requirement presents a great challenge for most existing methods when collaboratively handling molecular data with heterogeneous distributions under stringent privacy constraints. Here we propose FedLG (federated learning Lanczos graph), a federated graph learning method that leverages the Lanczos algorithm to facilitate collaborative model training across multiple parties, achieving reliable prediction performance under strict privacy protection conditions. Compared with various existing federated learning methods, FedLG exhibits excellent model performance on 18 benchmark datasets in a simulated federated learning environment. Under different privacy-preserving mechanism settings, FedLG demonstrates robust performance and resistance to noise. Leave-one-client-out experiments and comparison tests across each simulated institution show that FedLG achieves improved heterogeneous data aggregation capabilities and more promising outcomes than localized training. In addition, we incorporate Bayesian optimization into FedLG to show its scalability and further stabilize model performance. Overall, FedLG can be considered an effective method to realize multi-party collaboration while ensuring that sensitive molecular information is protected from potential leakage. Zhang et al. introduce FedLG, a federated graph learning framework that leverages Lanczos-based projection to effectively aggregate heterogeneous molecular data.
Extensive benchmarks demonstrate its robustness across diverse molecular discovery tasks.
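The Lanczos algorithm at the core of FedLG compresses a large symmetric matrix by projecting it onto a small Krylov subspace, yielding a tridiagonal matrix whose extreme eigenvalues approximate those of the original. A minimal NumPy sketch of the textbook iteration (independent of the FedLG implementation; full re-orthogonalization is added for numerical stability):

```python
import numpy as np

def lanczos(A, k, seed=0):
    """Project symmetric matrix A onto a k-dimensional Krylov subspace.
    Returns a tridiagonal matrix T and an orthonormal basis Q with
    T = Q^T A Q; for k equal to A's dimension, T and A share eigenvalues."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)       # diagonal of T
    beta = np.zeros(k - 1)    # off-diagonal of T
    Q[:, 0] = q
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # Full re-orthogonalization against all previous basis vectors.
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T, Q
```

In a federated setting the appeal of such a projection is that clients can exchange the small matrix T rather than raw data, which is consistent with the privacy motivation described above.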
Title: "A federated graph learning method to realize multi-party collaboration for molecular discovery". Nature Machine Intelligence 8(2), 246–256.
Pub Date: 2026-02-06 | DOI: 10.1038/s42256-026-01193-0
Roxana Radu, Luc Rocher
Title: "Attributing and situating knowledge cannot be left to language models". Nature Machine Intelligence.
Pub Date: 2026-02-06 | DOI: 10.1038/s42256-025-01171-y
Urs J. Muehlematter, Kerstin Noelle Vokinger
Less than 2% of artificial intelligence (AI) devices authorized by the US Food and Drug Administration are prognostic, with prediction horizons ranging from minutes to several years. Because the number of prognostic AI devices could increase, it is important to address the accompanying regulatory and ethical challenges.
Title: "Authorization of prognostic AI medical devices". Nature Machine Intelligence 8(2), 138–143.
Pub Date: 2026-02-06 | DOI: 10.1038/s42256-026-01179-y
Gene Tangtartharakul, Katherine R. Storrs
Visual language models (VLMs) show remarkable performance in visual reasoning tasks, successfully tackling college-level challenges that require a high-level understanding of images. However, some recent reports of VLMs struggling to reason about elemental visual concepts such as orientation, position, continuity and occlusion suggest a potential gulf between human and VLM vision. Currently, few assessments enable a direct comparison between human and VLM performance, which limits our ability to measure alignment between the two systems. Here we use the toolkit of neuropsychology to systematically evaluate the capabilities of three state-of-the-art VLMs across low-, mid- and high-level visual domains. Using 51 tests drawn from 6 clinical and experimental psychology batteries, we characterize the visual abilities of leading VLMs relative to normative performance in healthy adults. While the models excel in straightforward object recognition tasks, we find widespread deficits in low- and mid-level visual abilities that would be considered clinically significant in humans. These selective deficits, profiled through validated test batteries, suggest that an artificial system can achieve complex object recognition without developing foundational visual concepts that in humans require no explicit training. Tangtartharakul and Storrs use standardized neuropsychological tests to compare human visual abilities with those of visual language models (VLMs). They report that while VLMs excel in high-level object recognition, they show deficits in low- and mid-level visual abilities.
Title: "Visual language models show widespread visual deficits on neuropsychological tests". Nature Machine Intelligence 8(2), 209–219.
Pub Date: 2026-02-06 | DOI: 10.1038/s42256-026-01191-2
Xiangzheng Cheng, Suoqin Jin
Identifying cell–cell interactions from imaging-based spatial transcriptomics suffers from limited gene panels. A new self-supervised graph transformer-based method can resolve spatial single-cell-level interactions without requiring known ligand–receptor pairs.
Title: "Identifying spatial single-cell-level interactions with graph transformer". Nature Machine Intelligence 8(2), 146–147.
Pub Date: 2026-01-30 | DOI: 10.1038/s42256-026-01178-z
Raffaele Ciriello
Title: "On the troubling rise of generative AI suspicion in academic publishing". Nature Machine Intelligence 8(2), 136–137.