
Latest publications in Nature Machine Intelligence

An interpretable deep learning framework for genome-informed precision oncology
IF 18.8 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-11 · DOI: 10.1038/s42256-024-00866-y
Shuangxia Ren, Gregory F. Cooper, Lujia Chen, Xinghua Lu
Cancers result from aberrations in cellular signalling systems, typically resulting from driver somatic genome alterations (SGAs) in individual tumours. Precision oncology requires understanding the cellular state and selecting medications that induce vulnerability in cancer cells under such conditions. To this end, we developed a computational framework consisting of two components: (1) a representation-learning component, which learns a representation of the cellular signalling systems when perturbed by SGAs and uses a biologically motivated and interpretable deep learning model, and (2) a drug-response prediction component, which predicts drug responses by leveraging the information of the cellular state of the cancer cells derived by the first component. Our cell-state-oriented framework notably improves the accuracy of predictions of drug responses compared to models using SGAs directly in cell lines. Moreover, our model performs well with real patient data. Importantly, our framework enables the prediction of responses to chemotherapy agents based on SGAs, thus expanding genome-informed precision oncology beyond molecularly targeted drugs. Precision oncology requires analysis of genomic alterations in cancer cells. Ren et al. develop an interpretable artificial intelligence framework that transforms somatic genomic alterations into representations of cellular signalling systems and accurately predicts cells’ responses to anticancer drugs.
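The two-component design described above (a representation learner that maps an SGA profile to a cellular state, and a drug-response head that predicts sensitivity from that state rather than from raw SGAs) can be caricatured in a few lines. This is a hedged illustration only: the class names, dimensions and the tanh/sigmoid choices are invented here and do not reproduce the authors' interpretable architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class CellStateEncoder:
    """Maps a binary SGA profile to a low-dimensional 'cell state' vector,
    standing in for the paper's representation-learning component."""
    def __init__(self, n_genes, n_state, rng):
        self.W = rng.normal(0, 0.1, (n_genes, n_state))
    def __call__(self, sga):
        return np.tanh(sga @ self.W)  # nonlinear embedding of the perturbed signalling state

class DrugResponseHead:
    """Predicts a response score from the learned cell state, not from raw SGAs."""
    def __init__(self, n_state, rng):
        self.w = rng.normal(0, 0.1, n_state)
    def __call__(self, state):
        return 1.0 / (1.0 + np.exp(-(state @ self.w)))  # sigmoid -> sensitivity score

n_genes, n_state = 50, 8
encoder = CellStateEncoder(n_genes, n_state, rng)
head = DrugResponseHead(n_state, rng)

sga_profile = rng.integers(0, 2, n_genes).astype(float)  # 1 = gene carries an SGA
state = encoder(sga_profile)          # stage 1: infer cellular state
response = head(state)                # stage 2: predict drug response from that state
print(state.shape, float(response))
```

The point of the two-stage split is that the response head never sees the raw genome alterations, only the inferred cellular state, which is what lets the framework generalize to drugs without known molecular targets.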
Citations: 0
Shielding sensitive medical imaging data
IF 18.8 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-11 · DOI: 10.1038/s42256-024-00865-z
Gaoyang Liu, Chen Wang, Tian Xia
Differential privacy offers protection in medical image processing but is traditionally thought to hinder accuracy. A recent study offers a reality check on the relationship between privacy measures and the ability of an artificial intelligence (AI) model to accurately analyse medical images.
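For readers unfamiliar with how differential privacy injects noise in the first place, here is a minimal sketch of the textbook Gaussian mechanism; this is a standard construction, not the mechanism evaluated in the study under discussion, and the statistic, sensitivity and privacy budget below are illustrative assumptions.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` with (epsilon, delta)-differential privacy by adding
    Gaussian noise with sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    (the classic analytic calibration)."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma)

rng = np.random.default_rng(42)
pixel_mean = 0.37            # e.g. a mean intensity computed over 1,000 private scans
sensitivity = 1.0 / 1000     # one patient changes a mean over 1,000 values by at most 1/1000
private_release = gaussian_mechanism(pixel_mean, sensitivity, epsilon=1.0, delta=1e-5, rng=rng)
print(private_release)
```

The tension the article discusses is visible here: a tighter privacy budget (smaller epsilon) inflates sigma, so every released statistic, and ultimately model accuracy, pays for the added protection.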
Citations: 0
Lifelike agility and play in quadrupedal robots using reinforcement learning and generative pre-trained models
IF 18.8 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-05 · DOI: 10.1038/s42256-024-00861-3
Lei Han, Qingxu Zhu, Jiapeng Sheng, Chong Zhang, Tingguang Li, Yizheng Zhang, He Zhang, Yuzhen Liu, Cheng Zhou, Rui Zhao, Jie Li, Yufeng Zhang, Rui Wang, Wanchao Chi, Xiong Li, Yonghui Zhu, Lingzhu Xiang, Xiao Teng, Zhengyou Zhang
Knowledge from animals and humans inspires robotic innovations. Numerous efforts have been made to achieve agile locomotion in quadrupedal robots through classical controllers or reinforcement learning approaches. These methods usually rely on physical models or handcrafted rewards to accurately describe the specific system, rather than on a generalized understanding like animals do. Here we propose a hierarchical framework to construct primitive-, environmental- and strategic-level knowledge that are all pre-trainable, reusable and enrichable for legged robots. The primitive module summarizes knowledge from animal motion data, where, inspired by large pre-trained models in language and image understanding, we introduce deep generative models to produce motor control signals stimulating legged robots to act like real animals. Then, we shape various traversing capabilities at a higher level to align with the environment by reusing the primitive module. Finally, a strategic module is trained focusing on complex downstream tasks by reusing the knowledge from previous levels. We apply the trained hierarchical controllers to the MAX robot, a quadrupedal robot developed in-house, to mimic animals, traverse complex obstacles and play in a designed challenging multi-agent chase tag game, where lifelike agility and strategy emerge in the robots. A key challenge in robotics is leveraging pre-training as a form of knowledge to generate movements. The authors propose a general learning framework for reusing pre-trained knowledge across different perception and task levels. The deployed robots exhibit lifelike agility and sophisticated game-playing strategies.
Citations: 0
Molecular set representation learning
IF 18.8 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-05 · DOI: 10.1038/s42256-024-00856-0
Maria Boulougouri, Pierre Vandergheynst, Daniel Probst
Computational representation of molecules can take many forms, including graphs, string encodings of graphs, binary vectors or learned embeddings in the form of real-valued vectors. These representations are then used in downstream classification and regression tasks using a wide range of machine learning models. However, existing models come with limitations, such as the requirement for clearly defined chemical bonds, which often do not represent the true underlying nature of a molecule. Here we propose a framework for molecular machine learning tasks based on set representation learning. We show that learning on sets of atom invariants alone reaches the performance of state-of-the-art graph-based models on the most-used chemical benchmark datasets and that introducing a set representation layer into graph neural networks can surpass the performance of established methods in the domains of chemistry, biology and material science. We introduce specialized set representation-based neural network architectures for reaction-yield and protein–ligand binding-affinity prediction. Overall, we show that the technique we denote molecular set representation learning is both an alternative and an extension to graph neural network architectures for machine learning tasks on molecules, molecule complexes and chemical reactions. Machine learning methods for molecule predictions use various representations of molecules such as in the form of strings or graphs. As an extension of graph representation learning, Probst and colleagues propose to represent a molecule as a set of atoms, to better capture the underlying chemical nature, and demonstrate improved performance in a range of machine learning tasks.
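The core idea of learning on a set of atom invariants, with no bond graph at all, can be sketched with a Deep-Sets-style sum-pooled encoder. This is a generic illustration under assumed feature sizes, not the authors' set representation layer:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(x, W):
    """Per-atom feature map (here a single ReLU layer)."""
    return np.maximum(0.0, x @ W)

def set_encode(atoms, W):
    """Permutation-invariant molecule embedding: sum-pool per-atom features.
    Because summation ignores order, the 'molecule' is treated as a set of atoms."""
    return phi(atoms, W).sum(axis=0)

n_invariants, d_hidden = 6, 16
W = rng.normal(0, 0.5, (n_invariants, d_hidden))

# A toy 'molecule': each row holds one atom's invariants (e.g. degree, charge, ring flags)
mol = rng.normal(size=(5, n_invariants))
z1 = set_encode(mol, W)
z2 = set_encode(mol[::-1], W)   # same atoms, reversed order
print(np.allclose(z1, z2))      # → True: the embedding ignores atom order
```

Sum pooling is what makes the representation a set function; the paper's contribution is showing that such set-based encoders match or exceed bond-graph models, and that a set layer can also be inserted into graph neural networks.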
Citations: 0
Visual odometry with neuromorphic resonator networks
IF 18.8 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-27 · DOI: 10.1038/s42256-024-00846-2
Alpha Renner, Lazar Supic, Andreea Danielescu, Giacomo Indiveri, E. Paxon Frady, Friedrich T. Sommer, Yulia Sandamirskaya
Visual odometry (VO) is a method used to estimate self-motion of a mobile robot using visual sensors. Unlike odometry based on integrating differential measurements that can accumulate errors, such as inertial sensors or wheel encoders, VO is not compromised by drift. However, image-based VO is computationally demanding, limiting its application in use cases with low-latency, low-memory and low-energy requirements. Neuromorphic hardware offers low-power solutions to many vision and artificial intelligence problems, but designing such solutions is complicated and often has to be assembled from scratch. Here we propose the use of vector symbolic architecture (VSA) as an abstraction layer to design algorithms compatible with neuromorphic hardware. Building from a VSA model for scene analysis, described in our companion paper, we present a modular neuromorphic algorithm that achieves state-of-the-art performance on two-dimensional VO tasks. Specifically, the proposed algorithm stores and updates a working memory of the presented visual environment. Based on this working memory, a resonator network estimates the changing location and orientation of the camera. We experimentally validate the neuromorphic VSA-based approach to VO with two benchmarks: one based on an event-camera dataset and the other in a dynamic scene with a robotic task. Visual odometry, or self-motion estimation, is a fundamental task in robotics. Renner, Supic and colleagues introduce a neuromorphic algorithm for visual odometry that leverages hyperdimensional computing and hierarchical resonators. The approach estimates a robot’s motion from event-based vision, a step towards low-power machine vision for robotics.
Citations: 0
Machine learning for micro- and nanorobots
IF 18.8 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-27 · DOI: 10.1038/s42256-024-00859-x
Lidong Yang, Jialin Jiang, Fengtong Ji, Yangmin Li, Kai-Leung Yung, Antoine Ferreira, Li Zhang
Machine learning (ML) has revolutionized robotics by enhancing perception, adaptability, decision-making and more, enabling robots to work in complex scenarios beyond the capabilities of traditional approaches. However, the downsizing of robots to micro- and nanoscales introduces new challenges. For example, complexities in the actuation and locomotion of micro- and nanorobots defy traditional modelling methods, while control and navigation are complicated by strong environmental disruptions, and tracking in vivo encounters substantial noise interference. Recently, ML has also been shown to offer a promising avenue to tackle these complexities. Here we discuss how ML advances many crucial aspects of micro- and nanorobots, that is, in their design, actuation, locomotion, planning, tracking and navigation. Any application that can benefit from these fundamental advancements will be a potential beneficiary of this field, including micromanipulation, targeted delivery and therapy, bio-sensing, diagnosis and so on. This Review aims to provide an accessible and comprehensive survey for readers to quickly appreciate recent exciting accomplishments in ML for micro- and nanorobots. We also discuss potential issues and prospects of this burgeoning research direction. We hope this Review can foster interdisciplinary collaborations across robotics, computer science, material science and allied disciplines, to develop ML techniques that surmount fundamental challenges and further expand the application horizons of micro- and nanorobotics in biomedicine. Machine learning approaches in micro- and nanorobotics promise to overcome challenges encountered by applying traditional control methods at the microscopic scale. Lidong Yang et al. review this emerging area in robotics and discuss machine learning developments in design, actuation, locomotion, planning, tracking and navigation of microrobots.
Citations: 0
Will generative AI transform robotics?
IF 18.8 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-27 · DOI: 10.1038/s42256-024-00862-2
In the current wave of excitement about applying large vision–language models and generative AI to robotics, expectations are running high, but conquering real-world complexities remains challenging for robots.
Citations: 0
Direct conformational sampling from peptide energy landscapes through hypernetwork-conditioned diffusion
IF 18.8 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-27 · DOI: 10.1038/s42256-024-00860-4
Osama Abdin, Philip M. Kim
Deep learning approaches have spurred substantial advances in the single-state prediction of biomolecular structures. The function of biomolecules is, however, dependent on the range of conformations they can assume. This is especially true for peptides, a highly flexible class of molecules that are involved in numerous biological processes and are of high interest as therapeutics. Here we introduce PepFlow, a transferable generative model that enables direct all-atom sampling from the allowable conformational space of input peptides. We train the model in a diffusion framework and subsequently use an equivalent flow to perform conformational sampling. To overcome the prohibitive cost of generalized all-atom modelling, we modularize the generation process and integrate a hypernetwork to predict sequence-specific network parameters. PepFlow accurately predicts peptide structures and effectively recapitulates experimental peptide ensembles at a fraction of the running time of traditional approaches. PepFlow can also be used to sample conformations that satisfy constraints such as macrocyclization. Modelling the different structures a peptide can assume is integral to understanding their function. The authors introduce PepFlow, a sequence-conditioned deep learning model that is shown to accurately and efficiently generate peptide conformations.
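The hypernetwork trick mentioned above, one network emitting the parameters of another conditioned on the peptide sequence, can be sketched in a few lines. The dimensions and the single linear hypernetwork are assumptions for illustration, not PepFlow's actual modules:

```python
import numpy as np

rng = np.random.default_rng(7)

d_seq, d_in, d_out = 12, 4, 3   # sequence-embedding size and target-layer shape
# The hypernetwork: one linear map from a sequence embedding to all target parameters
H = rng.normal(0, 0.1, (d_seq, d_in * d_out + d_out))

def hyper_forward(seq_emb, x):
    """Turn a sequence embedding into the weights and bias of a small target
    layer, so each peptide sequence effectively gets its own parameters."""
    params = seq_emb @ H
    W = params[: d_in * d_out].reshape(d_in, d_out)
    b = params[d_in * d_out :]
    return x @ W + b            # target layer applied with sequence-specific parameters

x = rng.normal(size=d_in)       # e.g. features of one atom at one diffusion step
emb_a = rng.normal(size=d_seq)  # embedding of peptide sequence A
emb_b = rng.normal(size=d_seq)  # embedding of peptide sequence B
ya, yb = hyper_forward(emb_a, x), hyper_forward(emb_b, x)
print(ya.shape, np.allclose(ya, yb))  # same input, different per-sequence weights
```

Conditioning the network weights themselves, rather than concatenating the sequence to every input, is what keeps generalized all-atom modelling tractable across arbitrary peptide sequences.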
Citations: 0
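The PepFlow abstract above describes a hypernetwork that predicts sequence-specific parameters for the generative network. As an illustration of that conditioning pattern only (not the authors' implementation — all sizes, names and the linear hypernetwork here are hypothetical), a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a peptide-sequence embedding conditions a small
# "main" network that maps per-atom features to a 3-D output.
EMB, HID, FEAT = 32, 16, 8

def n_main_params():
    # main net: FEAT -> HID -> 3 (weights + biases, flattened)
    return FEAT * HID + HID + HID * 3 + 3

# The hypernetwork itself: one linear layer from the sequence embedding
# to the flattened parameter vector of the main network.
W_hyper = rng.normal(0, 0.1, (n_main_params(), EMB))

def main_net_forward(theta, x):
    """Run the main network with parameters predicted for this sequence."""
    i = 0
    W1 = theta[i:i + FEAT * HID].reshape(FEAT, HID); i += FEAT * HID
    b1 = theta[i:i + HID]; i += HID
    W2 = theta[i:i + HID * 3].reshape(HID, 3); i += HID * 3
    b2 = theta[i:i + 3]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

seq_embedding = rng.normal(size=EMB)       # stand-in for a learned embedding
theta = W_hyper @ seq_embedding            # hypernetwork predicts parameters
atom_feats = rng.normal(size=(10, FEAT))   # 10 atoms, FEAT features each
out = main_net_forward(theta, atom_feats)  # per-atom 3-D outputs
print(out.shape)  # (10, 3)
```

The design point the abstract makes is that the expensive generalized network is replaced by a small network whose weights are emitted per sequence, so sampling cost no longer scales with a one-size-fits-all model.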
Neuromorphic visual scene understanding with resonator networks
IF 18.8 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-27 DOI: 10.1038/s42256-024-00848-0
Alpha Renner, Lazar Supic, Andreea Danielescu, Giacomo Indiveri, Bruno A. Olshausen, Yulia Sandamirskaya, Friedrich T. Sommer, E. Paxon Frady
Analysing a visual scene by inferring the configuration of a generative model is widely considered the most flexible and generalizable approach to scene understanding. Yet, one major problem is the computational challenge of the inference procedure, involving a combinatorial search across object identities and poses. Here we propose a neuromorphic solution exploiting three key concepts: (1) a computational framework based on vector symbolic architectures (VSAs) with complex-valued vectors, (2) the design of hierarchical resonator networks to factorize the non-commutative transforms translation and rotation in visual scenes and (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued resonator networks on neuromorphic hardware. The VSA framework uses vector binding operations to form a generative image model in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which can then be efficiently factorized by a resonator network to infer objects and their poses. The hierarchical resonator network features a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition and for rotation and scaling within the other partition. The spiking neuron model allows mapping the resonator network onto efficient and low-power neuromorphic hardware. Our approach is demonstrated on synthetic scenes composed of simple two-dimensional shapes undergoing rigid geometric transformations and colour changes. A companion paper demonstrates the same approach in real-world application scenarios for machine vision and robotics. The inference procedure for analysing a visual scene presents a computational challenge. Renner, Supic and colleagues develop a neural network model, the hierarchical resonator, to determine the generative factors of variation of objects in simple scenes. The resonator was implemented on neuromorphic hardware, using a spike-timing code for complex numbers.
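The abstract describes scenes encoded as sums of vector products that a resonator network factorizes. A toy NumPy sketch of that core idea — complex-valued phasor codevectors, binding as elementwise multiplication, and the standard resonator iteration (unbind with the current estimate of the other factor, clean up against the codebook, renormalize phases). Codebook sizes and dimensions here are illustrative, not the paper's, and the hierarchical/spiking aspects are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2048, 12  # vector dimension, entries per codebook

def phasors(n, d):
    # random unit-magnitude complex vectors (FHRR-style codes)
    return np.exp(1j * rng.uniform(-np.pi, np.pi, (n, d)))

X, Y = phasors(n, d), phasors(n, d)  # two factor codebooks
ix, iy = 3, 7
s = X[ix] * Y[iy]                    # binding = elementwise complex product

def normalize(v):
    return v / np.abs(v)

# Resonator iteration: each factor estimate is refined by unbinding the
# scene vector with the other estimate, projecting onto its codebook
# (similarity-weighted superposition), and renormalizing to unit phasors.
x_hat = normalize(X.sum(0))
y_hat = normalize(Y.sum(0))
for _ in range(50):
    x_hat = normalize((X.conj() @ (s * np.conj(y_hat))) @ X)
    y_hat = normalize((Y.conj() @ (s * np.conj(x_hat))) @ Y)

kx = int(np.argmax(np.abs(X.conj() @ x_hat)))
ky = int(np.argmax(np.abs(Y.conj() @ y_hat)))
print(kx, ky)  # expected to recover the bound pair (3, 7)
```

Because binding is invertible (multiply by the complex conjugate), the combinatorial search over factor pairs is replaced by this iterative fixed-point search, which is what makes the inference tractable.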
{"title":"Neuromorphic visual scene understanding with resonator networks","authors":"Alpha Renner, Lazar Supic, Andreea Danielescu, Giacomo Indiveri, Bruno A. Olshausen, Yulia Sandamirskaya, Friedrich T. Sommer, E. Paxon Frady","doi":"10.1038/s42256-024-00848-0","DOIUrl":"10.1038/s42256-024-00848-0","url":null,"abstract":"Analysing a visual scene by inferring the configuration of a generative model is widely considered the most flexible and generalizable approach to scene understanding. Yet, one major problem is the computational challenge of the inference procedure, involving a combinatorial search across object identities and poses. Here we propose a neuromorphic solution exploiting three key concepts: (1) a computational framework based on vector symbolic architectures (VSAs) with complex-valued vectors, (2) the design of hierarchical resonator networks to factorize the non-commutative transforms translation and rotation in visual scenes and (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued resonator networks on neuromorphic hardware. The VSA framework uses vector binding operations to form a generative image model in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which can then be efficiently factorized by a resonator network to infer objects and their poses. The hierarchical resonator network features a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition and for rotation and scaling within the other partition. The spiking neuron model allows mapping the resonator network onto efficient and low-power neuromorphic hardware. Our approach is demonstrated on synthetic scenes composed of simple two-dimensional shapes undergoing rigid geometric transformations and colour changes. 
A companion paper demonstrates the same approach in real-world application scenarios for machine vision and robotics. The inference procedure for analysing a visual scene presents a computational challenge. Renner, Supic and colleagues develop a neural network model, the hierarchical resonator, to determine the generative factors of variation of objects in simple scenes. The resonator was implemented on neuromorphic hardware, using a spike-timing code for complex numbers.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":null,"pages":null},"PeriodicalIF":18.8,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141462588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Coordinate-based neural representations for computational adaptive optics in widefield microscopy
IF 18.8 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-06-24 DOI: 10.1038/s42256-024-00853-3
Iksung Kang, Qinrong Zhang, Stella X. Yu, Na Ji
Widefield microscopy is widely used for non-invasive imaging of biological structures at subcellular resolution. When applied to a complex specimen, its image quality is degraded by sample-induced optical aberration. Adaptive optics can correct wavefront distortion and restore diffraction-limited resolution but require wavefront sensing and corrective devices, increasing system complexity and cost. Here we describe a self-supervised machine learning algorithm, CoCoA, that performs joint wavefront estimation and three-dimensional structural information extraction from a single-input three-dimensional image stack without the need for external training datasets. We implemented CoCoA for widefield imaging of mouse brain tissues and validated its performance with direct-wavefront-sensing-based adaptive optics. Importantly, we systematically explored and quantitatively characterized the limiting factors of CoCoA’s performance. Using CoCoA, we demonstrated in vivo widefield mouse brain imaging using machine learning-based adaptive optics. Incorporating coordinate-based neural representations and a forward physics model, the self-supervised scheme of CoCoA should be applicable to microscopy modalities in general. Adaptive optics (AO) corrects aberrations and restores resolution but requires specialized hardware. Kang et al. introduce a self-supervised AO method (CoCoA) for widefield microscopy, achieving in vivo mouse brain imaging without wavefront sensors.
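CoCoA builds on a coordinate-based neural representation: a network mapping spatial coordinates to sample structure, optimized jointly with a wavefront estimate through a forward physics model. A minimal sketch of the coordinate-network ingredient alone — Fourier-feature encoding plus a small MLP. All sizes and names are illustrative assumptions; the actual CoCoA architecture, forward model and self-supervised training loop are described in the paper and omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fourier-feature positional encoding: lifts low-dimensional coordinates
# into a high-dimensional basis so an MLP can fit high-frequency structure.
B = rng.normal(0, 4.0, (3, 64))  # 3-D coordinates -> 64 random frequencies

def encode(coords):
    proj = 2 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# Tiny MLP mapping encoded coordinates to a scalar intensity.
W1 = rng.normal(0, 0.1, (128, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, 1));   b2 = np.zeros(1)

def intensity(coords):
    h = np.maximum(encode(coords) @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

# Query the continuous representation on a 4x4x4 grid of 3-D points;
# in a CoCoA-like scheme these predictions would be blurred by an
# aberrated point-spread function and compared to the recorded stack.
g = np.stack(np.meshgrid(*[np.linspace(0, 1, 4)] * 3, indexing="ij"), -1)
vals = intensity(g.reshape(-1, 3))
print(vals.shape)  # (64, 1)
```

The representation is queried at arbitrary coordinates rather than stored as a voxel grid, which is what lets the structure estimate and the wavefront estimate be optimized jointly against a single input stack.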
{"title":"Coordinate-based neural representations for computational adaptive optics in widefield microscopy","authors":"Iksung Kang, Qinrong Zhang, Stella X. Yu, Na Ji","doi":"10.1038/s42256-024-00853-3","DOIUrl":"10.1038/s42256-024-00853-3","url":null,"abstract":"Widefield microscopy is widely used for non-invasive imaging of biological structures at subcellular resolution. When applied to a complex specimen, its image quality is degraded by sample-induced optical aberration. Adaptive optics can correct wavefront distortion and restore diffraction-limited resolution but require wavefront sensing and corrective devices, increasing system complexity and cost. Here we describe a self-supervised machine learning algorithm, CoCoA, that performs joint wavefront estimation and three-dimensional structural information extraction from a single-input three-dimensional image stack without the need for external training datasets. We implemented CoCoA for widefield imaging of mouse brain tissues and validated its performance with direct-wavefront-sensing-based adaptive optics. Importantly, we systematically explored and quantitatively characterized the limiting factors of CoCoA’s performance. Using CoCoA, we demonstrated in vivo widefield mouse brain imaging using machine learning-based adaptive optics. Incorporating coordinate-based neural representations and a forward physics model, the self-supervised scheme of CoCoA should be applicable to microscopy modalities in general. Adaptive optics (AO) corrects aberrations and restores resolution but requires specialized hardware. Kang et al. 
introduce a self-supervised AO method (CoCoA) for widefield microscopy, achieving in vivo mouse brain imaging without wavefront sensors.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":null,"pages":null},"PeriodicalIF":18.8,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141448383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0