
Latest publications in Nature Machine Intelligence

Self-decoupling three-axis forces in a simple sensor
IF 18.8 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-27 · DOI: 10.1038/s42256-024-00941-4
Kuanming Yao, Qiuna Zhuang
A self-decoupling tactile sensor dramatically reduces calibration time for three-dimensional force measurement, scaling from cubic (N³) to linear (3N). This advancement facilitates robotic tactile perception in human–machine interfaces.
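The scaling claim above can be made concrete with a small counting sketch (an illustration of the stated N³-to-3N reduction, not the authors' code): a coupled three-axis sensor must be calibrated over every combination of N force levels per axis, whereas a self-decoupled sensor only needs three independent single-axis sweeps.

```python
# Calibration-cost scaling for a three-axis force sensor.

def coupled_calibration_points(n_levels: int) -> int:
    """Full factorial sweep over all (Fx, Fy, Fz) combinations: N**3 points."""
    return n_levels ** 3

def decoupled_calibration_points(n_levels: int) -> int:
    """Three independent single-axis sweeps: 3*N points."""
    return 3 * n_levels

for n in (5, 10, 20):
    print(n, coupled_calibration_points(n), decoupled_calibration_points(n))
```

For N = 10 force levels per axis, the coupled sensor needs 1,000 calibration measurements while the self-decoupled one needs 30.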
Nature Machine Intelligence, 6(12), 1431–1432
Citations: 0
Multimodal language and graph learning of adsorption configuration in catalysis
IF 18.8 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-27 · DOI: 10.1038/s42256-024-00930-7
Janghoon Ock, Srivathsan Badrinarayanan, Rishikesh Magar, Akshay Antony, Amir Barati Farimani
Adsorption energy is a reactivity descriptor that must be accurately predicted for effective machine learning application in catalyst screening. This process involves finding the lowest energy among different adsorption configurations on a catalytic surface, which often have very similar energies. Although graph neural networks have shown great success in computing the energy of catalyst systems, they rely heavily on atomic spatial coordinates. By contrast, transformer-based language models can directly use human-readable text inputs, potentially bypassing the need for detailed atomic positions or topology; however, these language models often struggle with accurately predicting the energy of adsorption configurations. Our study improves the predictive language model by aligning its latent space with well-established graph neural networks through a self-supervised process called graph-assisted pretraining. This method reduces the mean absolute error of energy prediction for adsorption configurations by 7.4–9.8%, redirecting the model’s attention towards adsorption configuration. Building on this, we propose using generative large language models to create text inputs for the predictive model without relying on exact atomic positions. This demonstrates a potential use case of language models in energy prediction without detailed geometric information. Ock and colleagues explore predictive and generative language models for improving adsorption energy prediction in catalysis without relying on exact atomic positions. The method involves aligning a language model’s latent space with graph neural networks using graph-assisted pretraining.
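The core idea of aligning a language model's latent space with a graph neural network's can be sketched in miniature (synthetic data and a simple least-squares linear map, not the paper's self-supervised graph-assisted pretraining objective): fit a projection from text-model embeddings to frozen GNN embeddings and check that the mapped latents land close to the graph latents.

```python
import numpy as np

# Toy latent-space alignment: map hypothetical LLM embeddings onto
# hypothetical GNN embeddings with a least-squares linear projection.
rng = np.random.default_rng(0)
n_configs, d_text, d_graph = 200, 32, 16

graph_emb = rng.normal(size=(n_configs, d_graph))        # frozen "GNN" latents
mixing = rng.normal(size=(d_text, d_graph))              # hidden relation between spaces
text_emb = graph_emb @ np.linalg.pinv(mixing) + 0.1 * rng.normal(size=(n_configs, d_text))

# Alignment map W minimising ||text_emb @ W - graph_emb||^2.
W, *_ = np.linalg.lstsq(text_emb, graph_emb, rcond=None)
aligned = text_emb @ W

err_aligned = np.mean((aligned - graph_emb) ** 2)
print(err_aligned < np.var(graph_emb))                   # aligned latents track the GNN space
```

In the paper the alignment is learned by pretraining the language model itself; the linear fit here only illustrates what "aligning latent spaces" means operationally.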
Nature Machine Intelligence, 6(12), 1501–1511
Citations: 0
Contextual feature extraction hierarchies converge in large language models and the brain
IF 18.8 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-26 · DOI: 10.1038/s42256-024-00925-4
Gavin Mischler, Yinghao Aaron Li, Stephan Bickel, Ashesh D. Mehta, Nima Mesgarani
Recent advancements in artificial intelligence have sparked interest in the parallels between large language models (LLMs) and human neural processing, particularly in language comprehension. Although previous research has demonstrated similarities between LLM representations and neural responses, the computational principles driving this convergence—especially as LLMs evolve—remain elusive. Here we used intracranial electroencephalography recordings from neurosurgical patients listening to speech to investigate the alignment between high-performance LLMs and the language-processing mechanisms of the brain. We examined a diverse selection of LLMs with similar parameter sizes and found that as their performance on benchmark tasks improves, they not only become more brain-like, reflected in better neural response predictions from model embeddings, but they also align more closely with the hierarchical feature extraction pathways of the brain, using fewer layers for the same encoding. Additionally, we identified commonalities in the hierarchical processing mechanisms of high-performing LLMs, revealing their convergence towards similar language-processing strategies. Finally, we demonstrate the critical role of contextual information in both LLM performance and brain alignment. These findings reveal converging aspects of language processing in the brain and LLMs, offering new directions for developing models that better align with human cognitive processing. Why brain-like feature extraction emerges in large language models (LLMs) remains elusive. Mischler, Li and colleagues demonstrate that high-performing LLMs not only predict neural responses more accurately than other LLMs but also align more closely with the hierarchical language processing pathway in the brain, revealing parallels between these models and human cognitive mechanisms.
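The "neural response predictions from model embeddings" mentioned above follow a standard encoding-model analysis, sketched here on synthetic data (not the study's intracranial recordings): ridge-regress electrode responses onto one layer's embeddings and score the fit by per-electrode correlation, the quantity used to judge how brain-like a layer is.

```python
import numpy as np

# Synthetic encoding-model sketch: predict "neural" responses from
# a hypothetical LLM layer's embeddings with ridge regression.
rng = np.random.default_rng(1)
n_samples, d_emb, n_electrodes = 500, 48, 8

embeddings = rng.normal(size=(n_samples, d_emb))          # one layer's features per word/frame
true_weights = rng.normal(size=(d_emb, n_electrodes))
neural = embeddings @ true_weights + 0.5 * rng.normal(size=(n_samples, n_electrodes))

lam = 10.0                                                # ridge penalty
W = np.linalg.solve(embeddings.T @ embeddings + lam * np.eye(d_emb),
                    embeddings.T @ neural)
pred = embeddings @ W

# Per-electrode Pearson correlation between predicted and observed responses.
r = [np.corrcoef(pred[:, e], neural[:, e])[0, 1] for e in range(n_electrodes)]
print(np.round(np.mean(r), 3))
```

Running this analysis layer by layer, for each model, is what lets the authors compare which layers best predict which stages of the brain's processing hierarchy.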
Nature Machine Intelligence, 6(12), 1467–1477
Citations: 0
Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research
IF 18.8 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-26 · DOI: 10.1038/s42256-024-00926-3
Artem A. Trotsyuk, Quinn Waeiss, Raina Talwar Bhatia, Brandon J. Aponte, Isabella M. L. Heffernan, Devika Madgavkar, Ryan Marshall Felder, Lisa Soleymani Lehmann, Megan J. Palmer, Hank Greely, Russell Wald, Lea Goetz, Markus Trengove, Robert Vandersluis, Herbert Lin, Mildred K. Cho, Russ B. Altman, Drew Endy, David A. Relman, Margaret Levi, Debra Satz, David Magnus
The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increase in inequity and abuse of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures researchers can adapt to their own work, next to off-the-shelf AI solutions, then to design-specific solutions researchers can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse, and the risks outweigh potential benefits, we recommend researchers consider a different approach to answering their research question, or a new research question if the risks remain too great. We apply this framework to three different domains of AI research where misuse is likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; (3) ambient intelligence. The wide adoption of AI in biomedical research raises concerns about misuse risks. Trotsyuk, Waeiss et al. propose a framework that provides a starting point for researchers to consider how risks specific to their work could be mitigated, using existing ethical frameworks, regulatory measures and off-the-shelf AI solutions.
Nature Machine Intelligence, 6(12), 1435–1442
Citations: 0
AI pioneers win 2024 Nobel prizes
IF 18.8 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-22 · DOI: 10.1038/s42256-024-00945-0
The 2024 Nobel prizes in physics and chemistry highlight the interdisciplinary nature and impact of AI in science.
Nature Machine Intelligence, 6(11), 1271 (open access: https://www.nature.com/articles/s42256-024-00945-0.pdf)
Citations: 0
Machine learning for practical quantum error mitigation
IF 18.8 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-22 · DOI: 10.1038/s42256-024-00927-2
Haoran Liao, Derek S. Wang, Iskandar Sitdikov, Ciro Salcedo, Alireza Seif, Zlatko K. Minev
Quantum computers have progressed towards outperforming classical supercomputers, but quantum errors remain the primary obstacle. In the past few years, the field of quantum error mitigation has provided strategies for overcoming errors in near-term devices, enabling improved accuracy at the cost of additional run time. Through experiments on state-of-the-art quantum computers using up to 100 qubits, we demonstrate that without sacrificing accuracy, machine learning for quantum error mitigation (ML-QEM) drastically reduces the cost of mitigation. We benchmarked ML-QEM using a variety of machine learning models—linear regression, random forest, multilayer perceptron and graph neural networks—on diverse classes of quantum circuits, over increasingly complex device noise profiles, under interpolation and extrapolation, and in both numerics and experiments. These tests employed the popular digital zero-noise extrapolation method as an added reference. Finally, we propose a path towards scalable mitigation using ML-QEM to mimic traditional mitigation methods with superior runtime efficiency. Our results show that classical machine learning can extend the reach and practicality of quantum error mitigation by reducing its overhead and highlight its broader potential for practical quantum computations. Quantum error mitigation improves the accuracy of quantum computers at a computational overhead. Liao et al. demonstrate that classical machine learning models can deliver accuracy comparable to that of conventional techniques while reducing quantum computational costs.
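The digital zero-noise extrapolation (ZNE) baseline mentioned above can be sketched in a few lines (synthetic numbers and an assumed exponential noise model, not the paper's hardware experiments): measure an observable at artificially amplified noise levels, then extrapolate a fit back to the zero-noise limit.

```python
import numpy as np

# Zero-noise extrapolation sketch on made-up data.
ideal = 0.85                                   # assumed noiseless expectation value
noise_factors = np.array([1.0, 2.0, 3.0])      # digital noise amplification levels
# Toy noise model: signal decays exponentially with the noise factor.
noisy_values = ideal * np.exp(-0.15 * noise_factors)

# Linear extrapolation in the noise factor down to zero.
slope, intercept = np.polyfit(noise_factors, noisy_values, deg=1)
zne_estimate = intercept

print(round(float(zne_estimate), 3))
```

ML-QEM replaces the repeated noise-amplified circuit executions this requires with a learned model, which is where the runtime savings reported above come from.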
Nature Machine Intelligence, 6(12), 1478–1486
Citations: 0
A soft skin with self-decoupled three-axis force-sensing taxels
IF 18.8 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-19 · DOI: 10.1038/s42256-024-00904-9
Youcan Yan, Ahmed Zermane, Jia Pan, Abderrahmane Kheddar
Electronic skins integrating both normal and shear force per taxel have a wide range of applications across diverse fields, including robotics, haptics and health monitoring. Current multi-axis tactile sensors often present complexities in structure and fabrication or require an extensive calibration process, limiting their widespread applications. Here we report an electronic soft magnetic skin capable of self-decoupling three-axis forces at each taxel. We use a simple sensor structure with customizable sensitivity and measurement range, reducing the calibration complexity from known quadratic (N2) or cubic (N3) scales down to a linear (3N) scale. The three-axis self-decoupling property of the sensor is achieved by overlaying two sinusoidally magnetized flexible magnetic films with orthogonal magnetization patterns. Leveraging the self-decoupling feature and its simple structure, we demonstrate that our sensor can facilitate a diverse range of applications, such as measuring the three-dimensional force distribution in artificial knee joints, teaching robots by touch demonstration and monitoring the interaction forces between knee braces and human skin during various activities. Electronic skin with decoupled force feedback is essential in robotics. Yan et al. develop a soft magnetic skin capable of self-decoupling three-axis forces per taxel, reducing calibration complexity from quadratic or cubic scales to a linear scale.
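Why self-decoupling makes calibration linear in N can be shown with a toy fit (synthetic linear channels with made-up gains, not the authors' magnetic sensor model): when each measured channel responds to exactly one force axis, three independent 1-D sweeps of N points each suffice, instead of an N³ grid over (Fx, Fy, Fz).

```python
import numpy as np

# Per-axis calibration of a hypothetical self-decoupled three-axis taxel.
rng = np.random.default_rng(2)
N = 15
forces = np.linspace(0.0, 5.0, N)              # one single-axis force sweep

gains = (1.8, 2.5, 0.9)                        # hypothetical per-axis sensitivities
calibrations = []
for g in gains:
    signal = g * forces + 0.01 * rng.normal(size=N)   # decoupled channel readout
    # One 1-D calibration curve per axis, fitted from its own N-point sweep.
    k, b = np.polyfit(forces, signal, deg=1)
    calibrations.append((k, b))

total_points = 3 * N                            # vs N**3 = 3375 for a coupled sensor
print(total_points, [round(k, 2) for k, _ in calibrations])
```

A coupled sensor would instead need a joint model over all three axes, hence the full N³ calibration grid the paper avoids.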
Nature Machine Intelligence, 6(11), 1284–1295
Citations: 0
Reshaping the discovery of self-assembling peptides with generative AI guided by hybrid deep learning
IF 18.8 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-11-19 · DOI: 10.1038/s42256-024-00928-1
Marko Njirjak, Lucija Žužić, Marko Babić, Patrizia Janković, Erik Otović, Daniela Kalafatovic, Goran Mauša
Supramolecular peptide-based materials have great potential for revolutionizing fields like nanotechnology and medicine. However, deciphering the intricate sequence-to-assembly pathway, essential for their real-life applications, remains a challenging endeavour. Their discovery relies primarily on empirical approaches that require substantial financial resources, impeding their disruptive potential. Consequently, despite the multitude of characterized self-assembling peptides and their demonstrated advantages, only a few peptide materials have found their way to the market. Machine learning trained on experimentally verified data presents a promising tool for quickly identifying sequences with a high propensity to self-assemble, thereby focusing resource expenditures on the most promising candidates. Here we introduce a framework that implements an accurate classifier in a metaheuristic-based generative model to navigate the search through the peptide sequence space of challenging size. For this purpose, we trained five recurrent neural networks among which the hybrid model that uses sequential information on aggregation propensity and specific physicochemical properties achieved a superior performance with 81.9% accuracy and 0.865 F1 score. Molecular dynamics simulations and experimental validation have confirmed the generative model to be 80–95% accurate in the discovery of self-assembling peptides, outperforming the current state-of-the-art models. The proposed modular framework efficiently complements human intuition in the exploration of self-assembling peptides and presents an important step in the development of intelligent laboratories for accelerated material discovery. A generative model guided by a machine-learning-based classifier capable of assessing unexplored regions of the peptide space in the search for new self-assembling sequences.
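The classifier-in-a-metaheuristic design can be illustrated with a toy genetic algorithm (the scoring function below is a made-up stand-in, not the paper's trained recurrent networks): the "classifier" scores candidate sequences, and the search keeps and mutates the highest-scoring ones to explore sequence space.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def score(seq: str) -> float:
    """Stand-in for an aggregation-propensity classifier: rewards
    hydrophobic/aromatic residues, which tend to favour self-assembly."""
    favoured = set("FWYILV")
    return sum(aa in favoured for aa in seq) / len(seq)

def mutate(seq: str, rng: random.Random) -> str:
    """Point mutation at a random position."""
    i = rng.randrange(len(seq))
    return seq[:i] + rng.choice(AMINO_ACIDS) + seq[i + 1:]

def evolve(n_gen=50, pop_size=30, length=8, seed=0):
    rng = random.Random(seed)
    pop = ["".join(rng.choice(AMINO_ACIDS) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=score, reverse=True)
        elite = pop[: pop_size // 2]                 # classifier-guided selection
        pop = elite + [mutate(rng.choice(elite), rng) for _ in elite]
    return max(pop, key=score)

best = evolve()
print(best, round(score(best), 2))
```

In the actual framework the fitness signal comes from the trained hybrid recurrent classifier, so the search concentrates wet-lab effort on sequences most likely to self-assemble.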
超分子肽基材料在纳米技术和医学等领域具有巨大的变革潜力。然而,破译其实际应用所必需的复杂序列到组装途径仍然是一项具有挑战性的工作。它们的发现主要依靠经验方法,需要大量的财政资源,这阻碍了它们的颠覆性潜力。因此,尽管自组装肽的特征繁多且优势明显,但只有少数肽材料进入了市场。根据实验验证数据训练的机器学习是一种很有前途的工具,可用于快速识别具有高度自组装倾向的序列,从而将资源支出集中在最有前途的候选产品上。在这里,我们介绍了一个框架,该框架在基于元启发式的生成模型中实施了精确的分类器,以引导在具有挑战性大小的肽序列空间中进行搜索。为此,我们训练了五个递归神经网络,其中使用聚集倾向和特定理化性质序列信息的混合模型取得了卓越的性能,准确率达到 81.9%,F1 分数为 0.865。分子动力学模拟和实验验证证实,该生成模型在发现自组装肽方面的准确率为 80-95%,优于目前最先进的模型。在探索自组装肽的过程中,所提出的模块化框架有效地补充了人类的直觉,为开发加速材料发现的智能实验室迈出了重要一步。
{"title":"Reshaping the discovery of self-assembling peptides with generative AI guided by hybrid deep learning","authors":"Marko Njirjak, Lucija Žužić, Marko Babić, Patrizia Janković, Erik Otović, Daniela Kalafatovic, Goran Mauša","doi":"10.1038/s42256-024-00928-1","DOIUrl":"10.1038/s42256-024-00928-1","url":null,"abstract":"Supramolecular peptide-based materials have great potential for revolutionizing fields like nanotechnology and medicine. However, deciphering the intricate sequence-to-assembly pathway, essential for their real-life applications, remains a challenging endeavour. Their discovery relies primarily on empirical approaches that require substantial financial resources, impeding their disruptive potential. Consequently, despite the multitude of characterized self-assembling peptides and their demonstrated advantages, only a few peptide materials have found their way to the market. Machine learning trained on experimentally verified data presents a promising tool for quickly identifying sequences with a high propensity to self-assemble, thereby focusing resource expenditures on the most promising candidates. Here we introduce a framework that implements an accurate classifier in a metaheuristic-based generative model to navigate the search through the peptide sequence space of challenging size. For this purpose, we trained five recurrent neural networks among which the hybrid model that uses sequential information on aggregation propensity and specific physicochemical properties achieved a superior performance with 81.9% accuracy and 0.865 F1 score. Molecular dynamics simulations and experimental validation have confirmed the generative model to be 80–95% accurate in the discovery of self-assembling peptides, outperforming the current state-of-the-art models. 
The proposed modular framework efficiently complements human intuition in the exploration of self-assembling peptides and presents an important step in the development of intelligent laboratories for accelerated material discovery. A generative model guided by a machine-learning-based classifier capable of assessing unexplored regions of the peptide space in the search for new self-assembling sequences.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1487-1500"},"PeriodicalIF":18.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
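The entry above describes a metaheuristic generative search steered by a trained classifier. As a minimal illustration only — not the authors' pipeline — the loop below sketches the idea in pure-stdlib Python: candidate peptide sequences are mutated and selected by a scoring function. The function and parameter names (`toy_score`, `mutate`, `search`) are invented for this sketch, and `toy_score` is a crude hydrophobicity heuristic standing in for the trained recurrent-network classifier.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"


def toy_score(seq):
    # Stand-in for the trained classifier: fraction of hydrophobic/aromatic
    # residues, a deliberately crude proxy for aggregation propensity.
    favoured = set("FWYLIV")
    return sum(r in favoured for r in seq) / len(seq)


def mutate(seq, rate=0.2):
    # Point-mutate each position independently with the given probability.
    return "".join(
        random.choice(AMINO_ACIDS) if random.random() < rate else r for r in seq
    )


def search(length=8, pop_size=30, generations=40, seed=0):
    # Elitist evolutionary search: keep the top third, refill with mutants.
    random.seed(seed)
    pop = ["".join(random.choice(AMINO_ACIDS) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=toy_score, reverse=True)
        elite = pop[: pop_size // 3]
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    best = max(pop, key=toy_score)
    return best, toy_score(best)


best, score = search()
print(best, round(score, 2))
```

Because the elite survive unchanged each generation, the best score is non-decreasing; in the real framework the scoring step would be the hybrid deep-learning classifier rather than a hand-written heuristic.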
Efficient rare event sampling with unsupervised normalizing flows
IF 18.8 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-19 DOI: 10.1038/s42256-024-00918-3
Solomon Asghar, Qing-Xiang Pei, Giorgio Volpe, Ran Ni
From physics and biology to seismology and economics, the behaviour of countless systems is determined by impactful yet unlikely transitions between metastable states known as rare events, the study of which is essential for understanding and controlling the properties of these systems. Classical computational methods to sample rare events remain prohibitively inefficient and are bottlenecks for enhanced samplers that require prior data. Here we introduce a physics-informed machine learning framework, normalizing Flow enhanced Rare Event Sampler (FlowRES), which uses unsupervised normalizing flow neural networks to enhance Monte Carlo sampling of rare events by generating high-quality non-local Monte Carlo proposals. We validated FlowRES by sampling the transition path ensembles of equilibrium and non-equilibrium systems of Brownian particles, exploring increasingly complex potentials. Beyond eliminating the requirements for prior data, FlowRES features key advantages over established samplers: no collective variables need to be defined, efficiency remains constant even as events become increasingly rare and systems with multiple routes between states can be straightforwardly simulated. Sampling rare events is key to various fields of science, but current methods are inefficient. Asghar and colleagues propose a rare event sampler based on normalizing flow neural networks that requires no prior data or collective variables, works at and out of equilibrium and keeps efficiency constant as events become rarer.
{"title":"Efficient rare event sampling with unsupervised normalizing flows","authors":"Solomon Asghar, Qing-Xiang Pei, Giorgio Volpe, Ran Ni","doi":"10.1038/s42256-024-00918-3","DOIUrl":"10.1038/s42256-024-00918-3","url":null,"abstract":"From physics and biology to seismology and economics, the behaviour of countless systems is determined by impactful yet unlikely transitions between metastable states known as rare events, the study of which is essential for understanding and controlling the properties of these systems. Classical computational methods to sample rare events remain prohibitively inefficient and are bottlenecks for enhanced samplers that require prior data. Here we introduce a physics-informed machine learning framework, normalizing Flow enhanced Rare Event Sampler (FlowRES), which uses unsupervised normalizing flow neural networks to enhance Monte Carlo sampling of rare events by generating high-quality non-local Monte Carlo proposals. We validated FlowRES by sampling the transition path ensembles of equilibrium and non-equilibrium systems of Brownian particles, exploring increasingly complex potentials. Beyond eliminating the requirements for prior data, FlowRES features key advantages over established samplers: no collective variables need to be defined, efficiency remains constant even as events become increasingly rare and systems with multiple routes between states can be straightforwardly simulated. Sampling rare events is key to various fields of science, but current methods are inefficient. 
Asghar and colleagues propose a rare event sampler based on normalizing flow neural networks that requires no prior data or collective variables, works at and out of equilibrium and keeps efficiency constant as events become rarer.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 11","pages":"1370-1381"},"PeriodicalIF":18.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s42256-024-00918-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
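As the abstract above explains, FlowRES replaces local random-walk moves with non-local proposals drawn from a learned flow, corrected by a Metropolis-style acceptance rule. The stdlib sketch below illustrates only that acceptance rule on a toy one-dimensional double-well system; it is not the authors' implementation, and the broad fixed Gaussian (plus the names `target_logp`, `proposal_logp`, `flow_style_mh`) is an invented stand-in for the trained flow density.

```python
import math
import random


def target_logp(x):
    # Toy double well with metastable states near x = ±2:
    # hopping between wells is the "rare event".
    return -((x * x - 4.0) ** 2) / 4.0


def proposal_logp(x, mu=0.0, sigma=3.0):
    # Log-density of the broad Gaussian standing in for the trained flow.
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))


def flow_style_mh(n_steps=20000, seed=1):
    # Independence Metropolis-Hastings: state-independent (non-local)
    # proposals let the chain jump directly between metastable wells.
    random.seed(seed)
    x = 2.0
    crossings = 0
    for _ in range(n_steps):
        y = random.gauss(0.0, 3.0)
        log_alpha = (target_logp(y) - target_logp(x)
                     + proposal_logp(x) - proposal_logp(y))
        if random.random() < math.exp(min(0.0, log_alpha)):
            if x * y < 0:  # accepted move landed in the opposite well
                crossings += 1
            x = y
    return crossings


print(flow_style_mh())
```

A local random-walk sampler with a small step size would cross the barrier only rarely; because the proposal here covers both wells, accepted jumps between metastable states stay frequent regardless of barrier height, which is the intuition behind flow-generated non-local proposals.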
Clinical large language models with misplaced focus
IF 18.8 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-18 DOI: 10.1038/s42256-024-00929-0
Zining Luo, Haowei Ma, Zhiwu Li, Yuquan Chen, Yixin Sun, Aimin Hu, Jiang Yu, Yang Qiao, Junxian Gu, Hongying Li, Xuxi Peng, Dunrui Wang, Ying Liu, Zhenglong Liu, Jiebin Xie, Zhen Jiang, Gang Tian
{"title":"Clinical large language models with misplaced focus","authors":"Zining Luo, Haowei Ma, Zhiwu Li, Yuquan Chen, Yixin Sun, Aimin Hu, Jiang Yu, Yang Qiao, Junxian Gu, Hongying Li, Xuxi Peng, Dunrui Wang, Ying Liu, Zhenglong Liu, Jiebin Xie, Zhen Jiang, Gang Tian","doi":"10.1038/s42256-024-00929-0","DOIUrl":"10.1038/s42256-024-00929-0","url":null,"abstract":"","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1411-1412"},"PeriodicalIF":18.8,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0