
Science Robotics — Latest Publications

Milliwatt ultrasound for navigation in visually degraded environments on palm-sized aerial robots
IF 25 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-03-25 · DOI: 10.1126/scirobotics.adz9609
Manoj Velmurugan, Phillip Brush, Colin Balfour, Richard J. Przybyla, Nitin J. Sanket
Tiny palm-sized aerial robots have exceptional agility and cost-effectiveness in navigating confined and cluttered environments. However, their limited payload capacity directly constrains the sensing suite onboard the robot, thereby limiting critical navigational tasks in Global Positioning System (GPS)–denied wild scenes. Common methods for obstacle avoidance use cameras and light detection and ranging (LIDAR), which become ineffective under visually degraded conditions such as low visibility, dust, fog, or darkness. Other sensors, such as radio detection and ranging (RADAR), have high power consumption, making them unsuitable for tiny aerial robots. Inspired by bats, we propose Saranga, a low-power, ultrasound-based perception stack that localizes obstacles using a dual sonar array. We present two key solutions to combat the low peak signal-to-noise ratio of −4.9 decibels: physical noise reduction and a deep learning–based denoising method. First, we present a practical way to block propeller-induced ultrasound noise from corrupting the weak echoes. Second, we train a neural network to exploit the long horizon of ultrasound echoes to find signal patterns under high amounts of uncorrelated noise, where classical methods were insufficient. We generalized to the real world by using a synthetic data generation pipeline augmented with limited real noise data for training. We enabled a palm-sized aerial robot to navigate under visually degraded conditions of dense fog, darkness, and snow in a cluttered environment with thin and transparent obstacles using only onboard sensing and computation. We provide extensive real-world results to demonstrate the efficacy of our approach.
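For context on how weak a −4.9 dB peak SNR is, the decibel figure can be converted to a linear power ratio. This is a generic dB conversion sketch, not code from the paper:

```python
import math

def db_to_power_ratio(db: float) -> float:
    """Convert a decibel value to a linear signal-to-noise power ratio."""
    return 10 ** (db / 10)

def power_ratio_to_db(ratio: float) -> float:
    """Convert a linear power ratio back to decibels."""
    return 10 * math.log10(ratio)

# A peak SNR of -4.9 dB means the echo's peak power is only about a third
# of the noise power, i.e., the signal is buried below the noise floor.
ratio = db_to_power_ratio(-4.9)
print(f"-4.9 dB -> power ratio {ratio:.3f}")  # ≈ 0.324
```

This is why the authors need both physical noise suppression and learned denoising: at roughly 0.32× noise power, simple thresholding of the raw echo cannot separate signal from noise.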
Citations: 0
Electrofluidic fiber muscles
IF 25 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-03-25 · DOI: 10.1126/scirobotics.ady6438
O. K. Afsar, G. Pupillo, G. Vitucci, W. Babatain, H. Ishii, V. Cacucciolo
Actuators are to robots what muscles are to humans. They enable motion and determine strength and dexterity. The fiber form factor makes skeletal muscles modular, scalable, and densely integrated (50% of human body weight). In contrast, servo motors that drive today’s robots lack the flexibility and modularity of muscle fibers, limiting integration and dexterity. Here, we report electrofluidic fiber muscles, soft artificial muscles for robotic applications with power density comparable to skeletal muscles (50 watts per kilogram), contraction strains of 20%, and response time of 0.3 second. These 2-millimeter-thick muscles comprise antagonistic fluidic actuators driven by electrohydrodynamic fiber pumps in a closed circuit. They require no external liquid reservoir and are electrically driven, untethered, and silent. We demonstrated that performance is increased by pre-pressurizing the muscles at an optimal bias pressure. Applying bias pressure allowed the antagonist actuator to act as a reservoir for the agonist, enabled 200% higher operating voltages by preventing cavitation, and leveraged the nonlinear pressure-stroke response of the actuators, increasing strain threefold at a given pump pressure. We characterized and modeled their dynamics, identifying optimal bias pressures. Electrofluidic muscles scale by simply bundling fibers. By selecting the ratio between pumps and actuators, we programmed their performance for different robotic tasks: a fast lever (180 millimeters per second) that launches objects in <0.3 second; a strong bundle that lifts 4 kilograms (200 times its weight) with a 30-millimeter stroke; a woven muscle that bends a robot arm by 40° and is compliant enough for a human handshake.
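The headline numbers in this abstract can be cross-checked with a back-of-envelope calculation. The sketch below is illustrative only (it assumes the quoted figures apply simultaneously and ignores all losses); none of it comes from the paper's own code:

```python
G = 9.81  # gravitational acceleration, m/s^2

# Figures quoted in the abstract
payload_kg = 4.0               # mass lifted by the strong bundle
stroke_m = 0.030               # 30-millimeter stroke
weight_ratio = 200             # payload is 200x the muscle's own weight
power_density_w_per_kg = 50.0  # comparable to skeletal muscle

muscle_mass_kg = payload_kg / weight_ratio        # implied muscle mass: 20 g
work_j = payload_kg * G * stroke_m                # mechanical work for one lift
avail_power_w = power_density_w_per_kg * muscle_mass_kg
lift_time_s = work_j / avail_power_w              # idealized lift duration

print(f"muscle mass ≈ {muscle_mass_kg * 1000:.0f} g, "
      f"work ≈ {work_j:.2f} J, lift time ≈ {lift_time_s:.1f} s")
```

Under these idealized assumptions, a 20-gram bundle delivering 1 W would complete the 4-kilogram, 30-millimeter lift in roughly a second, which is consistent with the order of magnitude the abstract reports.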
Citations: 0
Neurorobotics may make a smarter, but not happier, robot.
IF 25 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-03-18 · DOI: 10.1126/scirobotics.aeg2324
Robin R Murphy
In Luminous, two generations of a Korean family use neurorobotics to build sentient robot friends.
Citations: 0
Origami-inspired grasper for safe tissue manipulation.
IF 25 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-03-18 · DOI: 10.1126/scirobotics.aeh1283
Melisa Yashinski
The OriGrasp can flatten for storage and deploy as a compliant grasper for firm yet safe handling of bowel tissue.
Citations: 0
Cross-robot behavior adaptation through intention alignment
IF 25 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-03-18 · DOI: 10.1126/scirobotics.adv2250
Xi Chen, Yuan Gao, Hangxin Liu, Fangkai Yang, Ali Ghadirzadeh, Jun Yang, Bin Liang, Chongjie Zhang, Tin Lun Lam, Song-Chun Zhu
Imitation learning (IL) has succeeded in enabling robots to perform new tasks by learning from demonstrations. However, its success is often constrained by the need for direct skill mappings between a learner and a demonstrator under identical conditions, limiting its adaptability to diverse environments and generalization across robots with different physical embodiments. To address these challenges, we introduce the Intention-Aligned Imitation Learning (IAIL) framework, a behavior adaptation approach that extends the conventional scope of IL by enabling robots to reproduce motions demonstrated by heterogeneous peers, even in previously unseen situations. Inspired by human cultural learning, IAIL aligns and adapts robot motions on the basis of high-level intentions annotated in natural language rather than by directly copying motor movements. This alignment is achieved by constructing a shared intention space that connects robot-generated motions with linguistic annotations, enabling inference-time behavior adaptation across diverse embodiments and environmental contexts. The framework further supports scalable task allocation in heterogeneous robot teams by leveraging differences in capabilities and constraints. We validated IAIL through real-world experiments involving seven distinct robots performing multistep collaboration tasks across 30 scenarios. Our results demonstrate that IAIL enables robust intention-aligned behavior adaptation across variations in embodiment, motion modality, and task configuration. These capabilities enable flexible behavior transfer across heterogeneous robots and support resilient, autonomous multirobot systems for reliable real-world collaboration.
Citations: 0
Fly motion vision maximizes signal energy transfer between mechanical input and sensor output.
IF 25 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-03-11 · DOI: 10.1126/scirobotics.adx7524
J Sean Humbert,Holger G Krapp,James D Baeder,Camli Badrya,Inés L Dawson,Jiaqi V Huang,Andrew Hyslop,Yong Su Jung,Alix Leroy,Cosima Lutkus,Beth Mortimer,Indira Nagesh,Clément Ruah,Simon M Walker,Yingjie Yang,Rafal W Żbikowski,Graham K Taylor
Insects achieve agile flight using a sensor-rich control architecture whose embodiment eliminates the need for complex computation. For example, their visual systems are tuned to detect the optic flow associated with specific self-motions, but what functional principle does this tuning embed, and how does it facilitate motor control? Here, we tested the hypothesis that evolution cotunes physics and physiology by aligning an insect's sensors to its dynamically important modes of self-motion. Specifically, we show that the spatial tuning of the blowfly motion vision system maximizes the open-loop Hankel singular values, which quantify the flow of signal energy from gust disturbances and control inputs to sensor outputs, jointly optimizing observability and controllability. This evolutionary principle differs from the conventional engineering-design paradigm of optimizing state estimation, with implications for robotic systems combining high performance with minimal actuator usage.
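The open-loop Hankel singular values mentioned here are a standard control-theoretic quantity: for a stable linear system, they are the square roots of the eigenvalues of the product of the controllability and observability Gramians, jointly measuring how much input energy each state direction transmits to the output. A minimal generic sketch (standard textbook computation, not the paper's code), assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvals

def hankel_singular_values(A, B, C):
    """Hankel singular values of a stable LTI system x' = Ax + Bu, y = Cx.

    Computed as sqrt(eig(Wc @ Wo)), where Wc and Wo are the controllability
    and observability Gramians obtained from continuous Lyapunov equations.
    """
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # A Wc + Wc A' = -B B'
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A' Wo + Wo A = -C' C
    sv = np.sqrt(np.real(eigvals(Wc @ Wo)))
    return np.sort(sv)[::-1]

# Toy first-order system x' = -x + u, y = x:  Wc = Wo = 1/2, so HSV = 1/2.
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
print(hankel_singular_values(A, B, C))  # → [0.5]
```

In the paper's framing, maximizing these values over the sensor tuning corresponds to aligning the visual system with the self-motion modes that carry the most disturbance-to-sensor signal energy.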
Citations: 0
Robot-mediated haptic feedback outperforms vision in violin duo coordination.
IF 25 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-03-11 · DOI: 10.1126/scirobotics.aeb1901
Aleksandra Michałko,Francesco Di Tommaso,Emanuele Peperoni,Stefano L Capitani,Alessia Noccaro,Andrea Parri,Canan Gener,Roberto Conti,Nicola Di Stefano,Nevio Luigi Tagliamonte,Lorenzo Grazi,Francesco Giovacchini,Simona Crea,Emilio Trigili,Nicola Vitiello,Marc Leman,Domenico Formica
Joint actions among humans rely on the integration of multiple sensory modalities, most notably auditory and visual cues, which support explicit communication between partners. However, haptic feedback provides a direct, implicit channel for sensorimotor communication, and its contribution to fine motor coordination in joint actions remains largely unexplored. Here, we demonstrate that haptic communication, rendered through bidirectionally coupled wearable robots, outperforms traditional auditory-visual feedback in a complex and challenging real-life joint action: ensemble violin performance. First, we developed a pair of two-degree-of-freedom upper-limb exoskeletons capable of transparently following violinists' natural movements and rendering viscoelastic torques proportional to the joint angular deviation between the partners. Then, we designed a within-subject experiment with 20 violin duos performing a musical piece under four sensory feedback conditions: auditory (A), auditory-visual (AV), auditory-haptic (AH), and auditory-visual-haptic (AVH), across two tempi (72 and 100 beats per minute). Despite the musicians being unfamiliar with the robot-mediated haptic feedback and unaware of the bidirectional connection between them, haptic feedback (AH and AVH) substantially enhanced spatiotemporal coordination and dynamic musical alignment compared with the extensively trained auditory-visual feedback (A and AV). The multisensory feedback condition AVH yielded the highest scores across all measures. Our findings demonstrate that haptic feedback can support fine motor coordination in violin duo performance more effectively than visual cues, particularly for professional musicians, because of its implicit and embodied nature, and that it can be effectively delivered via wearable robots, expanding the paradigms of human-human sensorimotor interactions.
Citations: 0
Vibrotactile feedback aids prosthesis usability.
IF 27.5 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-03-11 · DOI: 10.1126/scirobotics.aeg9510
Amos Matsiko

A noninvasive vibrotactile feedback system integrated with a knee prosthesis can improve perception, user experience, and gait.

Citations: 0
Would you give four stars to a restaurant entirely staffed by robots?
IF 27.5 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-02-25
Robin Murphy
Annalee Newitz’s Automatic Noodle illustrates the challenges of robots operating a ghost kitchen.
Citations: 0
Collision-tolerant deformable quadrotor arms
IF 27.5 · CAS Tier 1 (Computer Science) · Q1 ROBOTICS · Pub Date: 2026-02-25
Amos Matsiko
The HoLoArm drone has flexible arms for impact resistance and can recover from collisions and maintain flight stability.
Citations: 0