Milliwatt ultrasound for navigation in visually degraded environments on palm-sized aerial robots
Pub Date: 2026-03-25 | DOI: 10.1126/scirobotics.adz9609
Manoj Velmurugan, Phillip Brush, Colin Balfour, Richard J. Przybyla, Nitin J. Sanket
Tiny palm-sized aerial robots have exceptional agility and cost-effectiveness in navigating confined and cluttered environments. However, their limited payload capacity directly constrains the onboard sensing suite, thereby limiting critical navigational tasks in Global Positioning System (GPS)–denied scenes in the wild. Common methods for obstacle avoidance use cameras and light detection and ranging (LIDAR), which become ineffective under visually degraded conditions such as low visibility, dust, fog, or darkness. Other sensors, such as radio detection and ranging (RADAR), have high power consumption, making them unsuitable for tiny aerial robots. Inspired by bats, we propose Saranga, a low-power, ultrasound-based perception stack that localizes obstacles using a dual sonar array. We present two key solutions to combat the low peak signal-to-noise ratio of −4.9 decibels: physical noise reduction and a deep learning–based denoising method. First, we present a practical way to block propeller-induced ultrasound noise from corrupting the weak echoes. Second, we train a neural network to exploit the long time horizon of ultrasound echoes, finding signal patterns under strong uncorrelated noise where classical methods are insufficient. We generalized to the real world by using a synthetic data generation pipeline augmented with limited real noise data for training. We enabled a palm-sized aerial robot to navigate under visually degraded conditions of dense fog, darkness, and snow in a cluttered environment with thin and transparent obstacles, using only onboard sensing and computation. We provide extensive real-world results to demonstrate the efficacy of our approach.
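The −4.9 decibel figure is a peak signal-to-noise ratio: the echo's peak amplitude lies well below the RMS noise level. The sketch below (ours, not the paper's code; every waveform parameter is invented) builds a chirp echo at that SNR and runs a matched filter, the classical detection baseline:

```python
# Illustrative sketch, not the paper's pipeline: a chirp echo buried in
# white noise at roughly -4.9 dB peak SNR, detected with a matched filter.
import numpy as np
from scipy.signal import chirp, correlate

fs = 200_000                        # sample rate in Hz (assumed)
t = np.arange(0, 2e-3, 1 / fs)      # 2 ms transmit pulse (assumed)
pulse = chirp(t, f0=40e3, t1=t[-1], f1=60e3)   # ultrasonic up-chirp

rng = np.random.default_rng(0)
echo_delay = 3000                   # samples until the echo returns (hypothetical)
rx = np.zeros(10_000)
rx[echo_delay:echo_delay + pulse.size] += pulse

# Scale noise for a -4.9 dB peak SNR: 20*log10(peak / sigma) = -4.9.
sigma = np.abs(rx).max() * 10 ** (4.9 / 20)
rx += rng.normal(0.0, sigma, rx.size)

# Matched filter: correlate the received signal with the known pulse.
mf = correlate(rx, pulse, mode="valid")
print("true delay:", echo_delay, "estimate:", int(np.argmax(np.abs(mf))))
```

Whether such a matched filter suffices depends on the pulse's time-bandwidth product and on how structured the noise is; under the echo conditions reported here, the authors found classical methods insufficient, which motivates the learned denoiser.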
{"title":"Milliwatt ultrasound for navigation in visually degraded environments on palm-sized aerial robots","authors":"Manoj Velmurugan, Phillip Brush, Colin Balfour, Richard J. Przybyla, Nitin J. Sanket","doi":"10.1126/scirobotics.adz9609","DOIUrl":"https://doi.org/10.1126/scirobotics.adz9609","url":null,"abstract":"Tiny palm-sized aerial robots have exceptional agility and cost-effectiveness in navigating confined and cluttered environments. However, their limited payload capacity directly constrains the sensing suite onboard the robot, thereby limiting critical navigational tasks in Global Positioning System (GPS)–denied wild scenes. Common methods for obstacle avoidance use cameras and light detection and ranging (LIDAR), which become ineffective under visually degraded conditions such as low visibility, dust, fog, or darkness. Other sensors, such as radio detection and ranging (RADAR), have high power consumption, making them unsuitable for tiny aerial robots. Inspired by bats, we propose Saranga, a low-power, ultrasound-based perception stack that localizes obstacles using a dual sonar array. We present two key solutions to combat the low peak signal-to-noise ratio of −4.9 decibels: physical noise reduction and a deep learning–based denoising method. First, we present a practical way to block propeller-induced ultrasound noise on the weak echoes. The second solution is to train a neural network to use the long horizon of ultrasound echoes for finding signal patterns under high amounts of uncorrelated noise where classical methods were insufficient. We generalized to the real world by using a synthetic data generation pipeline augmented with limited real noise data for training. We enabled a palm-sized aerial robot to navigate under visually degraded conditions of dense fog, darkness, and snow in a cluttered environment with thin and transparent obstacles using only onboard sensing and computation. We provide extensive real-world results to demonstrate the efficacy of our approach.","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"15 1","pages":""},"PeriodicalIF":25.0,"publicationDate":"2026-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147506912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electrofluidic fiber muscles
Pub Date: 2026-03-25 | DOI: 10.1126/scirobotics.ady6438
O. K. Afsar, G. Pupillo, G. Vitucci, W. Babatain, H. Ishii, V. Cacucciolo
Actuators are to robots what muscles are to humans. They enable motion and determine strength and dexterity. The fiber form factor makes skeletal muscles modular, scalable, and densely integrated (50% of human body weight). In contrast, servo motors that drive today’s robots lack the flexibility and modularity of muscle fibers, limiting integration and dexterity. Here, we report electrofluidic fiber muscles, soft artificial muscles for robotic applications with power density comparable to skeletal muscles (50 watts per kilogram), contraction strains of 20%, and response time of 0.3 second. These 2-millimeter-thick muscles comprise antagonistic fluidic actuators driven by electrohydrodynamic fiber pumps in a closed circuit. They require no external liquid reservoir and are electrically driven, untethered, and silent. We demonstrated that performance is increased by pre-pressurizing the muscles at an optimal bias pressure. Applying bias pressure allowed the antagonist actuator to act as a reservoir for the agonist, enabled 200% higher operating voltages by preventing cavitation, and leveraged the nonlinear pressure-stroke response of the actuators, increasing strain threefold at a given pump pressure. We characterized and modeled their dynamics, identifying optimal bias pressures. Electrofluidic muscles scale by simply bundling fibers. By selecting the ratio between pumps and actuators, we programmed their performance for different robotic tasks: a fast lever (180 millimeters per second) that launches objects in <0.3 second; a strong bundle that lifts 4 kilograms (200 times its weight) with a 30-millimeter stroke; a woven muscle that bends a robot arm by 40° and is compliant enough for a human handshake.
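The threefold strain gain from bias pressure follows from the nonlinearity of the pressure-stroke curve: if the curve is flat near zero pressure, biasing both actuators into a steeper operating region lets the same pump pressure differential produce more net stroke. A toy model (ours; the tanh-shaped curve is an invented stand-in for the measured response) makes the effect concrete:

```python
# Toy model, not the authors' model: net stroke of an antagonistic pair
# as a function of bias pressure, with an invented nonlinear stroke curve.
import numpy as np

def stroke(p, p_knee=20.0):
    """Hypothetical normalized stroke vs. pressure (kPa): flat near zero."""
    p = np.maximum(p, 0.0)          # the actuator cannot go below ambient pressure
    return np.tanh(p / p_knee) ** 2

def net_stroke(p_bias, dp):
    """Antagonistic pair: the pump shifts a differential dp toward the agonist."""
    return stroke(p_bias + dp) - stroke(p_bias - dp)

for p_bias in (0.0, 10.0, 20.0):
    print(f"bias {p_bias:5.1f} kPa -> net stroke {net_stroke(p_bias, 10.0):.2f}")
```

With this stand-in curve, raising the bias from 0 to 10 kilopascals roughly triples the net stroke for the same 10-kilopascal pump differential, qualitatively matching the reported effect.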
{"title":"Electrofluidic fiber muscles","authors":"O. K. Afsar, G. Pupillo, G. Vitucci, W. Babatain, H. Ishii, V. Cacucciolo","doi":"10.1126/scirobotics.ady6438","DOIUrl":"https://doi.org/10.1126/scirobotics.ady6438","url":null,"abstract":"Actuators are to robots what muscles are to humans. They enable motion and determine strength and dexterity. The fiber form factor makes skeletal muscles modular, scalable, and densely integrated (50% of human body weight). In contrast, servo motors that drive today’s robots lack the flexibility and modularity of muscle fibers, limiting integration and dexterity. Here, we report electrofluidic fiber muscles, soft artificial muscles for robotic applications with power density comparable to skeletal muscles (50 watts per kilogram), contraction strains of 20%, and response time of 0.3 second. These 2-millimeter-thick muscles comprise antagonistic fluidic actuators driven by electrohydrodynamic fiber pumps in a closed circuit. They require no external liquid reservoir and are electrically driven, untethered, and silent. We demonstrated that performance is increased by pre-pressurizing the muscles at an optimal bias pressure. Applying bias pressure allowed the antagonist actuator to act as a reservoir for the agonist, enabled 200% higher operating voltages by preventing cavitation, and leveraged the nonlinear pressure-stroke response of the actuators, increasing strain threefold at a given pump pressure. We characterized and modeled their dynamics, identifying optimal bias pressures. Electrofluidic muscles scale by simply bundling fibers. By selecting the ratio between pumps and actuators, we programmed their performance for different robotic tasks: a fast lever (180 millimeters per second) that launches objects in <0.3 second; a strong bundle that lifts 4 kilograms (200 times its weight) with a 30-millimeter stroke; a woven muscle that bends a robot arm by 40° and is compliant enough for a human handshake.","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"60 1","pages":""},"PeriodicalIF":25.0,"publicationDate":"2026-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147506913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurorobotics may make a smarter, but not happier, robot.
Pub Date: 2026-03-18 | DOI: 10.1126/scirobotics.aeg2324
Robin R. Murphy
In Luminous, two generations of a Korean family use neurorobotics to build sentient robot friends.
{"title":"Neurorobotics may make a smarter, but not happier, robot.","authors":"Robin R Murphy","doi":"10.1126/scirobotics.aeg2324","DOIUrl":"https://doi.org/10.1126/scirobotics.aeg2324","url":null,"abstract":"In Luminous, two generations of a Korean family use neurorobotics to build sentient robot friends.","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"8 1","pages":"eaeg2324"},"PeriodicalIF":25.0,"publicationDate":"2026-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147478604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Origami-inspired grasper for safe tissue manipulation.
Pub Date: 2026-03-18 | DOI: 10.1126/scirobotics.aeh1283
Melisa Yashinski
The OriGrasp can flatten for storage and deploy as a compliant grasper for firm yet safe handling of bowel tissue.
{"title":"Origami-inspired grasper for safe tissue manipulation.","authors":"Melisa Yashinski","doi":"10.1126/scirobotics.aeh1283","DOIUrl":"https://doi.org/10.1126/scirobotics.aeh1283","url":null,"abstract":"The OriGrasp can flatten for storage and deploy as a compliant grasper for firm yet safe handling of bowel tissue.","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"111 1","pages":"eaeh1283"},"PeriodicalIF":25.0,"publicationDate":"2026-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147478972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-robot behavior adaptation through intention alignment
Pub Date: 2026-03-18 | DOI: 10.1126/scirobotics.adv2250
Xi Chen, Yuan Gao, Hangxin Liu, Fangkai Yang, Ali Ghadirzadeh, Jun Yang, Bin Liang, Chongjie Zhang, Tin Lun Lam, Song-Chun Zhu
Imitation learning (IL) has succeeded in enabling robots to perform new tasks by learning from demonstrations. However, its success is often constrained by the need for direct skill mappings between a learner and a demonstrator under identical conditions, limiting its adaptability to diverse environments and generalization across robots with different physical embodiments. To address these challenges, we introduce the Intention-Aligned Imitation Learning (IAIL) framework, a behavior adaptation approach that extends the conventional scope of IL by enabling robots to reproduce motions demonstrated by heterogeneous peers, even in previously unseen situations. Inspired by human cultural learning, IAIL aligns and adapts robot motions on the basis of high-level intentions annotated in natural language rather than by directly copying motor movements. This alignment is achieved by constructing a shared intention space that connects robot-generated motions with linguistic annotations, enabling inference-time behavior adaptation across diverse embodiments and environmental contexts. The framework further supports scalable task allocation in heterogeneous robot teams by leveraging differences in capabilities and constraints. We validated IAIL through real-world experiments involving seven distinct robots performing multistep collaboration tasks across 30 scenarios. Our results demonstrate that IAIL enables robust intention-aligned behavior adaptation across variations in embodiment, motion modality, and task configuration. These capabilities enable flexible behavior transfer across heterogeneous robots and support resilient, autonomous multirobot systems for reliable real-world collaboration.
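The shared intention space can be pictured as a joint embedding in which a motion and its language annotation land near each other, so a robot retrieves and adapts a peer's motion by matching intentions rather than copying joint trajectories. The sketch below is our schematic reading of that idea, not the authors' code: the language encoder is an untrained stand-in, and alignment is simulated by placing each motion embedding near its annotation's embedding.

```python
# Schematic sketch of intention-space retrieval; encoders are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
D = 64                                   # shared intention-space dim (assumed)
vocab = ["open the drawer", "hand over the cup", "press the button"]

def normalize(z):
    return z / np.linalg.norm(z)

# Stand-in language encoder: a fixed random embedding per annotation.
text_emb = {s: normalize(rng.standard_normal(D)) for s in vocab}

# Simulated motion library: each demonstrated motion embeds near the
# embedding of its intention annotation, as a trained encoder would ensure.
library = [(normalize(text_emb[s] + 0.05 * rng.standard_normal(D)), s)
           for s in vocab]

def retrieve(intention: str) -> str:
    """Return the annotation of the library motion closest to the intention."""
    q = text_emb[intention]
    return max(library, key=lambda item: float(q @ item[0]))[1]

print(retrieve("hand over the cup"))     # -> "hand over the cup"
```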
{"title":"Cross-robot behavior adaptation through intention alignment","authors":"Xi Chen, Yuan Gao, Hangxin Liu, Fangkai Yang, Ali Ghadirzadeh, Jun Yang, Bin Liang, Chongjie Zhang, Tin Lun Lam, Song-Chun Zhu","doi":"10.1126/scirobotics.adv2250","DOIUrl":"https://doi.org/10.1126/scirobotics.adv2250","url":null,"abstract":"Imitation learning (IL) has succeeded in enabling robots to perform new tasks by learning from demonstrations. However, its success is often constrained by the need for direct skill mappings between a learner and a demonstrator under identical conditions, limiting its adaptability to diverse environments and generalization across robots with different physical embodiments. To address these challenges, we introduce the Intention-Aligned Imitation Learning (IAIL) framework, a behavior adaptation approach that extends the conventional scope of IL by enabling robots to reproduce motions demonstrated by heterogeneous peers, even in previously unseen situations. Inspired by human cultural learning, IAIL aligns and adapts robot motions on the basis of high-level intentions annotated in natural language rather than by directly copying motor movements. This alignment is achieved by constructing a shared intention space that connects robot-generated motions with linguistic annotations, enabling inference-time behavior adaptation across diverse embodiments and environmental contexts. The framework further supports scalable task allocation in heterogeneous robot teams by leveraging differences in capabilities and constraints. We validated IAIL through real-world experiments involving seven distinct robots performing multistep collaboration tasks across 30 scenarios. Our results demonstrate that IAIL enables robust intention-aligned behavior adaptation across variations in embodiment, motion modality, and task configuration. These capabilities enable flexible behavior transfer across heterogeneous robots and support resilient, autonomous multirobot systems for reliable real-world collaboration.","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"58 1","pages":""},"PeriodicalIF":25.0,"publicationDate":"2026-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147478136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fly motion vision maximizes signal energy transfer between mechanical input and sensor output.
Pub Date: 2026-03-11 | DOI: 10.1126/scirobotics.adx7524
J. Sean Humbert, Holger G. Krapp, James D. Baeder, Camli Badrya, Inés L. Dawson, Jiaqi V. Huang, Andrew Hyslop, Yong Su Jung, Alix Leroy, Cosima Lutkus, Beth Mortimer, Indira Nagesh, Clément Ruah, Simon M. Walker, Yingjie Yang, Rafal W. Żbikowski, Graham K. Taylor
Insects achieve agile flight using a sensor-rich control architecture whose embodiment eliminates the need for complex computation. For example, their visual systems are tuned to detect the optic flow associated with specific self-motions, but what functional principle does this tuning embed, and how does it facilitate motor control? Here, we tested the hypothesis that evolution cotunes physics and physiology by aligning an insect's sensors to its dynamically important modes of self-motion. Specifically, we show that the spatial tuning of the blowfly motion vision system maximizes the open-loop Hankel singular values, which quantify the flow of signal energy from gust disturbances and control inputs to sensor outputs, jointly optimizing observability and controllability. This evolutionary principle differs from the conventional engineering-design paradigm of optimizing state estimation, with implications for robotic systems combining high performance with minimal actuator usage.
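To connect the terminology to computation: for a stable linear system, the Hankel singular values are the square roots of the eigenvalues of the product of the controllability and observability Gramians, so maximizing them favors modes that disturbances and control inputs excite strongly and that sensors read out strongly. A minimal worked example on an invented two-state system:

```python
# Hankel singular values of a toy stable system x' = Ax + Bu, y = Cx.
# The matrices are invented for illustration; only the recipe is standard.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvals

A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability and observability Gramians:
#   A Wc + Wc A^T + B B^T = 0,   A^T Wo + Wo A + C^T C = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of Wc @ Wo.
hsv = np.sort(np.sqrt(np.real(eigvals(Wc @ Wo))))[::-1]
print("Hankel singular values:", hsv)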
{"title":"Fly motion vision maximizes signal energy transfer between mechanical input and sensor output.","authors":"J Sean Humbert,Holger G Krapp,James D Baeder,Camli Badrya,Inés L Dawson,Jiaqi V Huang,Andrew Hyslop,Yong Su Jung,Alix Leroy,Cosima Lutkus,Beth Mortimer,Indira Nagesh,Clément Ruah,Simon M Walker,Yingjie Yang,Rafal W Żbikowski,Graham K Taylor","doi":"10.1126/scirobotics.adx7524","DOIUrl":"https://doi.org/10.1126/scirobotics.adx7524","url":null,"abstract":"Insects achieve agile flight using a sensor-rich control architecture whose embodiment eliminates the need for complex computation. For example, their visual systems are tuned to detect the optic flow associated with specific self-motions, but what functional principle does this tuning embed, and how does it facilitate motor control? Here, we tested the hypothesis that evolution cotunes physics and physiology by aligning an insect's sensors to its dynamically important modes of self-motion. Specifically, we show that the spatial tuning of the blowfly motion vision system maximizes the open-loop Hankel singular values, which quantify the flow of signal energy from gust disturbances and control inputs to sensor outputs, jointly optimizing observability and controllability. This evolutionary principle differs from the conventional engineering-design paradigm of optimizing state estimation, with implications for robotic systems combining high performance with minimal actuator usage.","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"31 1","pages":"eadx7524"},"PeriodicalIF":25.0,"publicationDate":"2026-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147393756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot-mediated haptic feedback outperforms vision in violin duo coordination.
Pub Date: 2026-03-11 | DOI: 10.1126/scirobotics.aeb1901
Aleksandra Michałko, Francesco Di Tommaso, Emanuele Peperoni, Stefano L. Capitani, Alessia Noccaro, Andrea Parri, Canan Gener, Roberto Conti, Nicola Di Stefano, Nevio Luigi Tagliamonte, Lorenzo Grazi, Francesco Giovacchini, Simona Crea, Emilio Trigili, Nicola Vitiello, Marc Leman, Domenico Formica
Joint actions among humans rely on the integration of multiple sensory modalities, most notably auditory and visual cues, which support explicit communication between partners. However, haptic feedback provides a direct, implicit channel for sensorimotor communication, and its contribution to fine motor coordination in joint actions remains largely unexplored. Here, we demonstrate that haptic communication, rendered through bidirectionally coupled wearable robots, outperforms traditional auditory-visual feedback in a complex and challenging real-life joint action: ensemble violin performance. First, we developed a pair of two-degree-of-freedom upper-limb exoskeletons capable of transparently following violinists' natural movements and rendering viscoelastic torques proportional to the joint angular deviation between the partners. Then, we designed a within-subject experiment with 20 violin duos performing a musical piece under four sensory feedback conditions: auditory (A), auditory-visual (AV), auditory-haptic (AH), and auditory-visual-haptic (AVH), across two tempi (72 and 100 beats per minute). Despite the musicians being unfamiliar with the robot-mediated haptic feedback and unaware of the bidirectional connection between them, haptic feedback (AH and AVH) substantially enhanced spatiotemporal coordination and dynamic musical alignment compared with the extensively trained auditory-visual feedback (A and AV). The multisensory feedback condition AVH yielded the highest scores across all measures. Our findings demonstrate that haptic feedback can support fine motor coordination in violin duo performance more effectively than visual cues, particularly for professional musicians, because of its implicit and embodied nature, and that it can be effectively delivered via wearable robots, expanding the paradigms of human-human sensorimotor interactions.
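The rendered coupling is, in essence, a virtual spring-damper between matched joints of the two exoskeletons. A minimal sketch of such a law (ours, with placeholder gains; the paper's controller details may differ):

```python
# Sketch of a viscoelastic coupling law; gains are placeholders, not the paper's.
def coupling_torque(theta_self, theta_partner, dtheta_self, dtheta_partner,
                    k=2.0, c=0.1):
    """Viscoelastic torque (N*m) on the local joint; in a bidirectional
    coupling, the partner's exoskeleton renders the mirror-image torque.
    k (N*m/rad) is the virtual stiffness, c (N*m*s/rad) the virtual damping."""
    return -k * (theta_self - theta_partner) - c * (dtheta_self - dtheta_partner)

# Example: the local joint leads the partner by 0.1 rad, moving 0.5 rad/s faster.
print(coupling_torque(0.6, 0.5, 1.0, 0.5))   # -> -0.25, pulling toward the partner
```

Because each wearer receives the opposing torque, the pair is drawn toward a common trajectory without either side being designated the leader.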
{"title":"Robot-mediated haptic feedback outperforms vision in violin duo coordination.","authors":"Aleksandra Michałko,Francesco Di Tommaso,Emanuele Peperoni,Stefano L Capitani,Alessia Noccaro,Andrea Parri,Canan Gener,Roberto Conti,Nicola Di Stefano,Nevio Luigi Tagliamonte,Lorenzo Grazi,Francesco Giovacchini,Simona Crea,Emilio Trigili,Nicola Vitiello,Marc Leman,Domenico Formica","doi":"10.1126/scirobotics.aeb1901","DOIUrl":"https://doi.org/10.1126/scirobotics.aeb1901","url":null,"abstract":"Joint actions among humans rely on the integration of multiple sensory modalities, most notably auditory and visual cues, which support explicit communication between partners. However, haptic feedback provides a direct, implicit channel for sensorimotor communication, and its contribution to fine motor coordination in joint actions remains largely unexplored. Here, we demonstrate that haptic communication, rendered through bidirectionally coupled wearable robots, outperforms traditional auditory-visual feedback in a complex and challenging real-life joint action: ensemble violin performance. First, we developed a pair of two-degree-of-freedom upper-limb exoskeletons capable of transparently following violinists' natural movements and rendering viscoelastic torques proportional to the joint angular deviation between the partners. Then, we designed a within-subject experiment with 20 violin duos performing a musical piece under four sensory feedback conditions: auditory (A), auditory-visual (AV), auditory-haptic (AH), and auditory-visual-haptic (AVH), across two tempi (72 and 100 beats per minute). Despite the musicians being unfamiliar with the robot-mediated haptic feedback and unaware of the bidirectional connection between them, haptic feedback (AH and AVH) substantially enhanced spatiotemporal coordination and dynamic musical alignment compared with the extensively trained auditory-visual feedback (A and AV). The multisensory feedback condition AVH yielded the highest scores across all measures. Our findings demonstrate that haptic feedback can support fine motor coordination in violin duo performance more effectively than visual cues, particularly for professional musicians, because of its implicit and embodied nature, and that it can be effectively delivered via wearable robots, expanding the paradigms of human-human sensorimotor interactions.","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"33 1","pages":"eaeb1901"},"PeriodicalIF":25.0,"publicationDate":"2026-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147393757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vibrotactile feedback aids prosthesis usability.
Pub Date: 2026-03-11 | DOI: 10.1126/scirobotics.aeg9510
Amos Matsiko
A noninvasive vibrotactile feedback system integrated with a knee prosthesis can improve perception, user experience, and gait.
{"title":"Vibrotactile feedback aids prosthesis usability.","authors":"Amos Matsiko","doi":"10.1126/scirobotics.aeg9510","DOIUrl":"https://doi.org/10.1126/scirobotics.aeg9510","url":null,"abstract":"<p><p>A noninvasive vibrotactile feedback system integrated with a knee prosthesis can improve perception, user experience, and gait.</p>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 112","pages":"eaeg9510"},"PeriodicalIF":27.5,"publicationDate":"2026-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147438017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Would you give four stars to a restaurant entirely staffed by robots?
Pub Date: 2026-02-25
Robin Murphy
Annalee Newitz’s Automatic Noodle illustrates the challenges of robots operating a ghost kitchen.
Collision-tolerant deformable quadrotor arms
Pub Date: 2026-02-25
Amos Matsiko
The HoLoArm drone has flexible arms for impact resistance and can recover from collisions and maintain flight stability.