Pub Date: 2026-02-06 | DOI: 10.1109/LRA.2026.3662585
Yunfan Zhang;Yi Gan
This letter introduces a 10-active-degree-of-freedom (DoF) robotic dexterous hand incorporating 4 modular fingers and 1 opposable thumb. Each finger (except the middle one) has 2 active DoF, implemented through 1 active flexion-extension (F-E) proximal interphalangeal (PIP) joint, 1 passive F-E distal interphalangeal (DIP) joint, and 1 active abduction-adduction (A-A) metacarpophalangeal (MCP) joint. Specifically, the F-E motion of the PIP joint is actuated by a linear motor, and that of the DIP joint is mechanically coupled through a link. In contrast, the MCP joint's A-A motion is pneumatically actuated due to its lower force requirements. A mathematical model that captures the chamber wall interactions in three consecutive stages (i.e., before contact, contact initiation, and during contact) is established to relate the pneumatic actuation to the finger A-A motion. Additionally, the proposed thumb has 3 active DoF driven by 3 separate motors, allowing it to perform opposition movements against the other fingers. In the grasp evaluation, our hand successfully reproduces 7 of the 10 Kapandji test positions and 25 of the 33 grasps defined by the GRASP taxonomy.
Yunfan Zhang and Yi Gan, "Design and Analysis of a Robotic Dexterous Hand: Combining Linkage Driven and Pneumatic Actuation," IEEE Robotics and Automation Letters, vol. 11, no. 3, pp. 3844–3851, 2026, doi: 10.1109/LRA.2026.3662585.
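The three-stage chamber-wall behavior described in the abstract can be illustrated with a toy piecewise map from chamber pressure to A-A angle. This is a minimal sketch, not the authors' model: the thresholds `p_contact` and `p_full`, the gains `k_free` and `k_contact`, and the linear/blended forms are all invented for illustration.

```python
# Toy sketch (NOT the paper's model) of a three-stage pressure-to-angle map:
# stage 1 before wall contact, stage 2 at contact initiation, stage 3 during
# contact. All constants and functional forms are illustrative assumptions.

def aa_angle_deg(pressure_kpa: float,
                 p_contact: float = 20.0,   # hypothetical contact-onset pressure
                 p_full: float = 30.0,      # hypothetical full-contact pressure
                 k_free: float = 0.8,       # deg/kPa before contact
                 k_contact: float = 0.3) -> float:  # deg/kPa in full contact
    """Map chamber pressure (kPa) to MCP abduction-adduction angle (deg)."""
    if pressure_kpa <= p_contact:
        # Stage 1: chamber walls not yet touching -> low stiffness, linear.
        return k_free * pressure_kpa
    elif pressure_kpa <= p_full:
        # Stage 2: contact initiation -> gain blends between the two regimes.
        t = (pressure_kpa - p_contact) / (p_full - p_contact)
        k = (1 - t) * k_free + t * k_contact
        return k_free * p_contact + k * (pressure_kpa - p_contact)
    else:
        # Stage 3: walls in full contact -> stiffer response, reduced gain.
        theta_full = k_free * p_contact + k_contact * (p_full - p_contact)
        return theta_full + k_contact * (pressure_kpa - p_full)
```

The map is continuous across the stage boundaries and monotonically increasing, which is the qualitative behavior one would expect when inflating against a progressively stiffening wall.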
Pub Date: 2026-02-04 | DOI: 10.1109/LRA.2026.3661320
Yujie Jiang;Chao Qin;Chengxi Zhong;Xiang Fu;You-Fu Li;Song Liu
Miniature adhesive patches (MAPs) are widely used in medicine for tissue repair, wound healing, and biosensing applications. Despite considerable advances in medical robotics, the automated in vivo delivery of MAPs remains a formidable challenge due to the intricate nature of biological environments, the delicate mechanical properties of MAPs, and the need for precise positioning on soft, often curved or dynamic tissue surfaces. This work presents a stereo microscope-guided dual-arm nanorobotic system capable of navigating intricate anatomical structures, enabling high-precision MAP delivery, and minimizing the internal stress on soft MAPs. The system employs adaptive bio-surface fitting and MAP delivery trajectory optimization based on the target tissue's topography and MAP mechanical properties, followed by dual-arm execution under stereo microscope visual feedback. By automating MAP delivery onto complex in vivo surfaces, the system prevents internal stress imbalance (which often leads to significant and undesirable scar formation). Experimental validation demonstrated successful in vivo MAP delivery (Young's modulus: 200 to 1000 Pa) onto a mouse nerve at a vastus lateralis wound. The experiments also confirmed the system's precision, dexterity, and uniform stress distribution during the delivery process, underscoring its prospective utility in medical and clinical settings.
Yujie Jiang, Chao Qin, Chengxi Zhong, Xiang Fu, You-Fu Li, and Song Liu, "Automated in Vivo Delivery of Miniature Adhesive Patches Using Dual-Arm Nanorobotic System under Stereo Microscope," IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 4545–4552, 2026, doi: 10.1109/LRA.2026.3661320.
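One ingredient the abstract names, adaptive bio-surface fitting, can be sketched as a least-squares fit of a height field to sampled tissue points. The quadratic basis and the synthetic data below are illustrative assumptions, not the authors' method.

```python
# Toy sketch of surface fitting: fit a quadratic height field z = f(x, y)
# to sampled surface points by least squares. The basis choice and the
# synthetic "tissue" surface are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, size=(300, 2))
x, y = xy[:, 0], xy[:, 1]
z = 0.2 * x**2 - 0.1 * x * y + 0.05 * y + 0.3   # synthetic curved surface

# Quadratic basis: [1, x, y, x^2, xy, y^2].
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

def surface(px: float, py: float) -> float:
    """Evaluate the fitted height field at (px, py)."""
    return coef @ np.array([1.0, px, py, px**2, px * py, py**2])

fit_err = np.max(np.abs(A @ coef - z))
```

A delivery trajectory could then be planned over the fitted field rather than the raw, noisy samples; how the paper combines this with MAP mechanical properties is beyond this sketch.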
Pub Date: 2026-02-01 | Epub Date: 2025-12-18 | DOI: 10.1109/lra.2025.3645700
Viola Del Bono, Emma Capaldi, Anushka Kelshiker, Ayhan Aktas, Hiroyuki Aihara, Sheila Russo
Soft optical sensors hold potential for enhancing minimally invasive procedures like colonoscopy, yet their complex, multi-modal responses pose significant challenges. This work introduces a machine learning (ML) framework for real-time estimation of 3D shape and contact force in a soft robotic sleeve for colonoscopy. To overcome the limitations of manual calibration and collect large datasets for ML, we developed an automated platform for collecting data across a range of orientations, curvatures, and contact forces. A cascaded ML architecture was implemented for sequential estimation of contact force and 3D shape, achieving errors of 4.7% for curvature, 2.37% for orientation, and 5.5% for force tracking. We also explored the potential of ML for contact localization by training a model to estimate contact intensity and location across 16 indenters distributed along the sleeve. The force intensity was estimated with an error ranging from 0.06 N to 0.31 N across the indenters. Despite the proximity of the contact points, the system achieved high localization performance, with 8 indenters reaching over 80% accuracy, demonstrating promising spatial resolution.
Viola Del Bono, Emma Capaldi, Anushka Kelshiker, Ayhan Aktas, Hiroyuki Aihara, and Sheila Russo, "Multi-modal sensing in colonoscopy: a data-driven approach," IEEE Robotics and Automation Letters, vol. 11, no. 2, pp. 2018–2025, 2026, doi: 10.1109/lra.2025.3645700. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12811025/pdf/
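The cascaded architecture described above, where one model's output feeds the next stage's input, can be sketched with two linear least-squares regressors on synthetic data. The feature layout and the linear models are stand-ins for the paper's ML models, not a reproduction of them.

```python
# Minimal sketch of cascaded estimation: stage 1 predicts contact force from
# raw sensor signals; stage 2 predicts a shape parameter from the signals
# augmented with the stage-1 force estimate. Linear least squares and the
# synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # raw optical-sensor readings
force = X @ rng.normal(size=8)           # synthetic ground-truth force
shape = 0.5 * force + X[:, 0]            # shape depends on force + signals

# Stage 1: force regressor on raw signals.
w_force, *_ = np.linalg.lstsq(X, force, rcond=None)
force_hat = X @ w_force

# Stage 2: shape regressor on signals augmented with the force estimate.
X2 = np.column_stack([X, force_hat])
w_shape, *_ = np.linalg.lstsq(X2, shape, rcond=None)
shape_hat = X2 @ w_shape

force_err = np.max(np.abs(force_hat - force))
shape_err = np.max(np.abs(shape_hat - shape))
```

On this noiseless synthetic data both stages recover their targets essentially exactly; with real sensor noise, the benefit of the cascade is that stage 2 conditions on an explicit force estimate rather than learning the coupling implicitly.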
Pub Date: 2026-01-28 | DOI: 10.1109/LRA.2026.3656038
"IEEE Robotics and Automation Society Information," IEEE Robotics and Automation Letters, vol. 11, no. 2, p. C3, 2026, doi: 10.1109/LRA.2026.3656038. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11367139
Pub Date: 2026-01-28 | DOI: 10.1109/LRA.2026.3656040
"IEEE Robotics and Automation Letters Information for Authors," IEEE Robotics and Automation Letters, vol. 11, no. 2, p. C4, 2026, doi: 10.1109/LRA.2026.3656040. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11367250
Pub Date: 2026-01-23 | DOI: 10.1109/LRA.2026.3656773
Luís Marques;Maani Ghaffari;Dmitry Berenson
We propose Conformal Lie-group Action Prediction Sets (CLAPS), a symmetry-aware conformal-prediction algorithm that constructs, for a given action, a set guaranteed to contain the resulting system configuration with a user-defined probability. Our assurance holds under both aleatoric and epistemic uncertainty, holds non-asymptotically, and does not require strong assumptions about the true system dynamics, the uncertainty sources, or the quality of the approximate dynamics model. Typically, uncertainty quantification is tackled by making strong assumptions about the error distribution or magnitude, or by relying on uncalibrated uncertainty estimates (i.e., estimates with no link to frequentist probabilities), which are insufficient for safe control. Recently, conformal prediction has emerged as a statistical framework capable of providing distribution-free probabilistic guarantees on test-time prediction accuracy. While current conformal methods treat robot configurations as Euclidean points, many systems have non-Euclidean configuration spaces; e.g., some mobile robots evolve on $SE(2)$. In this work, we rigorously analyze configuration errors using Lie groups, extending previous Euclidean-space theoretical guarantees to $SE(2)$. Our experiments on a simulated JetBot, and on a real MBot, suggest that by considering the configuration space's structure, our symmetry-informed nonconformity score leads to more volume-efficient prediction regions that represent the underlying uncertainty better than existing approaches.
Luís Marques, Maani Ghaffari, and Dmitry Berenson, "Lies We Can Trust: Quantifying Action Uncertainty With Inaccurate Stochastic Dynamics Through Conformalized Nonholonomic Lie groups," IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 4801–4808, 2026, doi: 10.1109/LRA.2026.3656773.
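The conformal guarantee this abstract builds on can be sketched with scalar split conformal prediction: calibrate a nonconformity score on held-out data, then form a set that contains the true outcome with probability at least 1 - alpha. A Euclidean absolute-error score stands in for the paper's $SE(2)$ Lie-group score; the toy dynamics, noise model, and alpha are assumptions.

```python
# Minimal split-conformal sketch: an (inaccurate) dynamics model plus a
# calibrated error quantile yields prediction sets with finite-sample
# marginal coverage. Scalar states stand in for SE(2) configurations.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1                              # target miscoverage level

def model(x):
    # Approximate dynamics model (here: coincidentally unbiased).
    return 2.0 * x

# Calibration data drawn from the "true" noisy dynamics.
x_cal = rng.uniform(0, 1, size=500)
y_cal = 2.0 * x_cal + rng.normal(0, 0.1, size=500)

# Nonconformity score = prediction error magnitude; conformal quantile.
scores = np.abs(y_cal - model(x_cal))
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set for a new action x is [model(x) - q, model(x) + q].
x_test = rng.uniform(0, 1, size=2000)
y_test = 2.0 * x_test + rng.normal(0, 0.1, size=2000)
coverage = (np.abs(y_test - model(x_test)) <= q).mean()
```

On a Lie group, the analogous score would measure the "error" between predicted and realized configurations via the group operation and logarithm map (e.g., the norm of $\log(\hat{g}^{-1} g)$), which is what makes the resulting regions symmetry-aware.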
Pub Date: 2026-01-23 | DOI: 10.1109/LRA.2026.3656771
Taekbeom Lee;Dabin Kim;Youngseok Jang;H. Jin Kim
We present HERE, an active 3D scene reconstruction framework based on neural radiance fields, enabling high-fidelity implicit mapping. Our approach centers around an active learning strategy for camera trajectory generation, driven by accurate identification of unseen regions, which supports efficient data acquisition and precise scene reconstruction. The key to our approach is epistemic uncertainty quantification based on evidential deep learning, which directly captures data insufficiency and exhibits a strong correlation with reconstruction errors. This allows our framework to more reliably identify unexplored or poorly reconstructed regions compared to existing methods, leading to more informed and targeted exploration. Additionally, we design a hierarchical exploration strategy that leverages learned epistemic uncertainty, where local planning extracts target viewpoints from high-uncertainty voxels based on visibility for trajectory generation, and global planning uses uncertainty to guide large-scale coverage for efficient and comprehensive reconstruction. The effectiveness of the proposed method in active 3D reconstruction is demonstrated by achieving higher reconstruction completeness compared to previous approaches on photorealistic simulated scenes across varying scales, while a hardware demonstration further validates its real-world applicability.
Taekbeom Lee, Dabin Kim, Youngseok Jang, and H. Jin Kim, "HERE: Hierarchical Active Exploration of Radiance Field With Epistemic Uncertainty Minimization," IEEE Robotics and Automation Letters, vol. 11, no. 3, pp. 3788–3795, 2026, doi: 10.1109/LRA.2026.3656771.
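The local-planning step described above, extracting target viewpoints from high-uncertainty voxels subject to visibility, can be sketched as a masked top-k selection. The uncertainty values, the visibility mask, and the top-k choice below are illustrative assumptions, not the paper's planner.

```python
# Toy sketch of uncertainty-guided target selection: among all voxels, pick
# the k visible ones with the highest epistemic uncertainty. Uncertainty
# values and visibility are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
uncertainty = rng.uniform(size=100)       # per-voxel epistemic uncertainty
visible = rng.uniform(size=100) < 0.5     # visibility from reachable poses

# Mask out invisible voxels, then take the 5 most uncertain visible ones.
masked = np.where(visible, uncertainty, -np.inf)
top_k = np.argsort(masked)[-5:][::-1]     # indices, most uncertain first
```

In the full system these selected voxels would seed local viewpoint generation, while a global planner uses the same uncertainty field to schedule large-scale coverage.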
Pub Date: 2026-01-22 | DOI: 10.1109/LRA.2026.3656785
Jensen Gao;Suneel Belkhale;Sudeep Dasari;Ashwin Balakrishna;Dhruv Shah;Dorsa Sadigh
Machine learning for robot manipulation promises to unlock generalization to novel tasks and environments. But how should we measure the progress of these policies toward generalization? Evaluating and quantifying generalization is the Wild West of modern robotics, with each work proposing and measuring different types of generalization in its own, often difficult-to-reproduce settings. In this work, our goal is (1) to outline the forms of generalization we believe are important for robot manipulation in a comprehensive and fine-grained manner, and (2) to provide reproducible guidelines for measuring these notions of generalization. We first propose $\bigstar$-Gen, a taxonomy of generalization for robot manipulation structured around visual, semantic, and behavioral generalization. Next, we instantiate $\bigstar$-Gen with two case studies on real-world benchmarking: one based on open-source models and the Bridge V2 dataset, and another based on the bimanual ALOHA 2 platform that covers more dexterous and longer-horizon tasks. Our case studies reveal many interesting insights: for example, we observe that open-source vision-language-action models often struggle with semantic generalization, despite pre-training on internet-scale language datasets.
Jensen Gao, Suneel Belkhale, Sudeep Dasari, Ashwin Balakrishna, Dhruv Shah, and Dorsa Sadigh, "A Taxonomy for Evaluating Generalist Robot Manipulation Policies," IEEE Robotics and Automation Letters, vol. 11, no. 3, pp. 3182–3189, 2026, doi: 10.1109/LRA.2026.3656785.
Pub Date: 2026-01-22 | DOI: 10.1109/LRA.2026.3656783
Quang Ngoc Pham;Jonas Eschmann;Yang Zhou;Alejandro Ojeda Olarte;Giuseppe Loianno;Van Anh Ho
The increasing use of drones in human-centric applications highlights the need for designs that can survive collisions and recover rapidly, minimizing risks to both humans and the environment. We present HoLoArm, a quadrotor with compliant arms inspired by the nodus structure of dragonfly wings. This design provides natural flexibility and resilience while preserving flight stability, which is further reinforced by the integration of a Reinforcement Learning (RL) control policy that enhances both recovery and hovering performance. Experimental results demonstrate that HoLoArm can passively deform in any direction, including the axial direction, and recover within 0.3–0.6 s depending on the direction and severity of the impact. The drone can survive collisions at speeds up to 7.6 m/s and carry a 540 g payload while maintaining stable flight. This work contributes to the morphological design of soft aerial robots with high agility and reliable safety, enabling operation in cluttered and human-shared environments, and lays the groundwork for future fully soft drones that integrate compliant structures with intelligent control.
Quang Ngoc Pham, Jonas Eschmann, Yang Zhou, Alejandro Ojeda Olarte, Giuseppe Loianno, and Van Anh Ho, "HoLoArm: Deformable Arms for Collision-Tolerant Quadrotor Flight," IEEE Robotics and Automation Letters, vol. 11, no. 3, pp. 3582–3589, 2026, doi: 10.1109/LRA.2026.3656783. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11361075
Pub Date: 2026-01-22 | DOI: 10.1109/LRA.2026.3656791
Yongjian Zhao;Yuyan Qi;Jiaqi Shao;Bin Sun;Min Wang;Songyi Zhong;Yang Yang
High power density and energy efficiency are critical for achieving agile locomotion and sustained operation in miniature flapping-wing robots. Here, a pneumatic linear reciprocating oscillator is developed as an actuation solution. The oscillator leverages the Bernoulli principle to establish a positive feedback mechanism through coordinated interactions among a soft membrane, a piston, and the airflow. Experimental validation demonstrates that the oscillator-based flapping-wing robot can generate a lift of 0.43 N to enable take-off and sustained flight in unstructured environments. The minimal oscillation unit exhibits maximum input and output specific power of 710.5 W/kg and 220.7 W/kg, respectively, with peak energy conversion efficiency reaching 41.9%. This design represents a paradigm shift from conventional electromechanical systems, offering two fundamental advancements: (i) simplified robotic drive architectures through an oscillator-based mechanism, and (ii) a foundation for hybrid energy systems that reduce reliance on electricity.
Yongjian Zhao, Yuyan Qi, Jiaqi Shao, Bin Sun, Min Wang, Songyi Zhong, and Yang Yang, "An Energy-Efficient and Powerful Oscillator for Micro-Air Vehicles With Electronics-Free Flapping," IEEE Robotics and Automation Letters, vol. 11, no. 3, pp. 3174–3181, 2026, doi: 10.1109/LRA.2026.3656791.