Pub Date : 2026-02-06 | DOI: 10.1109/LRA.2026.3662648
Daesung Park;KwangEun Ko;Dongbum Pyo;Jaehyeon Kang
Accurate real-time crop counting is essential for autonomous agricultural systems. However, existing methods often fail in dense plantings due to heavy foliage, irregular planting patterns, and frequent occlusions. 2D tracking suffers from double-counting and 3D reconstruction requires offline processing; we therefore propose a real-time crop counting framework that incrementally constructs global 3D crop instances during data collection. Each crop is modeled as a 3D oriented bounding box, initialized upon detection and updated with subsequent observations. To ensure robust association across frames, we employ 3D Generalized Intersection over Union (GIoU) for spatial matching and confidence-based filtering for validation, effectively reducing double-counting in dense orchards. Unlike prior methods, our approach supports on-the-fly counting without post-hoc reconstruction and performs reliably in unstructured field conditions. Experimental results demonstrate the accuracy and real-time capability of the proposed system in dense agricultural settings.
{"title":"Incremental 3D Crop Model Association for Real-Time Counting in Dense Orchards","authors":"Daesung Park;KwangEun Ko;Dongbum Pyo;Jaehyeon Kang","doi":"10.1109/LRA.2026.3662648","DOIUrl":"https://doi.org/10.1109/LRA.2026.3662648","url":null,"abstract":"Accurate real-time crop counting is essential for autonomous agricultural systems. However, existing methods often fail in dense plantings due to heavy foliage, irregular planting patterns, and frequent occlusions. While 2D tracking suffers from double-counting and 3D reconstruction requires offline processing, we propose a real-time crop counting framework that incrementally constructs global 3D crop instances during data collection. Each crop is modeled as a 3D oriented bounding box, initialized upon detection and updated with subsequent observations. To ensure robust association across frames, we employ 3D Generalized Intersection over Union (GIoU) for spatial matching and confidence-based filtering for validation, effectively reducing double-counting in dense orchards. Unlike prior methods, our approach supports on-the-fly counting without post-hoc reconstruction and performs reliably in unstructured field conditions. Experimental results demonstrate the accuracy and real-time capability of the proposed system in dense agricultural settings.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 3","pages":"3860-3866"},"PeriodicalIF":5.3,"publicationDate":"2026-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-02-06 | DOI: 10.1109/LRA.2026.3662633
Aabha Tamhankar;Ron Alterovitz;Ajit S. Puri;Giovanni Pittiglio
We propose a deterministic and time-efficient contact-aware path planner for neurovascular navigation. The algorithm leverages information from pre- and intra-operative images of the vessels to navigate pre-bent passive tools by intelligently predicting and exploiting interactions with the anatomy. A kinematic model is derived and employed by the sampling-based planner for tree expansion that utilizes simplified motion primitives. This approach enables fast computation of the feasible path, with negligible loss in accuracy, as demonstrated in diverse and representative anatomies of the vessels. In these anatomical demonstrators, the algorithm shows a 100% convergence rate within 22.8 s in the worst case, with sub-millimeter tracking errors ($<0.64\,\mathrm{mm}$), and is found effective on anatomical phantoms representative of $\sim$94% of patients.
{"title":"Contact-Aware Path Planning for Autonomous Neuroendovascular Navigation","authors":"Aabha Tamhankar;Ron Alterovitz;Ajit S. Puri;Giovanni Pittiglio","doi":"10.1109/LRA.2026.3662633","DOIUrl":"https://doi.org/10.1109/LRA.2026.3662633","url":null,"abstract":"We propose a deterministic and time-efficient contact-aware path planner for neurovascular navigation. The algorithm leverages information from pre- and intra-operative images of the vessels to navigate pre-bent passive tools, by intelligently predicting and exploiting interactions with the anatomy. A kinematic model is derived and employed by the sampling-based planner for tree expansion that utilizes simplified motion primitives. This approach enables fast computation of the feasible path, with negligible loss in accuracy, as demonstrated in diverse and representative anatomies of the vessels. In these anatomical demonstrators, the algorithm shows a 100% convergence rate within 22.8 s in the worst case, with sub-millimeter tracking errors (<inline-formula><tex-math>$< {0.64},{mathrm{mm}}$</tex-math></inline-formula>), and is found effective on anatomical phantoms representative of <inline-formula><tex-math>$sim$</tex-math></inline-formula>94% of patients.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 4","pages":"4130-4137"},"PeriodicalIF":5.3,"publicationDate":"2026-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-02-06 | DOI: 10.1109/LRA.2026.3662585
Yunfan Zhang;Yi Gan
This letter introduces a 10-active degree-of-freedom (DoF) robotic dexterous hand incorporating 4 modular fingers and 1 opposable thumb. Each finger (except the middle one) has 2 active DoF, implemented through 1 active flexion-extension (F-E) Proximal Interphalangeal (PIP) joint, 1 passive F-E Distal Interphalangeal (DIP) joint, and 1 active abduction-adduction (A-A) Metacarpophalangeal (MCP) joint. Specifically, the F-E motion of the PIP joint is actuated by a linear motor, and that of the DIP joint is mechanically coupled to it through a link. In contrast, the MCP joint’s A-A motion is pneumatically actuated due to its lower force requirements. A mathematical model that captures the chamber wall interactions in three consecutive stages (i.e., before contact, contact initiation, and during contact) is established to relate the pneumatic actuation to the finger A-A motion. Additionally, the proposed thumb has 3 active DoF driven by 3 separate motors, allowing it to perform opposition movements to the other fingers. In the grasp evaluation, our hand successfully reproduces 7 of the 10 Kapandji test positions and 25 of the 33 grasps defined by the GRASP taxonomy.
{"title":"Design and Analysis of a Robotic Dexterous Hand: Combining Linkage Driven and Pneumatic Actuation","authors":"Yunfan Zhang;Yi Gan","doi":"10.1109/LRA.2026.3662585","DOIUrl":"https://doi.org/10.1109/LRA.2026.3662585","url":null,"abstract":"This letter introduces a 10-active degree-of-freedom (DoF) robotic dexterous hand incorporating 4 modular fingers and 1 opposable thumb. Each finger (except the middle one) has 2-active-DoF implemented through 1 active flexion-extension (F-E) Proximal Interphalangeal (PIP) joint, 1 passive F-E Distal Interphalangeal (DIP) joint, and 1 active abduction-adduction (A-A) Metacarpophalangeal (MCP) joint. Specifically, the F-E motion of the PIP joint is actuated by a linear motor, and that of the DIP joint is mechanically coupled through a link. In contrast, the MCP joint’s A-A motion is pneumatically actuated due to its lower force requirements. A mathematical model that captures the chamber wall interactions in three consecutive stages (i.e., before contact, contact initiation, and during contact) is established to relate the pneumatic actuation to the finger A-A motion. Additionally, the proposed thumb has 3-active-DoF driven by 3 separate motors, allowing it to perform opposition movements to the other fingers. In the grasp evaluation, our hand successfully reproduces 7 out of 10 Kapandji test and 25 out of 33 grasps defined by the GRASP taxonomy.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 3","pages":"3844-3851"},"PeriodicalIF":5.3,"publicationDate":"2026-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-02-01 | Epub Date: 2025-12-18 | DOI: 10.1109/lra.2025.3645700
Viola Del Bono, Emma Capaldi, Anushka Kelshiker, Ayhan Aktas, Hiroyuki Aihara, Sheila Russo
Soft optical sensors hold potential for enhancing minimally invasive procedures like colonoscopy, yet their complex, multi-modal responses pose significant challenges. This work introduces a machine learning (ML) framework for real-time estimation of 3D shape and contact force in a soft robotic sleeve for colonoscopy. To overcome the limitations of manual calibration and to collect large datasets for ML, we developed an automated platform for collecting data across a range of orientations, curvatures, and contact forces. A cascaded ML architecture was implemented for sequential estimation of contact force and 3D shape, achieving errors of 4.7% for curvature, 2.37% for orientation, and 5.5% for force tracking. We also explored the potential of ML for contact localization by training a model to estimate contact intensity and location across 16 indenters distributed along the sleeve. The force intensity was estimated with an error ranging from 0.06 N to 0.31 N across the indenters. Despite the proximity of the contact points, the system achieved high localization performance, with 8 indenters reaching over 80% accuracy, demonstrating promising spatial resolution.
{"title":"Multi-modal sensing in colonoscopy: a data-driven approach.","authors":"Viola Del Bono, Emma Capaldi, Anushka Kelshiker, Ayhan Aktas, Hiroyuki Aihara, Sheila Russo","doi":"10.1109/lra.2025.3645700","DOIUrl":"10.1109/lra.2025.3645700","url":null,"abstract":"<p><p>Soft optical sensors hold potential for enhancing minimally invasive procedures like colonoscopy, yet their complex, multi-modal responses pose significant challenges. This work introduces a machine learning (ML) framework for real-time estimation of 3D shape and contact force in a soft robotic sleeve for colonoscopy. To overcome limitations of manual calibration and collect large datasets for ML, we developed an automated platform for collecting data across a range of orientations, curvatures, and contact forces. A cascaded ML architecture was implemented for sequential estimation of contact force and 3D shape, enabling an accuracy with errors of 4.7% for curvature, 2.37% for orientation, and 5.5% for force tracking. We also explored the potential of ML for contact localization by training a model to estimate contact intensity and location across 16 indenters distributed along the sleeve. The force intensity was estimated with an error ranging from 0.06 N to 0.31 N throughout the indenters. Despite the proximity of the contact points, the system achieved high localization performances, with 8 indenters reaching over 80% accuracy, demonstrating promising spatial resolution.</p>","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 2","pages":"2018-2025"},"PeriodicalIF":5.3,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12811025/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145998055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-28 | DOI: 10.1109/LRA.2026.3656038
{"title":"IEEE Robotics and Automation Society Information","authors":"","doi":"10.1109/LRA.2026.3656038","DOIUrl":"https://doi.org/10.1109/LRA.2026.3656038","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 2","pages":"C3-C3"},"PeriodicalIF":5.3,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11367139","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-28 | DOI: 10.1109/LRA.2026.3656040
{"title":"IEEE Robotics and Automation Letters Information for Authors","authors":"","doi":"10.1109/LRA.2026.3656040","DOIUrl":"https://doi.org/10.1109/LRA.2026.3656040","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 2","pages":"C4-C4"},"PeriodicalIF":5.3,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11367250","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-23 | DOI: 10.1109/LRA.2026.3656771
Taekbeom Lee;Dabin Kim;Youngseok Jang;H. Jin Kim
We present HERE, an active 3D scene reconstruction framework based on neural radiance fields, enabling high-fidelity implicit mapping. Our approach centers around an active learning strategy for camera trajectory generation, driven by accurate identification of unseen regions, which supports efficient data acquisition and precise scene reconstruction. The key to our approach is epistemic uncertainty quantification based on evidential deep learning, which directly captures data insufficiency and exhibits a strong correlation with reconstruction errors. This allows our framework to more reliably identify unexplored or poorly reconstructed regions compared to existing methods, leading to more informed and targeted exploration. Additionally, we design a hierarchical exploration strategy that leverages learned epistemic uncertainty, where local planning extracts target viewpoints from high-uncertainty voxels based on visibility for trajectory generation, and global planning uses uncertainty to guide large-scale coverage for efficient and comprehensive reconstruction. The effectiveness of the proposed method in active 3D reconstruction is demonstrated by achieving higher reconstruction completeness compared to previous approaches on photorealistic simulated scenes across varying scales, while a hardware demonstration further validates its real-world applicability.
{"title":"HERE: Hierarchical Active Exploration of Radiance Field With Epistemic Uncertainty Minimization","authors":"Taekbeom Lee;Dabin Kim;Youngseok Jang;H. Jin Kim","doi":"10.1109/LRA.2026.3656771","DOIUrl":"https://doi.org/10.1109/LRA.2026.3656771","url":null,"abstract":"We present <italic>HERE</i>, an active 3D scene reconstruction framework based on neural radiance fields, enabling high-fidelity implicit mapping. Our approach centers around an active learning strategy for camera trajectory generation, driven by accurate identification of unseen regions, which supports efficient data acquisition and precise scene reconstruction. The key to our approach is epistemic uncertainty quantification based on evidential deep learning, which directly captures data insufficiency and exhibits a strong correlation with reconstruction errors. This allows our framework to more reliably identify unexplored or poorly reconstructed regions compared to existing methods, leading to more informed and targeted exploration. Additionally, we design a hierarchical exploration strategy that leverages learned epistemic uncertainty, where local planning extracts target viewpoints from high-uncertainty voxels based on visibility for trajectory generation, and global planning uses uncertainty to guide large-scale coverage for efficient and comprehensive reconstruction. The effectiveness of the proposed method in active 3D reconstruction is demonstrated by achieving higher reconstruction completeness compared to previous approaches on photorealistic simulated scenes across varying scales, while a hardware demonstration further validates its real-world applicability.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 3","pages":"3788-3795"},"PeriodicalIF":5.3,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-22 | DOI: 10.1109/LRA.2026.3656785
Jensen Gao;Suneel Belkhale;Sudeep Dasari;Ashwin Balakrishna;Dhruv Shah;Dorsa Sadigh
Machine learning for robot manipulation promises to unlock generalization to novel tasks and environments. But how should we measure the progress of these policies towards generalization? Evaluating and quantifying generalization is the Wild West of modern robotics, with each work proposing and measuring different types of generalization in its own, often difficult-to-reproduce settings. In this work, our goal is (1) to outline the forms of generalization we believe are important for robot manipulation in a comprehensive and fine-grained manner, and (2) to provide reproducible guidelines for measuring these notions of generalization. We first propose $\bigstar$-Gen, a taxonomy of generalization for robot manipulation structured around visual, semantic, and behavioral generalization. Next, we instantiate $\bigstar$-Gen with two case studies on real-world benchmarking: one based on open-source models and the Bridge V2 dataset, and another based on the bimanual ALOHA 2 platform that covers more dexterous and longer-horizon tasks. Our case studies reveal many interesting insights: for example, we observe that open-source vision-language-action models often struggle with semantic generalization, despite pre-training on internet-scale language datasets.
{"title":"A Taxonomy for Evaluating Generalist Robot Manipulation Policies","authors":"Jensen Gao;Suneel Belkhale;Sudeep Dasari;Ashwin Balakrishna;Dhruv Shah;Dorsa Sadigh","doi":"10.1109/LRA.2026.3656785","DOIUrl":"https://doi.org/10.1109/LRA.2026.3656785","url":null,"abstract":"Machine learning for robot manipulation promises to unlock generalization to novel tasks and environments. But how should we measure the progress of these policies towards generalization? Evaluating and quantifying generalization is the Wild West of modern robotics, with each work proposing and measuring different types of generalization in their own, often difficult to reproduce settings. In this work, our goal is (1) to outline the forms of generalization we believe are important for robot manipulation in a comprehensive and fine-grained manner, and (2) to provide reproducible guidelines for measuring these notions of generalization. We first propose <inline-formula><tex-math>$bigstar$</tex-math></inline-formula>-Gen, a taxonomy of generalization for robot manipulation structured around visual, semantic, and behavioral generalization. Next, we instantiate <inline-formula><tex-math>$bigstar$</tex-math></inline-formula>-Gen with two case studies on real-world benchmarking: one based on open-source models and the Bridge V2 dataset, and another based on the bimanual ALOHA 2 platform that covers more dexterous and longer horizon tasks. Our case studies reveal many interesting insights: for example, we observe that open-source vision-language-action models often struggle with semantic generalization, despite pre-training on internet-scale language datasets.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 3","pages":"3182-3189"},"PeriodicalIF":5.3,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-22 | DOI: 10.1109/LRA.2026.3656783
Quang Ngoc Pham;Jonas Eschmann;Yang Zhou;Alejandro Ojeda Olarte;Giuseppe Loianno;Van Anh Ho
The increasing use of drones in human-centric applications highlights the need for designs that can survive collisions and recover rapidly, minimizing risks to both humans and the environment. We present HoLoArm, a quadrotor with compliant arms inspired by the nodus structure of dragonfly wings. This design provides natural flexibility and resilience while preserving flight stability, which is further reinforced by the integration of a Reinforcement Learning (RL) control policy that enhances both recovery and hovering performance. Experimental results demonstrate that HoLoArm can passively deform in any direction, including the axial one, and recover within 0.3–0.6 s depending on the direction and severity of the impact. The drone can survive collisions at speeds up to 7.6 m/s and carry a 540 g payload while maintaining stable flight. This work contributes to the morphological design of soft aerial robots with high agility and reliable safety, enabling operation in cluttered and human-shared environments, and lays the groundwork for future fully soft drones that integrate compliant structures with intelligent control.
{"title":"HoLoArm: Deformable Arms for Collision-Tolerant Quadrotor Flight","authors":"Quang Ngoc Pham;Jonas Eschmann;Yang Zhou;Alejandro Ojeda Olarte;Giuseppe Loianno;Van Anh Ho","doi":"10.1109/LRA.2026.3656783","DOIUrl":"https://doi.org/10.1109/LRA.2026.3656783","url":null,"abstract":"The increasing use of drones in human-centric applications highlights the need for designs that can survive collisions and recover rapidly, minimizing risks to both humans and the environment. We present <italic>HoLoArm</i>, a quadrotor with compliant arms inspired by the <italic>nodus</i> structure of dragonfly wings. This design provides natural flexibility and resilience while preserving flight stability, which is further reinforced by the integration of a Reinforcement Learning (RL) control policy that enhances both recovery and hovering performance. Experimental results demonstrate that <italic>HoLoArm</i> can passively deform in any direction, including axial one, and recover within 0.3–0.6 s depending on the direction and level of the impact. The drone can survive collisions at speeds up to 7.6 m/s and carry a 540 g payload while maintaining stable flight. This work contributes to the morphological design of soft aerial robots with high agility and reliable safety, enabling operation in cluttered and human shared environments, and lays the groundwork for future fully soft drones that integrate compliant structures with intelligent control.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 3","pages":"3582-3589"},"PeriodicalIF":5.3,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11361075","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-22 | DOI: 10.1109/LRA.2026.3656791
Yongjian Zhao;Yuyan Qi;Jiaqi Shao;Bin Sun;Min Wang;Songyi Zhong;Yang Yang
High power density and energy efficiency are critical for achieving agile locomotion and sustained operation in miniature flapping-wing robots. Here, a pneumatic linear reciprocating oscillator is developed as an actuation solution. The oscillator leverages the Bernoulli principle to establish a positive feedback mechanism through coordinated interactions among a soft membrane, a piston, and the airflow. Experimental validation demonstrates that the oscillator-based flapping-wing robot can generate a lift of 0.43 N, enabling take-off and sustained flight in unstructured environments. The minimal oscillation unit exhibits maximum input and output specific power of 710.5 W/kg and 220.7 W/kg, respectively, with peak energy conversion efficiency reaching 41.9%. This design represents a paradigm shift from conventional electromechanical systems, offering two fundamental advancements: (i) simplified robotic drive architectures through an oscillator-based mechanism, and (ii) a foundation for hybrid energy systems that reduce reliance on electricity.
{"title":"An Energy-Efficient and Powerful Oscillator for Micro-Air Vehicles With Electronics-Free Flapping","authors":"Yongjian Zhao;Yuyan Qi;Jiaqi Shao;Bin Sun;Min Wang;Songyi Zhong;Yang Yang","doi":"10.1109/LRA.2026.3656791","DOIUrl":"https://doi.org/10.1109/LRA.2026.3656791","url":null,"abstract":"High power density and energy efficiency are critical for achieving agile locomotion and sustained operation in miniature flapping-wing robots. Here, a pneumatic linear reciprocating oscillator is developed as an actuation solution. The oscillator leverages the Bernoulli principle to establish a positive feedback mechanism through coordinated interactions among a soft membrane, a piston, and the airflow. Experimental validation demonstrates that the oscillator-based flapping-wing robot can generate a lift of 0.43 N to enable take-off and sustained flight in unstructured environments. The minimal oscillation unit exhibits maximum input and output specific power of 710.5 W/kg and 220.7 W/kg, respectively, with peak energy conversion efficiency reaching 41.9% . This design represents a paradigm shift from conventional electromechanical systems, offering two fundamental advancements: (i) simplified robotic drive architectures through an oscillator-based mechanism, and (ii) a foundation for hybrid energy systems that reduce reliance on electricity.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 3","pages":"3174-3181"},"PeriodicalIF":5.3,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}