Pub Date: 2026-04-01. Epub Date: 2026-02-13. DOI: 10.1109/lra.2026.3664592
Max McCandless, Jonathan Hamid, Sammy Elmariah, Nathaniel Langer, Pierre E Dupont
To move away from open-heart surgery towards safer transcatheter procedures, there is a growing need for improved imaging techniques and robotic solutions to enable simple, accurate tool navigation. Common imaging modalities, such as fluoroscopy and ultrasound, have limitations that can be overcome using cardioscopy, i.e., direct optical visualization inside the beating heart. We present a cardioscope designed as a steerable balloon. As a balloon, it can be collapsed to pass through the vasculature and subsequently inflated inside the heart for visualization and tool delivery through an integrated working channel. Through careful design of balloon wall thickness, a single input, balloon inflation pressure, is used to sequentially and independently control two outputs: balloon diameter (corresponding to field-of-view diameter) and balloon bending angle (enabling precise working channel positioning). This balloon technology can be tuned to produce cardioscopes designed for a range of intracardiac tasks. To illustrate this approach, a balloon design is presented for the specific task of aortic leaflet laceration. Image-based closed-loop control of bending angle is also demonstrated as a means of enabling stable orientation control during tool insertion and removal.
Closed-loop Control of Steerable Balloon Endoscopes for Robot-assisted Transcatheter Intracardiac Procedures. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 4211-4218. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12994373/pdf/
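The single-input, two-output behavior described in the abstract above (inflation pressure first sets balloon diameter, then bending angle) can be sketched as a toy piecewise model. This is an illustrative assumption about the sequencing only; the transition pressure and rate constants below are hypothetical, not values from the paper.

```python
# Toy model of the single-input, two-output balloon behavior: below a
# transition pressure the balloon mainly inflates (diameter grows, no
# bending); above it, the thinner wall section yields and the tip bends.
# All constants are illustrative assumptions, not values from the paper.

def balloon_state(pressure_kpa, p_transition=20.0,
                  d_rate=0.8, max_diameter=25.0, bend_rate=3.0):
    """Return (diameter_mm, bend_deg) for a given inflation pressure."""
    if pressure_kpa <= p_transition:
        diameter = min(d_rate * pressure_kpa, max_diameter)
        bend = 0.0
    else:
        diameter = min(d_rate * p_transition, max_diameter)
        bend = bend_rate * (pressure_kpa - p_transition)
    return diameter, bend
```

Because the two regimes are separated by the transition pressure, a single pressure source can address diameter and bending angle one after the other, which is the design idea the abstract attributes to the tailored wall thickness.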
Reliable odometry in highly dynamic environments remains challenging when it relies on ICP-based registration: ICP assumes near-static scenes and degrades in repetitive or low-texture geometry. We introduce Dynamic-ICP, a Doppler-aware registration framework. The method (i) estimates ego translational velocity from per-point Doppler velocity via robust regression and builds a velocity filter, (ii) clusters dynamic objects and reconstructs object-wise translational velocities from ego-compensated radial measurements, (iii) predicts dynamic points with a constant-velocity model, and (iv) aligns scans using a compact objective that combines a point-to-plane geometric residual with a translation-invariant, rotation-only Doppler residual. The approach requires no external sensors or sensor-vehicle calibration and operates directly on FMCW LiDAR range and Doppler velocities. We evaluate Dynamic-ICP on three real-world datasets (HeRCULES, HeLiPR, AevaScenes), focusing on highly dynamic scenes. Dynamic-ICP consistently improves rotational stability and translation accuracy over state-of-the-art methods.
Dynamic-ICP: Doppler-Aware Iterative Closest Point Registration for Dynamic Scenes. Dong Wang; Daniel Casado Herraez; Stefan May; Andreas Nüchter. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 5174-5181. Pub Date: 2026-03-03. DOI: 10.1109/LRA.2026.3669808. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11419773
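Step (i) above, estimating the ego translational velocity from per-point Doppler readings, can be sketched with ordinary least squares plus iterative inlier re-selection as a simple stand-in for the paper's robust regression. The model is that a static point seen along unit ray d_i returns a radial velocity of approximately -d_i·v_ego; the threshold and iteration count are assumptions.

```python
import numpy as np

def estimate_ego_velocity(directions, dopplers, iters=3, thresh=0.5):
    """Estimate ego translational velocity from per-point Doppler readings.

    directions: (n, 3) unit ray directions; dopplers: (n,) measured radial
    velocities.  Static points satisfy doppler_i = -d_i . v_ego; moving
    points violate the model, so we alternate least squares with inlier
    re-selection (a simple stand-in for robust regression).
    """
    mask = np.ones(len(dopplers), dtype=bool)
    v = np.zeros(3)
    for _ in range(iters):
        A, b = -directions[mask], dopplers[mask]
        v, *_ = np.linalg.lstsq(A, b, rcond=None)
        residuals = np.abs(-directions @ v - dopplers)
        mask = residuals < thresh     # points consistent with a static world
    return v, mask  # mask plays the role of the "velocity filter"
```

The returned mask separates (approximately) static points from dynamic ones, which is the input the clustering step (ii) would then consume.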
Pub Date: 2026-03-02. DOI: 10.1109/LRA.2026.3668985
Bowen Li;Xin Li;Hongguang Xu;Qitao Huang
Hydraulic systems have been widely applied in lower-limb prostheses, primarily for their compact actuation and inherent damping capability. However, when applied to powered prosthetic feet, valves and other damping elements cause unavoidable energy dissipation, thereby constraining their power density. To address this limitation, we propose a valve-less electro-hydrostatic powered prosthetic foot aimed at enhancing power efficiency. In addition, a gas accumulator’s nonlinear elasticity is considered, forming a hydraulic series elastic actuator with passive stiffness comparable to the human ankle during walking. This configuration lowers the motor’s required speed, consequently reducing its actual electrical power. The proposed design was evaluated through a treadmill walking experiment with a non-impaired subject walking at 1.1 m/s. The results showed that the prosthetic foot provided sufficient positive power relative to human reference values. Moreover, the prosthesis achieved a peak output power of 206.3 ± 15.0 W, while the corresponding motor electrical power was only 147.5 ± 29.5 W. Our study demonstrates that the electro-hydrostatic system holds significant potential for enhancing the power density of powered prosthetic feet.
A Valve-Less Electro-Hydrostatic Powered Prosthetic Foot to Improve the Power Efficiency During Walking. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 4833-4840.
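The gas accumulator's role as a nonlinear series spring can be illustrated by linearizing a polytropic gas spring about its operating point: from F = p·A and p·V^n = const, the local stiffness is k = n·p0·A²/V0. This is a textbook relation, not the paper's model, and the parameter values below are hypothetical.

```python
def gas_spring_stiffness(p0_pa, v0_m3, piston_area_m2, n=1.4):
    """Linearized stiffness of a gas accumulator acting as a series spring.

    For a polytropic gas (p * V**n = const), a piston displacement x changes
    the volume by -A*x and the force F = p*A; linearizing about (p0, V0):
        k = dF/dx = n * p0 * A**2 / V0   [N/m]
    Illustrative model only; the paper's accumulator parameters are not given.
    """
    return n * p0_pa * piston_area_m2**2 / v0_m3
```

Because stiffness scales with precharge pressure p0 and inversely with volume V0, such an accumulator can be sized so its passive stiffness falls near the human ankle's, which is the design point the abstract describes.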
Pub Date: 2026-02-27. DOI: 10.1109/LRA.2026.3668590
Han Zheng;Rong Xiong;Yue Wang;Jun Wu
Object segmentation is a critical prerequisite for robotic tasks such as grasping and assembly. While high accuracy and reliability typically require extensive real-world data, its collection and annotation are costly. Although synthetic data generated through physically-based rendering mitigates this need, a persistent domain gap hinders model performance. This letter introduces a novel hyper-realistic synthetic data generation method to mitigate this gap with minimal real-world data. By extracting domain information from limited real scenes, we shift synthetic data toward the target domain. Realistic backgrounds are synthesized using generative models, while a two-stage style transfer, guided by anchor image styles, adapts foregrounds. Our method achieves performance comparable to models trained on thousands of real images using as few as one real image, significantly reducing the reliance on large-scale data collection.
Sim2Real Domain Shifting: Hyper-Realistic Data Generation for Object Segmentation. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 4761-4768.
Pub Date: 2026-02-27. DOI: 10.1109/LRA.2026.3668579
Kaice Jiang;Qingxiao Wu;Sicong Li;Feng Zhu;Yingjian Fang;Jianxin Cai
Next Best View (NBV) algorithms are a critical area of research in 3D reconstruction. They aim to efficiently reconstruct 3D scenes by maximizing information gain from the next optimal viewpoint. However, current NBV methods often neglect the importance of high-quality candidate view sampling, leading to inconsistent quality of the candidate viewpoint set. Moreover, these methods frequently encounter difficulties in extracting effective information for accurate information gain estimation, especially when reconstructing complex objects or large-scale scenes. To address these challenges, we propose UNITE-NBV. Our method achieves accurate next best view selection across various scenes by effectively uniting the proposed high-quality candidate view sampling strategy and information gain estimation network. Specifically, we introduce Spherical Uncertainty Sampling (SUS), a novel candidate viewpoint sampling method. This method calculates the scene’s uncertainty field and maps it into a spherical sampling space. Within this space, the uncertainty distribution guides the sampling of high-quality candidate viewpoints. Additionally, we propose a Multi-Expert Information Gain Network (MEIGN) that performs disentangled encoding and expert encoder sequence processing on features extracted from the reconstructed scene. These processed features are then dynamically fused using sparse gating and Multi-Head Self-Attention, enabling accurate information gain estimation for candidate views. Extensive experimental results on both the small-object dataset ShapeNet and various large-scale 3D scene datasets demonstrate the effectiveness and superior performance of our proposed method. The code will be released.
UNITE-NBV: Uncertainty-Driven and Information-Enhanced Gain Estimation for Next Best View. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 5166-5173.
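Uncertainty-weighted candidate view sampling in a spherical space, in the spirit of the SUS step described above, can be sketched as follows. The grid parameterization, the fixed camera radius, and the weighting scheme are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def sample_candidate_views(uncertainty, n_samples=8, radius=2.0, seed=0):
    """Sample viewpoints on a sphere, weighted by a per-cell uncertainty map.

    `uncertainty` is an (n_theta, n_phi) grid over the sphere (assumed
    nonzero somewhere), a stand-in for an uncertainty field mapped into a
    spherical sampling space; cells with higher uncertainty are sampled
    more often, concentrating candidate views where the reconstruction
    is least certain.
    """
    rng = np.random.default_rng(seed)
    n_theta, n_phi = uncertainty.shape
    probs = uncertainty.ravel() / uncertainty.sum()
    cells = rng.choice(uncertainty.size, size=n_samples, p=probs)
    theta = (cells // n_phi + 0.5) / n_theta * np.pi    # polar angle
    phi = (cells % n_phi + 0.5) / n_phi * 2 * np.pi     # azimuth
    views = radius * np.stack([np.sin(theta) * np.cos(phi),
                               np.sin(theta) * np.sin(phi),
                               np.cos(theta)], axis=1)
    return views  # (n_samples, 3) camera positions looking at the origin
```

Each sampled position would then be scored by an information-gain estimator (the role MEIGN plays in the paper) before the best view is selected.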
This paper presents a curriculum-based reinforcement learning framework for training precise and high-performance jumping policies for the robot Olympus. Separate policies are developed for vertical and horizontal jumps, leveraging a simple yet effective strategy. First, we densify the inherently sparse jumping reward using the laws of projectile motion. Next, a reference state initialization scheme is employed to accelerate the exploration of dynamic jumping behaviors. We also present a walking policy that, when combined with the jumping policies, unlocks versatile and dynamic locomotion capabilities. Comprehensive testing validates walking on varied terrain surfaces and jumping performance that exceeds previous works, effectively crossing the Sim2Real gap. Experimental validation demonstrates horizontal jumps up to 1.25 m with centimeter accuracy and vertical jumps up to 1.0 m. Additionally, we show that with only minor modifications, the proposed method can be used to learn omnidirectional jumping.
Towards Quadrupedal Jumping and Walking for Dynamic Locomotion Using Reinforcement Learning. Jørgen Anker Olsen; Lars Rønhaug Pettersen; Kostas Alexis. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 4809-4816. Pub Date: 2026-02-27. DOI: 10.1109/LRA.2026.3668467.
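The projectile-motion reward densification described above can be sketched as predicting the ballistic landing point from the current base state and penalizing its distance to the target, which gives a learning signal at every step of the flight phase rather than only at touchdown. The exact shaping terms of the paper are not given here, so this is an illustrative form only.

```python
import numpy as np

def dense_jump_reward(pos, vel, target, g=9.81):
    """Dense jumping reward from projectile motion (illustrative sketch).

    Given base position and velocity during flight, ballistic motion
    z(t) = pz + vz*t - 0.5*g*t**2 predicts the time when the height
    returns to z = 0; the landing point follows from the horizontal
    velocities, and the reward is its negative distance to the target.
    """
    px, py, pz = pos
    vx, vy, vz = vel
    # time of flight until z returns to 0 (positive root of the quadratic)
    t = (vz + np.sqrt(max(vz**2 + 2 * g * pz, 0.0))) / g
    landing = np.array([px + vx * t, py + vy * t, 0.0])
    return -np.linalg.norm(landing - np.asarray(target))
```

Because the predicted landing point changes smoothly with the takeoff state, this term turns the sparse "did it land on target" signal into a gradient the policy can follow throughout the jump.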
Pub Date: 2026-02-27. DOI: 10.1109/LRA.2026.3668451
Yuan Chen;Jie Jiang
When uncrewed aerial vehicles (UAVs) conduct exploration tasks in weakly textured environments, such as planetary surfaces or outdoor scenes with sparse features, the absence of GPS typically necessitates the use of visual SLAM for localization. However, feature sparsity, motion blur caused by rapid camera movements, and viewpoint variations often lead to failures in feature-based pose tracking and relocalization. To address this issue, we propose a Global-Local feature learning Self-Attention Encoder (GLAFE), which simultaneously generates enhanced local and global feature descriptors by exploiting the correlations between local features, thereby improving robustness and efficiency in weakly textured scenes with viewpoint changes. A multi-objective optimization strategy based on shared samples is proposed to facilitate the joint learning of global and local features for GLAFE. Experiments on simulated Mars surface images and real-world flight data demonstrate that the proposed approach achieves better comprehensive performance in terms of robustness, accuracy, and efficiency compared with classical retrieval-based and other deep learning methods.
GLAFE: A Global-Local Feature Learning Self-Attention Encoder for UAV Relocalization in Weak-Texture Environments. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 5151-5157.
Robust forward-looking sonar (FLS) odometry is critical for underwater autonomous navigation but is hindered by severe noise and sparse textures in acoustic imaging. Traditional Fourier-based methods are susceptible to such degradations, while end-to-end deep learning approaches often struggle to learn intrinsic geometric relationships. We propose a novel deep learning framework that synergizes classical signal processing with learnable architectures. Our method decomposes pose estimation into rotation and translation stages, utilizing an improved Trans-UNet to enhance image feature interaction. Specifically, the rotation network leverages the Radon transform for noise filtering, combined with a multi-angle correlation layer to determine angular relationships. Following rotation correction, an improved learnable phase correlation module estimates translation within an end-to-end trainable system. Experiments on public datasets demonstrate that our method achieves outstanding odometry performance even without loop closure detection, and zero-shot evaluations on wetland datasets further validate its strong generalization capability.
Deep Learning-Based Fourier Registration for Forward-Looking Sonar Odometry in Texture-Sparse Underwater Environments. Peng Yao; Qiming Liu; Yingming Sun; Yalu Wang; Jiatao Yu. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 4817-4824. Pub Date: 2026-02-27. DOI: 10.1109/LRA.2026.3668623.
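The classical, non-learnable baseline for the translation step the paper builds on is standard Fourier phase correlation: the normalized cross-power spectrum of two relatively shifted images has an inverse FFT that peaks at the shift. A minimal sketch (integer circular shifts only; the paper's learnable module and sonar-specific handling are not reproduced here):

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Recover the (dy, dx) translation taking img_a to img_b.

    Classical phase correlation: whiten the cross-power spectrum so that
    only the phase (i.e., the shift) remains, then locate the correlation
    peak.  Assumes img_b is (approximately) a circular shift of img_a.
    """
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap shifts larger than half the image size to negative values
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx
```

The paper's contribution, as summarized above, is to make this correlation step learnable and pair it with a Radon-transform-based rotation stage so the pipeline stays robust to acoustic noise.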
Pub Date: 2026-02-26. DOI: 10.1109/LRA.2026.3668701
Bao Yingwei;Yang Bo;Wang Zezheng;Li Huilai;Jiang Haoyi;Sun Maowen;Ouyang Xiaoping
Lower limb exoskeleton robots have been widely researched for load-carrying assistance. Recently, quasi-passive exoskeletons using low-power elements to modulate mechanical characteristics have emerged. However, achieving effective damping and stiffness across varying tasks and loads remains challenging. This letter proposes a dual-mode actuator (DMA) inspired by knee joint impedances, achieving controllable damping force and high spring stiffness for assistance, with good backdrivability for human-robot transparency. Based on the DMA, a quasi-passive lower limb exoskeleton (QLLE) is proposed and evaluated during loaded walking and squatting. Experimental results demonstrated that the DMA achieved a sinusoidal damping force tracking error of 5.9% at 1 Hz, a spring stiffness of 21.7 N/mm, and an unassisted backdrive force of 14.3 N for compression. In addition, QLLE assistance reduced the maximum net metabolic cost by 8.5% during walking and 15.6% during squatting, with the load effectively transferred to the ground. These findings highlight the potential of QLLEs in real-world applications, such as manual material transportation.
A Dual-Mode Hydraulic Actuator for a Quasi-Passive Load-Carrying Exoskeleton in Multiple Conditions. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 5135-5142.
We propose a decision-theoretic framework in which a robot can strategically shape a human’s inferred prosocial state during repeated interactions. Modeling the human’s prosociality as a latent state that evolves over time, the robot learns to infer and influence this state through its own actions, including helping and signaling. We formalize this as a latent-state POMDP with limited observations and learn the transition and observation dynamics using expectation maximization. The resulting belief-based policy balances task and social objectives, selecting actions that maximize long-term cooperative outcomes. We evaluate the model using data from user studies and show that the learned policy outperforms baseline strategies in both team performance and observed human cooperative behavior.
Strategic Shaping of Human Prosociality: A Latent-State POMDP Framework. Zahra Zahedi; Xinyue Hu; Shashank Mehrotra; Mark Steyvers; Kumar Akash. IEEE Robotics and Automation Letters, vol. 11, no. 4, pp. 4729-4736. Pub Date: 2026-02-25. DOI: 10.1109/LRA.2026.3668141.
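The belief-based policy above rests on a standard Bayes filter over the latent prosociality state. One update step, with action-conditioned transition and observation matrices of the kind the paper learns via expectation maximization, can be sketched as follows; the discrete-state shapes and indexing are illustrative assumptions.

```python
import numpy as np

def belief_update(belief, action, obs, T, O):
    """One Bayes-filter step over a discrete latent (prosociality) state.

    belief: (n_states,) current belief; T[action]: (n_states, n_states)
    transition matrix under the robot's action; O[action]: (n_states,
    n_obs) observation likelihoods.  Both would be learned via EM.
    """
    predicted = belief @ T[action]           # propagate through dynamics
    updated = predicted * O[action][:, obs]  # weight by obs likelihood
    return updated / updated.sum()           # renormalize to a distribution
```

Iterating this update over an interaction gives the belief on which the policy conditions when trading off task reward against nudging the human toward a more prosocial state.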