Continuum concentric push–pull robots: A Cosserat rod model
Pub Date: 2024-09-14 | DOI: 10.1177/02783649241263366
Matthias Tummers, Frédéric Boyer, Vincent Lebastard, Alexis Offermann, Jocelyne Troccaz, Benoît Rosa, M. Taha Chikhaoui
Various approaches and structures have recently emerged for designing continuum robots. One of the most promising is the concept of continuum concentric push–pull robots (CPPRs), which combine several key advantages of tendon-actuated, multi-backbone, and concentric-tube robots (direct curvature actuation, small outer/inner diameter ratio, free lumen, etc.). Geometrically exact models of these recently introduced robots have yet to be developed to leverage their full potential. This article extends the usual formulation of Cosserat rod theory to account for this new type of continuum robot, constituted by sliding rods in the shape of tubes whose cross-sections are neither uniform nor symmetrical along their length. The introduced model accommodates versatile design options, external loads, 3D deformations, an arbitrary number of tubes and centroid-line profiles, as well as a new actuation method consisting of an input rotation. Numerical simulations and experiments on CPPR prototypes validate our model.
{"title":"Continuum concentric push–pull robots: A Cosserat rod model","authors":"Matthias Tummers, Frédéric Boyer, Vincent Lebastard, Alexis Offermann, Jocelyne Troccaz, Benoît Rosa, M. Taha Chikhaoui","doi":"10.1177/02783649241263366","DOIUrl":"https://doi.org/10.1177/02783649241263366","url":null,"abstract":"Various approaches and structures emerged recently to design continuum robots. One of the most promising designs regards a new concept of continuum concentric push–pull robots (CPPRs) that have the characteristic of combining several key advantages of tendon actuated, multi-backbone, and concentric tube ones (direct curvature actuation, small outer/inner diameter ratio, free lumen, etc.). Geometrically-exact models of such recently introduced robots are yet to be developed to gain leverage of their full potential. This article extends beyond usual definitions of Cosserat rod theory in order to take into account this new type of continuum robots, constituted by sliding rods, in a shape of tubes whose cross-sections are neither uniform nor symmetrical along their entire length. The introduced model is capable of considering versatile design options, external loads, 3D deformations, an arbitrary number of tubes and profiles of the centroid lines, as well as a new actuation method consisting of an input rotation. Numerical simulations and experiments on CPPR prototypes validate our model.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142264767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transfer learning in robotics: An upcoming breakthrough? A review of promises and challenges
Pub Date: 2024-09-14 | DOI: 10.1177/02783649241273565
Noémie Jaquier, Michael C Welle, Andrej Gams, Kunpeng Yao, Bernardo Fichera, Aude Billard, Aleš Ude, Tamim Asfour, Danica Kragic
Transfer learning is a conceptually enticing paradigm in the pursuit of truly intelligent embodied agents. The core concept—reusing prior knowledge to learn in and from novel situations—is successfully leveraged by humans to handle unfamiliar situations. In recent years, transfer learning has received renewed interest from the community from different perspectives, including imitation learning, domain adaptation, and transfer of experience from simulation to the real world, among others. In this paper, we unify the concept of transfer learning in robotics and provide the first taxonomy of its kind, built around the key concepts of robot, task, and environment. Through a review of the promises and challenges in the field, we identify the need to transfer at different abstraction levels, the need to quantify the transfer gap and the quality of transfer, as well as the dangers of negative transfer. With this position paper, we hope to channel the effort of the community towards the most significant roadblocks to realizing the full potential of transfer learning in robotics.
{"title":"Transfer learning in robotics: An upcoming breakthrough? A review of promises and challenges","authors":"Noémie Jaquier, Michael C Welle, Andrej Gams, Kunpeng Yao, Bernardo Fichera, Aude Billard, Aleš Ude, Tamim Asfour, Danica Kragic","doi":"10.1177/02783649241273565","DOIUrl":"https://doi.org/10.1177/02783649241273565","url":null,"abstract":"Transfer learning is a conceptually-enticing paradigm in pursuit of truly intelligent embodied agents. The core concept—reusing prior knowledge to learn in and from novel situations—is successfully leveraged by humans to handle novel situations. In recent years, transfer learning has received renewed interest from the community from different perspectives, including imitation learning, domain adaptation, and transfer of experience from simulation to the real world, among others. In this paper, we unify the concept of transfer learning in robotics and provide the first taxonomy of its kind considering the key concepts of robot, task, and environment. Through a review of the promises and challenges in the field, we identify the need of transferring at different abstraction levels, the need of quantifying the transfer gap and the quality of transfer, as well as the dangers of negative transfer. Via this position paper, we hope to channel the effort of the community towards the most significant roadblocks to realize the full potential of transfer learning in robotics.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142264664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selected papers from WAFR 2022
Pub Date: 2024-09-14 | DOI: 10.1177/02783649241274937
Jason M O’Kane, Michael Otte, Dorsa Sadigh, Pratap Tokekar
{"title":"Selected papers from WAFR 2022","authors":"Jason M O’Kane, Michael Otte, Dorsa Sadigh, Pratap Tokekar","doi":"10.1177/02783649241274937","DOIUrl":"https://doi.org/10.1177/02783649241274937","url":null,"abstract":"","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142264665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sim-to-real transfer of adaptive control parameters for AUV stabilisation under current disturbance
Pub Date: 2024-09-10 | DOI: 10.1177/02783649241272115
Thomas Chaffre, Jonathan Wheare, Andrew Lammas, Paulo Santos, Gilles Le Chenadec, Karl Sammut, Benoit Clement
Learning-based adaptive control methods hold the potential to empower autonomous agents to mitigate the impact of process variations with minimal human intervention. However, their application to autonomous underwater vehicles (AUVs) has been constrained by two main challenges: (1) the presence of unknown dynamics in the form of sea current disturbances, which cannot be modelled or measured due to limited sensor capability, particularly on smaller low-cost AUVs, and (2) the nonlinearity of AUV tasks, where the controller response at certain operating points must be excessively conservative to meet specifications at other points. Deep Reinforcement Learning (DRL) offers a solution to these challenges by training versatile neural network policies. Nevertheless, the application of DRL algorithms to AUVs has been predominantly limited to simulated environments due to their inherent high sample complexity and the distribution shift problem. This paper introduces a novel approach that combines the Maximum Entropy Deep Reinforcement Learning framework with a classic model-based control architecture to formulate an adaptive controller. Within this framework, we propose a Sim-to-Real transfer strategy incorporating a bio-inspired experience replay mechanism, an enhanced domain randomisation technique, and an evaluation protocol executed on a physical platform. Our experimental assessments demonstrate the effectiveness of this method in learning proficient policies from suboptimal simulated models of the AUV. When transferred to a real-world vehicle, the approach achieves control performance three times higher than that of its model-based, non-adaptive but optimal counterpart.
{"title":"Sim-to-real transfer of adaptive control parameters for AUV stabilisation under current disturbance","authors":"Thomas Chaffre, Jonathan Wheare, Andrew Lammas, Paulo Santos, Gilles Le Chenadec, Karl Sammut, Benoit Clement","doi":"10.1177/02783649241272115","DOIUrl":"https://doi.org/10.1177/02783649241272115","url":null,"abstract":"Learning-based adaptive control methods hold the potential to empower autonomous agents in mitigating the impact of process variations with minimal human intervention. However, their application to autonomous underwater vehicles (AUVs) has been constrained by two main challenges: (1) the presence of unknown dynamics in the form of sea current disturbances, which cannot be modelled or measured due to limited sensor capability, particularly on smaller low-cost AUVs, and (2) the nonlinearity of AUV tasks, where the controller response at certain operating points must be excessively conservative to meet specifications at other points. Deep Reinforcement Learning (DRL) offers a solution to these challenges by training versatile neural network policies. Nevertheless, the application of DRL algorithms to AUVs has been predominantly limited to simulated environments due to their inherent high sample complexity and the distribution shift problem. This paper introduces a novel approach by combining the Maximum Entropy Deep Reinforcement Learning framework with a classic model-based control architecture to formulate an adaptive controller. In this framework, we propose a Sim-to-Real transfer strategy, incorporating a bio-inspired experience replay mechanism, an enhanced domain randomisation technique, and an evaluation protocol executed on a physical platform. Our experimental assessments demonstrate the effectiveness of this method in learning proficient policies from suboptimal simulated models of the AUV. When transferred to a real-world vehicle, the approach exhibits a control performance three times higher compared to its model-based nonadaptive but optimal counterpart.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
No compromise in solution quality: Speeding up belief-dependent continuous partially observable Markov decision processes via adaptive multilevel simplification
Pub Date: 2024-09-09 | DOI: 10.1177/02783649241261398
Andrey Zhitnikov, Ori Sztyglic, Vadim Indelman
Continuous Partially Observable Markov Decision Processes (POMDPs) with general belief-dependent rewards are notoriously difficult to solve online. In this paper, we present a complete, provable theory of adaptive multilevel simplification for two settings: a given, externally constructed belief tree, and Monte Carlo Tree Search (MCTS), which constructs the belief tree on the fly using an exploration technique. Our theory makes it possible to accelerate POMDP planning with belief-dependent rewards without any sacrifice in the quality of the obtained solution. We rigorously prove each theoretical claim in the proposed unified theory. Using the general theoretical results, we present three algorithms to accelerate continuous POMDP online planning with belief-dependent rewards. Two of our algorithms, SITH-BSP and LAZY-SITH-BSP, can be utilized on top of any method that constructs a belief tree externally. The third algorithm, SITH-PFT, is an anytime MCTS method that allows any exploration technique to be plugged in. All our methods are guaranteed to return exactly the same optimal action as their unsimplified equivalents. We replace the costly computation of information-theoretic rewards with novel adaptive upper and lower bounds, which we derive in this paper and which are of independent interest. We show that they are easy to calculate and can be tightened on demand by our algorithms. Our approach is general: any bounds that monotonically converge to the reward can be utilized to achieve a significant speedup without any loss in performance. Our theory and algorithms support the challenging setting of continuous states, actions, and observations. The beliefs can be parametric or general and represented by weighted particles. In simulation, we demonstrate a significant speedup in planning compared to baseline approaches, with guaranteed identical performance.
{"title":"No compromise in solution quality: Speeding up belief-dependent continuous partially observable Markov decision processes via adaptive multilevel simplification","authors":"Andrey Zhitnikov, Ori Sztyglic, Vadim Indelman","doi":"10.1177/02783649241261398","DOIUrl":"https://doi.org/10.1177/02783649241261398","url":null,"abstract":"Continuous Partially Observable Markov Decision Processes (POMDPs) with general belief-dependent rewards are notoriously difficult to solve online. In this paper, we present a complete provable theory of adaptive multilevel simplification for the setting of a given externally constructed belief tree and Monte Carlo Tree Search (MCTS) that constructs the belief tree on the fly using an exploration technique. Our theory allows to accelerate POMDP planning with belief-dependent rewards without any sacrifice in the quality of the obtained solution. We rigorously prove each theoretical claim in the proposed unified theory. Using the general theoretical results, we present three algorithms to accelerate continuous POMDP online planning with belief-dependent rewards. Our two algorithms, SITH-BSP and LAZY-SITH-BSP, can be utilized on top of any method that constructs a belief tree externally. The third algorithm, SITH-PFT, is an anytime MCTS method that permits to plug-in any exploration technique. All our methods are guaranteed to return exactly the same optimal action as their unsimplified equivalents. We replace the costly computation of information-theoretic rewards with novel adaptive upper and lower bounds which we derive in this paper, and are of independent interest. We show that they are easy to calculate and can be tightened by the demand of our algorithms. Our approach is general; namely, any bounds that monotonically converge to the reward can be utilized to achieve a significant speedup without any loss in performance. Our theory and algorithms support the challenging setting of continuous states, actions, and observations. The beliefs can be parametric or general and represented by weighted particles. We demonstrate in simulation a significant speedup in planning compared to baseline approaches with guaranteed identical performance.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dataset and Benchmark: Novel Sensors for Autonomous Vehicle Perception
Pub Date: 2024-09-07 | DOI: 10.1177/02783649241273554
Spencer Carmichael, Austin Buchan, Mani Ramanagopal, Radhika Ravi, Ram Vasudevan, Katherine A Skinner
Conventional cameras employed in autonomous vehicle (AV) systems support many perception tasks but are challenged by low-light or high dynamic range scenes, adverse weather, and fast motion. Novel sensors, such as event and thermal cameras, offer capabilities with the potential to address these scenarios, but they remain to be fully exploited. This paper introduces the Novel Sensors for Autonomous Vehicle Perception (NSAVP) dataset to facilitate future research on this topic. The dataset was captured with a platform including stereo event, thermal, monochrome, and RGB cameras as well as a high precision navigation system providing ground truth poses. The data was collected by repeatedly driving two ∼8 km routes and includes varied lighting conditions and opposing viewpoint perspectives. We provide benchmarking experiments on the task of place recognition to demonstrate challenges and opportunities for novel sensors to enhance critical AV perception tasks. To our knowledge, the NSAVP dataset is the first to include stereo thermal cameras together with stereo event and monochrome cameras. The dataset and supporting software suite is available at https://umautobots.github.io/nsavp .
Multi-visual-inertial system: Analysis, calibration, and estimation
Pub Date: 2024-08-26 | DOI: 10.1177/02783649241245726
Yulin Yang, Patrick Geneva, Guoquan Huang
In this paper, we study state estimation of multi-visual-inertial systems (MVIS) and develop sensor fusion algorithms to optimally fuse an arbitrary number of asynchronous inertial measurement units (IMUs) or gyroscopes and global and/or rolling shutter cameras. We are especially interested in the full calibration of the associated visual-inertial sensors, including the IMU/camera intrinsics and the IMU-IMU/camera spatiotemporal extrinsics, as well as the image readout time of rolling-shutter cameras (if used). To this end, we develop a new analytic combined IMU integration with inertial intrinsics—termed ACI³—to pre-integrate IMU measurements, which is leveraged to fuse auxiliary IMUs and/or gyroscopes alongside a base IMU. We model the multi-inertial measurements to include all the necessary inertial intrinsic and IMU-IMU spatiotemporal extrinsic parameters, while leveraging IMU-IMU rigid-body constraints to eliminate the need for auxiliary inertial poses and thus reduce computational complexity. By performing observability analysis of MVIS, we prove that the standard four unobservable directions remain, no matter how many inertial sensors are used, and we also identify, for the first time, degenerate motions for IMU-IMU spatiotemporal extrinsics and auxiliary inertial intrinsics. In addition to extensive simulations that validate our analysis and algorithms, we have built our own MVIS sensor rig and collected over 25 real-world datasets to experimentally verify the proposed calibration against the state-of-the-art calibration method Kalibr. We show that the proposed MVIS calibration achieves competitive accuracy with improved convergence and repeatability; the implementation is open sourced to benefit the community.
{"title":"Multi-visual-inertial system: Analysis, calibration, and estimation","authors":"Yulin Yang, Patrick Geneva, Guoquan Huang","doi":"10.1177/02783649241245726","DOIUrl":"https://doi.org/10.1177/02783649241245726","url":null,"abstract":"In this paper, we study state estimation of multi-visual-inertial systems (MVIS) and develop sensor fusion algorithms to optimally fuse an arbitrary number of asynchronous inertial measurement units (IMUs) or gyroscopes and global and/or rolling shutter cameras. We are especially interested in the full calibration of the associated visual-inertial sensors, including the IMU/camera intrinsics and the IMU-IMU/camera spatiotemporal extrinsics as well as the image readout time of rolling-shutter cameras (if used). To this end, we develop a new analytic combined IMU integration with inertial intrinsics—termed ACI<jats:sup>3</jats:sup>—to pre-integrate IMU measurements, which is leveraged to fuse auxiliary IMUs and/or gyroscopes alongside a base IMU. We model the multi-inertial measurements to include all the necessary inertial intrinsic and IMU-IMU spatiotemporal extrinsic parameters, while leveraging IMU-IMU rigid-body constraints to eliminate the necessity of auxiliary inertial poses and thus reducing computational complexity. By performing observability analysis of MVIS, we prove that the standard four unobservable directions remain—no matter how many inertial sensors are used, and also identify, for the first time, degenerate motions for IMU-IMU spatiotemporal extrinsics and auxiliary inertial intrinsics. In addition to extensive simulations that validate our analysis and algorithms, we have built our own MVIS sensor rig and collected over 25 real-world datasets to experimentally verify the proposed calibration against the state-of-the-art calibration method Kalibr. We show that the proposed MVIS calibration is able to achieve competing accuracy with improved convergence and repeatability, which is open sourced to better benefit the community.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploiting body redundancy to control supernumerary robotic limbs in human augmentation
Pub Date: 2024-08-22 | DOI: 10.1177/02783649241265451
Tommaso Lisini Baldi, Nicole D’Aurizio, Chiara Gaudeni, Sergio Gurgone, Daniele Borzelli, Andrea d’Avella, Domenico Prattichizzo
In the last decades, supernumerary robotic limbs (SRLs) have been proposed as technological aids for rehabilitation, assistance, and functional augmentation. Whether they are in the form of wearable devices or grounded systems, SRLs can be used to compensate for lost motor functions in patients with disabilities, as well as to augment the human sensorimotor capabilities. By using SRLs, users gain the ability to perform a wide range of complex tasks that may otherwise be challenging or even impossible with their natural limbs. Designing effective strategies and policies for the control and operation of SRLs represents a substantial challenge in their development. A key aspect that remains insufficiently addressed is the formulation of successful and intuitive augmentation policies that do not hinder the functionality of a person’s natural limbs. This work introduces an innovative strategy based on the exploitation of the redundancy of the human kinematic chain involved in a task for commanding SRLs having one degree of freedom. This concept is summarized in the definition of the Intrinsic Kinematic Null Space (IKNS). The newly developed procedure encompasses a real-time analysis of body motion and a subsequent computation of the control signal for SRLs based on the IKNS for single-arm tasks. What sets our approach apart is its explicit emphasis on incorporating user-specific biomechanical and physiological characteristics and constraints. This ensures an efficient and intuitive approach to commanding SRLs, tailored to the individual user’s needs. Towards a complete evaluation of the proposed system, we studied the users’ capability of exploiting the IKNS both in virtual and real environments. Obtained results demonstrated that the exploitation of the Intrinsic Kinematic Null Space allows to perform complex tasks involving both biological and artificial limbs, and that practice improves the ability to accurately manage the coordination of human and supernumerary artificial limbs.
A direct-drive five-bar manipulator with tuned directional first-order kinematics for low energy consumption in vertical loading
Pub Date: 2024-08-13 | DOI: 10.1177/02783649241266852
Shashank Ramesh, Mark Plecnik
The kinematic configuration space of a manipulator determines the set of all possible motions that may occur, and its differential properties have a strong, albeit indirect, influence on both static and dynamic performance. By viewing first-order kinematics as a field of Jacobian-defined ellipses across a workspace, a novel two degree-of-freedom manipulator was designed, and it is tested in this paper for its benefits. The manipulator exhibits a field of ellipses that biases transmission characteristics in the Cartesian directions of the end-effector. The horizontal direction is biased toward speed in order to move across the width of the workspace quickly, while the vertical direction is biased toward force production in order to resist gravitational loads. The latter bias endows the manipulator with load capacity in the absence of gears. Such an exclusion forgoes the extra weight, complexity, backlash, transmission losses, and fragility of gearboxes. Additionally, a direct-drive set-up improves backdrivability and transparency, which is relevant to applications that involve interacting with the environment or people. The novel design is put through an array of theoretical and experimental performance studies in comparison to a conventional direct-drive manipulator. The experimental results showed a 3.75× increase in payload capacity, a 2× increase in dynamic tracking accuracy, a 2.07× increase in dynamic cycling frequency, and at least a 3.70× reduction in power consumption, considering both static and dynamic experiments.
{"title":"A direct-drive five-bar manipulator with tuned directional first-order kinematics for low energy consumption in vertical loading","authors":"Shashank Ramesh, Mark Plecnik","doi":"10.1177/02783649241266852","DOIUrl":"https://doi.org/10.1177/02783649241266852","url":null,"abstract":"The kinematic configuration space of a manipulator determines the set of all possible motions that may occur, and its differential properties have a strong, albeit indirect, influence on both static and dynamic performance. By viewing first-order kinematics as a field of Jacobian-defined ellipses across a workspace, a novel two degree-of-freedom manipulator was designed, and is tested in this paper for its benefits. The manipulator exhibits a field of ellipses that biases transmission characteristics in Cartesian directions of the end-effector. The horizontal direction is biased toward speed in order to move across the width of the workspace quickly, while the vertical direction is biased toward force production in order to resist gravitational loads. The latter bias endows the manipulator with load capacity in the absence of gears. Such an exclusion can forego the extra weight, complexity, backlash, transmission losses, and fragility of gearboxes. Additionally, a direct drive set-up improves backdrivability and transparency. The latter is relevant to applications that involve interacting with the environment or people. Our novel design is set through an array of theoretical and experimental performance studies in comparison to a conventional direct drive manipulator. The experimental results showed a 3.75× increase in payload capacity, a 2× increase in dynamic tracking accuracy, a 2.07× increase in dynamic cycling frequency, and at least a 3.70× reduction in power consumption, considering both static and dynamic experiments.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active perception network for non-myopic online exploration and visual surface coverage
Pub Date: 2024-08-03 | DOI: 10.1177/02783649241264577
David Vutetakis, Jing Xiao
This work addresses the problem of online exploration and visual sensor coverage of unknown environments. We introduce a novel perception roadmap we refer to as the Active Perception Network (APN) that serves as a hierarchical topological graph describing how to traverse and perceive an incrementally built spatial map of the environment. The APN state is incrementally updated to expand a connected configuration space that extends throughout as much of the known space as possible, using efficient difference-awareness techniques that track the discrete changes of the spatial map to inform the updates. A frontier-guided approach is presented for efficient evaluation of information gain and covisible information, which guides view sampling and refinement to ensure maximum coverage of the unmapped space is maintained within the APN. The updated roadmap is hierarchically decomposed into subgraph regions which we use to facilitate a non-myopic global view sequence planner. A comparative analysis to several state-of-the-art approaches was conducted, showing significant performance improvements in terms of total exploration time and surface coverage, and demonstrating high computational efficiency that is scalable to large and complex environments.
{"title":"Active perception network for non-myopic online exploration and visual surface coverage","authors":"David Vutetakis, Jing Xiao","doi":"10.1177/02783649241264577","DOIUrl":"https://doi.org/10.1177/02783649241264577","url":null,"abstract":"This work addresses the problem of online exploration and visual sensor coverage of unknown environments. We introduce a novel perception roadmap we refer to as the Active Perception Network (APN) that serves as a hierarchical topological graph describing how to traverse and perceive an incrementally built spatial map of the environment. The APN state is incrementally updated to expand a connected configuration space that extends throughout as much of the known space as possible, using efficient difference-awareness techniques that track the discrete changes of the spatial map to inform the updates. A frontier-guided approach is presented for efficient evaluation of information gain and covisible information, which guides view sampling and refinement to ensure maximum coverage of the unmapped space is maintained within the APN. The updated roadmap is hierarchically decomposed into subgraph regions which we use to facilitate a non-myopic global view sequence planner. A comparative analysis to several state-of-the-art approaches was conducted, showing significant performance improvements in terms of total exploration time and surface coverage, and demonstrating high computational efficiency that is scalable to large and complex environments.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141934159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}