Pub Date : 2023-11-02 DOI: 10.1177/02783649231207654
Hyungtae Lim, Beomsoo Kim, Daebeom Kim, Eungchang Mason Lee, Hyun Myung
Global registration is a fundamental task that estimates the relative pose between two viewpoints of 3D point clouds. However, two issues degrade the performance of global registration in LiDAR SLAM: one is sparsity and the other is degeneracy. The sparsity issue is caused by the sparse characteristics of the 3D point cloud measurements of a mechanically spinning LiDAR sensor. The degeneracy issue sometimes occurs because the outlier-rejection methods reject too many correspondences, leaving fewer than three inliers. Both issues become more severe as the pose discrepancy between the two viewpoints of 3D point clouds grows. To tackle these problems, we propose a robust global registration framework, called Quatro++. Extending our previous work, which focused solely on global registration itself, we address robust global registration in terms of loop closing in LiDAR SLAM. To this end, ground segmentation is exploited to achieve robust global registration. Through the experiments, we demonstrate that our proposed method shows a higher success rate than the state-of-the-art global registration methods, overcoming the sparsity and degeneracy issues. In addition, we show that ground segmentation significantly helps to increase the success rate for ground vehicles. Finally, we apply our proposed method to the loop closing module in LiDAR SLAM and confirm that the quality of the loop constraints is improved, yielding more precise mapping results. Therefore, the experimental evidence corroborates the suitability of our method as an initial alignment in loop closing. Our code is available at https://quatro-plusplus.github.io .
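Quatro, which Quatro++ extends, copes with degeneracy by estimating a reduced quasi-SE(2) transformation (yaw plus planar translation) for ground vehicles instead of a full SE(3) pose, so fewer inlier correspondences suffice. Below is a minimal closed-form yaw-and-translation fit sketching that idea; the function name and setup are illustrative, not the authors' implementation.

```python
import numpy as np

def quasi_se2_align(src, dst):
    """Closed-form yaw + 2D translation aligning src to dst (N x 2 arrays).

    Unlike a full SE(3) fit, which needs at least three non-degenerate
    correspondences, this reduced model is well-posed with as few as two.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - cs, dst - cd
    # yaw from the summed 2D cross and dot products of the centered pairs
    yaw = np.arctan2(np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]),
                     np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1]))
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    t = cd - R @ cs
    return yaw, R, t
```

With only two correspondences surviving outlier rejection, a full 6-DoF fit is degenerate, but this reduced fit still recovers the motion exactly in the noiseless case.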
Quatro++: Robust global registration exploiting ground segmentation for loop closing in LiDAR SLAM (International Journal of Robotics Research)
We propose a structured prediction approach for robot imitation learning from demonstrations. Among various tools for robot imitation learning, supervised learning has been observed to have a prominent role. Structured prediction is a form of supervised learning that enables learning models to operate on output spaces with complex structures. Through the lens of structured prediction, we show how robots can learn to imitate trajectories belonging to not only Euclidean spaces but also Riemannian manifolds. Exploiting ideas from information theory, we propose a class of loss functions based on the f-divergence to measure the information loss between the demonstrated and reproduced probabilistic trajectories. Different types of f-divergence will result in different policies, which we call imitation modes. Furthermore, our approach enables the incorporation of spatial and temporal trajectory modulation, which is necessary for robots to be adaptive to the change in working conditions. We benchmark our algorithm against state-of-the-art methods in terms of trajectory reproduction and adaptation. The quantitative evaluation shows that our approach outperforms other algorithms regarding both accuracy and efficiency. We also report real-world experimental results on learning manifold trajectories in a polishing task with a KUKA LWR robot arm, illustrating the effectiveness of our algorithmic framework.
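The KL divergence is one member of the f-divergence family the loss functions above are built on; for two univariate Gaussians it has a well-known closed form, which gives a feel for how a divergence-based loss scores the mismatch between a demonstrated and a reproduced distribution. This per-dimension Gaussian sketch is illustrative only, not the paper's trajectory-level loss:

```python
import numpy as np

def kl_gaussians(mu1, s1, mu2, s2):
    # Closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) )
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2.0 * s2**2) - 0.5
```

The divergence is zero exactly when the two distributions match and is asymmetric; swapping in a different f-divergence (e.g., reverse KL) changes which errors are penalized most, which is the mechanism behind the distinct "imitation modes".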
A structured prediction approach for robot imitation learning (International Journal of Robotics Research, Pub Date : 2023-11-01, DOI: 10.1177/02783649231204656)
Pub Date : 2023-10-30 DOI: 10.1177/02783649231210325
Naiyao Wang, Bo Zhang, Haixu Chi, Hua Wang, Seán McLoone, Hongbo Liu
Reliable obstacle avoidance, which is essential for safe autonomous robot interaction with the real world, raises various challenges such as difficulties with obstacle perception and latent factor cognition impacting multi-modal obstacle avoidance. In this paper, we propose a Depth visUal Ego-motion Learning (DUEL) model, consisting of a cognitive generation network, a policy decision network and a potential partition network, to learn autonomous obstacle avoidance from expert policies. The DUEL model takes advantage of binocular vision to perceive scene depth. This serves as the input to the cognitive generation network, which generates obstacle avoidance policies by maximizing its causal entropy. The policy decision network then optimizes the generation of the policies with reference to expert policies. The generated obstacle avoidance policies are simultaneously transferred to the potential partition network to capture the latent factors contained within expert policies and perform multi-modal obstacle avoidance. These three core networks iteratively optimize the multi-modal policies relying on causal entropy and mutual information theorems, which are proven theoretically. Experimental comparisons with state-of-the-art models on 7 metrics demonstrate the effectiveness of the DUEL model. It achieves the best performance with an average ADE (Average Displacement Error) of 0.29 and average FDE (Final Displacement Error) of 0.55 across five different scenarios. Results show that the DUEL model can maintain an average obstacle avoidance success rate of 97% for both simulated and real-world scenarios with multiple obstacles, demonstrating its success at capturing latent factors from expert policies. Our source code is available at https://github.com/ACoTAI/DUEL .
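ADE and FDE, the metrics reported above, are standard trajectory-error measures: the mean and the final Euclidean displacement between a predicted and a ground-truth path. A minimal sketch of their usual definitions:

```python
import numpy as np

def ade_fde(pred, gt):
    """ADE: mean Euclidean distance over all timesteps; FDE: distance at the last.

    pred, gt: (T, D) arrays of predicted and ground-truth positions.
    """
    d = np.linalg.norm(pred - gt, axis=-1)  # per-timestep displacement
    return float(d.mean()), float(d[-1])
```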
DUEL: Depth visUal Ego-motion Learning for autonomous robot obstacle avoidance (International Journal of Robotics Research)
Pub Date : 2023-10-30 DOI: 10.1177/02783649231210326
Stefan Jorgensen, Marco Pavone
Consider deploying a team of robots in order to visit sites in a risky environment (i.e., where a robot might be lost during a traversal), subject to team-based operational constraints such as limits on team composition, traffic throughputs, and launch constraints. We formalize this problem using a graph to represent the environment, enforcing probabilistic survival constraints for each robot, and using a matroid (which generalizes linear independence to sets) to capture the team-based operational constraints. The resulting “Matroid Team Surviving Orienteers” (MTSO) problem has broad applications for robotics such as informative path planning, resource delivery, and search and rescue. We demonstrate that the objective for the MTSO problem has submodular structure, which leads us to develop two polynomial time algorithms which are guaranteed to find a solution with value within a constant factor of the optimum. The second of our algorithms is an extension of the accelerated continuous greedy algorithm, and can be applied to much broader classes of constraints while maintaining bounds on suboptimality. In addition to in-depth analysis, we demonstrate the efficiency of our approaches by applying them to a scenario where a team of robots must gather information while avoiding dangers in the Coral Triangle and characterize scaling and parameter selection using a synthetic dataset.
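The first of the two algorithms follows the spirit of the classic greedy rule for maximizing a monotone submodular objective under a matroid constraint: repeatedly add the feasible element with the largest marginal gain. A toy sketch with a coverage objective and a partition-matroid independence oracle (the example data are invented; the paper's algorithms additionally handle probabilistic survival constraints and come with constant-factor guarantees):

```python
def greedy_matroid(elements, gain_sets, independent):
    """Greedy maximization of a coverage objective under a matroid constraint.

    elements: candidate routes; gain_sets: element -> set of covered sites;
    independent: oracle returning True iff a set of elements is independent.
    """
    chosen, covered = [], set()
    while True:
        best, best_gain = None, 0
        for e in elements:
            if e in chosen or not independent(chosen + [e]):
                continue  # skip elements that would violate the matroid
            gain = len(gain_sets[e] - covered)  # marginal coverage gain
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break  # no feasible element improves the objective
        chosen.append(best)
        covered |= gain_sets[best]
    return chosen, covered
```

For example, with a partition matroid allowing at most one route per launch site, the oracle simply counts selected elements per partition class.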
The matroid team surviving orienteers problem and its variants: Constrained routing of heterogeneous teams with risky traversal (International Journal of Robotics Research)
Pub Date : 2023-10-30 DOI: 10.1177/02783649231210965
Linda Lastrico, Valerio Belcamino, Alessandro Carfì, Alessia Vignolo, Alessandra Sciutti, Fulvio Mastrogiovanni, Francesco Rea
We propose a dataset to study the influence of object-specific characteristics on human pick-and-place movements and to compare the quality of the motion kinematics extracted by various sensors. The dataset is also suitable for promoting a broader discussion of general learning problems in the hand-object interaction domain, such as intention recognition or motion generation, with applications in robotics. It consists of recordings of 15 subjects performing 80 repetitions of a pick-and-place action under various experimental conditions, for a total of 1200 pick-and-place movements. The data were collected with a multimodal setup composed of multiple cameras observing the actions from different perspectives, a motion capture system, and a wrist-worn inertial measurement unit. All the objects manipulated in the experiments are identical in shape, size, and appearance but differ in weight and liquid filling, which influences the carefulness required for their handling.
The effects of selected object features on a pick-and-place task: A human multimodal dataset (International Journal of Robotics Research)
Robots usually establish contacts at rigid surfaces with near-zero relative velocities. Otherwise, impact-induced energy propagates through the robot’s linkage and may cause irreversible damage to the hardware. Moreover, abrupt changes in task-space contact velocity and peak impact forces also result in abrupt changes in robot joint velocities and torques, which can compromise controllers’ stability, especially for those based on smooth models. In reality, several tasks require establishing contact at moderately high velocity. We propose to enhance task-space multi-objective controllers formulated as a quadratic program to be resilient to frictional impacts in three dimensions. We devise new constraints and reformulate the usual ones to be robust to the abrupt joint-state changes mentioned earlier. The impact event becomes a controlled process once the optimal control search space is aware of (1) the hardware-affordable impact bounds and (2) an analytically computed feasible set (polyhedra) that constrains post-impact critical states. Prior to and near the targeted contact spot, we assume, at each control cycle, that the impact will occur at the next iteration. This one-step preview makes our controller robust to impact time and location. To assess our approach, we experimentally evaluated its resilience to moderate impacts with the Panda manipulator and achieved swift grabbing tasks with the HRP-4 humanoid robot.
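The paper's controller is a full multi-objective task-space QP; as a stripped-down illustration of the idea of bounding post-impact state jumps, consider a separable QP that tracks a desired joint velocity subject to an elementwise bound on the velocity change. For this special case the optimum has a closed form, a clamp (the names and the diagonal setting are illustrative only, far simpler than the paper's formulation):

```python
import numpy as np

def impact_aware_velocity(v_des, v_prev, dv_max):
    """argmin ||v - v_des||^2  s.t.  |v - v_prev| <= dv_max (elementwise).

    For this separable box-constrained QP the optimum is a projection:
    clamp the desired velocity into the admissible jump interval.
    """
    return np.clip(v_des, v_prev - dv_max, v_prev + dv_max)
```

A general QP with coupled constraints needs a solver, but the same principle applies: the impact-induced jump enters the feasible set rather than surprising the controller.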
Impact-aware task-space quadratic-programming control (International Journal of Robotics Research, Pub Date : 2023-10-16, DOI: 10.1177/02783649231198558)
Pub Date : 2023-10-04 DOI: 10.1177/02783649231204659
Ankit J. Shah, Pritish Kamath, Shen Li, Patrick L. Craven, Kevin J. Landers, Kevin Oden, Julie Shah
When observing task demonstrations, human apprentices are able to identify whether a given task is executed correctly long before they gain expertise in actually performing that task. Prior research into learning from demonstrations (LfD) has failed to capture this notion of the acceptability of a task’s execution; meanwhile, temporal logics provide a flexible language for expressing task specifications. Inspired by this, we present Bayesian specification inference, a probabilistic model for inferring task specification as a temporal logic formula. We incorporate methods from probabilistic programming to define our priors, along with a domain-independent likelihood function to enable sampling-based inference. We demonstrate the efficacy of our model for inferring specifications, with over 90% similarity observed between the inferred specification and the ground truth—both within a synthetic domain and during a real-world table setting task.
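At its core, this kind of inference weighs candidate temporal-logic formulas by a prior and by how many demonstrations each formula accepts. A toy posterior over two mini candidates ("always" vs. "eventually" a predicate holds) sketches the shape of such a computation; the noise parameter p_acc and the domain are invented for illustration and are far simpler than the paper's probabilistic program:

```python
def always(pred):
    return lambda trace: all(pred(s) for s in trace)

def eventually(pred):
    return lambda trace: any(pred(s) for s in trace)

def posterior(candidates, priors, demos, p_acc=0.95):
    # P(formula | demos) proportional to prior(formula) times, per demo,
    # p_acc if the demo satisfies the formula and (1 - p_acc) otherwise.
    scores = []
    for phi, prior in zip(candidates, priors):
        like = 1.0
        for trace in demos:
            like *= p_acc if phi(trace) else 1.0 - p_acc
        scores.append(prior * like)
    z = sum(scores)
    return [s / z for s in scores]
```

Demonstrations that merely reach a goal state concentrate mass on the "eventually" formula, mirroring how acceptable executions pin down a specification.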
Supervised Bayesian specification inference from demonstrations (International Journal of Robotics Research)
Pub Date : 2023-10-01 DOI: 10.1177/02783649231191184
Zachary J. Harris, Annie M. Mao, Tyler M. Paine, Louis L. Whitcomb
Model-based approaches to navigation, control, and fault detection that utilize precise nonlinear models of vehicle plant dynamics will enable more accurate control and navigation, assured autonomy, and more complex missions for such vehicles. This paper reports novel theoretical and experimental results addressing the problem of parameter estimation of plant and actuator models for underactuated underwater vehicles operating in 6 degrees-of-freedom (DOF) whose dynamics are modeled by finite-dimensional Newton-Euler equations. This paper reports the first theoretical approach and experimental validation to identify simultaneously plant-model parameters (parameters such as mass, added mass, hydrodynamic drag, and buoyancy) and control-actuator parameters (control-surface models and thruster models) in 6-DOF. Most previously reported studies on parameter identification assume that the control-actuator parameters are known a priori. Moreover, this paper reports the first proof of convergence of the parameter estimates to the true set of parameters for this class of vehicles under a persistence of excitation condition. The reported adaptive identification (AID) algorithm does not require instrumentation of 6-DOF vehicle acceleration, which is required by conventional approaches to parameter estimation such as least squares. Additionally, the reported AID algorithm is applicable under any arbitrary open-loop or closed-loop control law. We report simulation and experimental results for identifying the plant-model and control-actuator parameters for an L3 OceanServer Iver3 autonomous underwater vehicle. We believe this general approach to AID could be extended to apply to other classes of machines and other classes of marine, land, aerial, and space vehicles.
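The flavor of an adaptive identification law, and the role of persistent excitation, can be seen in a scalar toy version: a gradient update driven by the prediction error of a linear-in-parameters model, with no acceleration measurement required. This is a drastic simplification of the paper's 6-DOF nullspace algorithm; the gain and model below are made up for illustration:

```python
import numpy as np

theta_true = 2.5         # unknown "plant" parameter to be identified
theta_hat = 0.0          # initial estimate
gamma = 0.5              # adaptation gain

rng = np.random.default_rng(0)
for _ in range(2000):
    phi = rng.uniform(-1.0, 1.0)    # persistently exciting regressor
    y = theta_true * phi            # measured output of the model y = theta*phi
    e = y - theta_hat * phi         # prediction error
    theta_hat += gamma * phi * e    # gradient adaptive update
```

Each step contracts the estimation error by a factor (1 - gamma * phi**2); if phi were identically zero the estimate would never move, which is exactly why a persistence-of-excitation condition appears in the convergence proof.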
Stable nullspace adaptive parameter identification of 6 degree-of-freedom plant and actuator models for underactuated vehicles: Theory and experimental evaluation (International Journal of Robotics Research)
Pub Date : 2023-09-29 DOI: 10.1177/02783649231195380
Fang Bai, Kanzhi Wu, Adrien Bartoli
We study the generalized Procrustes analysis (GPA) as a minimal formulation of the simultaneous localization and mapping (SLAM) problem. We propose KernelGPA, a novel global registration technique to solve SLAM in deformable environments. We propose the concept of a deformable transformation, which encodes the entangled pose and deformation. We define deformable transformations using a kernel method, and show that both the deformable transformations and the environment map can be solved globally in closed form, up to global scale ambiguities. We resolve the scale ambiguities with an optimization formulation that maximizes rigidity. We demonstrate KernelGPA using the Gaussian kernel, and validate its superiority on various datasets. Code and data are available at https://bitbucket.org/FangBai/deformableprocrustes .
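The rigid, two-view special case of Procrustes alignment already admits the kind of closed-form SVD solution that KernelGPA generalizes: given matched point sets, the optimal rotation and translation fall out of a single SVD. A standard Kabsch-style sketch in 2D follows; KernelGPA itself handles multiple views and deformable, kernel-parameterized transformations:

```python
import numpy as np

def rigid_procrustes(A, B):
    """Closed-form R, t minimizing ||(R @ A + t) - B||_F for 2 x N point sets."""
    ca = A.mean(axis=1, keepdims=True)
    cb = B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T  # reflection guard keeps det(R) = +1
    t = cb - R @ ca
    return R, t
```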
{"title":"Kernel-GPA: A globally optimal solution to deformable SLAM in closed-form","authors":"Fang Bai, Kanzhi Wu, Adrien Bartoli","doi":"10.1177/02783649231195380","DOIUrl":"https://doi.org/10.1177/02783649231195380","url":null,"abstract":"We study the generalized Procrustes analysis (GPA), as a minimal formulation to the simultaneous localization and mapping (SLAM) problem. We propose KernelGPA, a novel global registration technique to solve SLAM in the deformable environment. We propose the concept of deformable transformation which encodes the entangled pose and deformation. We define deformable transformations using a kernel method, and show that both the deformable transformations and the environment map can be solved globally in closed-form, up to global scale ambiguities. We solve the scale ambiguities by an optimization formulation that maximizes rigidity. We demonstrate KernelGPA using the Gaussian kernel, and validate the superiority of KernelGPA with various datasets. Code and data are available at https://bitbucket.org/FangBai/deformableprocrustes.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135199085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
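For intuition about the GPA formulation that KernelGPA generalizes: classic rigid GPA alternates between estimating a consensus mean shape and rigidly re-aligning every point cloud to it. The sketch below is this classic rigid variant (each per-cloud fit via the Kabsch algorithm), not the paper's KernelGPA with deformable transformations or its closed-form global solution:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares R, t with R @ P + t ≈ Q for 3xN point sets (Kabsch algorithm)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    return R, cq - R @ cp

def gpa(clouds, n_iter=5):
    """Alternate between estimating the mean shape and rigidly re-aligning each cloud."""
    aligned = [c.copy() for c in clouds]
    for _ in range(n_iter):
        mean = np.mean(aligned, axis=0)  # current consensus shape
        aligned = []
        for c in clouds:
            R, t = rigid_fit(c, mean)
            aligned.append(R @ c + t)
    return np.mean(aligned, axis=0), aligned
```

For noiseless rigid copies of one shape, all clouds coincide after alignment; KernelGPA replaces the per-cloud rigid map with a kernel-defined deformable transformation so that pose and deformation are estimated jointly.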
Pub Date: 2023-09-23 | DOI: 10.1177/02783649231201196
Matteo Saveriano, Fares J. Abu-Dakka, Aljaz Kramberger, Luka Peternel
Biological systems, including human beings, have the innate ability to perform complex tasks in a versatile and agile manner. Researchers in sensorimotor control have sought to understand and formally define this innate characteristic. The idea that biological systems combine and adapt basic units of motion into complex tasks, supported by several experimental findings, eventually led to the formulation of the theory of motor primitives. In this respect, Dynamic Movement Primitives (DMPs) represent an elegant mathematical formulation of motor primitives as stable dynamical systems and are well suited to generating motor commands for artificial systems such as robots. In recent decades, DMPs have inspired researchers in different robotic fields, including imitation and reinforcement learning, optimal control, physical interaction, and human–robot co-working, resulting in a considerable body of published work. The goal of this tutorial survey is two-fold. On the one hand, we present the existing DMP formulations in rigorous mathematical terms and discuss the advantages and limitations of each approach, as well as practical implementation details. In the tutorial vein, we also collect existing implementations of the presented approaches and release several others. On the other hand, we provide a systematic and comprehensive review of the existing literature and categorize state-of-the-art work on DMPs. The paper concludes with a discussion of the limitations of DMPs and an outline of possible research directions.
{"title":"Dynamic movement primitives in robotics: A tutorial survey","authors":"Matteo Saveriano, Fares J. Abu-Dakka, Aljaz Kramberger, Luka Peternel","doi":"10.1177/02783649231201196","DOIUrl":"https://doi.org/10.1177/02783649231201196","url":null,"abstract":"Biological systems, including human beings, have the innate ability to perform complex tasks in a versatile and agile manner. Researchers in sensorimotor control have aimed to comprehend and formally define this innate characteristic. The idea, supported by several experimental findings, that biological systems are able to combine and adapt basic units of motion into complex tasks finally leads to the formulation of the motor primitives’ theory. In this respect, Dynamic Movement Primitives (DMPs) represent an elegant mathematical formulation of the motor primitives as stable dynamical systems and are well suited to generate motor commands for artificial systems like robots. In the last decades, DMPs have inspired researchers in different robotic fields including imitation and reinforcement learning, optimal control, physical interaction, and human–robot co-working, resulting in a considerable amount of published papers. The goal of this tutorial survey is two-fold. On one side, we present the existing DMP formulations in rigorous mathematical terms and discuss the advantages and limitations of each approach as well as practical implementation details. In the tutorial vein, we also search for existing implementations of presented approaches and release several others. On the other side, we provide a systematic and comprehensive review of existing literature and categorize state-of-the-art work on DMP. 
The paper concludes with a discussion on the limitations of DMPs and an outline of possible research directions.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135957920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
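As a concrete anchor for the DMP formulation the survey covers, the sketch below integrates a minimal one-dimensional discrete DMP: an exponential canonical system for the phase s, a spring-damper transformation system for (y, z), and a radial-basis forcing term scaled by s so it vanishes at convergence. The gains and basis-function layout are conventional choices assumed here, not prescribed by the survey:

```python
import numpy as np

def dmp_rollout(w, y0, g, n_steps=1000, tau=1.0,
                alpha_z=25.0, beta_z=6.25, alpha_s=25.0 / 3.0):
    """Integrate a 1-DOF discrete DMP with basis weights w (semi-implicit Euler)."""
    n_bf = len(w)
    c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_bf))  # basis centers along the phase
    h = 1.0 / np.diff(c) ** 2                            # widths from center spacing
    h = np.append(h, h[-1])
    dt = tau / n_steps
    y, z, s = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(n_steps):
        psi = np.exp(-h * (s - c) ** 2)                     # RBF activations
        f = s * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)  # forcing term, decays with s
        z += (alpha_z * (beta_z * (g - y) - z) + f) * dt / tau
        y += z * dt / tau
        s += -alpha_s * s * dt / tau
        traj.append(y)
    return np.array(traj)
```

With zero forcing weights the system reduces to a critically damped spring-damper that converges to the goal g; weights learned from a demonstration (e.g., via locally weighted regression) shape the transient while preserving that stability guarantee.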