Pub Date: 2024-05-29 | DOI: 10.1016/j.robot.2024.104728
Jinsuk Choi, Wookyong Kwon, Kwanwoong Yoon, Seongwon Yoon, Young Sam Lee, Soo Jeon, Soohee Han
This article presents the self-balancing slosh-free control (SBSFC) scheme, a notable advancement for stable navigation in food-serving robots. The uniqueness of SBSFC is that it does not require direct modeling of slosh dynamics. Using just two inertial measurement units (IMUs), the proposed scheme offers an online solution, obviating the need for complex dynamics models or high-cost supplementary systems. Central to this work is the design of a control strategy favorable for sloshing suppression, achieved through feedforward reference shaping and disturbance compensation. That is, SBSFC alleviates and compensates for sloshing effects indirectly, rather than controlling them directly as a state variable via pixel-based measurements of sloshing. Key contributions include rapid slosh damping via reference shaping, robust posture stabilization through optimal control, and enhanced disturbance handling with a disturbance observer. These strategies synergistically ensure immediate vibration reduction and long-term stability under real-world conditions. This study is expected to lead to a significant leap forward in commercial food-serving robotics.
Source: "Suppressing violent sloshing flow in food serving robots," Robotics and Autonomous Systems, vol. 179, Article 104728.
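The abstract names a disturbance observer as one of SBSFC's three ingredients. Purely as an illustration (the paper's actual observer design is not given here), a generic discrete-time disturbance observer for a first-order nominal model can be sketched as follows; the model parameters, the filter pole, and the input-disturbance structure are all assumptions:

```python
# Hedged sketch of a generic discrete-time disturbance observer (DOB),
# NOT the paper's SBSFC design. Nominal model: x[k+1] = a*x[k] + b*(u[k] + d),
# where d is an unknown input disturbance to be estimated.

class DisturbanceObserver:
    def __init__(self, a, b, alpha):
        self.a, self.b = a, b      # nominal first-order model parameters
        self.alpha = alpha         # low-pass filter pole (0 < alpha < 1)
        self.d_hat = 0.0           # current disturbance estimate
        self.x_prev = 0.0          # previous state measurement

    def update(self, x_meas, u):
        # Raw estimate: invert the nominal model over the last step.
        d_raw = (x_meas - self.a * self.x_prev) / self.b - u
        # Low-pass filter to suppress measurement noise.
        self.d_hat = self.alpha * self.d_hat + (1 - self.alpha) * d_raw
        self.x_prev = x_meas
        return self.d_hat
```

The controller would then apply u_comp = u_nominal - d_hat to cancel the estimated disturbance, which is the "compensation" half of the scheme described above.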
Pub Date: 2024-05-23 | DOI: 10.1016/j.robot.2024.104725
Doganay Sirintuna, Theodora Kastritsi, Idil Ozdamar, Juan M. Gandarias, Arash Ajoudani
Transporting large and heavy objects can benefit from Human–Robot Collaboration (HRC), increasing the contribution of robots to our daily tasks and addressing challenges arising from labor shortages. This strategy typically positions the human collaborator as the leader, with the robot assuming the follower role. However, when transporting large objects, the operator’s situational awareness can be compromised, as the objects may occlude different parts of the environment, weakening the human leader’s decision-making capacity and leading to failure due to collision. This paper proposes a situational awareness framework for collaborative transportation to address this challenge. The framework integrates a multi-modal haptic-based Obstacle Feedback Module with two units. The first unit is a warning module that alerts the operator through a haptic belt with four vibrotactile devices providing feedback about the location and proximity of obstacles. The second unit implements virtual fixtures as hard constraints on mobility. The warning feedback and the virtual fixtures act online based on information from two Lidars mounted on a mobile manipulator to detect obstacles in the surroundings. By enhancing the operator’s awareness of the environment, the proposed module improves the safety of the human–robot team in collaborative transportation scenarios by preventing collisions. Experiments with 16 non-expert subjects across four feedback modalities and four scenarios provide an objective evaluation based on quantitative metrics alongside subjective evaluations based on user-level experience.
Source: "Enhancing human–robot collaborative transportation through obstacle-aware vibrotactile warning and virtual fixtures," Robotics and Autonomous Systems, vol. 178, Article 104725 (open access).
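The abstract describes a belt with four vibrotactile devices conveying obstacle location and proximity. The actual mapping law is not specified, so the following sketch assumes four 90-degree sectors and a linear distance-to-intensity law; the thresholds and motor names are invented for illustration:

```python
import math

# Hedged sketch of mapping obstacle bearing/distance to a 4-motor
# vibrotactile belt; sector layout, thresholds, and the linear intensity
# law are assumptions, not the paper's actual Obstacle Feedback Module.

MOTORS = ["front", "left", "back", "right"]

def belt_command(bearing_rad, distance_m, warn_dist=2.0, min_dist=0.3):
    """Return (motor, intensity in [0, 1]) for an obstacle at a given
    bearing (robot frame, 0 = straight ahead, CCW positive) and distance."""
    if distance_m >= warn_dist:
        return None, 0.0                      # outside the warning zone
    # Quantize the bearing into one of four 90-degree sectors.
    sector = int(((bearing_rad + math.pi / 4) % (2 * math.pi)) // (math.pi / 2))
    # Intensity grows linearly as the obstacle closes in.
    closeness = (warn_dist - max(distance_m, min_dist)) / (warn_dist - min_dist)
    return MOTORS[sector], min(1.0, closeness)
```

A real module would run this per Lidar-detected obstacle and drive the strongest command per motor.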
Pub Date: 2024-05-22 | DOI: 10.1016/j.robot.2024.104724
Yanbo Wang, Yaxian Fan, Jingchuan Wang, Weidong Chen
The robotics community has witnessed a growing demand for long-term navigation of autonomous robots in diverse environments, including factories, homes, offices, and public places. The core challenge in long-term navigation for autonomous robots lies in effectively adapting to varying degrees of dynamism in the environment. In this paper, we propose a long-term navigation method for autonomous robots based on spatio-temporal map prediction. A time series model is introduced to learn the changing patterns of different environmental structures or objects on multiple time scales from historical maps and to forecast future maps for long-term navigation. An improved global path planning algorithm is then applied to the time-variant predicted cost maps. During navigation, current observations are fused with the predicted map through a modified Bayesian filter to reduce the impact of prediction errors, and the updated map is stored for future predictions. We run simulations and conduct several weeks of experiments in multiple scenarios. The results show that our algorithm is effective and robust for long-term navigation in dynamic environments.
Source: "Long-term navigation for autonomous robots based on spatio-temporal map prediction," Robotics and Autonomous Systems, vol. 179, Article 104724.
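The fusion step above combines a predicted occupancy map with current observations. As a sketch of the underlying idea only, a standard Bayesian (log-odds) cell update looks like this; the paper's "modified" filter presumably adds weighting that is not described in the abstract:

```python
import math

# Hedged sketch of fusing a predicted occupancy probability with a current
# observation via a Bayesian log-odds update (uniform 0.5 prior assumed);
# the paper's modified filter likely weights prediction confidence differently.

def logit(p):
    return math.log(p / (1.0 - p))

def fuse(p_pred, p_obs, obs_weight=1.0):
    """Combine predicted and observed occupancy probabilities for one cell.
    obs_weight < 1 down-weights the observation, > 1 trusts it more."""
    l = logit(p_pred) + obs_weight * logit(p_obs)
    return 1.0 / (1.0 + math.exp(-l))
```

With an uninformative prediction (0.5) the observation passes through unchanged; agreeing evidence reinforces, conflicting evidence cancels toward 0.5.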
Pub Date: 2024-05-21 | DOI: 10.1016/j.robot.2024.104723
Ambuj, Harsh Nagar, Ayan Paul, Rajendra Machavaram, Peeyush Soni
The advent of autonomous mobile robots has spurred research into efficient trajectory planning methods, particularly in dynamic environments with varied obstacles. This study focuses on optimizing trajectory planning for an Autonomous Ground Vehicle (AGV) using a novel Reinforcement Learning Particle Swarm Optimization (RLPSO) algorithm. Real-time mobile robot localization and map generation are achieved with the Hector-SLAM algorithm within the Robot Operating System (ROS) platform, resulting in a binary occupancy grid. The present research thoroughly investigates the performance of the RLPSO algorithm against five established Particle Swarm Optimization (PSO) variants in four distinct physical environments. The experimental design is tailored to emulate real-world scenarios, encompassing a spectrum of challenges posed by static and dynamic obstacles. The AGV, equipped with LiDAR sensors, navigates through diverse environments characterized by obstacles of different geometries. The RLPSO algorithm dynamically adapts its strategies based on feedback, enabling adaptable trajectory planning while effectively avoiding obstacles. Numerical results obtained from extensive experimentation highlight the algorithm's efficacy. The navigational model is validated within a MATLAB 2D virtual environment, employing 2D LiDAR mapping point data. Transitioning to physical experiments with an AGV, RLPSO continues to demonstrate superior performance, showcasing its potential for real-world applications in autonomous navigation. On average, RLPSO achieves a 10–15% reduction in path distance and traversal time compared to the next best-performing PSO variant across diverse scenarios.
The adaptive nature of RLPSO, informed by feedback from the environment, distinguishes it as a promising solution for autonomous navigation in dynamic settings, with implications for practical implementation in real-world scenarios.
Source: "Reinforcement learning particle swarm optimization based trajectory planning of autonomous ground vehicle using 2D LiDAR point cloud," Robotics and Autonomous Systems, vol. 178, Article 104723.
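RLPSO builds on the standard PSO update, whose coefficients the RL component would adapt online. The abstract does not detail the RL scheme, so the sketch below shows only the underlying velocity/position update with the inertia and attraction coefficients (w, c1, c2) as fixed inputs that an adaptive layer could tune:

```python
import random

# Hedged sketch of the canonical PSO update that RLPSO builds on; the RL
# component (not detailed in the abstract) would adapt w, c1, c2 from
# reward feedback. Here they are plain parameters.

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.random):
    """One particle update: pos/vel are coordinate lists, pbest/gbest the
    personal-best and global-best positions."""
    new_vel = [
        w * v + c1 * rng() * (pb - x) + c2 * rng() * (gb - x)
        for x, v, pb, gb in zip(pos, vel, pbest, gbest)
    ]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

Passing a deterministic `rng` makes the update reproducible for testing; in planning, each particle encodes a candidate path through the occupancy grid.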
Pub Date: 2024-05-21 | DOI: 10.1016/j.robot.2024.104722
Xiaotao Shan, Yichao Jin, Marius Jurt, Peizheng Li
Recent studies in warehouse logistics have highlighted the importance of multi-robot collaboration in collective transport scenarios, where multiple robots work together to lift and transport bulky and heavy items. However, limited attention has been given to task allocation in such scenarios, particularly when dealing with continuously arriving tasks and time constraints. In this paper, we propose a decentralized auction-based method to address this challenge. Our approach involves robots predicting the task choices of their peers, estimating the values and partnerships associated with multi-robot tasks, and ultimately determining their task choices and collaboration partners through an auction process. A unique “suggestion” mechanism is introduced to the auction process to mitigate the decision bias caused by the leader–follower mode inherent in typical auction-based methods. Additionally, an available time update mechanism is designed to prevent the accumulation of schedule deviations during the robots’ operation process. Through extensive simulations, we demonstrate the superior performance and computational efficiency of the proposed algorithm compared to both the Agent-Based Sequential Greedy Algorithm and the Consensus-Based Time Table Algorithm, in both dynamic and static scenarios.
Source: "A distributed multi-robot task allocation method for time-constrained dynamic collective transport," Robotics and Autonomous Systems, vol. 178, Article 104722.
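At the core of any such method is a bid/award loop. The sketch below shows only that basic loop in a centralized form for clarity; the paper's contribution (peer-choice prediction, the "suggestion" mechanism, multi-robot partnerships, and time windows) sits on top of this and is not reproduced here:

```python
# Hedged sketch of a single auction round for task allocation; the `value`
# bid function and the one-task-per-robot rule are simplifying assumptions,
# not the paper's decentralized algorithm.

def auction_round(robots, tasks, value):
    """robots: list of ids; tasks: list of ids; value(r, t) -> bid.
    Returns {task: winning_robot}; each robot wins at most one task."""
    # Collect all bids and award greedily from highest to lowest.
    bids = sorted(
        ((value(r, t), r, t) for r in robots for t in tasks),
        reverse=True,
    )
    assigned_r, assigned_t, allocation = set(), set(), {}
    for bid, r, t in bids:
        if r not in assigned_r and t not in assigned_t:
            allocation[t] = r
            assigned_r.add(r)
            assigned_t.add(t)
    return allocation
```

In a decentralized variant, each robot would compute its own bids, exchange them with neighbors, and apply the same award rule by consensus.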
Pub Date: 2024-05-17 | DOI: 10.1016/j.robot.2024.104721
Jianan Liang, Xingrui Bian, Lina Jia, Meiyan Liang, Ruiling Kong, Jinhua Zhang
The flexible four-finger gripper, as a specialized robotic end-effector, is highly valued for its ability to passively adapt to the shape of objects and perform non-destructive grasping. However, the development of grasp detection algorithms for flexible four-finger grippers remains relatively unexplored. This paper addresses the unique characteristics of the flexible four-finger gripper by proposing a grasp detection method based on deep learning. First, the Acute Angle Representation model (AAR-model), which is based on the structure of the flexible four-finger gripper and consists of grasp points and angles, is designed as the grasping representation model; it reduces unnecessary rotations of the gripper and improves its versatility in grasping objects. Then, the Flexible Gripper Adaptive Attribute model (FGAA-model) is proposed to represent the grasping attributes of objects, calculate the grasp angles that meet the criteria of the AAR-model, and aggregate the AAR-models in the image data into a unified set, thereby circumventing the time-consuming process of pixel-level annotation. Finally, the Adaptive Grasping Neural Net (AGNN), which is based on Adaptive Feature Fusion and the Grasp Aware Network (AFFGA), is introduced; it eliminates redundant detection heads, fuses color and depth images as inputs, and incorporates a Series Atrous Spatial Pyramid (SASP) structure to produce more accurate grasp poses. Our method not only attains a remarkable accuracy of 97.62% on the Cornell dataset but also completes grasp detection within 25 ms.
This advancement provides a powerful technical solution for robots utilizing flexible four-finger grippers, enabling autonomous, real-time, and highly accurate grasping maneuvers. Moreover, it addresses the persistent challenge of the scarcity of efficient grasping detection techniques tailored for flexible four-finger grippers.
Source: "A method for grasp detection of flexible four-finger gripper," Robotics and Autonomous Systems, vol. 178, Article 104721.
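The AAR-model's reduction of unnecessary rotations can be illustrated with a simple angle-folding trick. Assuming (the abstract does not say so explicitly) that a four-finger gripper is symmetric under 90-degree rotations, any grasp angle can be folded into an acute range:

```python
# Hedged illustration in the spirit of the AAR-model: fold a grasp angle
# into [-45, 45) degrees for a gripper with assumed 90-degree rotational
# symmetry. The paper's exact representation is not given in the abstract.

def fold_grasp_angle(theta_deg):
    """Map any grasp angle to its equivalent in [-45, 45) degrees,
    exploiting the four-fold symmetry of a four-finger gripper."""
    return (theta_deg + 45.0) % 90.0 - 45.0
```

The gripper then never has to rotate more than 45 degrees to reach a detected grasp, which is the kind of saving the AAR-model targets.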
Pub Date: 2024-05-15 | DOI: 10.1016/j.robot.2024.104711
Omar Itani, Elie Shammas, Dany Abou Jaoude
In this paper, a motion planning algorithm for floating planar under-actuated hyper-redundant snake robots is proposed. The presented algorithm generates locally optimal shape trajectories, i.e., continuous trajectories in the base space of the robot. Such shape trajectories produce a desired rotation of the snake robot, i.e., a change in the uncontrolled orientation fiber variable. The proposed method formulates motion planning as an optimization problem whose objective function can be defined to minimize various metrics, such as energy-based cost functions. Additionally, the algorithm uses a heuristic to generate shape trajectories that avoid self-intersections and obstacle collisions. Hence, the method generates shape trajectories that locally minimize user-defined cost functions while eliminating self-intersections and obstacle collisions. The proposed gait generation method is validated through numerical simulations of five-link and seven-link snake robots.
Source: "Optimal reorientation of planar floating snake robots with collision avoidance," Robotics and Autonomous Systems, vol. 178, Article 104711.
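The optimization structure described above (minimize an energy-like cost while producing a desired net rotation) can be illustrated with a deliberately toy model. Everything below is invented for illustration: the linear rotation map, quadratic energy, penalty weight, and plain gradient descent stand in for the robot's actual dynamics and the paper's heuristic:

```python
# Hedged toy illustration of the optimization structure only: choose
# shape-trajectory parameters p minimizing energy while achieving a desired
# net rotation, with made-up quadratic models and naive gradient descent.

def optimize_gait(rotation_goal, steps=2000, lr=0.01):
    # Toy models: net rotation is linear in two shape parameters;
    # energy is quadratic (prefers small shape deflections).
    rotation = lambda p: 1.0 * p[0] + 0.5 * p[1]
    mu = 50.0                       # penalty weight on missing the goal
    p = [0.0, 0.0]
    for _ in range(steps):
        err = rotation(p) - rotation_goal
        # Gradient of  p0^2 + p1^2 + mu*err^2  w.r.t. each parameter.
        g = [2 * p[0] + 2 * mu * err * 1.0,
             2 * p[1] + 2 * mu * err * 0.5]
        p = [pi - lr * gi for pi, gi in zip(p, g)]
    return p, rotation(p)
```

The penalty formulation trades a small rotation error for lower energy; a hard-constrained solver would close that gap exactly.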
Pub Date: 2024-05-14 | DOI: 10.1016/j.robot.2024.104714
Simona Gugliermo, David Cáceres Domínguez, Marco Iannotta, Todor Stoyanov, Erik Schaffernicht
Behavior trees (BTs) are increasingly popular in the robotics community. Yet in the growing body of published work on this topic, there is a lack of consensus on what to measure and how to quantify BTs when reporting results. This is due not only to the lack of standardized measures, but also to the sometimes ambiguous use of definitions to describe BT properties. This work provides a comprehensive overview of the BT properties the community is interested in, how they relate to each other, the metrics currently used to measure BTs, and whether those metrics appropriately quantify the properties of interest. Finally, we provide the practitioner with a set of metrics to measure, as well as insights into the properties that can be derived from those metrics.
By providing this holistic view of properties and their corresponding evaluation metrics, we hope to improve clarity when using BTs in robotics. This more systematic approach will make reported results more consistent and comparable when evaluating BTs.
Source: "Evaluating behavior trees," Robotics and Autonomous Systems, vol. 178, Article 104714 (open access).
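Two of the simplest structural quantities one can report for a BT are its size and depth. The sketch below computes them for a tree given as nested tuples; this is merely an illustrative example of the kind of metric the survey discusses, and the tuple encoding is an assumption:

```python
# Hedged sketch of two structural BT metrics (size and depth) for a tree
# encoded as nested tuples: (node_type, child1, child2, ...). Illustrative
# only; the paper surveys many more metrics than these.

def bt_size(node):
    """Total number of nodes in the tree."""
    _, *children = node
    return 1 + sum(bt_size(c) for c in children)

def bt_depth(node):
    """Length of the longest root-to-leaf path (a leaf has depth 1)."""
    _, *children = node
    return 1 + (max(map(bt_depth, children)) if children else 0)

# Example: a sequence with a condition and a fallback over two actions.
tree = ("sequence",
        ("condition",),
        ("fallback", ("action",), ("action",)))
```

Reporting such metrics alongside results would make BT comparisons across papers more consistent, which is the point argued above.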
Pub Date : 2024-05-13DOI: 10.1016/j.robot.2024.104710
Kirsty Aquilina, David A.W. Barton , Nathan F. Lepora
We live in a constantly changing world. For robots to fully operate in our world, they need to work in dynamic environments where objects are not fixed in place or may be moved by humans or other agents. This work is based on tactile sensing, as it enables sufficiently responsive robotic systems for contact-based tasks in dynamic environments. Our proposed approach is divided into two parts: (1) a way to perform object following using a shear controller that minimises tactile shear deformation and (2) a switching controller that alternates between the shear controller and a tactile exploration controller that enables contour-following of a moving object. We find that during the object-following task, the robot follows the moving object to sub-millimetre accuracy over an approximately 72 mm range for 5 different velocities in 2D. The switching controller successfully performs 2D contour following on several moving objects at various object speeds whilst keeping an almost constant speed of exploration. We expect our method for minimising sensor deformation using a simple controller will generalise over different kinds of contact scenarios for moving objects. Moreover, the switching controller provides an architecture where velocity information of moving objects is fused with that of another controller, thereby enabling a more holistic use of tactile information to empower robotic systems to perform complex tactile tasks.
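The two-part scheme above can be sketched in a few lines. This is a heavily simplified illustration under stated assumptions: the shear controller is reduced to a proportional law driving measured 2D tactile shear to zero, the exploration controller is a placeholder constant tangential command, and the gain and switching threshold are invented for illustration; none of this is the paper's actual implementation.

```python
# Sketch: shear-minimising object following plus a switching controller.
import math

K_SHEAR = 2.0           # proportional gain (assumed)
SHEAR_THRESHOLD = 0.05  # switch to exploration below this shear (assumed)

def shear_controller(shear_xy):
    """Velocity command that moves the sensor to cancel shear deformation."""
    sx, sy = shear_xy
    return (K_SHEAR * sx, K_SHEAR * sy)

def explore_controller():
    """Placeholder contour-following command (constant tangential speed)."""
    return (0.01, 0.0)

def switching_controller(shear_xy):
    """Follow the object while shear is large; otherwise explore the contour."""
    if math.hypot(*shear_xy) >= SHEAR_THRESHOLD:
        return shear_controller(shear_xy)
    return explore_controller()

print(switching_controller((0.1, -0.2)))   # large shear -> follow the object
print(switching_controller((0.01, 0.0)))  # small shear -> explore the contour
```

The design point this captures is that a single scalar measure of contact deformation suffices to arbitrate between tracking and exploration.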
{"title":"Tactile control for object tracking and dynamic contour following","authors":"Kirsty Aquilina, David A.W. Barton , Nathan F. Lepora","doi":"10.1016/j.robot.2024.104710","DOIUrl":"10.1016/j.robot.2024.104710","url":null,"abstract":"<div><p>We live in a constantly changing world. For robots to fully operate in our world, they need to work in dynamic environments where objects are not fixed in place or may be moved by humans or other agents. This work is based on tactile sensing, as it enables sufficiently responsive robotic systems for contact-based tasks in dynamic environments. Our proposed approach is divided into two parts: (1) a way to perform object following using a shear controller that minimises tactile shear deformation and (2) a switching controller that alternates between the shear controller and a tactile exploration controller that enables contour-following of a moving object. We find that during the object-following task, the robot follows the moving object to sub-millimetre accuracy over a <span><math><mo>≈</mo></math></span>72 mm range for 5 different velocities in 2D. The switching controller successfully performs 2D contour following on several moving objects at various object speeds whilst keeping an almost constant speed of exploration. We expect our method for minimising sensor deformation using a simple controller will generalise over different kinds of contact scenarios for moving objects. 
Moreover, the switching controller provides an architecture where velocity information of moving objects is fused with that of another controller, thereby enabling a more holistic use of tactile information to empower robotic systems to perform complex tactile tasks.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"178 ","pages":"Article 104710"},"PeriodicalIF":4.3,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0921889024000939/pdfft?md5=a990a37d647a91ed64db15a16e5cb69c&pid=1-s2.0-S0921889024000939-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141028651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-08DOI: 10.1016/j.robot.2024.104712
Marija Seder , Gregor Klančar
The article presents a novel idea to construct a smooth navigation function for a wheeled robot based on a grid-based search that enables replanning in dynamic environments. Since the dynamic constraints of the robot are also considered, the navigation function is combined with model predictive control (MPC) to guide the robot safely to the defined goal location. The main novelty of this work is the definition of this navigation function and its MPC application with guaranteed closed-loop convergence in finite time for a non-holonomic robot with speed and acceleration constraints. The navigation function consists of an interpolated potential function derived from the grid-based search and a term that guides the orientation of the robot on continuous gradients. The navigation function guarantees convergent trajectories to the desired goal, results in smooth motion between obstacles, has no local minima, and is computationally efficient. The proposed navigation is also suitable for dynamic environments, as confirmed by experiments with a Husky mobile robot.
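The core idea of an interpolated potential from a grid-based search can be sketched as follows. This is a minimal illustration under stated assumptions: the 3x3 cost-to-go grid, the unit cell size, and the central-difference gradient are invented for the example; the paper's actual navigation function and its convergence guarantees are not reproduced here.

```python
# Sketch: bilinear interpolation of a grid potential and its descent direction.
import numpy as np

def interp_potential(grid, x, y, cell=1.0):
    """Bilinearly interpolated potential at continuous position (x, y)."""
    i, j = int(x // cell), int(y // cell)
    tx, ty = x / cell - i, y / cell - j
    p00, p10 = grid[i, j], grid[i + 1, j]
    p01, p11 = grid[i, j + 1], grid[i + 1, j + 1]
    return ((1 - tx) * (1 - ty) * p00 + tx * (1 - ty) * p10
            + (1 - tx) * ty * p01 + tx * ty * p11)

def descent_direction(grid, x, y, eps=1e-4):
    """Negative numerical gradient: heading of steepest potential descent."""
    gx = (interp_potential(grid, x + eps, y)
          - interp_potential(grid, x - eps, y)) / (2 * eps)
    gy = (interp_potential(grid, x, y + eps)
          - interp_potential(grid, x, y - eps)) / (2 * eps)
    return -gx, -gy

# Cost-to-go values from a grid search, decreasing toward the goal cell (2, 2).
grid = np.array([[4.0, 3.0, 2.0],
                 [3.0, 2.0, 1.0],
                 [2.0, 1.0, 0.0]])
print(interp_potential(grid, 0.5, 0.5))   # 3.0 (cell midpoint)
print(descent_direction(grid, 0.5, 0.5))  # points toward the goal
```

Because the interpolated surface is continuous between cells, its gradient yields a smooth heading reference that an MPC layer could track, rather than the staircase directions of the raw grid.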
{"title":"Convergent wheeled robot navigation based on an interpolated potential function and gradient","authors":"Marija Seder , Gregor Klančar","doi":"10.1016/j.robot.2024.104712","DOIUrl":"https://doi.org/10.1016/j.robot.2024.104712","url":null,"abstract":"<div><p>The article presents a novel idea to construct a smooth navigation function for a wheeled robot based on grid-based search, that enables replanning in dynamic environments. Since the dynamic constraints of the robot are also considered, the navigation function is combined with the model predictive control (MPC) to guide the robot safely to the defined goal location. The main novelty of this work is the definition of this navigation function and its MPC application with guaranteed closed-loop convergence in finite time for a non-holonomic robot with speed and acceleration constraints. The navigation function consists of an interpolated potential function derived from the grid-based search and a term that guides the orientation of the robot on continuous gradients. The navigation function guarantees convergent trajectories to the desired goal, results in smooth motion between obstacles, has no local minima, and is computationally efficient. The proposed navigation is also suitable in dynamic environments, as confirmed by experiments with a Husky mobile robot.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"177 ","pages":"Article 104712"},"PeriodicalIF":4.3,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140918169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}