Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement
Pub Date: 2022-05-23 | DOI: 10.1109/ICRA46639.2022.9812457
Di Wang, Long Ma, Risheng Liu, Xin Fan
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics. However, the limited number of datasets and imperfect hand-crafted ground truth weaken its robustness to unseen scenarios and hamper its application to high-level vision tasks. To address these limitations, we develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model, aiming to exploit its hierarchical feature representation as an auxiliary for low-level underwater image enhancement. Specifically, we characterize the shallow-layer features of the semantic-aware model as textures and the deep-layer features as structures, and propose a multi-path Contextual Feature Refinement Module (CFRM) to refine features at multiple scales and model the correlation between different features. In addition, a feature dominative network is devised to perform channel-wise modulation on the aggregated texture and structure features to adapt them to the different feature patterns of the enhancement network. Extensive experiments on benchmarks demonstrate that the proposed algorithm achieves more appealing results and outperforms state-of-the-art methods by large margins. We also apply the proposed algorithm to the underwater salient object detection task to reveal its favorable semantic-aware ability for high-level vision tasks.
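The abstract does not detail the layers of the feature dominative network; as a rough illustration only, the sketch below shows one common way to perform channel-wise modulation on an aggregated feature map (a squeeze-and-excitation-style gate). The function name `channel_modulate`, all shapes, and the random weights are hypothetical and are not taken from the paper.

```python
import numpy as np

def channel_modulate(features, w1, b1, w2, b2):
    """Hypothetical channel-wise modulation of a (C, H, W) feature map.

    A squeeze-and-excitation-style gate: global-average-pool each channel,
    pass the pooled vector through two small dense layers, squash to (0, 1)
    with a sigmoid, and rescale every channel by its gate value.
    """
    c, h, w = features.shape
    pooled = features.reshape(c, -1).mean(axis=1)        # (C,) channel descriptors
    hidden = np.maximum(0.0, w1 @ pooled + b1)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))    # (C,) sigmoid gates
    return features * gates[:, None, None]               # per-channel rescaling

# Toy usage with random weights: 64 channels, bottleneck of 16.
rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 32, 32))
w1, b1 = rng.standard_normal((16, 64)) * 0.1, np.zeros(16)
w2, b2 = rng.standard_normal((64, 16)) * 0.1, np.zeros(64)
print(channel_modulate(feat, w1, b1, w2, b2).shape)  # (64, 32, 32)
```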
{"title":"Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement","authors":"Di Wang, Long Ma, Risheng Liu, Xin Fan","doi":"10.1109/ICRA46639.2022.9812457","DOIUrl":"https://doi.org/10.1109/ICRA46639.2022.9812457","url":null,"abstract":"Underwater image enhancement has become an attractive topic as a significant technology in marine engi-neering and aquatic robotics. However, the limited number of datasets and imperfect hand-crafted ground truth weaken its robustness to unseen scenarios, and hamper the application to high-level vision tasks. To address the above limitations, we develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model, aiming to exploit its hierarchical feature representation as an auxiliary for the low-level underwater image enhance-ment. Specifically, we tend to characterize the shallow layer features as textures while the deep layer features as structures in the semantic-aware model, and propose a multi-path Contextual Feature Refinement Module (CFRM) to refine features in multiple scales and model the correlation between different features. In addition, a feature dominative network is devised to perform channel-wise modulation on the aggregated texture and structure features for the adaptation to different feature patterns of the enhancement network. Extensive experiments on benchmarks demonstrate that the proposed algorithm achieves more appealing results and outperforms state-of-the-art meth-ods by large margins. We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"167 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115697931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mean Reflected Mass: A Physically Interpretable Metric for Safety Assessment and Posture Optimization in Human-Robot Interaction
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811582
Thomas Steinecker, Alexander Kurdas, Nico Mansfeld, Mazin Hamad, R. J. Kirschner, Saeed Abdolshah, S. Haddadin
In physical human-robot interaction (pHRI), safety is a key requirement. As collisions between humans and robots cannot generally be avoided, it must be ensured that the human is not harmed. The robot's reflected mass, the contact geometry, and the relative velocity between human and robot are the parameters with the most significant influence on human injury severity during a collision. The reflected mass depends on the robot configuration and can be optimized, especially in kinematically redundant robots. In this paper, we propose the Mean Reflected Mass (MRM) metric. The MRM is independent of the direction of contact/motion and enables assessing and optimizing the robot posture with respect to safety. In contrast to existing metrics, it is physically interpretable, meaning that it can be related to biomechanical injury data for realistic and model-independent safety analysis. For the Franka Emika Panda, we demonstrate in simulation that optimizing the robot's MRM reduces the mean collision force. Finally, the relevance of the MRM for real pHRI applications is confirmed through a collision experiment.
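The abstract does not give the MRM formula. The sketch below only illustrates the standard operational-space notion of reflected mass along a Cartesian direction, m_u(q) = 1 / (uᵀ J M⁻¹ Jᵀ u), and averages it over randomly sampled unit directions as one plausible (assumed) reading of a direction-independent value; the Jacobian, mass matrix, and sampling scheme are toy placeholders, not the authors' definition.

```python
import numpy as np

def reflected_mass(J_v, M, u):
    """Reflected mass at the end effector along unit direction u.

    Standard operational-space relation: m_u = 1 / (u^T J M^-1 J^T u),
    with J_v the translational Jacobian and M the joint-space mass matrix.
    """
    Lambda_inv = J_v @ np.linalg.solve(M, J_v.T)   # inverse task-space inertia
    return 1.0 / float(u @ Lambda_inv @ u)

def mean_reflected_mass(J_v, M, n_dirs=500, seed=0):
    """Assumed direction-independent summary: average m_u over random unit directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.mean([reflected_mass(J_v, M, u) for u in dirs])

# Toy 3-DoF example with a made-up Jacobian and mass matrix.
J_v = np.array([[0.3, 0.2, 0.1],
                [0.0, 0.4, 0.2],
                [0.1, 0.0, 0.3]])
M = np.diag([2.5, 1.8, 0.9])
print(mean_reflected_mass(J_v, M))
```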
{"title":"Mean Reflected Mass: A Physically Interpretable Metric for Safety Assessment and Posture Optimization in Human-Robot Interaction","authors":"Thomas Steinecker, Alexander Kurdas, Nico Mansfeld, Mazin Hamad, R. J. Kirschner, Saeed Abdolshah, S. Haddadin","doi":"10.1109/icra46639.2022.9811582","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811582","url":null,"abstract":"In physical human-robot interaction (pHRI), safety is a key requirement. As collisions between humans and robots can generally not be avoided, it must be ensured that the human is not harmed. The robot reflected mass, the contact geometry, and the relative velocity between human and robot are the parameters that have the most significant influence on human injury severity during a collision. The reflected mass depends on the robot configuration and can be optimized especially in kinematically redundant robots. In this paper, we propose the Mean Reflected Mass (MRM) metric. The MRM is independent of the direction of contact/motion and enables assessing and optimizing the robot posture w.r.t. safety. In contrast to existing metrics, it is physically interpretable, meaning that it can be related to biomechanical injury data for realistic and model-independent safety analysis. For the Franka Emika Panda, we demonstrate in simulation that an optimization of the robot's MRM reduces the mean collision force. Finally, the relevance of the MRM for real pHRI applications is confirmed through a collision experiment.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117199020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cost-Effective Sensing for Goal Inference: A Model Predictive Approach
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811974
Ran Tian, Nan I. Li, A. Girard, I. Kolmanovsky, M. Tomizuka
Goal inference is of great importance for a variety of applications that involve interaction, coordination, and/or competition with goal-oriented agents. Typical goal inference approaches use as many pointwise measurements of the agent's trajectory as possible to pursue the most accurate a posteriori estimate of the goal. However, taking frequent measurements may not be preferred in situations where sensing incurs a high cost (e.g., sensing and perception may involve high computational/bandwidth cost, and sensing may raise security concerns in privacy-critical or data-sensitive applications). In such situations, a sensible tradeoff between the information gained from measurements and the cost associated with sensing actions is highly desirable. This paper introduces a cost-effective sensing strategy for goal inference tasks based on hybrid Kalman filtering and model predictive control. Our key insights are: 1) a model predictive approach can be used to predict the amount of information gained from new measurements over a horizon and thus to optimize the tradeoff between information gain and sensing action cost, and 2) the high computational efficiency of hybrid Kalman filtering can ensure the real-time feasibility of such a model predictive approach. We evaluate the proposed cost-effective sensing approach in a goal-oriented task and show that, by taking measurements smartly, it requires considerably fewer measurements than standard goal inference approaches without impairing the speed, accuracy, or reliability of goal inference.
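The paper's hybrid Kalman filter and MPC formulation are not spelled out in the abstract. The toy sketch below only illustrates the underlying idea that, for a Kalman filter, the covariance (and hence the information gained by a measurement) can be predicted over a horizon without knowing the measured values, so candidate sensing schedules can be scored as information gain minus sensing cost. All models, costs, and the brute-force schedule search are assumptions made for illustration.

```python
import numpy as np

def kf_predict(P, A, Q):
    return A @ P @ A.T + Q

def kf_update_cov(P, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ H) @ P

def plan_sensing(P0, A, Q, H, R, horizon, cost_per_measurement, trade_off):
    """Brute-force search over measure/skip schedules on a short horizon.

    The Kalman covariance evolves independently of the measured values, so
    each schedule's terminal covariance can be predicted ahead of time and
    scored as (information gain) - trade_off * (sensing cost), where the
    information gain is the drop in log-det covariance versus never sensing.
    """
    P_open = P0.copy()
    for _ in range(horizon):
        P_open = kf_predict(P_open, A, Q)
    logdet_open = np.linalg.slogdet(P_open)[1]

    best_plan, best_score = None, -np.inf
    for mask in range(2 ** horizon):
        plan = [(mask >> k) & 1 for k in range(horizon)]
        P = P0.copy()
        for take in plan:
            P = kf_predict(P, A, Q)
            if take:
                P = kf_update_cov(P, H, R)
        gain = logdet_open - np.linalg.slogdet(P)[1]
        score = gain - trade_off * cost_per_measurement * sum(plan)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan

# Toy constant-velocity target observed through position only.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
print(plan_sensing(np.eye(2), A, Q, H, R, horizon=5,
                   cost_per_measurement=1.0, trade_off=0.3))
```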
{"title":"Cost-Effective Sensing for Goal Inference: A Model Predictive Approach","authors":"Ran Tian, Nan I. Li, A. Girard, I. Kolmanovsky, M. Tomizuka","doi":"10.1109/icra46639.2022.9811974","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811974","url":null,"abstract":"Goal inference is of great importance for a variety of applications that involve interaction, coordination, and/or competition with goal-oriented agents. Typical goal inference approaches use as many pointwise measurements of the agent's trajectory as possible to pursue a most accurate a-posteriori estimate of the goal. However, taking frequent measurements may not be preferred in situations where sensing is associated with high cost (e.g., sensing + perception may involve high computational/bandwidth cost and sensing may raise security concerns in privacy-critical/data-sensitive applications). In such situations, a sensible tradeoff between the information gained from measurements and the cost associated with sensing actions is highly desirable. This paper introduces a cost-effective sensing strategy for goal inference tasks based on hybrid Kalman filtering and model predictive control. Our key insights include: 1) a model predictive approach can be used to predict the amount of information gained from new measurements over a horizon and thus to optimize the tradeoff between information gain and sensing action cost, and 2) the high computational efficiency of hybrid Kalman filtering can ensure real-time feasibility of such a model predictive approach. We evaluate the proposed cost-effective sensing approach in a goal-oriented task, where we show that compared to standard goal inference approaches, our approach takes a considerably reduced number of measurements while not impairing the speed, accuracy, and reliability of goal inference by taking measurements smartly.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127289911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the Feasibility of DS-based Collision Avoidance Using Non-Linear Model Predictive Control
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811700
S. Farsoni, Alessio Sozzi, M. Minelli, C. Secchi, M. Bonfè
In this paper we present a novel strategy for reactive, collision-free, feasible motion planning for robotic manipulators operating in an environment populated by moving obstacles. The proposed strategy embeds the Dynamical System (DS) based obstacle avoidance algorithm into a constrained non-linear optimization problem following the Model Predictive Control (MPC) approach. Solving this problem allows the robot to avoid undesired collisions with moving obstacles while ensuring that its motion is feasible and does not violate the designed constraints on velocity and acceleration. Simulations demonstrate that the introduction of the MPC prediction horizon helps the optimization solver find collision-avoiding solutions in situations where a non-predictive implementation of the DS-based method would fail. Finally, the proposed strategy has been validated in an experimental work-cell using a Franka Emika Panda robot.
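The authors' NMPC embedding of the DS-based avoidance algorithm is not specified in the abstract. The sketch below is only a generic random-shooting predictive controller on a planar double integrator with bounded velocity and acceleration and a keep-out radius around the obstacle's predicted positions, meant to show why predicting over a horizon helps against moving obstacles. All dynamics, bounds, and parameters are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def shooting_mpc(x, v, goal, obs_traj, dt=0.1, horizon=10,
                 a_max=1.0, v_max=0.8, margin=0.3, n_samples=300, seed=0):
    """Toy random-shooting MPC for a planar double integrator.

    Samples bounded acceleration sequences, rolls the point-mass dynamics
    forward, discards rollouts that enter the obstacle's predicted keep-out
    radius or exceed the velocity bound, and returns the first acceleration
    of the best remaining rollout.
    """
    rng = np.random.default_rng(seed)
    best_a, best_cost = np.zeros(2), np.inf
    for _ in range(n_samples):
        acc = rng.uniform(-a_max, a_max, size=(horizon, 2))
        p, vel, cost, feasible = x.copy(), v.copy(), 0.0, True
        for k in range(horizon):
            vel = np.clip(vel + acc[k] * dt, -v_max, v_max)
            p = p + vel * dt
            if np.linalg.norm(p - obs_traj[k]) < margin:   # predicted collision
                feasible = False
                break
            cost += np.linalg.norm(p - goal)
        if feasible and cost < best_cost:
            best_cost, best_a = cost, acc[0]
    return best_a

# Obstacle predicted to drift across the robot's path.
obs_traj = np.array([[1.0 - 0.05 * k, 0.2] for k in range(10)])
a_cmd = shooting_mpc(np.array([0.0, 0.0]), np.array([0.2, 0.0]),
                     goal=np.array([2.0, 0.0]), obs_traj=obs_traj)
print(a_cmd)
```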
{"title":"Improving the Feasibility of DS-based Collision Avoidance Using Non-Linear Model Predictive Control","authors":"S. Farsoni, Alessio Sozzi, M. Minelli, C. Secchi, M. Bonfè","doi":"10.1109/icra46639.2022.9811700","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811700","url":null,"abstract":"In this paper we present a novel strategy for reactive collision-free feasible motion planning for robotic manipulators operating inside an environment populated by moving obstacles. The proposed strategy embeds the Dynamical System (DS) based obstacle avoidance algorithm into a constrained non-linear optimization problem following the Model Predictive Control (MPC) approach. The solution of the problem allows the robot to avoid undesired collision with moving obstacles ensuring at the same time that its motion is feasible and does not overcome the designed constraints on velocity and acceleration. Simulations demonstrate that the introduction of the MPC prediction horizon helps the optimization solver in finding the solution leading to obstacle avoidance in situations where a non predictive implementation of the DS-based method would fail. Finally, the proposed strategy has been validated in an experimental work-cell using a Franka-Emika Panda robot.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124887677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reproduction of Human Demonstrations with a Soft-Robotic Arm based on a Library of Learned Probabilistic Movement Primitives
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811627
Paris Oikonomou, A. Dometios, M. Khamassi, C. Tzafestas
In this paper we introduce a novel technique that aims to control a two-module bio-inspired soft-robotic arm in order to qualitatively reproduce human demonstrations. The main idea behind the proposed methodology is based on the assumption that a complex trajectory can be derived from the composition and asynchronous activation of learned, parameterizable simple movements constituting a knowledge base. The present work capitalises on recent progress in Movement Primitive (MP) theory to first build a library of Probabilistic MPs (ProMPs) and then compute, on the fly, their proper combination in the task space to produce the requested trajectory. At the same time, a model-learning method is tasked with approximating the inverse kinematics, while a replanning procedure handles the sequential and/or parallel asynchronous activation of the ProMPs. Taking advantage of the mapping at the primitive level that the ProMP framework provides, the composition is transferred into the actuation space for execution. The proposed control architecture is experimentally evaluated on a real soft-robotic arm, demonstrating its capability to simplify trajectory control for robots with complex, unmodeled dynamics.
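As a rough, assumed illustration of the ProMP machinery referenced above (not the paper's implementation): a ProMP represents a trajectory as a weighted combination of basis functions, y(t) = Φ(t)w, with a distribution over the weights w, and stored primitives can be activated and blended over time. The basis widths, weights, and blending profile below are toy values.

```python
import numpy as np

def rbf_basis(t, n_basis=10, width=0.02):
    """Normalized Gaussian radial basis functions over phase t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def promp_mean(weights, t):
    """Mean trajectory of a ProMP: y(t) = Phi(t) @ w_mean."""
    return rbf_basis(t) @ weights

def blend(w_a, w_b, t, alpha):
    """Time-varying blend of two primitives; alpha(t) in [0, 1] activates b over a."""
    ya, yb = promp_mean(w_a, t), promp_mean(w_b, t)
    return (1.0 - alpha[:, None]) * ya + alpha[:, None] * yb

t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(1)
w_reach = rng.standard_normal((10, 2))           # toy 2-D task-space primitive
w_circle = rng.standard_normal((10, 2))
alpha = 1.0 / (1.0 + np.exp(-20.0 * (t - 0.5)))  # hand over halfway through
print(blend(w_reach, w_circle, t, alpha).shape)  # (200, 2)
```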
{"title":"Reproduction of Human Demonstrations with a Soft-Robotic Arm based on a Library of Learned Probabilistic Movement Primitives","authors":"Paris Oikonomou, A. Dometios, M. Khamassi, C. Tzafestas","doi":"10.1109/icra46639.2022.9811627","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811627","url":null,"abstract":"In this paper we introduce a novel technique that aims to control a two-module bio-inspired soft-robotic arm in order to qualitatively reproduce human demonstrations. The main idea behind the proposed methodology is based on the assumption that a complex trajectory can be derived from the composition and asynchronous activation of learned parameterizable simple movements constituting a knowledge base. The present work capitalises on recent research progress in Movement Primitive (MP) theory in order to initially build a library of Probabilistic MPs (ProMPs), and subsequently to compute on the fly their proper combination in the task space resulting in the requested trajectory. At the same time, a model learning method is assigned with the task to approximate the inverse kinematics, while a replanning procedure handles the sequential and/or parallel ProMPs' asynchronous activation. Taking advantage of the mapping at the primitive-level that the ProMP framework provides, the composition is transferred into the actuation space for execution. The proposed control architecture is experimentally evaluated on a real soft-robotic arm, where its capability to simplify the trajectory control task for robots of complex unmodeled dynamics is exhibited.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124989420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Modeling of a Spherical Robot Actuated by a Cylindrical Drive
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9812148
Bruno Belzile, D. St-Onge
Rolling spherical robots have been studied in the past few years as an alternative to legged and wheeled robots in unstructured environments. These systems are of utmost interest for space exploration: they are fast, robust to collisions, and able to handle various terrain topologies. This paper introduces a novel barycentric spherical robot, dubbed the Autonomous Robotic Intelligent Explorer Sphere (ARIES). Equipped with an actuated cylindrical joint acting as a pendulum with two degrees of freedom (DoF), the ARIES has a continuous differential transmission that allows simultaneous rolling and steering. This mechanism enables an unprecedented optimization of the mass allocation, notably providing a low center of mass. The kinematics and dynamics of this novel system are detailed. An analysis of the steering mechanism proves that it is more efficient than a more conventional 2-DoF tilting mechanism, while also retaining more space in the upper part of the sphere for a payload, for instance sensors for simultaneous localization and mapping. Moreover, the kinematic input/output equations obtained significantly simplify the device's control. Finally, we present a first complete prototype with preliminary experimental tests.
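The kinematic input/output equations mentioned above are not given in the abstract; the sketch below shows only the generic planar kinematics of a sphere rolling without slipping, where the center translates at radius times rolling rate along the current heading. It is an assumption-level illustration, not the ARIES model, and the rates and radius are arbitrary.

```python
import numpy as np

def integrate_rolling_sphere(radius, roll_rate, steer_rate, dt, steps,
                             x0=0.0, y0=0.0, heading0=0.0):
    """Generic planar kinematics of a sphere rolling without slipping.

    The contact point has zero velocity, so the center moves at
    radius * roll_rate along the current heading; steer_rate changes
    the heading. roll_rate/steer_rate may be scalars or length-`steps` arrays.
    """
    roll_rate = np.broadcast_to(roll_rate, steps)
    steer_rate = np.broadcast_to(steer_rate, steps)
    x, y, psi = x0, y0, heading0
    path = []
    for k in range(steps):
        x += radius * roll_rate[k] * np.cos(psi) * dt
        y += radius * roll_rate[k] * np.sin(psi) * dt
        psi += steer_rate[k] * dt
        path.append((x, y, psi))
    return np.array(path)

# Constant rolling with a slow steady turn traces an arc.
path = integrate_rolling_sphere(radius=0.15, roll_rate=2.0,
                                steer_rate=0.3, dt=0.02, steps=500)
print(path[-1])
```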
{"title":"Design and Modeling of a Spherical Robot Actuated by a Cylindrical Drive","authors":"Bruno Belzile, D. St-Onge","doi":"10.1109/icra46639.2022.9812148","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9812148","url":null,"abstract":"Rolling spherical robots have been studied in the past few years as an alternative to legged and wheeled robots in unstructured environments. These systems are of uttermost interest for space exploration: fast, robust to collision and able to handle various terrain topologies. This paper introduces a novel barycentric spherical robot, dubbed the Autonomous Robotic Intelligent Explorer Sphere (ARIES). Equipped with an actuated cylindrical joint acting as a pendulum with two degrees-of-freedom (DoF), the ARIES has a continuous differential transmission to allow simultaneous rolling and steering. This mechanism allows an unprecedented mass allocation optimization, notably to provide a low center of mass. Kinematics and dynamics of this novel system are detailed. An analysis of the steering mechanism proves that it is more efficient than a more conventional 2-DoF tilting mechanism, while also retaining more space for a payload, for instance to host sensors for simultaneous localization and mapping, in the upper part of the sphere. Moreover, the kinematic input/output equations obtained significantly simplify the device's control. Finally, we present a first complete prototype with preliminary experimental tests.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125013065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Control Scheme for Sideways Walking on a User-driven Treadmill
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9812403
Sanghun Pyo, Hoyoung Kim, Jungwon Yoon
For immersive interaction in a virtual reality (VR) environment, an omnidirectional treadmill (ODT) can support various locomotive motions (curved walking, side walking, moving in a shooting stance) in any direction. When a user performs lateral locomotive motions on an ODT, a control scheme for immersive and safe interaction must be robust in terms of the user's position error, keeping the user at the ODT's reference position by accurately estimating the user's intentional walking speed (IWS), and it must guarantee the user's postural stability during the control actions. Existing locomotion interface (LI) control focuses on tracking the reference position of the user's center of mass (COM) in order to respond to forward locomotion, which can occur at high speed. However, in sideways walking the movement of the lower extremities differs from that of forward walking, and when conventional LI control was applied directly to sideways walking, excessive acceleration commands were observed to cause postural instability. For an appropriate sideways-walking interface, we propose an estimation scheme based on an accurate walking model that includes the movement of the ankle joint. The proposed observer estimates the acting torque generated by the forces of both lower extremities from the position information of the COM and the ankle joint, to more accurately predict the user's IWS. In a sideways-walking experiment conducted on a one-dimensional user-driven treadmill (UDT), the proposed method enabled a more natural interface for lateral locomotion with better postural stability than the conventional estimation method, which uses only the COM position information.
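The authors' walking model and observer are not detailed in the abstract. The sketch below merely illustrates, under a standard linear-inverted-pendulum assumption (with the stance ankle standing in for the center of pressure), the kind of COM-to-ankle-offset quantities such an observer could build on. The mass, COM height, and synthetic data are made up.

```python
import numpy as np

G = 9.81

def lipm_signals(x_com, x_ankle, z_com, mass):
    """Two standard quantities a COM/ankle-based observer could build on.

    Under the linear inverted pendulum model, with the stance ankle standing
    in for the center of pressure:
      - lateral ankle torque        ~ mass * g * (x_com - x_ankle)
      - predicted COM acceleration  = (g / z_com) * (x_com - x_ankle)
    Both depend only on the COM-to-ankle offset, which is why that offset is
    informative about where the user intends to move next.
    """
    offset = np.asarray(x_com) - np.asarray(x_ankle)
    torque = mass * G * offset
    com_accel = (G / z_com) * offset
    return torque, com_accel

# Synthetic lateral side-step data: COM drifts sideways, ankle follows in hops.
t = np.arange(0.0, 4.0, 0.01)
x_com = 0.25 * t + 0.03 * np.sin(2.0 * np.pi * t)
x_ankle = np.floor(x_com / 0.2) * 0.2
torque, com_accel = lipm_signals(x_com, x_ankle, z_com=0.9, mass=70.0)
print(torque[:3], com_accel[:3])
```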
{"title":"Control Scheme for Sideways Walking on a User-driven Treadmill","authors":"Sanghun Pyo, Hoyoung Kim, Jungwon Yoon","doi":"10.1109/icra46639.2022.9812403","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9812403","url":null,"abstract":"For immersive interaction in a virtual reality (VR) environment, an omnidirectional treadmill (ODT) can support performance of various locomotive motions (curved walk, side walk, moving with shooting stance) in any direction. When a user performs lateral locomotive motions on an ODT, a control scheme to achieve immersive and safe interaction with the ODT should satisfy robustness in terms of position error of a user to keep a reference position of the ODT by accurately estimating intentional walking speed (IWS) of the user, and it should guarantee postural stability of the user during the control actions. Existing locomotion interface (LI) control focuses on the reference position tracking performance regarding the position of the user's center of mass (COM) in order to respond to forward locomotion that can move at high speed. However, in sideways walking, the movement of the lower extremities is different from that of forward walking, and when the conventional LI control was directly applied to sideways walking, it was observed that excessive acceleration commands caused postural instability. For appropriate interface of sideways walking, we propose an estimation scheme based on an accurate walking model including the movement of the ankle joint. The proposed observer estimates the acting torque generated by the force of both lower extremities through the position information of COM and ankle joint to more accurately predict the user's intentional walking speed (IWS). In the sideways walking experiment conducted using a 1-dimensional user-driven treadmill (UDT), the proposed method allowed more natural interface of the lateral-side locomotion with better postural stability compared to the conventional estimation method that uses only the COM position information.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125924374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ROZZ: Property-based Fuzzing for Robotic Programs in ROS
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811701
K. Xie, Jia-Ju Bai, Yong-Hao Zou, Yuping Wang
ROS is popular in robotic-software development, and thus detecting bugs in ROS programs is important for modern robots. Fuzzing is a promising technique for runtime testing, but existing fuzzing approaches are of limited use for ROS programs because they neglect ROS properties such as multi-dimensional inputs, the temporal features of inputs, and the distributed node model. In this paper, we develop a new fuzzing framework named ROZZ to effectively test ROS programs and detect bugs based on ROS properties. ROZZ has three key techniques: (1) a multi-dimensional generation method that generates test cases for ROS programs from multiple dimensions, including user data, configuration parameters, and sensor messages; (2) a distributed branch coverage metric that describes the overall code coverage of multiple ROS nodes in a robot task; and (3) a temporal mutation strategy that generates test cases with temporal information. We evaluate ROZZ on 10 common robotic programs in ROS 2, and it finds 43 real bugs, 20 of which have been confirmed and fixed by the related ROS developers. We also compare ROZZ to existing approaches for testing robotic programs, and ROZZ finds more bugs with higher code coverage.
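ROZZ's actual mutation engine is not described beyond the three techniques listed above. The toy sketch below illustrates only the general idea of temporal mutation on a recorded (timestamp, message) trace, i.e. perturbing when inputs arrive rather than what they contain. It is not ROZZ code, and the jitter, drop, and duplication probabilities are arbitrary.

```python
import random

def temporal_mutate(trace, seed=0, jitter_s=0.05, drop_p=0.05, dup_p=0.05):
    """Toy temporal mutation of a recorded (timestamp, message) trace.

    Jitters timestamps, occasionally drops or duplicates a message, then
    re-sorts by time so the replayed trace is still monotonically ordered.
    """
    rng = random.Random(seed)
    mutated = []
    for stamp, msg in trace:
        if rng.random() < drop_p:          # simulate a lost message
            continue
        new_stamp = max(0.0, stamp + rng.uniform(-jitter_s, jitter_s))
        mutated.append((new_stamp, msg))
        if rng.random() < dup_p:           # simulate a duplicated/late message
            mutated.append((new_stamp + rng.uniform(0.0, jitter_s), msg))
    mutated.sort(key=lambda pair: pair[0])
    return mutated

# A recorded trace of sensor messages at 10 Hz.
trace = [(0.1 * i, {"scan_id": i}) for i in range(20)]
for stamp, msg in temporal_mutate(trace)[:5]:
    print(round(stamp, 3), msg)
```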
{"title":"ROZZ: Property-based Fuzzing for Robotic Programs in ROS","authors":"K. Xie, Jia-Ju Bai, Yong-Hao Zou, Yuping Wang","doi":"10.1109/icra46639.2022.9811701","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811701","url":null,"abstract":"ROS is popular in robotic-software development, and thus detecting bugs in ROS programs is important for modern robots. Fuzzing is a promising technique of runtime testing. But existing fuzzing approaches are limited in testing ROS programs, due to neglecting ROS properties, such as multi-dimensional inputs, temporal features of inputs and the distributed node model. In this paper, we develop a new fuzzing framework named ROZZ, to effectively test ROS programs and detect bugs based on ROS properties. ROZZ has three key techniques: (1) a multi-dimensional generation method to generate test cases of ROS programs from multiple dimensions, including user data, configuration parameters and sensor messages; (2) a distributed branch coverage to describe the overall code coverage of multiple ROS nodes in the robot task; (3) a temporal mutation strategy to generate test cases with temporal information. We evaluate ROZZ on 10 common robotic programs in ROS2, and it finds 43 real bugs. 20 of these bugs have been confirmed and fixed by related ROS developers. We compare ROZZ to existing approaches for testing robotic programs, and ROZZ finds more bugs with higher code coverage.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126069861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Arm Payload Manipulation via Mixed Reality
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811580
Florian Kennel-Maushart, Roi Poranne, Stelian Coros
Multi-Robot Systems (MRS) present many advantages over single robots, e.g. improved stability and payload capacity. Being able to operate or teleoperate these systems is therefore of high interest in industries such as construction and logistics. However, controlling the collective motion of an MRS can place a significant cognitive burden on the operator. We present a Mixed Reality (MR) control interface that allows an operator to specify payload target poses for an MRS in real time, while effectively keeping the system away from unfavorable configurations. To this end, we solve the inverse kinematics problem for each arm individually and leverage redundant degrees of freedom to optimize for a secondary objective. Using the manipulability index as the secondary objective, in particular, allows us to significantly improve the tracking and singularity-avoidance capabilities of our MRS compared to the unoptimized scenario. This enables more secure and intuitive teleoperation. We simulate and test our approach on different setups and over different input trajectories, and analyse the convergence properties of our method. Finally, we show that the method also works well when deployed on a dual-arm ABB YuMi robot.
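The abstract does not state how the secondary objective is optimized; a common (assumed) realization is a differential-IK step with a nullspace-projected gradient ascent on the Yoshikawa manipulability index, sketched below for a toy planar 3-link arm. The gain, link lengths, and finite-difference gradient are illustrative choices, not the paper's solver.

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability index w = sqrt(det(J J^T))."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def redundancy_step(q, jacobian_fn, x_dot, k_null=0.5, eps=1e-4):
    """One differential-IK step that tracks x_dot and climbs manipulability.

    q_dot = J^+ x_dot + (I - J^+ J) * k_null * grad_q w(q),
    with the gradient of w estimated by central finite differences.
    """
    J = jacobian_fn(q)
    J_pinv = np.linalg.pinv(J)
    grad = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        grad[i] = (manipulability(jacobian_fn(q + dq)) -
                   manipulability(jacobian_fn(q - dq))) / (2.0 * eps)
    null_proj = np.eye(len(q)) - J_pinv @ J
    return J_pinv @ x_dot + null_proj @ (k_null * grad)

def planar_jacobian(q, links=(0.4, 0.3, 0.2)):
    """Translational Jacobian of a planar 3-link arm (2-D task, 3 joints)."""
    angles = np.cumsum(q)                      # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        # Joint i moves every link j >= i.
        J[0, i] = -sum(links[j] * np.sin(angles[j]) for j in range(i, 3))
        J[1, i] = sum(links[j] * np.cos(angles[j]) for j in range(i, 3))
    return J

q = np.array([0.3, 0.4, -0.2])
print(redundancy_step(q, planar_jacobian, x_dot=np.array([0.05, 0.0])))
```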
{"title":"Multi-Arm Payload Manipulation via Mixed Reality","authors":"Florian Kennel-Maushart, Roi Poranne, Stelian Coros","doi":"10.1109/icra46639.2022.9811580","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811580","url":null,"abstract":"Multi-Robot Systems (MRS) present many advantages over single robots, e.g. improved stability and payload capacity. Being able to operate or teleoperate these systems is therefore of high interest in industries such as construction or logistics. However, controlling the collective motion of a MRS can place a significant cognitive burden on the operator. We present a Mixed Reality (MR) control interface, which allows an operator to specify payload target poses for a MRS in real-time, while effectively keeping the system away from unfavorable configurations. To this end, we solve the inverse kinematics problem for each arm individually and leverage redundant degrees of freedom to optimize for a secondary objective. Using the manipulability index as a secondary objective in particular, allows us to significantly improve the tracking and singularity avoidance capabilities of our MRS in comparison to the unoptimized scenario. This enables more secure and intuitive teleoperation. We simulate and test our approach on different setups and over different input trajectories, and analyse the convergence properties of our method. Finally, we show that the method also works well when deployed on to a dual-arm ABB YuMi robot.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125689896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UnDAF: A General Unsupervised Domain Adaptation Framework for Disparity or Optical Flow Estimation
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811811
H. Wang, Rui Fan, Peide Cai, Ming Liu, Lujia Wang
Disparity estimation and optical flow estimation are, by nature, 1D and 2D dense correspondence matching (DCM) tasks, respectively. Unsupervised domain adaptation (UDA) is crucial for their success in new and unseen scenarios, enabling networks to draw inferences across different domains without manually labeled ground truth. In this paper, we propose a general UDA framework (UnDAF) for disparity or optical flow estimation. Unlike existing approaches based on adversarial learning, which suffer from pixel distortion and dense correspondence mismatch after domain alignment, UnDAF adopts a straightforward but effective coarse-to-fine strategy, where a co-teaching strategy (two networks evolve by complementing each other) refines DCM estimates after a Fourier transform initializes domain alignment. The simplicity of our approach makes it extremely easy to guide adaptation across different domains or, more practically, from synthetic to real-world domains. Extensive experiments carried out on the KITTI and MPI Sintel benchmarks demonstrate the accuracy and robustness of UnDAF, which advances all other state-of-the-art UDA approaches for disparity or optical flow estimation. Our project page is available at https://sites.google.com/view/undaf.
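The abstract only says that a Fourier transform initializes domain alignment. One common realization, assumed here purely for illustration and not confirmed by the paper, is an FDA-style swap of the low-frequency amplitude spectrum between a source and a target image while keeping the source phase. The band half-width `beta` and the synthetic images below are arbitrary.

```python
import numpy as np

def fourier_align(source, target, beta=0.05):
    """FDA-style alignment: give a source image the target's low-frequency amplitudes.

    Both inputs are (H, W) grayscale arrays of the same shape. The source
    phase (which carries most structure) is kept; only the centered
    low-frequency square of half-width beta * min(H, W) in the amplitude
    spectrum is replaced by the target's.
    """
    fs = np.fft.fftshift(np.fft.fft2(source))
    ft = np.fft.fftshift(np.fft.fft2(target))
    amp_s, phase_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)

    h, w = source.shape
    b = int(beta * min(h, w))
    cy, cx = h // 2, w // 2
    amp_s[cy - b:cy + b + 1, cx - b:cx + b + 1] = \
        amp_t[cy - b:cy + b + 1, cx - b:cx + b + 1]

    aligned = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * phase_s)))
    return np.real(aligned)

# Synthetic example: a bright "synthetic-looking" image adopts a darker target's low frequencies.
rng = np.random.default_rng(0)
source = rng.uniform(0.6, 1.0, size=(128, 128))
target = rng.uniform(0.0, 0.4, size=(128, 128))
print(fourier_align(source, target).mean())   # mean shifts toward the target's
```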
{"title":"UnDAF: A General Unsupervised Domain Adaptation Framework for Disparity or Optical Flow Estimation","authors":"H. Wang, Rui Fan, Peide Cai, Ming Liu, Lujia Wang","doi":"10.1109/icra46639.2022.9811811","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811811","url":null,"abstract":"Disparity and optical flow estimation are respectively 1D and 2D dense correspondence matching (DCM) tasks in nature. Unsupervised domain adaptation (UDA) is crucial for their success in new and unseen scenarios, enabling networks to draw inferences across different domains without manually-labeled ground truth. In this paper, we propose a general UDA framework (UnDAF) for disparity or optical flow estimation. Unlike existing approaches based on adversarial learning that suffers from pixel distortion and dense correspondence mismatch after domain alignment, our UnDAF adopts a straightforward but effective coarse-to-fine strategy, where a co-teaching strategy (two networks evolve by complementing each other) refines DCM estimations after Fourier transform initializes domain alignment. The simplicity of our approach makes it extremely easy to guide adaptation across different domains, or more practically, from synthetic to real-world domains. Extensive experiments carried out on the KITTI and MPI Sintel benchmarks demonstrate the accuracy and robustness of our UnDAF, advancing all other state-of-the-art UDA approaches for disparity or optical flow estimation. Our project page is available at https://sites.google.com/view/undaf.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116020241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}