
2020 IEEE International Conference on Robotics and Automation (ICRA): Latest Publications

Deep Visual Heuristics: Learning Feasibility of Mixed-Integer Programs for Manipulation Planning
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197291
Danny Driess, Ozgur S. Oguz, Jung-Su Ha, M. Toussaint
In this paper, we propose a deep neural network that predicts the feasibility of a mixed-integer program from visual input for robot manipulation planning. Integrating learning into task and motion planning is challenging, since it is unclear how the scene and goals can be encoded as input to the learning algorithm in a way that enables generalization over a variety of tasks in environments with changing numbers of objects and goals. To achieve this, we propose to encode the scene and the target object directly in the image space. Our experiments show that our proposed network generalizes to scenes with multiple objects, although during training only two objects are present at the same time. By using the learned network as a heuristic to guide the search over the discrete variables of the mixed-integer program, the number of optimization problems that have to be solved to find a feasible solution or to detect infeasibility can be greatly reduced.
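The abstract gives no implementation details, but the guided search it describes can be pictured with a short sketch: a learned scorer ranks candidate assignments of the discrete variables, and the expensive continuous subproblems are solved in that order. The following Python is a minimal, hypothetical illustration (predict_feasibility and solve_continuous_subproblem are placeholders standing in for the paper's trained CNN and its MIP solver, not the authors' code):

import itertools

def predict_feasibility(image, assignment):
    # Hypothetical stand-in for the learned CNN: returns a score in [0, 1]
    # estimating how likely this discrete assignment is to be feasible.
    return 0.5  # placeholder

def solve_continuous_subproblem(image, assignment):
    # Hypothetical stand-in for solving the program with the integer variables
    # fixed to `assignment`; returns a trajectory or None if infeasible.
    return None  # placeholder

def guided_search(image, discrete_domains, max_attempts=20):
    """Order candidate discrete assignments by predicted feasibility and
    solve the (expensive) continuous subproblems in that order."""
    candidates = list(itertools.product(*discrete_domains))
    candidates.sort(key=lambda a: predict_feasibility(image, a), reverse=True)
    for assignment in candidates[:max_attempts]:
        solution = solve_continuous_subproblem(image, assignment)
        if solution is not None:
            return assignment, solution
    return None, None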
Citations: 47
Towards Noise Resilient SLAM
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196745
Anirud Thyagharajan, O. J. Omer, D. Mandal, S. Subramoney
Sparse-indirect SLAM systems have been dominant owing to their computational efficiency and photometric invariance properties. Depth sensors are critical to SLAM frameworks for providing scale information about the 3D world, yet they are known to be plagued by a wide variety of noise sources with lateral and axial components. In this work, we demonstrate the detrimental impact of these depth noise components on the performance of the state-of-the-art sparse-indirect SLAM system (ORB-SLAM2). We propose (i) Map-Point Consensus based Outlier Rejection (MC-OR) to counter lateral noise, and (ii) Adaptive Virtual Camera (AVC) to accurately combat axial noise. MC-OR utilizes consensus information between multiple sightings of the same landmark to disambiguate noisy depth and filter it out before pose optimization. In AVC, we introduce an error vector as an accurate representation of the axial depth error. We additionally propose an adaptive algorithm to find the virtual camera location for projecting the error used in the objective function of the pose optimization. Our techniques work equally well for stereo image pairs and RGB-D input directly used by sparse-indirect SLAM systems. Our methods were tested on the TUM (RGB-D) and EuRoC (stereo) datasets, and we show that they outperform the state-of-the-art ORB-SLAM2 by 2-3x, especially in sequences critically affected by depth noise.
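The map-point-consensus idea can be pictured with a toy NumPy sketch (not the authors' implementation; the relative threshold is an arbitrary assumption): repeated depth sightings of one landmark vote on a consensus value, and sightings that disagree are dropped before pose optimization.

import numpy as np

def consensus_filter(depth_observations, rel_threshold=0.05):
    """Reject depth sightings of one landmark that disagree with the consensus.

    depth_observations: depths (metres) of the same map point seen from
    multiple keyframes, already transformed into a comparable scale.
    Returns (consensus_depth, mask_of_inliers).
    """
    d = np.asarray(depth_observations, dtype=float)
    consensus = np.median(d)                    # robust central estimate
    inliers = np.abs(d - consensus) <= rel_threshold * consensus
    return consensus, inliers

depths = [2.01, 1.98, 2.03, 2.65]               # last sighting is noisy
consensus, inliers = consensus_filter(depths)
print(consensus, inliers)                       # ~2.0, [True True True False]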
Citations: 2
Fast Adaptation of Deep Reinforcement Learning-Based Navigation Skills to Human Preference
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197159
Jinyoung Choi, C. Dance, Jung-Eun Kim, Kyungsik Park, Jaehun Han, Joonho Seo, Minsu Kim
Deep reinforcement learning (RL) is being actively studied for robot navigation due to its promise of superior performance and robustness. However, most existing deep RL navigation agents are trained using fixed parameters, such as maximum velocities and weightings of reward components. Since the optimal choice of parameters depends on the use-case, it can be difficult to deploy such existing methods in a variety of real-world service scenarios. In this paper, we propose a novel deep RL navigation method that can adapt its policy to a wide range of parameters and reward functions without expensive retraining. Additionally, we explore a Bayesian deep learning method that optimizes these parameters using only a small amount of preference data. We empirically show that our method can learn diverse navigation skills and quickly adapt its policy to a given performance metric or to human preference. We also demonstrate our method in real-world scenarios.
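As a rough sketch of adapting a parameter-conditioned policy to a preference signal without retraining the policy itself: here a plain random search stands in for the paper's Bayesian deep learning method, and preference_score is a hypothetical oracle for the human preference data.

import random

def preference_score(params):
    # Hypothetical oracle: in practice this would be derived from a small
    # amount of human preference data (e.g. comparisons of trajectories).
    target = {"max_speed": 0.6, "goal_weight": 1.0, "smoothness_weight": 0.4}
    return -sum((params[k] - target[k]) ** 2 for k in target)

def adapt_parameters(n_samples=200, seed=0):
    """Search the parameter space of a parameter-conditioned policy for the
    setting the user prefers, without retraining the policy."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        params = {
            "max_speed": rng.uniform(0.1, 1.5),
            "goal_weight": rng.uniform(0.0, 2.0),
            "smoothness_weight": rng.uniform(0.0, 2.0),
        }
        score = preference_score(params)
        if score > best_score:
            best, best_score = params, score
    return best

print(adapt_parameters())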
Citations: 15
An Actuation Fault Tolerance Approach to Reconfiguration Planning of Modular Self-folding Robots
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196574
Meibao Yao, Xueming Xiao, Yang Tian, Hutao Cui, J. Paik
This paper presents a novel approach to fault-tolerant reconfiguration of modular self-folding robots. Among the various types of faults that may occur in a modular system, we focus on tolerating complete actuation failure of active modules, which can cause imprecise robotic motion and even reconfiguration failure. Our approach is to utilize the reconfigurability of modular self-folding robots and investigate intra-module connections to determine initial patterns that are inherently fault tolerant. We exploit the redundancy of actuation and distribute active modules in both layout-based and target-based scenarios, such that reconfiguration schemes with user-specified fault-tolerance capability can be generated for an arbitrary input initial pattern or 3D configuration. Our methods are demonstrated in computer-aided simulation on the robotic platform of Mori, a modular origami robot. The simulation results validate that the proposed algorithms yield fault-tolerant initial patterns and active-module distribution schemes for several 2D and 3D configurations with Mori, while retaining generalizability to a large number of modular self-folding robots.
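A much-simplified way to picture actuation fault tolerance (not the paper's algorithm) is a redundancy check over an assumed mapping from folds to the active modules that can drive them: a layout tolerates f failures if every fold remains drivable after any f active modules fail completely.

from itertools import combinations

def tolerates_failures(fold_to_drivers, num_failures=1):
    """Return True if every fold can still be actuated after any combination
    of `num_failures` active modules fails completely.

    fold_to_drivers: dict mapping each fold id to the set of active modules
    that can drive it (an assumed abstraction of the module layout).
    """
    modules = set().union(*fold_to_drivers.values())
    for failed in combinations(modules, num_failures):
        failed = set(failed)
        if any(drivers <= failed for drivers in fold_to_drivers.values()):
            return False
    return True

layout = {"fold_a": {1, 2}, "fold_b": {2, 3}, "fold_c": {3, 4}}
print(tolerates_failures(layout, num_failures=1))  # True: every fold has a backup
print(tolerates_failures(layout, num_failures=2))  # False: losing {1, 2} strands fold_a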
Citations: 2
The Tiercel: A novel autonomous micro aerial vehicle that can map the environment by flying into obstacles
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197269
Yash Mulgaonkar, Wenxin Liu, Dinesh Thakur, Kostas Daniilidis, C. J. Taylor, Vijay R. Kumar
Autonomous flight through unknown environments in the presence of obstacles is a challenging problem for micro aerial vehicles (MAVs). A majority of the current state-of-the-art research assumes obstacles are opaque objects that can be easily sensed by optical sensors such as cameras or LiDARs. However, in indoor environments with glass walls and windows, or scenarios with smoke and dust, robots (even birds) have a difficult time navigating through the unknown space. In this paper, we present the design of a new class of micro aerial vehicles that achieve autonomous navigation and are robust to collisions. In particular, we present the Tiercel MAV: a small, agile, lightweight and collision-resilient robot powered by a cellphone-grade CPU. Our design exploits contact to infer the presence of transparent or reflective obstacles like glass walls, integrating touch with visual perception for SLAM. The Tiercel is able to localize using visual-inertial odometry (VIO) running on board the robot with a single downward-facing fisheye camera and an IMU. We show how our collision detector design and experimental setup enable us to characterize the impact of collisions on VIO. We further develop a planning strategy to enable the Tiercel to fly autonomously in an unknown space, sustaining collisions and creating a 2D map of the environment. Finally, we demonstrate a swarm of three autonomous Tiercel robots safely navigating and colliding through an obstacle field to reach their objectives.
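The contact-based mapping idea can be sketched in a few lines (an assumed simplification, not the Tiercel software): when the collision detector fires, the contact point is estimated from the current VIO pose and heading and written into a 2D occupancy grid.

import numpy as np

RESOLUTION = 0.05          # metres per grid cell (assumed)
GRID_SIZE = 200            # 10 m x 10 m map
grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int8)   # 0 free, 1 occupied

def register_collision(grid, x, y, yaw, bumper_offset=0.12):
    """Mark the cell at the estimated contact point as occupied.

    (x, y, yaw) is the VIO pose estimate at the moment of impact; the contact
    point is assumed to lie `bumper_offset` metres ahead along the heading.
    """
    cx = x + bumper_offset * np.cos(yaw)
    cy = y + bumper_offset * np.sin(yaw)
    i = int(np.floor(cx / RESOLUTION)) + GRID_SIZE // 2
    j = int(np.floor(cy / RESOLUTION)) + GRID_SIZE // 2
    if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
        grid[j, i] = 1
    return grid

grid = register_collision(grid, x=1.0, y=0.5, yaw=np.pi / 2)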
Citations: 13
Acoustofluidic Tweezers for the 3D Manipulation of Microparticles
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197265
Xinyi Guo, Zhichao Ma, R. Goyal, Moonkwang Jeong, W. Pang, P. Fischer, X. Duan, T. Qiu
Non-contact manipulation is of great importance in the actuation of micro-robotics. Manipulating micro-scale objects without contact over large spatial distances in fluid is challenging. Here, we describe a novel approach for the dynamic position control of microparticles in three-dimensional (3D) space, based on high-speed acoustic streaming generated by a micro-fabricated gigahertz transducer. The hydrodynamic force generated by the streaming flow field has a vertical component acting against gravity and a lateral component directed towards the center; thus the microparticle can be stably trapped at a position far from the transducer surface and manipulated over centimeter distances in 3D. Only the hydrodynamic force is utilized in the system for particle manipulation, making it a versatile tool regardless of the material properties of the trapped particle. The system shows high reliability and manipulation velocity, revealing its potential for applications in robotics and automation at small scales.
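The trapping mechanism can be pictured with a toy one-dimensional force balance (illustrative numbers and an assumed exponential force profile, not values from the paper): the particle settles at the height where the vertical component of the streaming-induced force equals its net weight.

import numpy as np

radius = 5e-6                      # particle radius, m (assumed)
rho_p, rho_f = 2500.0, 1000.0      # particle / fluid density, kg m^-3 (assumed)
g = 9.81
volume = 4.0 / 3.0 * np.pi * radius ** 3
net_weight = (rho_p - rho_f) * volume * g          # weight minus buoyancy, N

def streaming_force(z, f0=1e-10, decay=5e-4):
    """Assumed vertical hydrodynamic force (N) decaying with height z (m)."""
    return f0 * np.exp(-z / decay)

z = np.linspace(1e-5, 5e-3, 10000)
balance = streaming_force(z) - net_weight
z_eq = z[np.argmin(np.abs(balance))]
print(f"equilibrium trapping height ~ {z_eq * 1e3:.2f} mm")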
Citations: 2
Efficient Communication in Large Multi-robot Networks
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196672
Ayan Dutta, Anirban Ghosh, Stephen Sisley, O. P. Kreidl
To achieve coordination in a multi-robot system, the robots typically resort to some form of communication among each other. Most multi-robot coordination frameworks study high-level coordination strategies, while how the ground-level communication actually takes place is assumed to be handled by another program. In this paper, we study the communication routing problem for large multi-robot systems where the robots have limited communication ranges. The objective is to send a message from one robot to another in the network, routed through a small number of other robots. To this end, we propose a communication model between any pair of robots using peer-to-peer radio communication. Our proposed model is generic to any type of message and guarantees low-hop routing between any pair of robots in the network. This helps the robots exchange large messages (e.g., multi-spectral images) in a short amount of time. Results show that our proposed approach easily scales up to 1000 robots while drastically reducing the space complexity of maintaining the network information.
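The routing objective can be illustrated with a minimal sketch under assumed unit-disk connectivity (plain breadth-first search, not the authors' routing scheme): robots within communication range are neighbours, and a minimum-hop route is extracted between a source and a destination.

import math
from collections import deque

def build_graph(positions, comm_range):
    """Unit-disk connectivity: two robots are neighbours if within range."""
    n = len(positions)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) <= comm_range:
                graph[i].append(j)
                graph[j].append(i)
    return graph

def hop_route(graph, src, dst):
    """Breadth-first search returns a minimum-hop route, or None."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in graph[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

positions = [(0, 0), (4, 0), (8, 0), (8, 4)]
print(hop_route(build_graph(positions, comm_range=5.0), 0, 3))  # [0, 1, 2, 3]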
Citations: 3
Gradient and Log-based Active Learning for Semantic Segmentation of Crop and Weed for Agricultural Robots
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196722
Rasha Sheikh, Andres Milioto, Philipp Lottes, C. Stachniss, Maren Bennewitz, T. Schultz
Annotated datasets are essential for supervised learning. However, annotating large datasets is a tedious and time-intensive task. This paper addresses active learning in the context of semantic segmentation with the goal of reducing the human labeling effort. Our application is agricultural robotics, and we focus on the task of distinguishing between crop and weed plants from image data. A key challenge in this application is the transfer of an existing semantic segmentation CNN to a new field, in which growth stage, weeds, soil, and weather conditions differ. We propose a novel approach that, given a model trained on one field together with rough foreground segmentation, refines the network on a substantially different field, providing an effective method for selecting samples to annotate in support of the transfer. We evaluated our approach on two challenging datasets from the agricultural robotics domain and show that we achieve higher accuracy with a smaller number of samples compared to random sampling as well as entropy-based sampling, which consequently reduces the required human labeling effort.
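The outer sample-selection loop of active learning can be sketched as follows; note that the uncertainty measure used here is plain prediction entropy, i.e. the baseline the paper compares against, not its gradient- and log-based criterion, and the probability maps are synthetic.

import numpy as np

def mean_pixel_entropy(softmax_probs):
    """softmax_probs: (H, W, C) per-pixel class probabilities for one image."""
    p = np.clip(softmax_probs, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p), axis=-1)      # (H, W)
    return float(entropy.mean())

def select_images_to_annotate(prob_maps, budget=5):
    """Rank unlabeled images by uncertainty and return the indices to label."""
    scores = [mean_pixel_entropy(p) for p in prob_maps]
    return list(np.argsort(scores)[::-1][:budget])

# Example: 20 synthetic probability maps for a 3-class (crop / weed / soil) problem.
rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 64, 64, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(select_images_to_annotate(probs, budget=5))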
Citations: 16
Reactive Control and Metric-Topological Planning for Exploration
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197381
Michael T. Ohradzansky, Andrew B. Mills, Eugene R. Rush, Danny G. Riley, E. Frew, J. Humbert
Autonomous navigation in unknown environments with the intent of exploring all traversable areas is a significant challenge for robotic platforms. In this paper, a simple yet reliable method for exploring unknown environments is presented based on bio-inspired reactive control and metric-topological planning. The reactive control algorithm is modeled after the spatial decomposition of wide- and small-field patterns of optic flow in the insect visuomotor system. Centering behaviour and small-obstacle detection and avoidance are achieved through wide-field integration and Fourier residual analysis of instantaneously measured nearness, respectively. A topological graph is estimated using image processing techniques on a continuous occupancy grid. Node paths are rapidly generated to navigate to the nearest unexplored edge in the graph. It is shown through rigorous field-testing that the proposed control and planning method is robust, reliable, and computationally efficient.
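The wide-field-integration idea can be sketched with a 1-D nearness signal over bearing (assumed gains, thresholds, and sign conventions; not the authors' controller): the first Fourier harmonic yields a centering command, and the residual after removing the low-order structure serves as a small-obstacle cue.

import numpy as np

def reactive_commands(nearness, steer_gain=1.0, obstacle_threshold=0.05):
    """nearness: 1-D array of instantaneous nearness (1/range) sampled over
    bearings in [-pi, pi), positive to the left; a positive command is a
    leftward turn. Returns (steering command, obstacle flag)."""
    n = len(nearness)
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Low-order Fourier projections (wide-field integration).
    a1 = 2.0 / n * np.sum(nearness * np.cos(theta))
    b1 = 2.0 / n * np.sum(nearness * np.sin(theta))
    steer = -steer_gain * b1            # left/right asymmetry drives centering
    # Residual after removing the mean and first harmonic flags small obstacles.
    reconstruction = nearness.mean() + a1 * np.cos(theta) + b1 * np.sin(theta)
    residual = np.linalg.norm(nearness - reconstruction) / np.sqrt(n)
    return steer, residual > obstacle_threshold

theta = np.linspace(-np.pi, np.pi, 360, endpoint=False)
nearness = 0.5 + 0.3 * np.sin(theta)    # corridor with the left wall closer
nearness[90] += 2.0                     # thin obstacle at one bearing
print(reactive_commands(nearness))      # negative (rightward) steer, flag True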
Citations: 13
Eye-in-Hand 3D Visual Servoing of Helical Swimmers Using Parallel Mobile Coils
Pub Date : 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197276
Zhengxin Yang, Lidong Yang, Li Zhang
Magnetic helical microswimmers can be propelled by a rotating magnetic field and are adept at passing through narrow spaces. To date, various magnetic actuation systems and control methods have been developed to drive these microswimmers. However, steering their spatial movement in a large workspace, which would be significant for potential medical applications, remains challenging. To this end, this paper presents an eye-in-hand stereo-vision module and a corresponding refraction-rectified localization algorithm. Combined with the motor module and the coil module, the mobile-coil system is capable of generating dynamic magnetic fields in a large 3D workspace. Based on this system, a robust triple-loop stereo visual servoing strategy is proposed that performs simultaneous tracking, locating, and steering, through which the helical swimmer is able to follow a long-distance 3D path. A scaled-up magnetic helical swimmer is employed in the path-following experiments. Our prototype system reaches a cylindrical workspace with a diameter of more than 200 mm, and the mean path-tracking error is less than 2 mm.
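A refraction-rectified localization pipeline ultimately relies on a Snell's-law correction for camera rays crossing the air-liquid interface; the following generic sketch (the flat-interface geometry and refractive indices are assumptions, not values from the paper) bends a unit ray at such an interface.

import numpy as np

def refract(direction, normal, n1=1.0, n2=1.33):
    """Bend a unit ray crossing a flat interface, per Snell's law.

    direction: unit vector of the incoming ray (pointing into the liquid).
    normal: unit interface normal, pointing back towards the camera.
    Returns the refracted unit direction, or None on total internal reflection.
    """
    d = np.asarray(direction, float)
    nrm = np.asarray(normal, float)
    cos_i = -np.dot(nrm, d)
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * nrm

# A camera ray entering the fluid tank 30 degrees off the interface normal.
ray = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
print(refract(ray, normal=np.array([0.0, 0.0, 1.0])))   # bends towards the normal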
Citations: 1