
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): latest publications

Localization from visual landmarks on a free-flying robot
Pub Date : 2016-10-09 DOI: 10.1109/IROS.2016.7759644
B. Coltin, Jesse Fusco, Z. Moratto, O. Alexandrov, Robert Nakamura
We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on the International Space Station (ISS). Astrobee will accommodate a variety of payloads and enable guest scientists to run experiments in zero-g, as well as assist astronauts and ground controllers. Astrobee will replace the SPHERES robots which currently operate on the ISS, whose use of fixed ultrasonic beacons for localization limits them to work in a 2 meter cube. Astrobee localizes with monocular vision and an IMU, without any environmental modifications. Visual features detected on a pre-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise, and extensively evaluate the localization algorithm.
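The fusion scheme described above can be illustrated with a minimal extended Kalman filter sketch. The 1-D position/velocity model, noise values, and measurement setup below are illustrative assumptions, not Astrobee's actual models:

```python
import numpy as np

# Minimal sketch of the sensor-fusion idea: an EKF that propagates the state
# with IMU readings and corrects it with landmark-based position fixes.
# The 1-D position/velocity model is an assumption for illustration only.

def ekf_predict(x, P, accel, dt, q=1e-3):
    """Propagate [position, velocity] with an IMU acceleration reading."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    u = np.array([0.5 * accel * dt**2, accel * dt])  # acceleration input
    x = F @ x + u
    P = F @ P @ F.T + q * np.eye(2)                  # grow uncertainty
    return x, P

def ekf_update(x, P, z, r=1e-2):
    """Correct with a position measurement z derived from visual landmarks."""
    H = np.array([[1.0, 0.0]])                       # observe position only
    y = z - H @ x                                    # innovation
    S = H @ P @ H.T + r                              # innovation covariance
    K = P @ H.T / S                                  # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Hold position: IMU reports no acceleration, landmarks keep reporting x = 1.
x, P = np.zeros(2), np.eye(2)
for _ in range(50):
    x, P = ekf_predict(x, P, accel=0.0, dt=0.1)
    x, P = ekf_update(x, P, z=np.array([1.0]))
```

The estimate converges toward the landmark-reported position while the IMU-driven prediction keeps the state consistent between fixes.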
{"title":"Localization from visual landmarks on a free-flying robot","authors":"B. Coltin, Jesse Fusco, Z. Moratto, O. Alexandrov, Robert Nakamura","doi":"10.1109/IROS.2016.7759644","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759644","url":null,"abstract":"We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on the International Space Station (ISS). Astrobee will accommodate a variety of payloads and enable guest scientists to run experiments in zero-g, as well as assist astronauts and ground controllers. Astrobee will replace the SPHERES robots which currently operate on the ISS, whose use of fixed ultrasonic beacons for localization limits them to work in a 2 meter cube. Astrobee localizes with monocular vision and an IMU, without any environmental modifications. Visual features detected on a pre-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise, and extensively evaluate the localization algorithm.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130727386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Can you pick a broccoli? 3D-vision based detection and localisation of broccoli heads in the field
Pub Date : 2016-10-09 DOI: 10.1109/IROS.2016.7759121
Keerthy Kusumam, T. Krajník, S. Pearson, Grzegorz Cielniak, T. Duckett
This paper presents a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning and temporal filtering methods for detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, Support Vector Machine classifier and a temporal filter to track the detected heads results in a system that detects broccoli heads with 95.2% precision. We also show that the temporal filtering can be used to generate a 3D map of the broccoli head positions in the field.
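The temporal-filtering step described above can be sketched as follows. The matching radius, confirmation threshold, and detection data are hypothetical; the paper's pipeline additionally uses Viewpoint Feature Histograms and an SVM to produce the per-frame candidates:

```python
import math

# Sketch of the temporal-filtering idea: a per-frame detector emits candidate
# 3D positions, and a track is only confirmed as a broccoli head after it is
# re-observed across several frames. All parameters are illustrative.

def temporal_filter(frames, match_radius=0.05, confirm_after=3):
    """frames: list of per-frame candidate (x, y, z) detections in metres."""
    tracks = []  # each track: {"pos": (x, y, z), "hits": int}
    for detections in frames:
        for det in detections:
            for tr in tracks:
                if math.dist(det, tr["pos"]) < match_radius:
                    tr["pos"] = det          # follow the latest detection
                    tr["hits"] += 1
                    break
            else:
                tracks.append({"pos": det, "hits": 1})  # start a new track
    return [tr["pos"] for tr in tracks if tr["hits"] >= confirm_after]

# One persistent head plus one spurious single-frame detection.
frames = [[(0.00, 0.000, 1.0), (1.0, 1.0, 1.0)],
          [(0.01, 0.000, 1.0)],
          [(0.02, 0.005, 1.0)]]
heads = temporal_filter(frames)
```

The spurious detection never accumulates enough hits, so only the persistent head survives; the confirmed positions could then be accumulated into the 3D field map the paper mentions.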
{"title":"Can you pick a broccoli? 3D-vision based detection and localisation of broccoli heads in the field","authors":"Keerthy Kusumam, T. Krajník, S. Pearson, Grzegorz Cielniak, T. Duckett","doi":"10.1109/IROS.2016.7759121","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759121","url":null,"abstract":"This paper presents a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning and temporal filtering methods for detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, Support Vector Machine classifier and a temporal filter to track the detected heads results in a system that detects broccoli heads with 95.2% precision. We also show that the temporal filtering can be used to generate a 3D map of the broccoli head positions in the field.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114521058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
ALPHA: A hybrid self-adaptable hand for a social humanoid robot
Pub Date : 2016-10-09 DOI: 10.1109/IROS.2016.7759157
Giulio Cerruti, D. Chablat, D. Gouaillier, S. Sakka
This paper presents a novel design of a compact and light-weight robotic hand for a social humanoid robot. The proposed system is able to perform common hand gestures and self-adaptable grasps by mixing under-actuated and self-adaptable hand kinematics in a unique design. The hand answers the need for precise finger postures and sensor-less force feedback during gestures and for finger adaptation and autonomous force distribution during grasps. These are provided by a dual actuation system embodied within the palm and the fingers. Coexistence is ensured by compliant transmissions based on elastomer bars rather than classical tension springs, thanks to their high elastic coefficient at reduced sizes and strains. The proposed solution significantly reduces the weight and the size of the hand by using a reduced number of small actuators for gesturing and a single motor for grasping. The hand prototype (ALPHA) is realized to confirm the design feasibility and functional capabilities. It is controlled to provide safe human-robot interaction and preserve mechanical integrity in order to be embodied on a humanoid robot.
{"title":"ALPHA: A hybrid self-adaptable hand for a social humanoid robot","authors":"Giulio Cerruti, D. Chablat, D. Gouaillier, S. Sakka","doi":"10.1109/IROS.2016.7759157","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759157","url":null,"abstract":"This paper presents a novel design of a compact and light-weight robotic hand for a social humanoid robot. The proposed system is able to perform common hand gestures and self-adaptable grasps by mixing under-actuated and self-adaptable hand kinematics in a unique design. The hand answers the need for precise finger postures and sensor-less force feedback during gestures and for finger adaptation and autonomous force distribution during grasps. These are provided by a dual actuation system embodied within the palm and the fingers. Coexistence is ensured by compliant transmissions based on elastomer bars rather than classical tension springs, thanks to their high elastic coefficient at reduced sizes and strains. The proposed solution significantly reduces the weight and the size of the hand by using a reduced number of small actuators for gesturing and a single motor for grasping. The hand prototype (ALPHA) is realized to confirm the design feasibility and functional capabilities. 
It is controlled to provide safe human-robot interaction and preserve mechanical integrity in order to be embodied on a humanoid robot.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123598952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Controlling a multi-joint arm actuated by pneumatic muscles with quasi-DDP optimal control
Pub Date : 2016-10-09 DOI: 10.1109/IROS.2016.7759103
Ganesh Kumar Hari Shankar Lal Das, B. Tondu, Florent Forget, Jérôme Manhes, O. Stasse, P. Souéres
Pneumatic actuators have inherent compliance, which makes them very attractive for applications involving interaction with the environment or with humans. But controlling such actuators is not trivial. The paper presents an implementation of an iterative Linear Quadratic Regulator (iLQR) based optimal control framework to control an anthropomorphic arm in which each joint is actuated by an agonist-antagonist pair of McKibben artificial muscles. The method is applied to positioning tasks and to the generation of explosive movements by maximizing the link speed. It is then compared to traditional control strategies to show that optimal control is effective for position control in highly non-linear pneumatic systems. The importance of varying compliance is also highlighted by repeating the tasks at different compliance levels. The algorithm is validated through several simulations and hardware experiments in which shoulder and elbow flexion are controlled simultaneously.
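iLQR handles nonlinear dynamics by repeatedly linearising around the current trajectory and running an LQR backward pass. As a hedged sketch of just that core backward recursion, here it is on an already-linear stand-in (a 1-D double integrator); the muscle dynamics in the paper are nonlinear and would be re-linearised at every iteration, and all cost weights below are made up:

```python
import numpy as np

# Core of the iLQR machinery: the finite-horizon LQR backward recursion.
# System and weights are illustrative, not the paper's muscle model.

def lqr_backward(A, B, Q, R, horizon):
    """Return time-varying feedback gains K_t for x' = A x + B u."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        # K = (R + B' P B)^-1 B' P A, then Riccati update of P
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                      # gains[0] applies at t = 0

dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])      # position/velocity dynamics
B = np.array([[0.0], [dt]])                # force input
Q = np.eye(2)                              # state cost
R = np.array([[0.1]])                      # control cost
gains = lqr_backward(A, B, Q, R, horizon=50)

# Roll out the closed loop from an offset initial state.
x = np.array([1.0, 0.0])
for K in gains:
    x = A @ x + (B @ (-K @ x)).ravel()
```

In full iLQR the rollout would use the nonlinear dynamics and the loop (linearise, backward pass, rollout) would repeat until the cost converges.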
{"title":"Controlling a multi-joint arm actuated by pneumatic muscles with quasi-DDP optimal control","authors":"Ganesh Kumar Hari Shankar Lal Das, B. Tondu, Florent Forget, Jérôme Manhes, O. Stasse, P. Souéres","doi":"10.1109/IROS.2016.7759103","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759103","url":null,"abstract":"Pneumatic actuators have inherent compliance and hence they are very interesting for applications involving interaction with environment or human. But controlling such kind of actuators is not trivial. The paper presents an implementation of iterative Linear Quadratic regulator (iLQR) based optimal control framework to control an anthropomorphic arm with each joint actuated by an agonist-antagonistic pair of Mckibben artificial muscles. The method is applied to positioning tasks and generation of explosive movements by maximizing the link speed. It is then compared to traditional control strategies to justify that optimal control is effective in controlling the position in highly non-linear pneumatic systems. Also the importance of varying compliance is highlighted by repeating the tasks at different compliance level. The algorithm validation is reported here by several simulations and hardware experiments in which the shoulder and elbow flexion are controlled simultaneously.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126570945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
RL-IAC: An exploration policy for online saliency learning on an autonomous mobile robot
Pub Date : 2016-10-09 DOI: 10.1109/IROS.2016.7759716
Céline Craye, David Filliat, Jean-François Goudou
In the context of visual object search and localization, saliency maps provide an efficient way to find object candidates in images. Unlike most approaches, we propose a way to learn saliency maps directly on a robot, by exploring the environment, discovering salient objects using geometric cues, and learning their visual aspects. More importantly, we provide an autonomous exploration strategy able to drive the robot for the task of learning saliency. For that, we describe the Reinforcement Learning-Intelligent Adaptive Curiosity algorithm (RL-IAC), a mechanism based on IAC (Intelligent Adaptive Curiosity) able to guide the robot through areas of the space where learning progress is high, while minimizing the time spent moving in its environment without learning. We demonstrate first that our saliency approach is an efficient tool to generate relevant object box proposals in the input image and significantly outperforms the state-of-the-art EdgeBoxes algorithm.
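The intelligent-adaptive-curiosity idea behind RL-IAC can be sketched as follows: split the space into regions, track each region's recent prediction-error history, and preferentially explore where error is decreasing fastest (learning progress) rather than where error is merely high. The error model, decay rate, and region setup below are made-up stand-ins:

```python
import random

# Sketch of learning-progress-driven exploration (the IAC idea RL-IAC builds
# on). A learnable region rewards visits with shrinking error; an unlearnable
# region stays noisy, so its apparent progress hovers around zero.

class Region:
    def __init__(self, learnable):
        self.error = 1.0
        self.history = [self.error]
        self.learnable = learnable

    def visit(self):
        if self.learnable:
            self.error *= 0.8                      # learning reduces error
        else:
            self.error = random.uniform(0.9, 1.1)  # noise, no real progress
        self.history.append(self.error)

    def progress(self, window=3):
        h = self.history[-window:]
        return h[0] - h[-1]                        # error drop over window

random.seed(0)
regions = [Region(learnable=True), Region(learnable=False)]
visits = [0, 0]
for step in range(30):
    if step < 2:
        i = step                                   # bootstrap: visit each once
    else:
        i = max(range(2), key=lambda j: regions[j].progress())
    regions[i].visit()
    visits[i] += 1
```

The exploration budget concentrates on the learnable region while its progress is high, mirroring how RL-IAC steers the robot toward areas where the saliency model is still improving.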
{"title":"RL-IAC: An exploration policy for online saliency learning on an autonomous mobile robot","authors":"Céline Craye, David Filliat, Jean-François Goudou","doi":"10.1109/IROS.2016.7759716","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759716","url":null,"abstract":"In the context of visual object search and localization, saliency maps provide an efficient way to find object candidates in images. Unlike most approaches, we propose a way to learn saliency maps directly on a robot, by exploring the environment, discovering salient objects using geometric cues, and learning their visual aspects. More importantly, we provide an autonomous exploration strategy able to drive the robot for the task of learning saliency. For that, we describe the Reinforcement Learning-Intelligent Adaptive Curiosity algorithm (RL-IAC), a mechanism based on IAC (Intelligent Adaptive Curiosity) able to guide the robot through areas of the space where learning progress is high, while minimizing the time spent to move in its environment without learning. We demonstrate first that our saliency approach is an efficient tool to generate relevant object boxes proposal in the input image and significantly outperforms the state-of-the-art EdgeBoxes algorithm. 
Second, we show that RL-IAC can drastically decrease the required time for learning saliency compared to random exploration.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129179685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Multi-robot path planning for budgeted active perception with self-organising maps
Pub Date : 2016-10-04 DOI: 10.1109/IROS.2016.7759489
Graeme Best, J. Faigl, R. Fitch
We propose a self-organising map (SOM) algorithm as a solution to a new multi-goal path planning problem for active perception and data collection tasks. We optimise paths for a multi-robot team that aims to maximally observe a set of nodes in the environment. The selected nodes are observed by visiting associated viewpoint regions defined by a sensor model. The key problem characteristics are that the viewpoint regions are overlapping polygonal continuous regions, each node has an observation reward, and the robots are constrained by travel budgets. The SOM algorithm jointly selects and allocates nodes to the robots and finds favourable sequences of sensing locations. The algorithm has polynomial-bounded runtime independent of the number of robots. We demonstrate feasibility for the active perception task of observing a set of 3D objects. The viewpoint regions consider sensing ranges and self-occlusions, and the rewards are measured as discriminability in the ensemble of shape functions feature space. Simulations were performed using a 3D point cloud dataset from a real robot in a large outdoor environment. Our results show the proposed methods enable multi-robot planning for budgeted active perception tasks with continuous sets of candidate viewpoints and long planning horizons.
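The self-organising-map mechanism at the heart of this approach can be sketched in its classic single-robot form: a ring of neurons is repeatedly pulled toward sampled goal locations, so the ring unfolds into a short closed tour. Goal positions, neuron count, and schedules below are illustrative; the paper additionally allocates nodes across a team, handles polygonal viewpoint regions, and enforces travel budgets:

```python
import math
import random

# Minimal SOM-for-touring sketch: adapt a neuron ring toward goal points.
# All parameters are illustrative assumptions.

def som_tour(goals, n_neurons=20, iters=2000, seed=1):
    random.seed(seed)
    ring = [(random.random(), random.random()) for _ in range(n_neurons)]
    for t in range(iters):
        lr = 0.8 * (1 - t / iters)                       # decaying learn rate
        radius = max(1, int(n_neurons * 0.3 * (1 - t / iters)))
        gx, gy = random.choice(goals)                    # sample a goal
        # winner = neuron closest to the sampled goal
        w = min(range(n_neurons), key=lambda i: math.dist(ring[i], (gx, gy)))
        for d in range(-radius, radius + 1):
            i = (w + d) % n_neurons                      # ring neighbourhood
            g = lr * math.exp(-(d * d) / (2.0 * radius * radius))
            x, y = ring[i]
            ring[i] = (x + g * (gx - x), y + g * (gy - y))
    return ring

goals = [(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]
ring = som_tour(goals)
```

After adaptation, reading the neurons off in ring order gives the visiting sequence; the multi-robot version runs one such ring per robot and competes for node assignments under each robot's budget.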
{"title":"Multi-robot path planning for budgeted active perception with self-organising maps","authors":"Graeme Best, J. Faigl, R. Fitch","doi":"10.1109/IROS.2016.7759489","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759489","url":null,"abstract":"We propose a self-organising map (SOM) algorithm as a solution to a new multi-goal path planning problem for active perception and data collection tasks. We optimise paths for a multi-robot team that aims to maximally observe a set of nodes in the environment. The selected nodes are observed by visiting associated viewpoint regions defined by a sensor model. The key problem characteristics are that the viewpoint regions are overlapping polygonal continuous regions, each node has an observation reward, and the robots are constrained by travel budgets. The SOM algorithm jointly selects and allocates nodes to the robots and finds favourable sequences of sensing locations. The algorithm has polynomial-bounded runtime independent of the number of robots. We demonstrate feasibility for the active perception task of observing a set of 3D objects. The viewpoint regions consider sensing ranges and self-occlusions, and the rewards are measured as discriminability in the ensemble of shape functions feature space. Simulations were performed using a 3D point cloud dataset from a real robot in a large outdoor environment. 
Our results show the proposed methods enable multi-robot planning for budgeted active perception tasks with continuous sets of candidate viewpoints and long planning horizons.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130501200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45
Dynamic arrival rate estimation for campus Mobility On Demand network graphs
Pub Date : 2016-10-01 DOI: 10.1109/IROS.2016.7759357
Justin Miller, Andres Hasfura, Shih‐Yuan Liu, J. How
Mobility On Demand (MOD) systems are revolutionizing transportation in urban settings by improving vehicle utilization and reducing parking congestion. A key factor in the success of an MOD system is the ability to measure and respond to real-time customer arrival data. Real-time traffic arrival rate data is traditionally difficult to obtain due to the need to install fixed sensors throughout the MOD network. This paper presents a framework for measuring pedestrian traffic arrival rates using sensors onboard the vehicles that make up the MOD fleet. A novel distributed fusion algorithm is presented which combines onboard LIDAR and camera sensor measurements to detect trajectories of pedestrians with a 90% detection hit rate and 1.5 false positives per minute. A novel moving observer method is introduced to estimate pedestrian arrival rates from pedestrian trajectories collected from mobile sensors.
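The basic rate-estimation step can be sketched as follows, treating pedestrian arrivals as a Poisson process whose rate is estimated as total arrivals over total observed time. The observation log is fabricated, and this sketch deliberately omits the moving-observer correction for the vehicle's own motion that the paper introduces:

```python
# Poisson-rate sketch: each vehicle pass contributes an observation interval;
# the maximum-likelihood rate is total counts divided by total observed time.
# (duration_seconds, pedestrians_counted) pairs below are made up.

observations = [(30, 2), (45, 5), (60, 4), (15, 1)]

total_time = sum(duration for duration, _ in observations)
total_count = sum(count for _, count in observations)
rate_per_minute = 60.0 * total_count / total_time
```

Here 12 arrivals over 150 seconds give 4.8 arrivals per minute; the moving-observer method adjusts such counts for the sensing platform travelling through the pedestrian flow rather than standing still.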
{"title":"Dynamic arrival rate estimation for campus Mobility On Demand network graphs","authors":"Justin Miller, Andres Hasfura, Shih‐Yuan Liu, J. How","doi":"10.1109/IROS.2016.7759357","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759357","url":null,"abstract":"Mobility On Demand (MOD) systems are revolutionizing transportation in urban settings by improving vehicle utilization and reducing parking congestion. A key factor in the success of an MOD system is the ability to measure and respond to real-time customer arrival data. Real time traffic arrival rate data is traditionally difficult to obtain due to the need to install fixed sensors throughout the MOD network. This paper presents a framework for measuring pedestrian traffic arrival rates using sensors onboard the vehicles that make up the MOD fleet. A novel distributed fusion algorithm is presented which combines onboard LIDAR and camera sensor measurements to detect trajectories of pedestrians with a 90% detection hit rate with 1.5 false positives per minute. A novel moving observer method is introduced to estimate pedestrian arrival rates from pedestrian trajectories collected from mobile sensors. 
The moving observer method is evaluated in both simulation and hardware and is shown to achieve arrival rate estimates comparable to those that would be obtained with multiple stationary sensors.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114971408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Runtime SES planning: Online motion planning in environments with stochastic dynamics and uncertainty
Pub Date : 2016-10-01 DOI: 10.1109/IROS.2016.7759705
H. Chiang, N. Rackley, Lydia Tapia
Motion planning in stochastic dynamic uncertain environments is critical in several applications such as human interacting robots, autonomous vehicles and assistive robots. In order to address these complex applications, several methods have been developed. The most successful methods often predict future obstacle locations in order to identify collision-free paths. Since prediction can be computationally expensive, offline computations are commonly used, and simplifications such as the inability to consider the dynamics of interacting obstacles or possible stochastic dynamics are often applied. Online methods can be preferable for simulating potential obstacle interactions, but recent methods have been restricted to Gaussian interaction processes and uncertainty. In this paper we present an online motion planning method, Runtime Stochastic Ensemble Simulation (Runtime SES) planning, an inexpensive method for predicting obstacle motion with generic stochastic dynamics while maintaining a high planning success rate despite the potential presence of obstacle position error. Runtime SES planning evaluates the likelihood of collision for any state-time coordinate around the robot by performing Monte Carlo simulations online. This prediction is used to construct a customized Rapidly Exploring Random Tree (RRT) in order to quickly identify paths that avoid obstacles while moving toward a goal. We demonstrate Runtime SES planning in problems that benefit from online predictions, environments with strongly-interacting obstacles with stochastic dynamics and positional error.
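The Monte Carlo collision-likelihood evaluation described above can be sketched as follows. The obstacle model (a drifting 1-D random walk), collision radius, and query points are illustrative assumptions, not the paper's dynamics:

```python
import random

# Sketch of ensemble-simulation collision estimation: forward-simulate many
# stochastic obstacle trajectories and count how many pass near a queried
# state-time coordinate. Obstacle dynamics here are purely illustrative.

def collision_probability(x_query, t_query, n_sims=5000, seed=7):
    random.seed(seed)
    hits = 0
    for _ in range(n_sims):
        x = 0.0
        for _ in range(t_query):
            x += random.gauss(0.5, 0.3)   # noisy drift toward +x each step
        if abs(x - x_query) < 0.5:        # within an assumed collision radius
            hits += 1
    return hits / n_sims

# A point on the obstacle's likely path vs. a point far off it.
p_on_path = collision_probability(x_query=2.5, t_query=5)
p_far = collision_probability(x_query=8.0, t_query=5)
```

A planner can then bias tree growth (as in the paper's customized RRT) away from state-time coordinates where this estimated likelihood is high.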
{"title":"Runtime SES planning: Online motion planning in environments with stochastic dynamics and uncertainty","authors":"H. Chiang, N. Rackley, Lydia Tapia","doi":"10.1109/IROS.2016.7759705","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759705","url":null,"abstract":"Motion planning in stochastic dynamic uncertain environments is critical in several applications such as human interacting robots, autonomous vehicles and assistive robots. In order to address these complex applications, several methods have been developed. The most successful methods often predict future obstacle locations in order identify collision-free paths. Since prediction can be computationally expensive, offline computations are commonly used, and simplifications such as the inability to consider the dynamics of interacting obstacles or possible stochastic dynamics are often applied. Online methods can be preferable to simulate potential obstacle interactions, but recent methods have been restricted to Gaussian interaction processes and uncertainty. In this paper we present an online motion planning method, Runtime Stochastic Ensemble Simulation (Runtime SES) planning, an inexpensive method for predicting obstacle motion with generic stochastic dynamics while maintaining a high planning success rate despite the potential presence of obstacle position error. Runtime SES planning evaluates the likelihood of collision for any state-time coordinate around the robot by performing Monte Carlo simulations online. This prediction is used to construct a customized Rapidly Exploring Random Tree (RRT) in order to quickly identify paths that avoid obstacles while moving toward a goal. We demonstrate Runtime SES planning in problems that benefit from online predictions, environments with strongly-interacting obstacles with stochastic dynamics and positional error. 
Through experiments that explore the impact of various parametrizations, robot dynamics and obstacle interaction models, we show that real-time capable planning with a high success rate is achievable in several complex environments.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"774 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115642669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Snake robots in contact with the environment: Influence of the configuration on the applied wrench
Pub Date : 2016-10-01 DOI: 10.1109/IROS.2016.7759567
F. Reyes, Shugen Ma
Robots capable of both locomotion and interaction with the environment are necessary for robots to move from ideal laboratory situations to real applications. Snake robots have been researched for locomotion in unstructured environments due to their unique and adaptable gaits. However, they have not been used to interact with the environment in a dexterous manner, for example to grasp or push an object. In this paper, the model of a snake robot in contact with external objects (or the environment) is derived. Coordinate-independent metrics are derived in order to quantify the wrench that a snake robot could exert on these objects. In particular, we show that the configuration of the robot plays a significant role in these metrics. The model and metrics are tested in a case study.
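How configuration influences such a metric can be illustrated with a planar serial chain as a crude stand-in for a snake robot pressing with its head. The link lengths and the manipulability-style score sqrt(det(J Jᵀ)) below are illustrative assumptions, not the paper's coordinate-independent metrics:

```python
import math

# Sketch: build the 2 x n Jacobian of a planar chain's end point and score a
# configuration by sqrt(det(J J^T)), a manipulability-style measure that
# vanishes in singular configurations. Everything here is illustrative.

def jacobian(angles, link=0.2):
    """Columns d(end)/d(angle_i) for a planar chain with equal link lengths."""
    xs, ys, th = [0.0], [0.0], 0.0
    for a in angles:
        th += a
        xs.append(xs[-1] + link * math.cos(th))
        ys.append(ys[-1] + link * math.sin(th))
    ex, ey = xs[-1], ys[-1]
    # revolute joint i contributes (-(ey - y_i), ex - x_i)
    return [(-(ey - ys[i]), ex - xs[i]) for i in range(len(angles))]

def capability(angles):
    J = jacobian(angles)
    a = sum(c[0] * c[0] for c in J)        # entries of the 2x2 matrix J J^T
    b = sum(c[0] * c[1] for c in J)
    d = sum(c[1] * c[1] for c in J)
    return math.sqrt(max(a * d - b * b, 0.0))

straight = capability([0.0, 0.0, 0.0, 0.0])   # fully extended: singular
curled = capability([0.5, 0.5, 0.5, 0.5])     # curved configuration
```

The fully extended chain scores zero (a singular configuration), while the curved one does not, which is the qualitative point: the configuration strongly shapes what the robot can exert.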
Snake robots in contact with the environment: Influence of the configuration on the applied wrench
Pub Date : 2016-10-01 DOI: 10.1109/IROS.2016.7759567
F. Reyes, Shugen Ma
Citations: 4
A deep-network solution towards model-less obstacle avoidance
Pub Date : 2016-10-01 DOI: 10.1109/IROS.2016.7759428
L. Tai, Shaohua Li, Ming Liu
Obstacle avoidance is a core problem for mobile robots. Its objective is to allow a mobile robot to explore an unknown environment without colliding with other objects, and it underpins tasks such as surveillance and rescue. Previous approaches mainly relied on geometric models (such as local cost-maps), which can be regarded as low-level intelligence without any cognitive process. Recently, deep learning has made great breakthroughs in computer vision, especially in recognition and cognitive tasks, by exploiting hierarchical models inspired by the structure of the human brain. To date, however, deep learning has seldom been used for control and decision making. Motivated by these advantages, we take indoor obstacle avoidance as an example to show the effectiveness of a hierarchical structure that fuses a convolutional neural network (CNN) with a decision process. The result is a highly compact network that takes raw depth images as input and produces control commands as output, achieving model-less obstacle avoidance. We test our approach in real-world indoor environments and report the new findings and results at the end of the paper.
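The depth-image-to-command pipeline can be sketched very roughly as follows. This is a hand-written illustration, not the paper's trained CNN: the single 3x3 averaging kernel, the left/center/right image split, and the command names are all invented for this example, whereas the actual system learns its convolutional filters and decision mapping from data.

```python
def conv2d_valid(img, kernel):
    """Valid-mode 2-D convolution over a depth image (pure Python, illustration only)."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += img[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def command_from_depth(depth):
    """Map a raw depth image (larger value = farther) to a discrete motion command."""
    smooth = conv2d_valid(depth, [[1 / 9.0] * 3 for _ in range(3)])  # 3x3 box filter
    w = len(smooth[0])
    sums, counts = [0.0, 0.0, 0.0], [0, 0, 0]
    for row in smooth:
        for j, v in enumerate(row):
            k = min(3 * j // w, 2)          # assign column to left/center/right third
            sums[k] += v
            counts[k] += 1
    left, center, right = (s / c for s, c in zip(sums, counts))
    if center >= max(left, right):          # clearest (deepest) path is straight ahead
        return "forward"
    return "turn_left" if left > right else "turn_right"

# A 6x6 frame whose right half is near (0.2 m) and left half open (5 m):
wall_right = [[5.0] * 3 + [0.2] * 3 for _ in range(6)]
```

Here `command_from_depth(wall_right)` steers away from the near side; the learned network in the paper replaces both the hand-set filter and the threshold logic with layers trained end-to-end.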
Citations: 177