Latest articles from the 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)

On 3D simulators for multi-robot systems in ROS: MORSE or Gazebo?
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088134
F. Noori, David Portugal, R. Rocha, M. Couceiro
Realistically simulating a population of robots has been an important subject to the robotics community for the last couple of decades. Multi-robot systems are often challenging to deploy in the real world due to the complexity involved, and researchers often develop and validate coordination mechanisms and collaborative robotic behavior preliminarily in simulations. Thus, choosing a useful, flexible and realistic simulator becomes an important task. In this paper, we overview several 3D multi-robot simulators, focusing on those that support the Robot Operating System (ROS). We also provide a comparative analysis, discussing two popular open-source 3D simulators compatible with ROS (MORSE and Gazebo), using a multi-robot patrolling application, i.e. a distributed security task, as a case study.
Citations: 38
Tempered point clouds and octomaps: A step towards true 3D temperature measurement in unknown environments
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088145
B. Zeise, Bernardo Wagner
Although the generation of 3D temperature maps has become a frequently used technique, not only in search and rescue applications but also during inspection tasks, the remote measurement of a surface's true temperature is still a huge challenge. In this work, we face the problem of creating corrected 3D temperature maps in unknown environments without prior knowledge of surface emissivities. Using a calibrated sensor stack consisting of a 3D laser range finder and a thermal imaging camera, we generate Tempered Point Clouds (TPCs). With the help of the TPCs, we show how to perform a basic material classification, i.e. to make a distinction between metal and dielectric surface areas. For this purpose, we investigate measurements taken from different viewing angles. With the help of this approach, it is also possible to estimate corrected surface temperatures. The presented methods are evaluated making use of the OctoMap framework.
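The emissivity correction the authors aim at can be illustrated with a minimal radiometric model. The sketch below assumes the total Stefan-Boltzmann law with a single reflected-ambient term, a simplification of band-limited thermal-camera radiometry; the function name and the numbers used are illustrative, not from the paper.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def corrected_surface_temperature(t_apparent, emissivity, t_ambient):
    """Correct an apparent (blackbody-equivalent) temperature reading [K]
    for surface emissivity, assuming the measured radiance is a mix of
    emitted and reflected ambient radiation."""
    radiance = SIGMA * t_apparent ** 4                     # measured
    reflected = (1.0 - emissivity) * SIGMA * t_ambient ** 4
    return ((radiance - reflected) / (emissivity * SIGMA)) ** 0.25
```

For a perfect emitter (emissivity 1.0) the correction is the identity; for low-emissivity (e.g. metallic) surfaces hotter than ambient, the true temperature comes out above the apparent reading, which is why the metal/dielectric classification matters.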
Citations: 2
Position estimation of tethered micro unmanned aerial vehicle by observing the slack tether
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088157
Seiga Kiribayashi, Kaede Yakushigawa, K. Nagatani
At disaster sites, the use of Micro Unmanned Aerial Vehicles (MUAVs) is expected for human safety. One application is to support first-phase emergency restoration work conducted by teleoperated construction machines. To extend the operation time of a MUAV, the authors proposed a power-feeding tethered MUAV to provide an overhead view of the site to operators. The target application is to be used outdoors, so a robust and simple position estimation method for the MUAV is required. Therefore, in this paper, the authors propose a position estimation method for the MUAV by observing the slack tether instead of using the Global Positioning System (GPS), vision sensors, or a laser rangefinder. The tether shape is assumed to be a catenary curve that can be estimated by measuring the tether's length, tension, and outlet direction. To evaluate the proposed method, the authors developed a prototype of a helipad with a tether winding mechanism for the tethered MUAV, which contains a measurement function of the tether status. Some indoor experimental results proved the feasibility of the proposed method.
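The catenary-based position estimate can be sketched from the standard catenary equations: given the tether length, the tension at the outlet and the outlet elevation angle, the MUAV's offset from the ground station follows in closed form. The function name and the tether weight-per-metre value below are hypothetical, and the sketch ignores wind load on the tether.

```python
import math

def muav_position_from_tether(length, tension, angle, w=0.01):
    """Estimate the MUAV's (horizontal, vertical) offset [m] from the tether
    outlet, assuming an ideal catenary tether. length: tether length [m];
    tension: tension measured at the outlet [N]; angle: outlet elevation
    angle [rad]; w: tether weight per metre [N/m] (assumed value)."""
    # Catenary parameter a = horizontal tension / weight per unit length.
    a = tension * math.cos(angle) / w
    # Outlet position relative to the catenary vertex: the slope there is
    # tan(angle) = sinh(x0 / a), and arc length from vertex is a * tan(angle).
    x0 = a * math.asinh(math.tan(angle))
    s0 = a * math.tan(angle)
    # The MUAV sits a further `length` metres along the curve.
    s1 = s0 + length
    x1 = a * math.asinh(s1 / a)
    dx = x1 - x0                                       # horizontal offset
    dz = a * (math.cosh(x1 / a) - math.cosh(x0 / a))   # vertical offset
    return dx, dz
```

Because the chord of a catenary is always shorter than its arc, the returned offset always satisfies sqrt(dx^2 + dz^2) <= length, which is a convenient sanity check on measurements.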
Citations: 10
Safe navigation in dynamic, unknown, continuous, and cluttered environments
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088169
Mike D'Arcy, Pooyan Fazli, D. Simon
We introduce ProbLP, a probabilistic local planner, for safe navigation of an autonomous robot in dynamic, unknown, continuous, and cluttered environments. We combine the proposed reactive planner with an existing global planner and evaluate the hybrid in challenging simulated environments. The experiments show that our method achieves a 77% reduction in collisions over the straight-line local planner we use as a benchmark.
Citations: 7
Autonomous observation of multiple USVs from UAV while prioritizing camera tilt and yaw over UAV motion
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088154
C. Krishna, Mengdie Cao, R. Murphy
This paper proposes a scheme for observing cooperative Unmanned Surface Vehicles (USVs), using a rotorcraft Unmanned Aerial Vehicle (UAV) with camera movements (tilt and yaw) prioritized over UAV movements. Most current research considers a fixed-wing UAV for surveillance of multiple moving targets (MMT), whose functionality is limited to UAV movements alone. Simulation experiments verified that prioritizing camera movements increased the number of times each USV is visited (on average by 5.68 times), decreased the percentage of the duration during which the UAV is not observing any USV (on average by 19.8%), and increased efficiency by decreasing the distance traveled by the UAV (on average by 747 pixels) for the six test cases. Autonomous repositioning of the UAV at regular intervals to observe USVs during a disaster scenario will provide the operator with better situational awareness. Using a rotorcraft instead of a fixed-wing UAV gives the operator the flexibility to observe a target for the required duration by hovering, along with freedom of unrestricted movement, which helps improve the efficiency of target observation.
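The core idea of substituting camera motion for vehicle motion reduces to pointing geometry: a pan/tilt pair can re-center a USV in the image without repositioning the UAV. A toy sketch, where the function name and frame conventions (x-y ground plane, z up) are assumptions, not from the paper:

```python
import math

def camera_angles_to_target(uav_pos, target_pos):
    """Pan (yaw) and tilt angles [rad] that point a gimballed camera at a
    target without moving the UAV. Positions are (x, y, z) tuples in a
    world frame with z pointing up; positive tilt looks downward."""
    dx = target_pos[0] - uav_pos[0]
    dy = target_pos[1] - uav_pos[1]
    dz = target_pos[2] - uav_pos[2]
    yaw = math.atan2(dy, dx)                     # pan toward the target
    tilt = math.atan2(-dz, math.hypot(dx, dy))   # target below => tilt down
    return yaw, tilt
```

Re-pointing the camera this way costs no travel distance, which is consistent with the reported reduction in UAV distance traveled when camera movements are prioritized.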
Citations: 6
Vehicle detection and localization on bird's eye view elevation images using convolutional neural network
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088147
Shang-Lin Yu, Thomas Westfechtel, Ryunosuke Hamada, K. Ohno, S. Tadokoro
For autonomous vehicles, the ability to detect and localize surrounding vehicles is critical. It is fundamental for further processing steps like collision avoidance or path planning. This paper introduces a convolutional neural network-based vehicle detection and localization method using point cloud data acquired by a LIDAR sensor. Acquired point clouds are transformed into bird's eye view elevation images, where each pixel represents a grid cell of the horizontal x-y plane. We intentionally encode each pixel using three channels, namely the maximal, median and minimal height value of all points within the respective grid. A major advantage of this three-channel representation is that it allows us to utilize common RGB image-based detection networks without modification. The bird's eye view elevation images are processed by a two-stage detector. Due to the nature of the bird's eye view, each pixel of the image represents ground coordinates, meaning that the bounding box of a detected vehicle corresponds directly to the horizontal position of the vehicle. Therefore, in contrast to RGB-based detectors, we not only detect the vehicles but also simultaneously localize them in ground coordinates. To evaluate the accuracy of our method and its usefulness for further high-level applications like path planning, we evaluate the detection results based on the localization error in ground coordinates. Our proposed method achieves an average precision of 87.9% for an intersection over union (IoU) value of 0.5. In addition, 75% of the detected cars are localized with an absolute positioning error below 0.2 m.
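The three-channel (max/median/min height) encoding described above can be sketched in a few lines of NumPy. The grid resolution and ranges below are illustrative defaults, not the paper's values:

```python
import numpy as np

def bev_elevation_image(points, grid_res=0.1, x_range=(0.0, 4.0), y_range=(0.0, 4.0)):
    """Encode an (N, 3) point cloud as a bird's-eye-view image whose three
    channels hold the maximal, median and minimal height of the points
    falling into each grid cell of the horizontal x-y plane."""
    h = int(round((x_range[1] - x_range[0]) / grid_res))
    w = int(round((y_range[1] - y_range[0]) / grid_res))
    img = np.zeros((h, w, 3), dtype=np.float32)
    ix = np.floor((points[:, 0] - x_range[0]) / grid_res).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / grid_res).astype(int)
    inside = (ix >= 0) & (ix < h) & (iy >= 0) & (iy < w)
    ix, iy, z = ix[inside], iy[inside], points[inside, 2]
    for i, j in set(zip(ix, iy)):            # only occupied cells
        cell_z = z[(ix == i) & (iy == j)]
        img[i, j] = (cell_z.max(), np.median(cell_z), cell_z.min())
    return img
```

Since the result is an ordinary H x W x 3 array, it can be fed to any RGB detection network unchanged, which is the advantage the authors point out.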
Citations: 47
Monocular visual-inertial state estimation on 3D large-scale scenes for UAVs navigation
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088162
J. Su, Xutao Li, Yunming Ye, Yan Li
The direct method for visual odometry has gained popularity: it need not compute feature descriptors and uses the raw camera sensor values directly. Hence, it is very fast. However, its accuracy and consistency are not satisfactory. Based on these considerations, we propose a tightly-coupled, optimization-based method to fuse inertial measurement unit (IMU) and visual measurements, which uses IMU preintegration to provide a prior state for semi-direct tracking and uses the precise state estimate of visual odometry to optimize the IMU state estimate. Furthermore, we incorporate Kanade-Lucas-Tomasi tracking and a probabilistic depth filter such that pixels in environments with little or high-frequency texture can be efficiently tracked. Our approach obtains the gravity orientation in the initial IMU body frame and the scale information by using the monocular camera and IMU. More importantly, we do not need any prior landmark points. Our monocular visual-inertial state estimation is much faster and achieves better accuracy on benchmark datasets.
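IMU preintegration accumulates the relative rotation, velocity and position change between two camera frames from raw IMU samples, so the optimizer need not re-integrate them at every iteration. A toy Euler-integration sketch, omitting gravity compensation, sensor biases and the on-manifold treatment a full implementation would use:

```python
import numpy as np

def preintegrate(accels, gyros, dt):
    """Accumulate relative (rotation, velocity, position) deltas in the
    body frame of the first sample, from lists of accelerometer [m/s^2]
    and gyroscope [rad/s] readings sampled every dt seconds."""
    R = np.eye(3)
    v = np.zeros(3)
    p = np.zeros(3)
    for a, w in zip(accels, gyros):
        p = p + v * dt + 0.5 * (R @ a) * dt ** 2
        v = v + (R @ a) * dt
        # First-order rotation update via the so(3) hat map of w.
        wx = np.array([[0.0, -w[2], w[1]],
                       [w[2], 0.0, -w[0]],
                       [-w[1], w[0], 0.0]])
        R = R @ (np.eye(3) + wx * dt)
    return R, v, p
```

The resulting deltas serve as the motion prior between keyframes; the visual optimization then feeds its refined state back, which is the coupling the abstract describes.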
Citations: 0
Visual pose stabilization of tethered small unmanned aerial system to assist drowning victim recovery
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088149
J. Dufek, Xuesu Xiao, R. Murphy
This paper proposes a method for visual pose stabilization of Fotokite, a tethered small unmanned aerial system, using a forward-facing monocular camera. Conventionally, Fotokite stabilizes itself only relative to its tether and not relative to the global frame. It is, therefore, susceptible to environmental disturbances (especially wind) or motion of its ground station. Related work proposed visual stabilization for unmanned aerial systems using a downward-facing camera and homography estimation. The major disadvantage of this approach is that all the features used in the homography estimation must lie in the same plane. The method proposed in this paper works for features in different planes and can be used with a forward-facing camera. This paper is part of a bigger project on saving drowning victims using a lifesaving unmanned surface vehicle visually servoed by Fotokite to reach the victims. Some of the algorithms used are motion sensitive and, therefore, it is desirable for Fotokite to keep its pose relative to the world. The method presented in this paper prevents gradual drifting of Fotokite in windy conditions typical of coastal areas or when the ground station is on a boat. The quality of pose stabilization was quantitatively analyzed in 9 trials by measuring metric displacement from the initial pose. The achieved mean metric displacement was 34 cm. The results were also compared to 3 trials with no stabilization.
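The coplanarity limitation mentioned above can be shown numerically: a plane-induced homography H = R + t n^T / d transfers on-plane points exactly between two views, but leaves a residual for any point off the plane. The toy setup below (identity intrinsics, pure translation between views, plane z = 1) is purely illustrative:

```python
import numpy as np

# Plane-induced homography for the plane n.X = d (here z = 1), between two
# views related by a pure translation t, with identity camera intrinsics.
t = np.array([0.1, 0.0, 0.0])
n, d = np.array([0.0, 0.0, 1.0]), 1.0
H = np.eye(3) + np.outer(t, n) / d

def project(P):
    """Pinhole projection of a 3D point to normalized image coordinates."""
    return P[:2] / P[2]

def warp(H, x):
    """Apply a homography to a 2D point in homogeneous coordinates."""
    xh = H @ np.array([x[0], x[1], 1.0])
    return xh[:2] / xh[2]

P_on = np.array([0.5, 0.3, 1.0])    # lies on the plane z = 1
P_off = np.array([0.5, 0.3, 2.0])   # lies off the plane

# Transfer error: warp the first-view projection, compare with the true
# second-view projection (second camera center sits at -t, so X' = X + t).
err_on = np.linalg.norm(warp(H, project(P_on)) - project(P_on + t))
err_off = np.linalg.norm(warp(H, project(P_off)) - project(P_off + t))
```

The on-plane point transfers with zero error while the off-plane point does not, which is exactly why a homography-only stabilizer requires a (near-)planar scene under a downward-facing camera.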
Citations: 10
Robotic bridge statics assessment within strategic flood evacuation planning using low-cost sensors
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088133
Maik Benndorf, T. Haenselmann, Maximilian Garsch, N. Gebbeken, Christian A. Mueller, Tobias Fromm, T. Luczynski, A. Birk
Scenario: A rescue team needs to cross a partially damaged bridge in a flooded area. It is unknown whether the construction is still able to carry a vehicle. Assessing the construction's integrity can be accomplished by analyzing the bridge's eigenfrequencies. Rather than using expensive proprietary Vibration Measurement Systems (VMS), we propose to utilize off-the-shelf smartphones as sensors, which still need to be placed at the spot on the bridge best suited for picking up vibrations. Within this work, we use an Unmanned Ground Vehicle (UGV) featuring a robotic manipulator. It allows a non-technician operator to optimally place the device semi-automatically. We evaluate our approach in a real-life scenario. Demo video: https://youtu.be/u_3pe0nZ5tw
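Eigenfrequency estimation from a smartphone accelerometer can be sketched as picking the dominant peak of an FFT magnitude spectrum. The abstract does not detail the authors' actual processing chain, so this is only a plausible minimal version with illustrative names:

```python
import numpy as np

def dominant_frequency(accel, fs):
    """Return the frequency [Hz] of the strongest spectral peak in a
    vertical-acceleration trace sampled at fs Hz, as a crude candidate
    for the structure's fundamental eigenfrequency."""
    accel = np.asarray(accel, dtype=float)
    accel = accel - accel.mean()                 # remove gravity / DC bias
    windowed = accel * np.hanning(len(accel))    # reduce spectral leakage
    spec = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[np.argmax(spec)]
```

Comparing the peak found after a flood against a pre-damage baseline (a drop in eigenfrequency suggests reduced stiffness) is the kind of assessment the proposed system enables.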
Citations: 5
Intelligent vehicle for search, rescue and transportation purposes
Pub Date : 2017-10-01 DOI: 10.1109/SSRR.2017.8088148
Abdulla Al-Kaff, Francisco Miguel Moreno, A. D. L. Escalera, Jose M. Armingol
Recent developments in micro-electronics and computer vision have increased the demand for Unmanned Aerial Vehicles (UAVs) in several industrial and civil applications. This paper proposes a vision-based system for UAVs used in search, rescue and transportation. The proposed system is divided into two main parts. First, vision-based object detection and classification: a Kinect V2 sensor is used to extract objects from the ground plane and estimate their distance to the UAV; in addition, a Support Vector Machine (SVM) human detector based on Histograms of Oriented Gradients (HOG) features classifies human bodies among all detected objects. Second, a semi-autonomous reactive control for a visual servoing system is implemented to control the position and velocity of the UAV for performing safe approach maneuvers towards the detected objects. The proposed system has been validated in several real flights, and the obtained results show its high robustness and accuracy.
Citations: 11
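The abstract above pairs HOG features with an SVM classifier for human detection. To make the feature side concrete, here is a self-contained toy computation of cell-wise orientation histograms (the core of HOG, without block normalisation or the detection window) — an illustration only, not the authors' implementation:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG sketch for a grayscale image whose sides are multiples
    of `cell`: per-cell histograms of gradient orientation, weighted by
    gradient magnitude, over unsigned angles in [0, 180)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# A 16x16 image with a vertical edge: gradients point horizontally, so the
# energy lands in the orientation bin around 0 degrees.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
f = hog_features(img)  # 4 cells x 9 bins = 36 features
```

In a full detector these histograms would be normalised over overlapping blocks and fed to a linear SVM scanned over the image at multiple scales.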
Journal
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)