Online Utility-Optimal Trajectory Design for Time-Varying Ocean Environments
Mohan Krishna Nutalapati, Shruti Joshi, K. Rajawat
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8794365 | ICRA 2019, pp. 6853-6859
This paper considers the problem of online optimal trajectory design in time-varying environments. Of particular interest is the design of energy-efficient trajectories under strong and uncertain disturbances in ocean environments with a time-varying goal location. We formulate the problem within the constrained online convex optimization framework and motivate a modified online gradient descent algorithm. The mobility constraints are met through a carefully chosen step size, and the proposed algorithm is shown to incur sublinear regret. Unlike state-of-the-art algorithms that plan and re-plan the full trajectory from forecast data at each time instant, the proposed algorithm is entirely online and relies mostly on the ocean current velocity measured at the vehicle's location. The trade-off between the excess delay incurred in reaching the goal and the overall energy consumption is examined via numerical tests on real data obtained from the Regional Ocean Modeling System (ROMS). Compared with the state-of-the-art algorithms, the proposed algorithm is not only energy-efficient but also several orders of magnitude more computationally efficient.
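As a concrete picture of the update described above, here is a minimal sketch of a constrained online gradient step, assuming a simple quadratic surrogate cost that penalizes distance to the (possibly moving) goal and the energy needed to counteract the measured current. The function names and the clipping used to enforce the mobility constraint are our own assumptions; the paper obtains the constraint through its choice of step size.

```python
import numpy as np

def online_trajectory_step(x, goal, current_vel, eta, v_max, dt):
    """One online gradient step for a vehicle at 2D position x.

    current_vel is the ocean current measured at the vehicle's location;
    the surrogate cost trades distance-to-goal against the energy of
    counteracting the current. Both terms are illustrative assumptions.
    """
    grad = 2.0 * (x - goal) - 2.0 * current_vel
    step = -eta * grad
    # Mobility constraint: the vehicle can move at most v_max * dt per step
    # (clipped here; the paper meets this constraint via the step size).
    norm = np.linalg.norm(step)
    if norm > v_max * dt:
        step *= (v_max * dt) / norm
    return x + step

x, goal = np.array([0.0, 0.0]), np.array([50.0, 20.0])
for _ in range(200):
    current = np.array([0.3, -0.1])  # stand-in for an online measurement
    x = online_trajectory_step(x, goal, current, eta=0.05, v_max=1.0, dt=1.0)
```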
{"title":"Online Utility-Optimal Trajectory Design for Time-Varying Ocean Environments","authors":"Mohan Krishna Nutalapati, Shruti Joshi, K. Rajawat","doi":"10.1109/ICRA.2019.8794365","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794365","url":null,"abstract":"This paper considers the problem of online optimal trajectory design under time-varying environments. Of particular interest is the design of energy-efficient trajectories under strong and uncertain disturbances in ocean environments and time-varying goal location. We formulate the problem within the constrained online convex optimization formalism, and a modified online gradient descent algorithm is motivated. The mobility constraints are met using a carefully chosen stepsize, and the proposed algorithm is shown to incur sublinear regret. Different from the state-of-the-art algorithms that entail planning and re-planning the full trajectory using forecast data at each time instant, the proposed algorithm is entirely online and relies mostly on the current ocean velocity measurements at the vehicle locations. The trade-off between excess delay incurred in reaching the goal and the overall energy consumption is examined via numerical tests carried out on real data obtained from the regional ocean modelling system. As compared to the state-of-the-art algorithms, the proposed algorithm is not only energy-efficient but also several orders of magnitude computationally efficient.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"44 1","pages":"6853-6859"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73784598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Reconfigurable Variable Stiffness Manipulator by a Sliding Layer Mechanism
D. Li, Zerui Wang, Bo Ouyang, Yunhui Liu
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8793571 | ICRA 2019, pp. 3976-3982
Inherent compliance plays an enabling role in soft robots, which rely on it to mechanically conform to the environment. However, it also limits the robots' payload. Various variable-stiffness approaches have been adopted to limit compliance and provide structural stability, but most can only stiffen discrete, fixed regions, meaning compliance cannot be precisely adjusted for different needs. This paper offers an approach to enhance the payload with finely adjustable compliance. We have developed a manipulator that incorporates a novel variable stiffness mechanism and a sliding layer mechanism. The variable stiffness mechanism achieves a stiffness-change ratio of 6.4 in a miniaturized package (10 mm diameter for the test prototype) by interlocking jamming layers with a honeycomb core. The sliding layer mechanism can actively shift the position of the stiffened regions by sliding the jamming layers. A model to predict the robot's shape is derived and experimentally verified, and the stiffening capacity of the variable stiffness mechanism is evaluated empirically. A case study of a potential application in laparoscopic surgery is showcased. The payload of the manipulator is investigated, and after reconfiguration the prototype shows up to a 57.8% decrease in vertical deflection under an external load.
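To make the effect of a sliding stiffened region concrete, below is a small numerical sketch (our own toy model, not the paper's shape model): a cantilevered segmented arm under a tip load, with tip deflection computed by the unit-load method, delta = F * integral over (L - s)^2 / EI(s) ds. Shifting a stiffened region (jammed layers, using the reported ratio of 6.4) toward the base reduces deflection far more than stiffening near the tip; all dimensions are illustrative.

```python
import numpy as np

def tip_deflection(F, L, EI, n=2000):
    """Unit-load method for a cantilever with tip load F:
    delta = F * integral_0^L (L - s)^2 / EI(s) ds (Riemann sum)."""
    s = np.linspace(0.0, L, n)
    return F * np.sum((L - s) ** 2 / EI(s)) * (L / n)

EI_SOFT, RATIO, L, F = 0.05, 6.4, 0.3, 1.0   # illustrative values only

def profile(start, width):
    """EI(s) with a single stiffened (jammed) region [start, start+width]."""
    return lambda s: np.where((s >= start) & (s <= start + width),
                              EI_SOFT * RATIO, EI_SOFT)

print("all soft   :", tip_deflection(F, L, lambda s: np.full_like(s, EI_SOFT)))
print("stiff @base:", tip_deflection(F, L, profile(0.00, 0.10)))
print("stiff @tip :", tip_deflection(F, L, profile(0.20, 0.10)))
```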
{"title":"A Reconfigurable Variable Stiffness Manipulator by a Sliding Layer Mechanism","authors":"D. Li, Zerui Wang, Bo Ouyang, Yunhui Liu","doi":"10.1109/ICRA.2019.8793571","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8793571","url":null,"abstract":"Inherent compliance plays an enabling role in soft robots, which rely on it to mechanically conform to the environment. However, it also limits the payload of the robots. Various variable stiffness approaches have been adopted to limit compliance and provide structural stability, but most of them can only achieve stiffening of discrete fixed regions which means compliance cannot be precisely adjusted for different needs. This paper offers an approach to enhance the payload with finely adjusted compliance for different needs. We have developed a manipulator that incorporates a novel variable stiffness mechanism and a sliding layer mechanism. The variable stiffness mechanism can achieve a 6.4 stiffness changing ratio with a miniaturized size (10mm diameter for the testing prototype) through interlocking jamming layers with a honeycomb core. The sliding layer mechanism can actively shift the position of the stiffening regions through sliding of jamming layers. A model to predict the robot shape is derived with verifications via an experiment. The stiffening capacity of the variable stiffness mechanism is also empirically evaluated. A case study of a potential application in laparoscopic surgeries is showcased. The payload of the manipulator is investigated, and the prototype shows up to 57.8 percentage decrease of the vertical deflection due to an external load after reconfigurations.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"170 1","pages":"3976-3982"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90388100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Recursive Integrity Monitoring for Mobile Robot Localization Safety
Guillermo Duenas Arana, O. A. Hafez, M. Joerger, M. Spenko
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8794115 | ICRA 2019, pp. 305-311
This paper presents a new methodology for quantifying robot localization safety by evaluating integrity risk, a performance metric widely used in open-sky aviation applications that has recently been extended to mobile ground robots. Here, the robot is localized by feeding relative measurements to mapped landmarks into an Extended Kalman Filter, while a sequence of innovations is evaluated for fault detection. The main contribution is the derivation of a sequential chi-squared integrity monitoring methodology that maintains constant computational requirements by employing a preceding time window while remaining robust against faults occurring prior to that window. Additionally, no assumptions are made on the nature or shape of the faults, because safety is evaluated under the worst possible combination of sensor faults.
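As background for the monitor described above, a standard windowed chi-squared innovation test is sketched below; this is the textbook building block, not the paper's exact recursive formulation. Under fault-free operation each EKF innovation is zero-mean Gaussian with its innovation covariance, so the normalized squared innovations summed over the window follow a chi-squared distribution.

```python
import numpy as np
from scipy.stats import chi2

def windowed_chi2_test(innovations, covariances, alpha=1e-5):
    """Fault test over a preceding time window of EKF innovations.

    innovations: list of innovation vectors gamma_k (1-D arrays).
    covariances: matching list of innovation covariances S_k.
    Returns (statistic, threshold, fault_detected).
    """
    q = sum(float(g @ np.linalg.solve(S, g))
            for g, S in zip(innovations, covariances))
    dof = sum(g.size for g in innovations)  # total measurement dimension
    threshold = chi2.ppf(1.0 - alpha, dof)  # alpha = allowed false-alarm rate
    return q, threshold, q > threshold
```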
{"title":"Recursive Integrity Monitoring for Mobile Robot Localization Safety","authors":"Guillermo Duenas Arana, O. A. Hafez, M. Joerger, M. Spenko","doi":"10.1109/ICRA.2019.8794115","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794115","url":null,"abstract":"This paper presents a new methodology to quantify robot localization safety by evaluating integrity risk, a performance metric widely used in open-sky aviation applications that has been recently extended to mobile ground robots. Here, a robot is localized by feeding relative measurements to mapped landmarks into an Extended Kalman Filter while a sequence of innovations is evaluated for fault detection. The main contribution is the derivation of a sequential chi-squared integrity monitoring methodology that maintains constant computation requirements by employing a preceding time window and, at the same time, is robust against faults occurring prior to the window. Additionally, no assumptions are made on either the nature or shape of the faults because safety is evaluated under the worst possible combination of sensor faults.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"200 1","pages":"305-311"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72836091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Sensor-Failure-Resilient Multi-IMU Visual-Inertial Navigation
Kevin Eckenhoff, Patrick Geneva, G. Huang
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8794295 | ICRA 2019, pp. 3542-3548
In this paper, we present a real-time multi-IMU visual-inertial navigation system (mi-VINS) that utilizes information from multiple inertial measurement units (IMUs) and is thus resilient to IMU sensor failures. In the proposed mi-VINS formulation, one IMU serves as the “base” of the system, while the rest act as auxiliary sensors aiding state estimation. A key advantage of this architecture is the ability to seamlessly “promote” an auxiliary IMU to be the new base, for example upon detection of a base IMU failure, thus removing the single point of sensor failure found in conventional VINS. Moreover, in order to properly fuse the information from multiple IMUs, both the spatial (relative pose) and temporal (time offset) calibration parameters between each sensor and the base IMU are estimated online. The proposed mi-VINS with online spatial and temporal calibration is validated in both simulations and real-world experiments, and is shown to provide accurate localization and calibration even in scenarios with IMU sensor failures.
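The base/auxiliary bookkeeping can be pictured with the toy failover sketch below (names and structure are our own; the actual estimator update is far more involved). Because each auxiliary's spatial and temporal calibration to the base is already estimated online, promotion amounts to re-rooting the calibration chain at the new base.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Imu:
    name: str
    healthy: bool = True
    T_base_imu: Optional[object] = None  # relative pose to current base (estimated online)
    t_offset: float = 0.0                # time offset to current base (estimated online)

class MultiImuManager:
    """Tracks which IMU propagates the state and promotes an auxiliary
    when the base fails; a sketch of the architecture, not the estimator."""

    def __init__(self, imus):
        self.imus = list(imus)
        self.base = self.imus[0]

    def report_failure(self, name):
        failed = next(i for i in self.imus if i.name == name)
        failed.healthy = False
        if failed is self.base:
            candidates = [i for i in self.imus if i.healthy]
            if not candidates:
                raise RuntimeError("no healthy IMU remaining")
            self.base = candidates[0]  # promote; its calibration is already known
```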
{"title":"Sensor-Failure-Resilient Multi-IMU Visual-Inertial Navigation","authors":"Kevin Eckenhoff, Patrick Geneva, G. Huang","doi":"10.1109/ICRA.2019.8794295","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794295","url":null,"abstract":"In this paper, we present a real-time multi-IMU visual-inertial navigation system (mi-VINS) that utilizes the information from multiple inertial measurement units (IMUs) and thus is resilient to IMU sensor failures. In particular, in the proposed mi-VINS formulation, one of the IMUs serves as the “base” of the system, while the rest act as auxiliary sensors aiding in state estimation. A key advantage of this architecture is the ability to seamlessly “promote” an auxiliary IMU as a new base, for example, upon detection of the base IMU failure, thus being resilient to the single point of sensor failure as seen in conventional VINS. Moreover, in order to properly fuse the information of multiple IMUs, both the spatial (relative pose) and temporal (time offset) calibration parameters between each sensor and the base IMU are estimated online. The proposed miVINS with online spatial and temporal calibration is validated in both simulations and real-world experiments, and is shown to be able to provide accurate localization and calibration even in scenarios with IMU sensor failures.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"49 1","pages":"3542-3548"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76791244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Robotics Education and Research at Scale: A Remotely Accessible Robotics Development Platform
Wolfgang Wiedmeyer, M. Mende, Dennis Hartmann, R. Bischoff, C. Ledermann, T. Kröger
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8793976 | ICRA 2019, pp. 3679-3685
This paper introduces the KUKA Robot Learning Lab at KIT – a remotely accessible robotics testbed. The motivation behind the laboratory is to make state-of-the-art industrial lightweight robots more accessible for education and research, since such expensive hardware is usually not available to students or less-resourced researchers for experiments. This paper describes the design and operation of the Robot Learning Lab and discusses the challenges of making experimental robot cells remotely accessible. In particular, safety and security must be ensured while giving users as much freedom as possible in developing programs to control the robots. A fully automated and efficient processing pipeline for experiments makes the lab suitable for a large number of users and allows a high utilization rate of the robots.
{"title":"Robotics Education and Research at Scale: A Remotely Accessible Robotics Development Platform","authors":"Wolfgang Wiedmeyer, M. Mende, Dennis Hartmann, R. Bischoff, C. Ledermann, T. Kröger","doi":"10.1109/ICRA.2019.8793976","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8793976","url":null,"abstract":"This paper introduces the KUKA Robot Learning Lab at KIT – a remotely accessible robotics testbed. The motivation behind the laboratory is to make state-of-the-art industrial lightweight robots more accessible for education and research. Such expensive hardware is usually not available to students or less privileged researchers to conduct experiments. This paper describes the design and operation of the Robot Learning Lab and discusses the challenges that one faces when making experimental robot cells remotely accessible. Especially safety and security must be ensured, while giving users as much freedom as possible when developing programs to control the robots. A fully automated and efficient processing pipeline for experiments makes the lab suitable for a large amount of users and allows a high usage rate of the robots.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"3679-3685"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88503731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Mechanical Framework Design with Experimental Verification of a Wearable Exoskeleton Chair
Bin Han, Zihao Du, Tiantian Huang, Tao Zhang, Zhiyuan Li, Ou Bai, Xuedong Chen
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8794466 | ICRA 2019, pp. 4040-4045
In this study, a human-chair model was developed as the basis for a wearable chair design, and a prototype chair, HUST-EC, was fabricated and evaluated. Using an interior-point penalty-function optimization, the operating mode with the lowest chair height was optimized in simulation. Solid models were built with the finite element analysis program embedded in SolidWorks, which showed that the designed chair provides stable support to the user. An electromyography (EMG) test platform was developed, consisting of four EMG sensors, MATLAB-based acquisition software, and a loaded vest. Four healthy subjects participated in the evaluation experiment, in which EMG signals were collected from the rectus femoris, biceps femoris, vastus medialis, and vastus lateralis under different loads and chair angles. The experimental data demonstrate that (1) the HUST-EC greatly reduces muscle activation across a variety of loads and bending angles; (2) under the same load, muscle activation decreases slightly as the bending angle increases; and (3) at the same bending angle, muscle activation increases slightly with the load. The results show that the designed chair can effectively reduce workers' physical burden and may improve work efficiency.
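Muscle activation from surface EMG is commonly estimated with a linear-envelope pipeline like the sketch below (a generic pipeline under our own parameter choices, not necessarily the authors' exact processing): band-pass filter, full-wave rectify, low-pass filter, then normalize by a maximum-voluntary-contraction (MVC) reference.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def muscle_activation(emg, fs, mvc_peak, band=(20.0, 450.0), lp_cut=6.0):
    """Linear-envelope activation estimate, normalized to the MVC peak.

    emg: raw EMG samples; fs: sampling rate in Hz (must exceed 2*band[1]);
    mvc_peak: peak envelope from a maximum-voluntary-contraction trial.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, emg - np.mean(emg))   # remove offset, band-pass
    b, a = butter(4, lp_cut / (fs / 2), btype="low")
    envelope = filtfilt(b, a, np.abs(x))     # rectify, then smooth
    return envelope / mvc_peak               # fraction of MVC
```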
{"title":"Mechanical Framework Design with Experimental Verification of a Wearable Exoskeleton Chair","authors":"Bin Han, Zihao Du, Tiantian Huang, Tao Zhang, Zhiyuan Li, Ou Bai, Xuedong Chen","doi":"10.1109/ICRA.2019.8794466","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794466","url":null,"abstract":"In this study, a human-chair model was developed as the basis for a wearable chair design. A prototype chair, HUST-EC, was fabricated and evaluated. Employing the optimization under an inner point penalty function, an optimized simulation of the operating mode with the lowest chair height was implemented. The solid models were established by using the finite element analysis program embedded in Solidworks, which revealed that the support from the designed chair was steady to the user. An electromyography (EMG) test platform has been developed, consisting of four EMG sensors, a MATLAB-based acquisition software, and a loaded vest. Four healthy subjects participated in the evaluation experiment, in which EMGs were collected from the muscle groups of rectus femoris, biceps femoris, vastus medialis, and vastus lateralis under different loads and chair angles. The experimental data demonstrate that (1) the HUST-EC can greatly reduce muscle activation at a variety of loads and bending angles; (2) under the same load, the muscle activation decreases slightly with an increased bending angle; and (3) at the same bending angle, muscle activation increases slightly with an increased load. The results show that the designed chair can effectively reduce the physical burden in workers and may improve work efficiency.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"66 1","pages":"4040-4045"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84035925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Security-Aware Synthesis of Human-UAV Protocols
Mahmoud Mokhtar Elfar, Haibei Zhu, M. L. Cummings, M. Pajic
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8794385 | ICRA 2019, pp. 8011-8017
In this work, we synthesize collaboration protocols for human-unmanned aerial vehicle (H-UAV) command and control systems, in which the human operator helps secure the UAV by intermittently performing geolocation tasks to confirm its reported location. We first present a stochastic game-based model of the system that accounts for both the operator and an adversary capable of launching stealthy false-data injection attacks that cause the UAV to deviate from its path. We also describe a synthesis challenge arising from the UAV's hidden-information constraint. Next, we perform human-subject experiments on our RESCHU-SA testbed to identify the geolocation strategies that operators adopt. Furthermore, we apply machine learning techniques to the collected experimental data to predict, from geographical features, whether a geolocation task at a given location will be performed correctly. By representing the model as a delayed-action game and formalizing the system objectives, we use off-the-shelf model checkers to synthesize protocols for the human-UAV coalition that satisfy these objectives. Finally, we demonstrate the usefulness of the H-UAV protocol synthesis through a case study in which the protocols are experimentally analyzed and further evaluated by human operators.
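The learning step, predicting whether a geolocation task at a given location will be answered correctly from its geographical features, can be sketched as a plain binary classifier; the feature names, synthetic data, and model choice below are placeholders, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Placeholder features per task: landmark density, road density, terrain variance.
X = rng.random((n, 3))
# Synthetic labels: landmark-rich locations tend to be geolocated correctly.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(n) > 0.9).astype(int)

clf = LogisticRegression().fit(X, y)
print("5-fold accuracy:", cross_val_score(LogisticRegression(), X, y, cv=5).mean())
print("P(correct) at a new location:", clf.predict_proba([[0.8, 0.5, 0.2]])[0, 1])
```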
{"title":"Security-Aware Synthesis of Human-UAV Protocols","authors":"Mahmoud Mokhtar Elfar, Haibei Zhu, M. L. Cummings, M. Pajic","doi":"10.1109/ICRA.2019.8794385","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794385","url":null,"abstract":"In this work, we synthesize collaboration protocols for human-unmanned aerial vehicle (H-UAV) command and control systems, where the human operator aids in securing the UAV by intermittently performing geolocation tasks to confirm its reported location. We first present a stochastic game-based model for the system that accounts for both the operator and an adversary capable of launching stealthy false-data injection attacks, causing the UAV to deviate from its path. We also describe a synthesis challenge due to the UAV’s hidden-information constraint. Next, we perform human experiments using a developed RESCHU-SA testbed to recognize the geolocation strategies that operators adopt. Furthermore, we deploy machine learning techniques on the collected experimental data to predict the correctness of a geolocation task at a given location based on its geographical features. By representing the model as a delayed-action game and formalizing the system objectives, we utilize off-the-shelf model checkers to synthesize protocols for the human-UAV coalition that satisfy these objectives. Finally, we demonstrate the usefulness of the H-UAV protocol synthesis through a case study where the protocols are experimentally analyzed and further evaluated by human operators.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"7 1","pages":"8011-8017"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91470152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Improving Underwater Obstacle Detection using Semantic Image Segmentation
B. Arain, C. McCool, P. Rigby, Daniel Cagara, M. Dunbabin
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8793588 | ICRA 2019, pp. 9271-9277
This paper presents two novel approaches for improving image-based underwater obstacle detection by combining sparse stereo point clouds with monocular semantic image segmentation. Generating accurate image-based obstacle maps in cluttered underwater environments, such as coral reefs, is essential for robust robotic path planning and navigation. However, these maps can be degraded by factors including visibility, lighting, and dynamic objects (e.g., fish), which may lead to falsely identified free space or to dynamic objects that trajectory planners react to undesirably. We propose combining feature-based stereo matching with learning-based segmentation to produce a more robust obstacle map. The approach considers both direct binary learning of the presence or absence of underwater obstacles and a multiclass learning approach that classifies obstacle distance (near, mid, and far) in the scene. We also enhance the binary map with depth information from sparse stereo matching to produce 3D obstacle maps of the scene. Performance is evaluated on field data collected in cluttered and, at times, visually degraded coral reef environments. The results show improved image-wide obstacle detection, rejection of transient objects (such as fish), and range estimation compared to feature-based sparse and dense stereo point clouds alone.
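One plausible way to realize the fusion described above is sketched below (our own construction with placeholder thresholds): each connected obstacle region in the segmentation mask inherits the median depth of the sparse stereo points that fall inside it, yielding a near/mid/far obstacle map.

```python
import numpy as np
from scipy import ndimage

def fuse_obstacle_map(seg_mask, uv, depth, near_t=2.0, far_t=8.0):
    """seg_mask: HxW bool (True = obstacle) from the segmentation network.
    uv: Nx2 pixel coordinates of sparse stereo features; depth: N ranges (m).
    Returns HxW int8: 0 free, 1 near, 2 mid, 3 far, 4 obstacle without range.
    """
    labels, n_regions = ndimage.label(seg_mask)
    out = np.zeros(seg_mask.shape, dtype=np.int8)
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    for r in range(1, n_regions + 1):
        region = labels == r
        inside = region[v, u]          # stereo points landing in this region
        if inside.any():
            d = np.median(depth[inside])
            out[region] = 1 if d < near_t else (2 if d < far_t else 3)
        else:
            out[region] = 4            # segmented, but no range support
    return out
```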
{"title":"Improving Underwater Obstacle Detection using Semantic Image Segmentation","authors":"B. Arain, C. McCool, P. Rigby, Daniel Cagara, M. Dunbabin","doi":"10.1109/ICRA.2019.8793588","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8793588","url":null,"abstract":"This paper presents two novel approaches for improving image-based underwater obstacle detection by combining sparse stereo point clouds with monocular semantic image segmentation. Generating accurate image-based obstacle maps in cluttered underwater environments, such as coral reefs, are essential for robust robotic path planning and navigation. However, these maps can be challenged by factors including visibility, lighting and dynamic objects (e.g. fish) that may lead to falsely identified free space or dynamic objects which trajectory planners may react to undesirably. We propose combining feature-based stereo matching with learning-based segmentation to produce a more robust obstacle map. This approach considers direct binary learning of the presence or absence of underwater obstacles, as well as a multiclass learning approach to classify their distance (near, mid and far) in the scene. An enhancement to the binary map is also shown by including depth information from sparse stereo matching to produce 3D obstacle maps of the scene. The performance is evaluated using field data collected in cluttered, and at times, visually degraded coral reef environments. The results show improved image-wide obstacle detection, rejection of transient objects (such as fish), and range estimation compared to feature-based sparse and dense stereo point clouds alone.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"20 1","pages":"9271-9277"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90510389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Uncertainty Estimation for Projecting Lidar Points onto Camera Images for Moving Platforms
Charika De Alvis, Mao Shan, Stewart Worrall, E. Nebot
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8794424 | ICRA 2019, pp. 6637-6643
Combining multiple sensors for advanced perception is a crucial requirement for autonomous vehicle navigation. Heterogeneous sensors are used to obtain rich information about the surrounding environment. Combining camera and lidar sensors yields precise range information that can be projected onto the visual image data, giving a high-level understanding of the scene that can be used to enable context-based algorithms such as collision avoidance and navigation. The main challenge when combining these sensors is aligning the data in a common frame, which is difficult due to errors in the intrinsic calibration of the camera, the extrinsic calibration between the camera and the lidar, and errors resulting from the motion of the platform. In this paper, we examine the algorithms required to provide motion correction for scanning lidar sensors. The error from projecting lidar measurements into a consistent odometry frame cannot be removed entirely, so it is essential to incorporate the uncertainty of this projection when combining the two sensor frames. This work proposes a novel framework for predicting the uncertainty of lidar measurements (in 3D) projected into the image frame (in 2D) for moving platforms. The proposed approach fuses the uncertainty of the motion correction with the uncertainty resulting from errors in the extrinsic and intrinsic calibration. By incorporating the main components of the projection error, the uncertainty of the estimation process is better represented. Experimental results for our motion correction algorithm and the proposed extended uncertainty model are demonstrated using real-world data collected on an electric vehicle equipped with wide-angle cameras covering a 180-degree field of view and a 16-beam scanning lidar.
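The core operation, first-order propagation of a 3D point covariance through the pinhole projection, looks like the sketch below (standard Jacobian propagation; the paper's full framework also folds motion-correction and calibration terms into the input covariance upstream of this step).

```python
import numpy as np

def project_with_uncertainty(p_cam, cov_cam, fx, fy, cx, cy):
    """Project a camera-frame point and its 3x3 covariance to the image.

    p_cam: (X, Y, Z) with Z > 0; cov_cam: covariance assumed to combine
    motion-correction and extrinsic/intrinsic calibration errors.
    Returns the pixel location and its 2x2 covariance.
    """
    X, Y, Z = p_cam
    uv = np.array([fx * X / Z + cx, fy * Y / Z + cy])
    # Jacobian of the pinhole projection with respect to (X, Y, Z).
    J = np.array([[fx / Z, 0.0,    -fx * X / Z**2],
                  [0.0,    fy / Z, -fy * Y / Z**2]])
    return uv, J @ cov_cam @ J.T
```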
{"title":"Uncertainty Estimation for Projecting Lidar Points onto Camera Images for Moving Platforms","authors":"Charika De Alvis, Mao Shan, Stewart Worrall, E. Nebot","doi":"10.1109/ICRA.2019.8794424","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794424","url":null,"abstract":"Combining multiple sensors for advanced perception is a crucial requirement for autonomous vehicle navigation. Heterogeneous sensors are used to obtain rich information about the surrounding environment. The combination of the camera and lidar sensors enables precise range information that can be projected onto the visual image data. This gives a high level understanding of the scene which can be used to enable context based algorithms such as collision avoidance and navigation. The main challenge when combining these sensors is aligning the data into a common domain. This can be difficult due to the errors in the intrinsic calibration of the camera, extrinsic calibration between the camera and the lidar and errors resulting from the motion of the platform. In this paper, we examine the algorithms required to provide motion correction for scanning lidar sensors. The error resulting from the projection of the lidar measurements into a consistent odometry frame is not possible to remove entirely, and as such it is essential to incorporate the uncertainty of this projection when combining the two different sensor frames. This work proposes a novel framework for the prediction of the uncertainty of lidar measurements (in 3D) projected in to the image frame (in 2D) for moving platforms. The proposed approach fuses the uncertainty of the motion correction with uncertainty resulting from errors in the extrinsic and intrinsic calibration. By incorporating the main components of the projection error, the uncertainty of the estimation process is better represented. Experimental results for our motion correction algorithm and the proposed extended uncertainty model are demonstrated using real-world data collected on an electric vehicle equipped with wide-angle cameras covering a 180-degree field of view and a 16-beam scanning lidar.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"21 1","pages":"6637-6643"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84838791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Automated Models of Human Everyday Activity based on Game and Virtual Reality Technology
Andrei Haidu, M. Beetz
Pub Date: 2019-05-20 | DOI: 10.1109/ICRA.2019.8793859 | ICRA 2019, pp. 2606-2612
In this paper, we describe AMEvA (Automated Models of Everyday Activities), a special-purpose knowledge acquisition, interpretation, and processing system for human everyday manipulation activity that can automatically: (1) create and simulate virtual human living and working environments (such as kitchens and apartments) with a scope, extent, level of detail, physics, and near-photorealism that facilitate the natural and realistic execution of human everyday manipulation activities; (2) record human manipulation activities performed in the respective virtual reality environment, as well as their effects on the environment, and detect force-dynamic states and events; (3) decompose and segment the recorded activity data into meaningful motions and categorize the motions according to action models used in cognitive science; and (4) represent the interpreted activities symbolically in KNOWROB [1] using a first-order time-interval logic representation.
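Step (3) can be pictured with the toy segmentation sketch below (the event and label names are our own placeholders): the recording is cut at force-dynamic events, here hand-object contact changes, and the pieces between cuts become candidate motions.

```python
from dataclasses import dataclass

@dataclass
class ContactEvent:
    t: float          # time the hand-object contact state changed
    in_contact: bool  # contact state after the event

def segment_motions(t_start, t_end, events):
    """Cut [t_start, t_end] at force-dynamic events and label the pieces."""
    events = sorted(events, key=lambda e: e.t)
    cuts, labels, in_contact = [t_start], [], False
    for e in events:
        labels.append("transport" if in_contact else "reach")
        cuts.append(e.t)
        in_contact = e.in_contact
    labels.append("transport" if in_contact else "reach")
    cuts.append(t_end)
    return list(zip(cuts[:-1], cuts[1:], labels))

segments = segment_motions(0.0, 10.0,
    [ContactEvent(2.5, True), ContactEvent(7.0, False)])
# [(0.0, 2.5, 'reach'), (2.5, 7.0, 'transport'), (7.0, 10.0, 'reach')]
```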