Probe-before-step walking strategy for multi-legged robots on terrain with risk of collapse
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197154
Eranda Tennakoon, T. Peynot, Jonathan M. Roberts, N. Kottege
Multi-legged robots are effective at traversing rough terrain. However, terrains that include collapsible footholds (i.e. regions that can collapse when stepped on) remain a significant challenge, especially since such situations can be extremely difficult to anticipate using only exteroceptive sensing. State-of-the-art methods typically use various stabilisation techniques to regain balance and counter changing footholds. However, these methods are likely to fail if safe footholds are sparse and spread out, or if the robot does not respond quickly enough after a foothold collapse. This paper presents a novel method for multi-legged robots to probe and test the terrain for collapses using their legs while walking. The proposed method improves on existing terrain-probing approaches and integrates the probing action into the walking cycle. A follow-the-leader strategy with a suitable gait and stance is presented and implemented on a hexapod robot. The proposed method is experimentally validated, demonstrating that the robot can safely traverse terrain containing collapsible footholds.
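The probing action itself is easy to picture: before committing body weight, the swing leg presses on the candidate foothold and watches for sinking. Below is a minimal sketch of such a load-ramp test, assuming a hypothetical `leg` interface with force control and foot-position feedback; the thresholds are illustrative, not taken from the paper.

```python
# Minimal sketch of a probe-before-step foothold test. The `leg` interface
# (force control plus foot-position feedback) and all thresholds are
# hypothetical, not the paper's controller.

def probe_foothold(leg, max_test_force=30.0, force_step=5.0, sink_tolerance=0.01):
    """Ramp up vertical load on a candidate foothold and watch for sinking.

    Returns True if the foothold holds up to max_test_force, False if the
    foot sinks more than sink_tolerance metres (foothold likely collapsible).
    """
    z0 = leg.foot_position().z           # height at first contact
    force = 0.0
    while force < max_test_force:
        force += force_step
        leg.apply_vertical_force(force)  # press down incrementally
        if z0 - leg.foot_position().z > sink_tolerance:
            leg.retract()                # abandon this foothold
            return False
    return True                          # safe to transfer body weight
```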
{"title":"Probe-before-step walking strategy for multi-legged robots on terrain with risk of collapse","authors":"Eranda Tennakoon, T. Peynot, Jonathan M. Roberts, N. Kottege","doi":"10.1109/ICRA40945.2020.9197154","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197154","url":null,"abstract":"Multi-legged robots are effective at traversing rough terrain. However, terrains that include collapsible footholds (i.e. regions that can collapse when stepped on) remain a significant challenge, especially since such situations can be extremely difficult to anticipate using only exteroceptive sensing. State-of-the-art methods typically use various stabilisation techniques to regain balance and counter changing footholds. However, these methods are likely to fail if safe footholds are sparse and spread out or if the robot does not respond quickly enough after a foothold collapse. This paper presents a novel method for multi-legged robots to probe and test the terrain for collapses using its legs while walking. The proposed method improves on existing terrain probing approaches, and integrates the probing action into a walking cycle. A follow-the-leader strategy with a suitable gait and stance is presented and implemented on a hexapod robot. The proposed method is experimentally validated, demonstrating the robot can safely traverse terrain containing collapsible footholds.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82885168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Camera-LiDAR Calibration with Sensor Semantic Information
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9196627
Yufeng Zhu, Chenghui Li, Yubo Zhang
As a crucial step in sensor data fusion, sensor calibration plays a vital role in many cutting-edge machine vision applications, such as autonomous vehicles and AR/VR. Existing techniques either require a considerable amount of manual work and complex environment setups, or lack robustness and are prone to producing suboptimal results. In this paper, we investigate the extrinsic calibration of an RGB camera and a light detection and ranging (LiDAR) sensor, two of the most widely used sensors in autonomous vehicles for perceiving the outdoor environment. Specifically, we introduce an online calibration technique that automatically computes the optimal rigid-motion transformation between the two sensors by maximizing the mutual information of their perceived data, without the need to tune environment settings. By formulating the calibration as an optimization problem with a novel calibration quality metric based on semantic features, we successfully and robustly align pairs of temporally synchronized camera and LiDAR frames in real time. Demonstrated on several autonomous driving tasks, our method outperforms state-of-the-art edge-feature-based auto-calibration approaches in terms of robustness and accuracy.
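A minimal sketch of a semantic mutual-information objective of this flavour is given below, assuming per-point LiDAR semantic labels and a per-pixel semantic image are already available. The 6-DoF parameterisation, variable names, and the Nelder-Mead refinement are our own choices for illustration, not the paper's implementation.

```python
# Sketch: score a candidate camera-LiDAR extrinsic by the mutual information
# between LiDAR point labels and the semantic image labels they project onto.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def mutual_information(a, b, n_classes):
    """MI between two aligned integer label arrays."""
    joint, _, _ = np.histogram2d(a, b, bins=n_classes)
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def neg_mi(xi, pts, pt_labels, sem_img, K, n_classes):
    """Project labelled LiDAR points with extrinsics xi = (rotvec, t)."""
    R, t = Rotation.from_rotvec(xi[:3]).as_matrix(), xi[3:]
    cam = pts @ R.T + t
    ok = cam[:, 2] > 0.1                      # keep points in front of camera
    uv = cam[ok] @ K.T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = sem_img.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    img_labels = sem_img[uv[inside, 1], uv[inside, 0]]
    return -mutual_information(pt_labels[ok][inside], img_labels, n_classes)

# res = minimize(neg_mi, x0, args=(pts, pt_labels, sem_img, K, 20),
#                method="Nelder-Mead")       # refine from a coarse initial guess
```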
{"title":"Online Camera-LiDAR Calibration with Sensor Semantic Information","authors":"Yufeng Zhu, Chenghui Li, Yubo Zhang","doi":"10.1109/ICRA40945.2020.9196627","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196627","url":null,"abstract":"As a crucial step of sensor data fusion, sensor calibration plays a vital role in many cutting-edge machine vision applications, such as autonomous vehicles and AR/VR. Existing techniques either require quite amount of manual work and complex settings, or are unrobust and prone to produce suboptimal results. In this paper, we investigate the extrinsic calibration of an RGB camera and a light detection and ranging (LiDAR) sensor, which are two of the most widely used sensors in autonomous vehicles for perceiving the outdoor environment. Specifically, we introduce an online calibration technique that automatically computes the optimal rigid motion transformation between the aforementioned two sensors and maximizes their mutual information of perceived data, without the need of tuning environment settings. By formulating the calibration as an optimization problem with a novel calibration quality metric based on semantic features, we successfully and robustly align pairs of temporally synchronized camera and LiDAR frames in real time. Demonstrated on several autonomous driving tasks, our method outperforms state-of-the-art edge feature based auto-calibration approaches in terms of robustness and accuracy.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82909903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Connectivity-Prediction Algorithm and its Application in Active Cooperative Localization for Multi-Robot Systems
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197083
Liang Zhang, Zexu Zhang, R. Siegwart, Jen Jen Chung
This paper presents a method for predicting the probability of future connectivity between mobile robots with range-limited communication. In particular, we focus on its application to active motion planning for cooperative localization (CL). The probability of connection is modeled by the distribution of quadratic forms in normal random variables and is computed via an infinite power-series expansion. A finite-term approximation is made to ensure computational feasibility, and three further modifications are designed to handle the adverse effects introduced by omitting the higher-order series terms. On the basis of this algorithm, an active CL problem with a leader-follower architecture is then reformulated as a Markov Decision Process (MDP) with a one-step planning horizon, and the optimal motion strategy is generated by minimizing the expected cost of the MDP. Extensive simulations and comparisons show the effectiveness and efficiency of both the proposed prediction algorithm and the MDP model.
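The quantity being predicted is the probability that the squared inter-robot distance, a quadratic form in jointly Gaussian position estimates, falls below the squared communication range. The paper evaluates this with a truncated power-series expansion; the sketch below only approximates the same probability by Monte Carlo as a sanity check, with all numbers made up.

```python
# Monte Carlo approximation of P(||p_i - p_j|| <= r) for a Gaussian relative
# position: the CDF of a quadratic form in normal variables. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([2.0, 1.0])        # mean of the relative position p_i - p_j
Sigma = np.array([[0.5, 0.1],    # covariance of the relative position
                  [0.1, 0.8]])
r = 3.0                          # communication range

d = rng.multivariate_normal(mu, Sigma, size=200_000)
p_connect = np.mean(np.sum(d * d, axis=1) <= r * r)
print(f"P(connected) ~= {p_connect:.3f}")
```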
{"title":"A Connectivity-Prediction Algorithm and its Application in Active Cooperative Localization for Multi-Robot Systems","authors":"Liang Zhang, Zexu Zhang, R. Siegwart, Jen Jen Chung","doi":"10.1109/ICRA40945.2020.9197083","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197083","url":null,"abstract":"This paper presents a method for predicting the probability of future connectivity between mobile robots with range-limited communication. In particular, we focus on its application to active motion planning for cooperative localization (CL). The probability of connection is modeled by the distribution of quadratic forms in random normal variables and is computed by the infinite power series expansion theorem. A finite-term approximation is made to realize the computational feasibility and three more modifications are designed to handle the adverse impacts introduced by the omission of the higher order series terms. On the basis of this algorithm, an active and CL problem with leader-follower architecture is then reformulated into a Markov Decision Process (MDP) with a one-step planning horizon, and the optimal motion strategy is generated by minimizing the expected cost of the MDP. Extensive simulations and comparisons are presented to show the effectiveness and efficiency of both the proposed prediction algorithm and the MDP model.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82927446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Plan Transformations for Real-World Mobile Fetch and Place
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197446
Gayane Kazhoyan, Arthur Niedzwiecki, M. Beetz
In this paper, we present an approach and an implemented framework for applying plan transformations to real-world mobile manipulation plans in order to specialize them to the specific situation at hand. The framework can reduce execution cost and achieve better performance by autonomously transforming the robot's behavior at runtime. To demonstrate the feasibility of our approach, we apply three example transformations to the plan of a PR2 robot performing simple table-setting and cleaning tasks in the real world. Based on a large number of experiments in a fast plan-projection simulator, we draw conclusions about the improvement in execution performance.
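As an illustration of what a plan transformation can look like (a toy in the spirit of the paper, not the authors' framework or plan representation), the sketch below rewrites a fetch-and-place plan so that two objects bound for the same destination are carried in a single two-arm trip.

```python
# Toy plan transformation: merge consecutive transport actions that share a
# destination into one both-hands trip. Plan structure is our own invention.

def both_hands_transform(plan):
    """Merge consecutive (action, object, destination) transports."""
    out, i = [], 0
    while i < len(plan):
        a = plan[i]
        if (a[0] == "transport" and i + 1 < len(plan)
                and plan[i + 1][0] == "transport"
                and plan[i + 1][2] == a[2]):         # same destination
            out.append(("transport-both", (a[1], plan[i + 1][1]), a[2]))
            i += 2                                    # one trip instead of two
        else:
            out.append(a)
            i += 1
    return out

plan = [("transport", "bowl", "table"), ("transport", "cup", "table"),
        ("transport", "spoon", "sink")]
print(both_hands_transform(plan))
# [('transport-both', ('bowl', 'cup'), 'table'), ('transport', 'spoon', 'sink')]
```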
{"title":"Towards Plan Transformations for Real-World Mobile Fetch and Place","authors":"Gayane Kazhoyan, Arthur Niedzwiecki, M. Beetz","doi":"10.1109/ICRA40945.2020.9197446","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197446","url":null,"abstract":"In this paper, we present an approach and an implemented framework for applying plan transformations to real-world mobile manipulation plans, in order to specialize them to the specific situation at hand. The framework can improve execution cost and achieve better performance by autonomously transforming robot’s behavior at runtime. To demonstrate the feasibility of our approach, we apply three example transformations to the plan of a PR2 robot performing simple table setting and cleaning tasks in the real world. Based on a large amount of experiments in a fast plan projection simulator, we make conclusions on improved execution performance.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90200450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Unknown Object Rearrangement Using Low-Cost Tabletop Robot
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197356
Chun-Yu Chai, Wen-Hsiao Peng, Shiao-Li Tsao
Studies on object rearrangement planning typically consider known objects. Some learning-based methods can predict the movement of an unknown object after a single-step interaction, but they require manually generated intermediate targets to achieve the rearrangement task. In this work, we propose a framework for unknown object rearrangement. Our system first models an object through a small number of identification actions and then adjusts the model parameters during task execution. We implement the proposed framework on a low-cost tabletop robot (under 180 USD) to demonstrate the advantages of using a physics engine to assist action prediction. Experimental results reveal that, after running our adaptive learning procedure, the robot can successfully arrange a novel object using an average of five discrete pushes in our tabletop environment, satisfying a precise criterion of 3.5 cm translation and 5° rotation.
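The adapt-during-execution loop can be sketched as follows: after each real push, an uncertain model parameter (here a friction coefficient) is nudged so that a physics-engine rollout better matches the observed motion. The `simulate_push` helper stands in for the physics engine and is hypothetical, as are the step sizes.

```python
# One finite-difference update of a friction estimate from prediction error.
# `simulate_push(mu, push)` is a hypothetical physics-engine rollout returning
# the predicted object pose; `observed_pose` comes from perception.
import numpy as np

def update_friction(mu, push, observed_pose, simulate_push, lr=0.5, eps=1e-3):
    """Nudge friction mu to reduce |predicted pose - observed pose|."""
    def err(m):
        return np.linalg.norm(simulate_push(m, push) - observed_pose)
    grad = (err(mu + eps) - err(mu - eps)) / (2 * eps)
    return float(np.clip(mu - lr * grad, 0.01, 2.0))  # keep physically plausible
```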
{"title":"Adaptive Unknown Object Rearrangement Using Low-Cost Tabletop Robot","authors":"Chun-Yu Chai, Wen-Hsiao Peng, Shiao-Li Tsao","doi":"10.1109/ICRA40945.2020.9197356","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197356","url":null,"abstract":"Studies on object rearrangement planning typically consider known objects. Some learning-based methods can predict the movement of an unknown object after single-step interaction, but require intermediate targets, which are generated manually, to achieve the rearrangement task. In this work, we propose a framework for unknown object rearrangement. Our system first models an object through a small-amount of identification actions and adjust the model parameters during task execution. We implement the proposed framework based on a low-cost tabletop robot (under 180 USD) to demonstrate the advantages of using a physics engine to assist action prediction. Experimental results reveal that after running our adaptive learning procedure, the robot can successfully arrange a novel object using an average of five discrete pushes on our tabletop environment and satisfy a precise 3.5 cm translation and 5° rotation criterion.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88815231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grounding Language to Landmarks in Arbitrary Outdoor Environments
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9197068
Matthew Berg, Deniz Bayazit, Rebecca Mathew, Ariel Rotter-Aboyoun, Ellie Pavlick, Stefanie Tellex
Robots operating in outdoor, urban environments need the ability to follow complex natural language commands that refer to never-before-seen landmarks. Existing approaches to this problem are limited because they require training a language model on the landmarks of a particular environment before a robot can understand commands referring to those landmarks. To generalize to new environments outside the training set, we present a framework that parses references to landmarks, assesses semantic similarities between the referring expression and landmarks in a predefined semantic map of the world, and ultimately translates natural language commands to motion plans for a drone. This framework allows the robot to ground natural language phrases to landmarks in a map even when neither the referring expressions nor the landmarks themselves have been seen during training. We test our framework with a 14-person user evaluation, demonstrating an end-to-end accuracy of 76.19% in an unseen environment. Subjective measures show that users find our system to have high performance and low workload. These results demonstrate that our approach enables untrained users to control a robot in large, unseen outdoor environments with unconstrained natural language.
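The core grounding step, choosing the landmark whose semantics best match the referring expression, can be sketched as a cosine-similarity argmax over embeddings. The `embed` function below is a placeholder for whatever phrase-embedding model is used; the paper's actual similarity computation may differ.

```python
# Sketch: ground a referring expression to the most semantically similar
# landmark name in the map. `embed` maps a phrase to a vector (placeholder).
import numpy as np

def ground_landmark(referring_expression, landmark_names, embed):
    """Return the landmark whose name is most similar to the expression."""
    q = embed(referring_expression)
    scores = []
    for name in landmark_names:
        v = embed(name)
        scores.append(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    return landmark_names[int(np.argmax(scores))]   # cosine-similarity argmax
```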
{"title":"Grounding Language to Landmarks in Arbitrary Outdoor Environments","authors":"Matthew Berg, Deniz Bayazit, Rebecca Mathew, Ariel Rotter-Aboyoun, Ellie Pavlick, Stefanie Tellex","doi":"10.1109/ICRA40945.2020.9197068","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9197068","url":null,"abstract":"Robots operating in outdoor, urban environments need the ability to follow complex natural language commands which refer to never-before-seen landmarks. Existing approaches to this problem are limited because they require training a language model for the landmarks of a particular environment before a robot can understand commands referring to those landmarks. To generalize to new environments outside of the training set, we present a framework that parses references to landmarks, then assesses semantic similarities between the referring expression and landmarks in a predefined semantic map of the world, and ultimately translates natural language commands to motion plans for a drone. This framework allows the robot to ground natural language phrases to landmarks in a map when both the referring expressions to landmarks and the landmarks themselves have not been seen during training. We test our framework with a 14-person user evaluation demonstrating an end-to-end accuracy of 76.19% in an unseen environment. Subjective measures show that users find our system to have high performance and low workload. These results demonstrate our approach enables untrained users to control a robot in large unseen outdoor environments with unconstrained natural language.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79470739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collision-free Navigation of Human-centered Robots via Markov Games
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9196810
Guo Ye, Qinjie Lin, Tzung-Han Juang, Han Liu
We exploit Markov games as a framework for collision-free navigation of human-centered robots. Unlike classical methods, which formulate robot navigation as a single-agent Markov decision process with a static environment, our Markov-game framework adopts a multi-agent formulation with one primary agent representing the robot and the remaining auxiliary agents forming a dynamic, or even competing, environment. Such a framework allows us to develop a path-following-type adversarial training strategy to learn a robust decentralized collision avoidance policy. Through thorough experiments on both simulated and real-world mobile robots, we show that the learnt policy outperforms state-of-the-art algorithms in both sample complexity and runtime robustness.
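Structurally, the rollout differs from single-agent navigation in that the other actors in the environment are policies too. The sketch below shows that multi-agent loop under our own placeholder interfaces; it is not the paper's training code.

```python
# Schematic multi-agent rollout: one primary (robot) policy plus auxiliary
# agents that shape a dynamic, possibly adversarial environment. The `env`
# interface and policy callables are placeholders.

def rollout(env, robot_policy, aux_policies, horizon=200):
    obs = env.reset()                      # per-agent observations (dict)
    total_reward = 0.0
    for _ in range(horizon):
        actions = {"robot": robot_policy(obs["robot"])}
        for name, pi in aux_policies.items():
            actions[name] = pi(obs[name])  # auxiliary agents act too
        obs, rewards, done = env.step(actions)
        total_reward += rewards["robot"]   # only the robot's return matters
        if done:
            break
    return total_reward
```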
{"title":"Collision-free Navigation of Human-centered Robots via Markov Games","authors":"Guo Ye, Qinjie Lin, Tzung-Han Juang, Han Liu","doi":"10.1109/ICRA40945.2020.9196810","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196810","url":null,"abstract":"We exploit Markov games as a framework for collision-free navigation of human-centered robots. Unlike the classical methods which formulate robot navigation as a single-agent Markov decision process with a static environment, our framework of Markov games adopts a multi-agent formulation with one primary agent representing the robot and the remaining auxiliary agents form a dynamic or even competing environment. Such a framework allows us to develop a path-following type adversarial training strategy to learn a robust decentralized collision avoidance policy. Through thorough experiments on both simulated and real-world mobile robots, we show that the learnt policy outperforms the state-of-the-art algorithms in both sample complexity and runtime robustness.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83203406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R3T: Rapidly-exploring Random Reachable Set Tree for Optimal Kinodynamic Planning of Nonlinear Hybrid Systems
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9196802
A. Wu, Sadra Sadraddini, Russ Tedrake
We introduce R3T, a reachability-based variant of the rapidly-exploring random tree (RRT) algorithm that is suitable for (optimal) kinodynamic planning in nonlinear and hybrid systems. We developed tools to approximate reachable sets using polytopes and perform sampling-based planning with them. This method has a unique advantage in hybrid systems: different dynamic modes in the reachable set can be explicitly represented using multiple polytopes. We prove that under mild assumptions, R3T is probabilistically complete in kinodynamic systems, and asymptotically optimal through rewiring. Moreover, R3T provides a formal verification method for reachability analysis of nonlinear systems. The advantages of R3T are demonstrated with case studies on nonlinear, hybrid, and contact-rich robotic systems.
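A single extension step can be sketched at a high level: sample a point, find the closest point to it over all nodes' reachable-set polytopes (one polytope per dynamic mode), and grow the tree there. The helpers `reachable_polytopes` and `closest_point_in_polytope` below are hypothetical stand-ins; the latter is a small QP in practice. This is our reading of the idea, not the authors' code.

```python
# Sketch of one reachability-guided extension step over polytopic reach sets.
import numpy as np

def r3t_extend(tree, sample, reachable_polytopes, closest_point_in_polytope):
    """Grow the tree toward `sample` via the nearest reachable-set point."""
    best_d, best_parent, best_point = np.inf, None, None
    for node in tree:
        for poly in reachable_polytopes(node):   # one polytope per dynamic mode
            p = closest_point_in_polytope(poly, sample)
            d = np.linalg.norm(p - sample)
            if d < best_d:
                best_d, best_parent, best_point = d, node, p
    tree.append(best_point)   # edge/mode bookkeeping omitted for brevity
    return best_point, best_parent
```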
{"title":"R3T: Rapidly-exploring Random Reachable Set Tree for Optimal Kinodynamic Planning of Nonlinear Hybrid Systems","authors":"A. Wu, Sadra Sadraddini, Russ Tedrake","doi":"10.1109/ICRA40945.2020.9196802","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196802","url":null,"abstract":"We introduce R3T, a reachability-based variant of the rapidly-exploring random tree (RRT) algorithm that is suitable for (optimal) kinodynamic planning in nonlinear and hybrid systems. We developed tools to approximate reachable sets using polytopes and perform sampling-based planning with them. This method has a unique advantage in hybrid systems: different dynamic modes in the reachable set can be explicitly represented using multiple polytopes. We prove that under mild assumptions, R3T is probabilistically complete in kinodynamic systems, and asymptotically optimal through rewiring. Moreover, R3T provides a formal verification method for reachability analysis of nonlinear systems. The advantages of R3T are demonstrated with case studies on nonlinear, hybrid, and contact-rich robotic systems.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84635417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RAVEN-S: Design and Simulation of a Robot for Teleoperated Microgravity Rodent Dissection Under Time Delay
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9196691
Andrew Lewis, David Drajeske, J. Raiti, A. Berens, J. Rosen, B. Hannaford
The International Space Station (ISS) serves as a research lab for a wide variety of experiments, including some that study the biological effects of microgravity and spaceflight using the Rodent Habitat and Microgravity Science Glovebox (MSG). Astronauts train for onboard dissections of rodents following basic training. An alternative approach for conducting these experiments is teleoperation of a robot located on the ISS from Earth by a scientist who is proficient in rodent dissection. This pilot study addresses (1) the effects of extreme time delay on skill degradation during Fundamentals of Laparoscopic Surgery (FLS) tasks and rodent dissections using RAVEN II; (2) derivation and testing of rudimentary interaction force estimation; (3) elicitation of design requirements for an onboard dissection robot, RAVEN-S; and (4) simulation of the RAVEN-S prototype design with dissection data. The results indicate that task completion times increased by a factor of up to 9 for a 3 s time delay while performing manipulation and cutting tasks (FLS model), and by a factor of up to 3 for a 0.75 s time delay during mouse dissection tasks (animal model). Average robot forces/torques of 14 N / 0.1 N·m (peak 90 N / 0.75 N·m) were measured, along with average linear/angular velocities of 0.02 m/s / 4 rad/s (peak 0.1 m/s / 40 rad/s), during dissection. A triangular configuration of three arms with respect to the operation site proved the best configuration given the MSG geometry and the dissection tasks. In conclusion, the results confirm the feasibility of using a surgically inspired RAVEN-S robot for teleoperated rodent dissection, with successful completion of the predefined tasks in the presence of communication time delays between the ISS and ground control.
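One ingredient of such studies is easy to reproduce: a fixed communication delay can be emulated in the control loop with a FIFO buffer. The sketch below does exactly that; the 0.75 s figure matches the paper's animal-model condition, while the tick rate and interface are our own assumptions.

```python
# Emulate a fixed teleoperation delay: a command sent now is delivered
# delay_s seconds (delay_s/dt control ticks) later.
from collections import deque

class DelayedChannel:
    """Delivers commands a fixed number of control ticks after they are sent."""
    def __init__(self, delay_s, dt=0.01):
        self.buf = deque([None] * int(round(delay_s / dt)))

    def send(self, command):
        self.buf.append(command)
        return self.buf.popleft()   # command arriving this tick (or None)

channel = DelayedChannel(delay_s=0.75)   # 75 ticks at an assumed 100 Hz loop
```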
{"title":"RAVEN-S: Design and Simulation of a Robot for Teleoperated Microgravity Rodent Dissection Under Time Delay","authors":"Andrew Lewis, David Drajeske, J. Raiti, A. Berens, J. Rosen, B. Hannaford","doi":"10.1109/ICRA40945.2020.9196691","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196691","url":null,"abstract":"The International Space Station (ISS) serves as a research lab for a wide variety of experiments including some that study the biological effects of microgravity and spaceflight using the Rodent Habitat and Microgravity Science Glovebox (MSG). Astronauts train for onboard dissections of rodents following basic training. An alternative approach for conducting these experiments is teleoperation of a robot located on the ISS from earth by a scientist who is proficient in rodent dissection. This pilot study addresses (1) the effects of extreme time delay on skill degradation during Fundamentals of Laparoscopic Surgery (FLS) tasks and rodent dissections using RAVEN II; (2) derivation and testing of rudimentary interaction force estimation; (3) elicitation of design requirements for an onboard dissection robot, RAVEN-S; and (4) simulation of the RAVEN-S prototype design with dissection data. The results indicate that the tasks’ completion times increased by a factor of up to 9 for a 3 s time delay while performing manipulation and cutting tasks (FLS model) and by a factor of up to 3 for a 0.75 s time delay during mouse dissection tasks (animal model). Average robot forces/torques of 14N/0.1Nm (peak 90N/0.75Nm) were measured along with average linear/angular velocities of 0.02m/s / 4rad/s (peak 0.1m/s / 40rad/s) during dissection. A triangular configuration of three arms with respect to the operation site showed the best configuration given the MSG geometry and the dissection tasks. In conclusion, the results confirm the feasibility of utilizing a surgically-inspired RAVEN-S robot for teleoperated rodent dissection for successful completion of the predefined tasks in the presence of communications time delay between the ISS and ground control.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88719069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D Scene Geometry-Aware Constraint for Camera Localization with Deep Learning
Pub Date: 2020-05-01 | DOI: 10.1109/ICRA40945.2020.9196940
Mi Tian, Qiong Nie, Hao Shen
Camera localization is a fundamental and key component of autonomous driving vehicles and mobile robots, enabling them to localize themselves globally for further environment perception, path planning and motion control. Recently, end-to-end approaches based on convolutional neural networks have been widely studied, achieving or even exceeding traditional 3D-geometry-based methods. In this work, we propose a compact network for absolute camera pose regression. Inspired by those traditional methods, a 3D scene geometry-aware constraint is also introduced by exploiting all available information, including motion, depth and image contents. We add this constraint as a regularization term to our proposed network by defining a pixel-level photometric loss and an image-level structural similarity loss. To benchmark our method, different challenging scenes, including indoor and outdoor environments, are tested with our proposed approach and the state of the art. The experimental results demonstrate significant improvement of our method in both prediction accuracy and convergence efficiency.
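The loss structure described above can be sketched as a pose-regression term regularised by a pixel-level photometric error and an image-level SSIM term between the captured image and one warped with the predicted pose. The warp itself is omitted here (it requires depth and intrinsics), and the weights are illustrative, not the paper's values.

```python
# Sketch of a geometry-aware training loss: pose error + photometric L1 +
# (1 - SSIM) between a real image and its pose-warped counterpart.
import numpy as np
from scipy.ndimage import uniform_filter

def ssim(a, b, c1=0.01**2, c2=0.03**2, win=7):
    """Mean structural similarity of two grayscale float images."""
    mu_a, mu_b = uniform_filter(a, win), uniform_filter(b, win)
    var_a = uniform_filter(a * a, win) - mu_a**2
    var_b = uniform_filter(b * b, win) - mu_b**2
    cov = uniform_filter(a * b, win) - mu_a * mu_b
    s = ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
         ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)))
    return float(s.mean())

def total_loss(pose_err, img, img_warped, w_photo=0.1, w_ssim=0.1):
    photometric = float(np.abs(img - img_warped).mean())   # pixel-level term
    structural = 1.0 - ssim(img, img_warped)               # image-level term
    return pose_err + w_photo * photometric + w_ssim * structural
```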
{"title":"3D Scene Geometry-Aware Constraint for Camera Localization with Deep Learning","authors":"Mi Tian, Qiong Nie, Hao Shen","doi":"10.1109/ICRA40945.2020.9196940","DOIUrl":"https://doi.org/10.1109/ICRA40945.2020.9196940","url":null,"abstract":"Camera localization is a fundamental and key component of autonomous driving vehicles and mobile robots to localize themselves globally for further environment perception, path planning and motion control. Recently end-to-end approaches based on convolutional neural network have been much studied to achieve or even exceed 3D-geometry based traditional methods. In this work, we propose a compact network for absolute camera pose regression. Inspired from those traditional methods, a 3D scene geometry-aware constraint is also introduced by exploiting all available information including motion, depth and image contents. We add this constraint as a regularization term to our proposed network by defining a pixel-level photometric loss and an image-level structural similarity loss. To benchmark our method, different challenging scenes including indoor and outdoor environment are tested with our proposed approach and state-of-the-arts. And the experimental results demonstrate significant performance improvement of our method on both prediction accuracy and convergence efficiency.","PeriodicalId":6859,"journal":{"name":"2020 IEEE International Conference on Robotics and Automation (ICRA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87387131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}