Using spatial constraints for fast set-up of precise pose estimation in an industrial setting
Frederik Hagelskjær, T. Savarimuthu, N. Krüger, A. Buch
2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), pp. 1308–1314
DOI: 10.1109/COASE.2019.8842876
Published: 2019-08-01
Citations: 10
Abstract
This paper presents a method for high-precision visual pose estimation along with a simple setup procedure. Industrial robotics is a rapidly growing field, and these robots require very precise position information to perform manipulations. This is usually accomplished using, e.g., fixtures or feeders, both of which are expensive hardware solutions. To enable fast changes in production, more flexible solutions are required, one possibility being visual pose estimation. Although many current pose estimation algorithms show improved recognition rates on public datasets, they do not focus on actual applications, neither in setup complexity nor in accuracy of object localization. In contrast, our method focuses on solving a number of specific pose estimation problems in a seamless manner with a simple setup procedure. It relies on a number of workcell constraints and employs a novel method for automatically finding stable object poses. In addition, we use an active rendering method to refine the estimated object poses, giving a very fine localization suitable for robotic manipulation. Experiments comparing current state-of-the-art 2D algorithms with our method show an average reduction in uncertainty from 9 mm to 0.95 mm. The method was also used by the winning team at the 2018 World Robot Summit Assembly Challenge.