Object pose estimation method for robotic arm grasping
Authors: Cheng Huang, Shuyu Hou
Journal: Journal of Intelligent & Fuzzy Systems
Publication date: 2024-03-12 (Journal Article)
DOI: 10.3233/jifs-234351 (https://doi.org/10.3233/jifs-234351)
Citations: 0
Abstract
To address target detection in the planar grasping task, a position and attitude estimation method based on YOLO-Pose is proposed. The aim is to detect, in real time, the three-dimensional position of the spacecraft's center point and its two-dimensional attitude in the plane. First, the network weights are trained through transfer learning, and the number of key points is chosen by analyzing the shape characteristics of the spacecraft, improving how the pose information is represented. Second, the Convolutional Block Attention Module (CBAM), which combines channel and spatial attention, is integrated into the C3 module of the backbone network to improve pose-estimation accuracy. Furthermore, the Wing Loss function is used to mitigate random offsets in the predicted key points. Incorporating the bi-directional feature pyramid network (BiFPN) into the neck network further improves detection accuracy. Experimental results show that the optimized algorithm achieves higher average precision, and its average detection speed meets the speed and accuracy requirements of the actual capture task, giving it practical application value.
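The Wing Loss mentioned in the abstract is a standard keypoint-regression loss: near zero it behaves like a scaled logarithm (amplifying gradients for small localization errors), and beyond a threshold it falls back to L1. A minimal sketch follows; the parameter defaults `w=10.0` and `epsilon=2.0` are illustrative assumptions, not values reported in this paper.

```python
import math

def wing_loss(error, w=10.0, epsilon=2.0):
    """Wing loss for a single keypoint regression error.

    Piecewise definition:
        w * ln(1 + |x| / epsilon)   if |x| < w   (log region, strong
                                                  gradients for small errors)
        |x| - C                     otherwise    (L1 region)
    where C = w - w * ln(1 + w / epsilon) makes the two pieces meet
    continuously at |x| = w.

    The defaults for w and epsilon are hypothetical, not taken from
    the paper.
    """
    c = w - w * math.log(1.0 + w / epsilon)  # continuity constant
    x = abs(error)
    if x < w:
        return w * math.log(1.0 + x / epsilon)
    return x - c
```

In a training loop this would be summed over all predicted key points; the log region is what helps suppress the small random offsets in key-point predictions that the abstract refers to.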