An efficient pose classification method for robotic grasping

Cobot · Pub Date: 2022-03-03 · DOI: 10.12688/cobot.17440.1
Wenlong Ji, Yunhan Lin, Huasong Min
{"title":"一种有效的机器人抓取姿态分类方法","authors":"Wenlong Ji, Yunhan Lin, Huasong Min","doi":"10.12688/cobot.17440.1","DOIUrl":null,"url":null,"abstract":"Background: The unstructured environment, the different geometric shapes of objects, and the uncertainty of sensor noise have brought many challenges to robotic grasping. PointNetGPD (Grasp Pose Detection) which was published in 2019 proposes a point cloud-based grasping pose detection method, which detects reliable grasping poses from the point cloud, and provides an effective process to generate and evaluate grasping poses. However, PointNetGPD uses the point cloud inside the parallel-gripper and the network only uses three channels of information when classifying grasping poses. Methods: In order to improve the accuracy of grasping pose classification, the concept of grasping confidence region was proposed in this paper, which shows the hotspot area of the object can be grasped successfully, and there will be higher success rate when performing grasping in this area. Based on the concept of grasping confidence regions, the grasping dataset in PointNetGPD is improved, which can provide richer information to the classification network. Using our dataset, we trained a scoring network that can score the point cloud collected by the depth camera. We added this scoring network to the classification network of PointNetGPD, and carried out the experiment of grasping poses classification. Results: The experimental results show that the classification accuracy increases by 4% after calculating the score channel on the original dataset; the classification accuracy increases by nearly 1% after using the trained scoring network to score the original dataset. Conclusions: The concept of positive grasp center area is proposed in this paper. Based on this concept, we improve the dataset in PointNetGPD, and use this dataset to train a scoring network to add the score information to the point cloud. The experiments show that our proposed method can effectively improve the accuracy of grasping poses classification network.","PeriodicalId":29807,"journal":{"name":"Cobot","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An efficient pose classification method for robotic grasping\",\"authors\":\"Wenlong Ji, Yunhan Lin, Huasong Min\",\"doi\":\"10.12688/cobot.17440.1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Background: The unstructured environment, the different geometric shapes of objects, and the uncertainty of sensor noise have brought many challenges to robotic grasping. PointNetGPD (Grasp Pose Detection) which was published in 2019 proposes a point cloud-based grasping pose detection method, which detects reliable grasping poses from the point cloud, and provides an effective process to generate and evaluate grasping poses. However, PointNetGPD uses the point cloud inside the parallel-gripper and the network only uses three channels of information when classifying grasping poses. Methods: In order to improve the accuracy of grasping pose classification, the concept of grasping confidence region was proposed in this paper, which shows the hotspot area of the object can be grasped successfully, and there will be higher success rate when performing grasping in this area. 
Based on the concept of grasping confidence regions, the grasping dataset in PointNetGPD is improved, which can provide richer information to the classification network. Using our dataset, we trained a scoring network that can score the point cloud collected by the depth camera. We added this scoring network to the classification network of PointNetGPD, and carried out the experiment of grasping poses classification. Results: The experimental results show that the classification accuracy increases by 4% after calculating the score channel on the original dataset; the classification accuracy increases by nearly 1% after using the trained scoring network to score the original dataset. Conclusions: The concept of positive grasp center area is proposed in this paper. Based on this concept, we improve the dataset in PointNetGPD, and use this dataset to train a scoring network to add the score information to the point cloud. The experiments show that our proposed method can effectively improve the accuracy of grasping poses classification network.\",\"PeriodicalId\":29807,\"journal\":{\"name\":\"Cobot\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cobot\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.12688/cobot.17440.1\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cobot","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.12688/cobot.17440.1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Background: Unstructured environments, the varied geometric shapes of objects, and the uncertainty of sensor noise pose many challenges for robotic grasping. PointNetGPD (Grasp Pose Detection), published in 2019, proposes a point-cloud-based grasp pose detection method that detects reliable grasp poses from the point cloud and provides an effective process for generating and evaluating them. However, PointNetGPD uses only the point cloud inside the parallel gripper, and its network uses only three channels of information when classifying grasp poses.

Methods: To improve the accuracy of grasp pose classification, this paper proposes the concept of a grasping confidence region: the hotspot area of an object where a grasp is likely to succeed, so that grasps performed in this area have a higher success rate. Based on this concept, the grasping dataset in PointNetGPD is improved to provide richer information to the classification network. Using our dataset, we trained a scoring network that scores the point cloud collected by the depth camera. We added this scoring network to the classification network of PointNetGPD and carried out grasp pose classification experiments.

Results: The experimental results show that classification accuracy increases by 4% after computing the score channel on the original dataset, and by nearly 1% after using the trained scoring network to score the original dataset.

Conclusions: The concept of a positive grasp center area is proposed in this paper. Based on this concept, we improve the dataset in PointNetGPD and use it to train a scoring network that adds score information to the point cloud. The experiments show that the proposed method effectively improves the accuracy of the grasp pose classification network.
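The abstract does not give the exact network architecture, but the core idea it describes, appending a per-point score channel to the three-channel (x, y, z) point cloud and classifying the resulting four-channel cloud with a PointNet-style network, can be sketched as below. This is a minimal illustration assuming a PyTorch setup; the class name ScoreAugmentedPointNet, the layer sizes, and the random stand-in scores are hypothetical and not taken from the paper or the PointNetGPD codebase.

```python
import torch
import torch.nn as nn

class ScoreAugmentedPointNet(nn.Module):
    """Binary grasp-pose classifier over a 4-channel point cloud
    (x, y, z plus a per-point score), following the PointNet pattern
    of shared per-point MLPs and a global max pool."""
    def __init__(self, in_channels: int = 4):
        super().__init__()
        # 1x1 convolutions act as a shared MLP applied to every point
        self.point_mlp = nn.Sequential(
            nn.Conv1d(in_channels, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, 2),  # logits: good grasp vs. bad grasp
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, 4, N) -- xyz plus the score channel
        feat = self.point_mlp(points)         # (B, 1024, N) per-point features
        global_feat = feat.max(dim=2).values  # (B, 1024) global max pool
        return self.head(global_feat)         # (B, 2) class logits


# Append a score channel to the raw xyz cloud inside the gripper
# closing region; here random values stand in for the output of the
# trained scoring network described in the paper.
xyz = torch.rand(8, 3, 1000)              # batch of 1000-point clouds
scores = torch.rand(8, 1, 1000)           # stand-in per-point scores
cloud4 = torch.cat([xyz, scores], dim=1)  # (8, 4, 1000)

logits = ScoreAugmentedPointNet()(cloud4)
print(logits.shape)                       # torch.Size([8, 2])
```

The point of the sketch is that the extra channel changes only the input width of the first shared layer; the rest of a PointNet-style classifier is unchanged, which is consistent with the paper's approach of adding score information on top of the existing PointNetGPD classification network.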
Source journal: Cobot
Self-citation rate: 0.00%
Articles published: 0
Journal introduction

Cobot is a rapid multidisciplinary open access publishing platform for research focused on the interdisciplinary field of collaborative robots. The aim of Cobot is to enhance knowledge and share the results of the latest innovative technologies for the technicians, researchers and experts engaged in collaborative robot research. The platform will welcome submissions in all areas of scientific and technical research related to collaborative robots, and all articles will benefit from open peer review.

The scope of Cobot includes, but is not limited to:

● Intelligent robots
● Artificial intelligence
● Human-machine collaboration and integration
● Machine vision
● Intelligent sensing
● Smart materials
● Design, development and testing of collaborative robots
● Software for cobots
● Industrial applications of cobots
● Service applications of cobots
● Medical and health applications of cobots
● Educational applications of cobots

As well as research articles and case studies, Cobot accepts a variety of article types including method articles, study protocols, software tools, systematic reviews, data notes, brief reports, and opinion articles.
Latest articles from this journal
Load torque observation and compensation for permanent magnet synchronous motor based on sliding mode observer
Design and optimization of soft colonoscopy robot with variable cross section
Robot-assisted homecare for older adults: A user study on needs and challenges
Machine vision-based automatic focusing method for robot laser welding system
A dynamic obstacle avoidance method for collaborative robots based on trajectory optimization