Robot Plane Grasping Pose Detection Based on U2-Net

Qingsong Yu, Xiangrong Xu, Yinzhen Liu, Hui Zhang
{"title":"基于 U2-Net 的机器人平面抓取姿势检测","authors":"Qingsong Yu, Xiangrong Xu, Yinzhen Liu, Hui Zhang","doi":"10.1109/ROBIO58561.2023.10354980","DOIUrl":null,"url":null,"abstract":"Since the current grasping success rate of robots is low when performing grasping tasks in complex environments, in order to improve this problem, this paper proposes a robot grasping detection network SA-U2GNet combining U2-Net and Shuffle Attention networks. The network can not only achieve information communication between different sub-features through the attention mechanism, but also capture more contextual information from RGB-D images through the two-level nested U-shaped structure. Training and testing were performed on the Cornell and Jacquard grasp datasets, the accuracy rates reached 97.9% and 94.7% respectively, and the time required to process RGB-D images was 30ms. Compared with other methods, this method improves the accuracy and time efficiency, and the experiment verifies the feasibility and effectiveness of this method.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"41 8","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robot Plane Grasping Pose Detection Based on U2-Net\",\"authors\":\"Qingsong Yu, Xiangrong Xu, Yinzhen Liu, Hui Zhang\",\"doi\":\"10.1109/ROBIO58561.2023.10354980\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Since the current grasping success rate of robots is low when performing grasping tasks in complex environments, in order to improve this problem, this paper proposes a robot grasping detection network SA-U2GNet combining U2-Net and Shuffle Attention networks. The network can not only achieve information communication between different sub-features through the attention mechanism, but also capture more contextual information from RGB-D images through the two-level nested U-shaped structure. Training and testing were performed on the Cornell and Jacquard grasp datasets, the accuracy rates reached 97.9% and 94.7% respectively, and the time required to process RGB-D images was 30ms. 
Compared with other methods, this method improves the accuracy and time efficiency, and the experiment verifies the feasibility and effectiveness of this method.\",\"PeriodicalId\":505134,\"journal\":{\"name\":\"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"volume\":\"41 8\",\"pages\":\"1-6\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ROBIO58561.2023.10354980\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROBIO58561.2023.10354980","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Robots currently achieve a low grasp success rate when performing grasping tasks in complex environments. To address this problem, this paper proposes SA-U2GNet, a robot grasp detection network that combines U2-Net with the Shuffle Attention network. The attention mechanism enables information exchange between different sub-features, while the two-level nested U-shaped structure captures more contextual information from RGB-D images. The network was trained and tested on the Cornell and Jacquard grasp datasets, reaching accuracies of 97.9% and 94.7% respectively, and processes an RGB-D image in 30 ms. Compared with other methods, this approach improves both accuracy and time efficiency, and experiments verify its feasibility and effectiveness.
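
The paper itself does not include an implementation. As a rough illustration of the Shuffle Attention idea that the SA-U2GNet name references, the sketch below implements a stand-alone Shuffle Attention block in PyTorch, following the general SA-Net design: channels are partitioned into groups, each group is split into a channel-attention branch and a spatial-attention branch, and the results are recombined with a channel shuffle so information crosses sub-features. The class name, group count, and tensor sizes here are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a Shuffle Attention block (after SA-Net, Zhang & Yang 2021),
# the attention mechanism SA-U2GNet's name references. Sizes are illustrative.
import torch
import torch.nn as nn

class ShuffleAttention(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        self.groups = groups
        c = channels // (2 * groups)  # channels per half-branch in each group
        # Learnable per-channel scale/shift gating each attention branch.
        self.cw = nn.Parameter(torch.zeros(1, c, 1, 1))
        self.cb = nn.Parameter(torch.ones(1, c, 1, 1))
        self.sw = nn.Parameter(torch.zeros(1, c, 1, 1))
        self.sb = nn.Parameter(torch.ones(1, c, 1, 1))
        self.gn = nn.GroupNorm(c, c)  # per-channel norm for the spatial branch
        self.sigmoid = nn.Sigmoid()

    @staticmethod
    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        # Interleave channel groups so sub-features exchange information.
        b, c, h, w = x.shape
        x = x.view(b, groups, c // groups, h, w).transpose(1, 2).contiguous()
        return x.view(b, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Partition channels into groups; split each group into two branches.
        x = x.view(b * self.groups, c // self.groups, h, w)
        x0, x1 = x.chunk(2, dim=1)
        # Channel attention: global average pooling + per-channel scale/shift.
        s = x0.mean(dim=(2, 3), keepdim=True)
        x0 = x0 * self.sigmoid(self.cw * s + self.cb)
        # Spatial attention: GroupNorm statistics gate each spatial location.
        x1 = x1 * self.sigmoid(self.sw * self.gn(x1) + self.sb)
        # Recombine the branches and shuffle across the two halves.
        out = torch.cat([x0, x1], dim=1).view(b, c, h, w)
        return self.channel_shuffle(out, groups=2)

# Example: a 64-channel feature map, e.g. derived from an RGB-D input.
feat = torch.randn(2, 64, 56, 56)
print(ShuffleAttention(64, groups=8)(feat).shape)  # torch.Size([2, 64, 56, 56])
```

In a grasp detection pipeline, a block like this would plausibly sit between the encoder and decoder stages of the nested U-shaped structure; where exactly SA-U2GNet inserts it is specified in the paper, not in this sketch.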