Simultaneous Multi-View Object Recognition and Grasping in Open-Ended Domains

IF 3.1 | JCR Region 4, Q2 (Computer Science, Artificial Intelligence) | Journal of Intelligent & Robotic Systems | Pub Date: 2024-04-16 | DOI: 10.1007/s10846-024-02092-5
Hamidreza Kasaei, Mohammadreza Kasaei, Georgios Tziafas, Sha Luo, Remo Sasso
Citations: 0

Abstract

To aid humans in everyday tasks, robots need to know which objects exist in the scene, where they are, and how to grasp and manipulate them in different situations. Therefore, object recognition and grasping are two key functionalities for autonomous robots. Most state-of-the-art approaches treat object recognition and grasping as two separate problems, even though both use visual input. Furthermore, the knowledge of the robot is fixed after the training phase. In such cases, if the robot encounters new object categories, it must be retrained to incorporate new information without catastrophic forgetting. To resolve this problem, we propose a deep learning architecture with an augmented memory capacity to handle open-ended object recognition and grasping simultaneously. In particular, our approach takes multi-views of an object as input and jointly estimates a pixel-wise grasp configuration as well as a deep scale- and rotation-invariant representation as output. The obtained representation is then used for open-ended object recognition through a meta-active learning technique. We demonstrate the ability of our approach to grasp never-seen-before objects and to rapidly learn new object categories using very few examples on-site in both simulation and real-world settings. Our approach empowers a robot to acquire knowledge about new object categories using, on average, fewer than five instances per category and achieve 95% object recognition accuracy and above 91% grasp success rate in (highly) cluttered scenarios in both simulation and real-robot experiments. A video of these experiments is available online at: https://youtu.be/n9SMpuEkOgk
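The few-shot, open-ended learning the abstract describes — adding a new category from a handful of instances without retraining the whole model — can be illustrated with a minimal instance-based category memory over the learned feature vectors. This is a hypothetical sketch, not the paper's implementation: the class and method names are ours, and a simple cosine-similarity nearest-instance rule stands in for the paper's meta-active learning technique.

```python
import numpy as np


class OpenEndedMemory:
    """Instance-based category memory: stores a few feature vectors per
    category and classifies a new view by its cosine similarity to the
    nearest stored instance. Illustrative only."""

    def __init__(self):
        # category label -> list of unit-normalized feature vectors
        self.instances = {}

    def teach(self, label, feature):
        """Add one instance for a (possibly new) category.

        Adding an entry extends the set of known categories without any
        retraining, so previously learned categories are untouched
        (no catastrophic forgetting)."""
        self.instances.setdefault(label, []).append(
            feature / np.linalg.norm(feature))

    def classify(self, feature):
        """Return (best_label, best_similarity) over all stored instances."""
        query = feature / np.linalg.norm(feature)
        best_label, best_sim = None, -1.0
        for label, views in self.instances.items():
            # nearest-instance rule: max cosine similarity over stored views
            sim = max(float(v @ query) for v in views)
            if sim > best_sim:
                best_label, best_sim = label, sim
        return best_label, best_sim
```

In this sketch, `teach` would be called with the deep scale- and rotation-invariant representation of each new view, so a category can be acquired from fewer than five calls and queried immediately.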

Source journal: Journal of Intelligent & Robotic Systems (Engineering & Technology – Robotics)
CiteScore: 7.00
Self-citation rate: 9.10%
Articles per year: 219
Review time: 6 months
Journal description: The Journal of Intelligent and Robotic Systems bridges the gap between theory and practice in all areas of intelligent systems and robotics. It publishes original, peer-reviewed contributions from initial concept and theory to prototyping to final product development and commercialization. On the theoretical side, the journal features papers focusing on intelligent systems engineering, distributed intelligence systems, multi-level systems, intelligent control, multi-robot systems, cooperation and coordination of unmanned vehicle systems, etc. On the application side, the journal emphasizes autonomous systems, industrial robotic systems, multi-robot systems, aerial vehicles, mobile robot platforms, underwater robots, sensors, sensor fusion, and sensor-based control. Readers will also find papers on real applications of intelligent and robotic systems (e.g., mechatronics, manufacturing, biomedical, underwater, humanoid, mobile/legged robot and space applications).
Latest articles in this journal:
- UAV Routing for Enhancing the Performance of a Classifier-in-the-loop
- DFT-VSLAM: A Dynamic Optical Flow Tracking VSLAM Method
- Design and Development of a Robust Control Platform for a 3-Finger Robotic Gripper Using EMG-Derived Hand Muscle Signals in NI LabVIEW
- Neural Network-based Adaptive Finite-time Control for 2-DOF Helicopter Systems with Prescribed Performance and Input Saturation
- Six-Degree-of-Freedom Pose Estimation Method for Multi-Source Feature Points Based on Fully Convolutional Neural Network