CEPB dataset: a photorealistic dataset to foster the research on bin picking in cluttered environments

P. Tripicchio, Salvatore D’Avella, C. Avizzano
Journal: Frontiers in Robotics and AI
DOI: 10.3389/frobt.2024.1222465 (https://doi.org/10.3389/frobt.2024.1222465)
Published: 2024-05-16 (Journal Article)

Abstract

Several datasets focusing on object detection and pose estimation have been proposed in the literature. Most of them target the recognition of isolated objects or the pose of objects in well-organized scenarios. This work introduces a novel dataset that stresses vision algorithms on the difficult task of object detection and pose estimation in highly cluttered scenes, addressing the specific case of bin picking for the Cluttered Environment Picking Benchmark (CEPB). The dataset provides about 1.5M virtually generated photorealistic images (RGB + depth + normals + segmentation) of 50K annotated cluttered scenes, mixing rigid, soft, and deformable objects of varying sizes drawn from existing robotic picking benchmarks, together with their 3D models (40 objects). The images cover three different camera positions, three lighting conditions, and multiple High Dynamic Range Imaging (HDRI) maps for domain-randomization purposes. The annotations contain the 2D and 3D bounding boxes of the involved objects, the centroids' poses (translation + quaternion), and the visibility percentage of each object's surface. Nearly 10K images of isolated objects are also provided, so that simple tests can be compared against the more complex cluttered-scene tests. A baseline obtained with the DOPE neural network is reported to highlight the challenges introduced by the novel dataset.
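As one illustration, pose annotations given as a translation plus a quaternion, as described above, can be assembled into a 4x4 homogeneous transform for use in a pose-estimation pipeline. The sketch below assumes a hypothetical annotation layout; the field names are illustrative and not the actual CEPB schema.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Hypothetical annotation entry: centroid pose as translation + quaternion,
# plus a 2D bounding box and visibility percentage. Field names and values
# are illustrative only, not the actual CEPB file format.
annotation = {
    "object_id": 7,
    "translation": [0.12, -0.05, 0.43],        # metres, camera frame (assumed)
    "quaternion": [0.7071, 0.0, 0.0, 0.7071],  # (w, x, y, z): 90 deg about z
    "bbox_2d": [310, 215, 402, 298],           # x_min, y_min, x_max, y_max
    "visibility": 0.63,                        # fraction of surface visible
}

# Build the 4x4 homogeneous transform of the object centroid.
T = np.eye(4)
T[:3, :3] = quat_to_rotmat(np.array(annotation["quaternion"]))
T[:3, 3] = annotation["translation"]
```

Note that quaternion ordering conventions differ across libraries (scalar-first vs scalar-last), so the (w, x, y, z) order used here would need to be checked against the dataset's documentation.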