DriveCP: Occupancy-Assisted Scenario Augmentation for Occluded Pedestrian Perception Based on Parallel Vision

IF 2.3 · Q2 (Engineering, Electrical & Electronic) · IEEE Journal of Radio Frequency Identification, Vol. 8, pp. 235-240 · Pub Date: 2024-04-22 · DOI: 10.1109/JRFID.2024.3392152
Songlin Bai;Yunzhe Wang;Zhiyao Luo;Yonglin Tian
Citations: 0

Abstract

Diverse, large-scale, high-quality data are essential for deep learning algorithms in autonomous driving. However, manual data collection in intricate traffic scenarios is expensive and time-consuming, and struggles to meet requirements of both quantity and quality. Although some generative methods have been applied to traffic image synthesis and editing to reduce the need for manual data collection, these approaches frequently disregard the impact of object relationships on data diversity. In this paper, we focus on occluded pedestrians in complex driving scenes and propose an occupancy-aided augmentation method for occluded humans in autonomous driving, denoted "DriveCP", built upon the foundation of parallel vision. Thanks to the rapid development of AI-Generated Content (AIGC) technologies, it is now possible to automate the generation of diverse 2D and 3D assets. Using AIGC technologies, we construct our human asset library automatically, significantly enhancing the diversity of the training data. We experimentally demonstrate that DriveCP can generate diversified occluded pedestrians in real, complex traffic scenes, and we show its effectiveness in enriching the training set for object detection tasks.
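The core compositing idea described in the abstract — inserting a pedestrian into a scene while letting foreground objects that occupy the space in front remain visible, so the pedestrian appears naturally occluded — can be illustrated with a minimal toy sketch. This is a hypothetical illustration only, not the authors' implementation; all function names, array layouts, and parameters below are invented for exposition:

```python
import numpy as np

def paste_occluded_pedestrian(scene, person_rgba, occluder_mask, top_left):
    """Alpha-blend a pedestrian cutout into a scene, then keep occluder
    pixels in front so the pedestrian appears partially occluded.

    scene:         (H, W, 3) uint8 background image
    person_rgba:   (h, w, 4) uint8 cutout with alpha in channel 3
    occluder_mask: (H, W) bool, True where a foreground object (e.g. a
                   parked car) must stay in front of the inserted person
    top_left:      (row, col) paste position of the cutout
    """
    out = scene.copy()
    h, w = person_rgba.shape[:2]
    r, c = top_left
    region = out[r:r + h, c:c + w]

    alpha = person_rgba[..., 3:4].astype(np.float32) / 255.0
    # Zero the alpha wherever the occluder sits in front of the pedestrian,
    # so those pixels keep the original scene (occluder) content.
    occ = occluder_mask[r:r + h, c:c + w, None]
    alpha = np.where(occ, 0.0, alpha)

    blended = alpha * person_rgba[..., :3] + (1.0 - alpha) * region
    out[r:r + h, c:c + w] = blended.astype(np.uint8)
    return out

# Toy example: grey 8x8 scene, a white 4x4 "pedestrian", and an occluder
# covering the left half of the image.
scene = np.full((8, 8, 3), 100, np.uint8)
person = np.full((4, 4, 4), 255, np.uint8)
occluder = np.zeros((8, 8), bool)
occluder[:, :4] = True
aug = paste_occluded_pedestrian(scene, person, occluder, (2, 2))
```

In the toy output, the right half of the pasted pedestrian is visible while its left half stays hidden behind the occluder, which is the kind of partial-occlusion sample the paper aims to synthesize at scale.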