{"title":"DriveCP:基于并行视觉的遮挡行人感知占位辅助场景增强技术","authors":"Songlin Bai;Yunzhe Wang;Zhiyao Luo;Yonglin Tian","doi":"10.1109/JRFID.2024.3392152","DOIUrl":null,"url":null,"abstract":"Diverse and large-high-quality data are essential to the deep learning algorithms for autonomous driving. However, manual data collection in intricate traffic scenarios is expensive, time-consuming, and hard to meet the requirements of quantity and quality. Though some generative methods have been used for traffic image synthesis and editing to tackle the problem of manual data collection, the impact of object relationships on data diversity is frequently disregarded in these approaches. In this paper, we focus on the occluded pedestrians within complex driving scenes and propose an occupancy-aided augmentation method for occluded humans in autonomous driving denoted as “Drive-CP“, built upon the foundation of parallel vision. Due to the flourishing development of AI Content Generation (AIGC) technologies, it is possible to automate the generation of diverse 2D and 3D assets. Based on AIGC technologies, we can construct our human library automatically, significantly enhancing the diversity of the training data. We experimentally demonstrate that Drive-CP can generate diversified occluded pedestrians in real complex traffic scenes and demonstrate its effectiveness in enriching the training set in object detection tasks.","PeriodicalId":73291,"journal":{"name":"IEEE journal of radio frequency identification","volume":"8 ","pages":"235-240"},"PeriodicalIF":2.3000,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DriveCP: Occupancy-Assisted Scenario Augmentation for Occluded Pedestrian Perception Based on Parallel Vision\",\"authors\":\"Songlin Bai;Yunzhe Wang;Zhiyao Luo;Yonglin Tian\",\"doi\":\"10.1109/JRFID.2024.3392152\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Diverse and large-high-quality data are essential to the deep learning algorithms for autonomous driving. However, manual data collection in intricate traffic scenarios is expensive, time-consuming, and hard to meet the requirements of quantity and quality. Though some generative methods have been used for traffic image synthesis and editing to tackle the problem of manual data collection, the impact of object relationships on data diversity is frequently disregarded in these approaches. In this paper, we focus on the occluded pedestrians within complex driving scenes and propose an occupancy-aided augmentation method for occluded humans in autonomous driving denoted as “Drive-CP“, built upon the foundation of parallel vision. Due to the flourishing development of AI Content Generation (AIGC) technologies, it is possible to automate the generation of diverse 2D and 3D assets. Based on AIGC technologies, we can construct our human library automatically, significantly enhancing the diversity of the training data. 
We experimentally demonstrate that Drive-CP can generate diversified occluded pedestrians in real complex traffic scenes and demonstrate its effectiveness in enriching the training set in object detection tasks.\",\"PeriodicalId\":73291,\"journal\":{\"name\":\"IEEE journal of radio frequency identification\",\"volume\":\"8 \",\"pages\":\"235-240\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2024-04-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE journal of radio frequency identification\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10506203/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal of radio frequency identification","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10506203/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
DriveCP: Occupancy-Assisted Scenario Augmentation for Occluded Pedestrian Perception Based on Parallel Vision
Diverse, large-scale, and high-quality data are essential for deep learning algorithms in autonomous driving. However, manual data collection in intricate traffic scenarios is expensive, time-consuming, and often fails to meet the requirements of quantity and quality. Although generative methods have been applied to traffic image synthesis and editing to reduce the reliance on manual data collection, these approaches frequently disregard the impact of object relationships on data diversity. In this paper, we focus on occluded pedestrians in complex driving scenes and propose an occupancy-aided augmentation method for occluded humans in autonomous driving, denoted "Drive-CP", built on the foundation of parallel vision. Thanks to the rapid development of AI-Generated Content (AIGC) technologies, the generation of diverse 2D and 3D assets can be automated. Building on these technologies, we construct our human asset library automatically, significantly enhancing the diversity of the training data. Experiments show that Drive-CP generates diverse occluded pedestrians in real, complex traffic scenes and effectively enriches the training set for object detection tasks.
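The abstract does not spell out implementation details, so the following is only a rough sketch of the occupancy-aided compositing idea: paste a segmented pedestrian asset into a real scene so that pixels already occupied by closer foreground objects stay on top, then derive a visibility-clipped bounding box for detector training. The function name, array layouts, RGBA asset format, and toy data are all illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumed, not the authors' implementation) of occupancy-aware
# pedestrian compositing for detection-data augmentation.
import numpy as np


def paste_occluded_pedestrian(scene_rgb, occluder_mask, asset_rgba, top_left):
    """Composite an RGBA pedestrian asset into a scene so that pixels covered by
    `occluder_mask` (foreground objects closer to the camera) remain on top,
    yielding a partially occluded pedestrian. Returns the edited image and the
    visible-region bounding box (x0, y0, x1, y1), or None if fully occluded."""
    scene = scene_rgb.copy()
    h, w = asset_rgba.shape[:2]
    y, x = top_left
    region = scene[y:y + h, x:x + w]

    alpha = asset_rgba[..., 3].astype(np.float32) / 255.0
    # Occupancy-aided step: suppress the asset wherever an occluder already
    # occupies the pixel, so the pedestrian is drawn *behind* those objects.
    visible = alpha * (1.0 - occluder_mask[y:y + h, x:x + w])
    blended = (visible[..., None] * asset_rgba[..., :3]
               + (1.0 - visible[..., None]) * region).astype(np.uint8)
    scene[y:y + h, x:x + w] = blended

    ys, xs = np.nonzero(visible > 0.5)
    if ys.size == 0:
        return scene, None  # pedestrian entirely hidden; skip this sample
    box = (x + xs.min(), y + ys.min(), x + xs.max(), y + ys.max())
    return scene, box


if __name__ == "__main__":
    # Toy data: a gray 480x640 scene, a vertical "pole" occluder mask, and a
    # flat red rectangle standing in for a generated pedestrian asset.
    scene = np.full((480, 640, 3), 128, dtype=np.uint8)
    occluder = np.zeros((480, 640), dtype=np.float32)
    occluder[:, 300:320] = 1.0
    asset = np.zeros((120, 60, 4), dtype=np.uint8)
    asset[..., 0] = 255      # red body color
    asset[..., 3] = 255      # fully opaque silhouette
    augmented, box = paste_occluded_pedestrian(scene, occluder, asset, (200, 280))
    print("visible box for the detector label:", box)
```

In a full pipeline along the lines described in the abstract, the asset would come from an AIGC-built human library and the occluder mask from scene occupancy or depth; the sketch only shows how occupancy information turns a simple paste into a plausible partial occlusion with a matching detection label.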