Zuhao Ge, Wenhao Yu, Xian Liu, Lizhe Qi, Yunquan Sun
2022 International Joint Conference on Neural Networks (IJCNN), published 2022-07-18. DOI: 10.1109/IJCNN55064.2022.9892125
Density and Context Aware Network with Hierarchical Head for Traffic Scene Detection
We investigate traffic scene detection from surveillance cameras and UAVs. The task is challenging, mainly because vehicles cluster non-uniformly in space, vary widely in scale, and follow an imbalanced instance-level distribution. Most existing methods that employ an FPN to enrich features are prone to fail in this scenario. To mitigate these problems, we propose a novel detector, the Density and Context Aware Network (DCANet), which focuses on dense regions and adaptively aggregates context features. Specifically, DCANet consists of three components: Density Map Supervision (DMP), Context Feature Aggregation (CFA), and a Hierarchical Head Module (HHM). DMP captures the clustering of objects under the supervision of density maps. CFA exploits the relationships between adjacent feature layers to enhance ROI-level contextual information. Finally, HHM classifies and localizes objects from the imbalanced distribution with hierarchical heads. Without bells and whistles, DCANet can be plugged into any two-stage detector. Extensive experiments on two widely used traffic detection datasets, CityCam and VisDrone, show that DCANet sets new state-of-the-art scores on CityCam.
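The abstract describes two of the components at a level that can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: `density_attention` stands in for the idea of DMP (re-weighting features toward dense regions using a predicted density map), and `aggregate_context` stands in for CFA (fusing an ROI feature with the same ROI pooled from adjacent FPN levels). The function names, the residual-style re-weighting, and the fixed fusion weights are all assumptions made for illustration; the paper supervises the density branch with ground-truth density maps and learns the aggregation adaptively.

```python
import numpy as np

def density_attention(feature_map, density_map):
    """DMP-style sketch (hypothetical): re-weight a feature map by a
    density map so that dense regions receive more attention."""
    # Normalize the density map to roughly [0, 1] and use it as a
    # soft spatial mask; the residual form leaves sparse regions intact.
    d = density_map / (density_map.max() + 1e-6)
    return feature_map * (1.0 + d)

def aggregate_context(roi_feat, roi_below, roi_above, w=(0.6, 0.2, 0.2)):
    """CFA-style sketch (hypothetical): fuse an ROI feature with the
    same ROI pooled from the adjacent FPN levels. Fixed weights stand
    in for the adaptive aggregation described in the abstract."""
    return w[0] * roi_feat + w[1] * roi_below + w[2] * roi_above
```

In a real two-stage detector these operations would act on backbone/FPN tensors and ROI-aligned features rather than raw arrays; the sketch only shows the data flow the abstract implies.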