Title: Learning Crowd Scale and Distribution for Weakly Supervised Crowd Counting and Localization
Authors: Yaowu Fan; Jia Wan; Andy J. Ma
DOI: 10.1109/TCSVT.2024.3460482
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 1, pp. 713-727 (JCR Q1, Engineering, Electrical & Electronic)
Publication date: 2024-09-13
Article URL: https://ieeexplore.ieee.org/document/10680129/
Code: https://github.com/fyw1999/LCSD
Citations: 0
Abstract
The count supervision used in weakly supervised crowd counting is derived from the number of point annotations, which means the labeling cost is not effectively reduced. Moreover, because spatial information about pedestrians is unavailable during training, previous works struggle to accurately learn the positions of individuals. To address these challenges, we propose a crowd counting and localization method based on scene-specific synthetic data for surveillance scenarios, which accurately predicts the number and locations of people without any manually labeled point-wise or count-wise annotations. Our method dynamically adjusts scene-specific synthetic data to minimize domain differences from surveillance scenes by learning the crowd scale and distribution. Specifically, from realistic synthetic data, the models learn precise location and scale information, which is then used to regenerate new synthetic data with a more reasonable pedestrian distribution and scale and to generate high-quality pseudo point-wise annotations. Subsequently, the counter is trained end-to-end with our proposed robust soft-weighted loss function, under the joint supervision of auto-generated point-wise annotations on synthetic data and pseudo point-wise annotations on real data. Built on the designed weighted optimal transport, the proposed loss function effectively mitigates noise in pseudo point-wise labels, is insensitive to hyperparameters, and exhibits superior generalization ability on real data. We conduct comprehensive experiments across multiple scene-specific datasets, demonstrating our method's superiority in counting and localization performance over count-supervised, fully-supervised, and state-of-the-art domain adaptation algorithms. Code is available at https://github.com/fyw1999/LCSD.
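To make the weighted-optimal-transport idea concrete, the sketch below shows a generic entropy-regularized (Sinkhorn) optimal transport between predicted head locations and pseudo point annotations, where per-annotation confidence weights down-weight noisy pseudo labels. This is only an illustrative toy in NumPy under assumed names (`sinkhorn_weighted_ot`, the toy coordinates, and the confidence vector are all hypothetical), not the authors' actual loss from the paper.

```python
import numpy as np

def sinkhorn_weighted_ot(cost, a, b, reg=0.5, n_iters=200):
    """Entropy-regularized OT (Sinkhorn iterations) between weighted point masses.

    cost : (n, m) pairwise cost matrix
    a    : (n,) source weights (e.g., per-prediction mass)
    b    : (m,) target weights (e.g., pseudo-label confidences)
    Returns the transport plan P and the transport cost <P, cost>.
    """
    a = a / a.sum()                  # normalize both marginals to sum to 1
    b = b / b.sum()
    K = np.exp(-cost / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)            # alternate scaling updates
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan with the desired marginals
    return P, float((P * cost).sum())

# Toy example (hypothetical data): two predicted locations vs. three pseudo
# points; the third pseudo point gets low confidence, so it receives less mass.
pred = np.array([[0.0, 0.0], [1.0, 1.0]])
pseudo = np.array([[0.1, 0.0], [1.0, 0.9], [2.0, 2.0]])
conf = np.array([1.0, 1.0, 0.2])     # soft weights for pseudo annotations
cost = ((pred[:, None, :] - pseudo[None, :, :]) ** 2).sum(-1)
P, loss = sinkhorn_weighted_ot(cost, np.ones(len(pred)), conf)
```

The confidence vector `b` is the "soft weighting": shrinking an entry reduces how much transport mass (and hence gradient signal) that pseudo point can attract, which is one standard way an OT-based loss can be made robust to label noise.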
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.