VFM-Det: Towards High-Performance Vehicle Detection via Large Foundation Models
Wentao Wu, Fanghua Hong, Xiao Wang, Chenglong Li, Jin Tang
arXiv:2408.13031 (arXiv - CS - Neural and Evolutionary Computing), 2024-08-23
Abstract
Existing vehicle detectors are usually obtained by training a typical
detector (e.g., YOLO, RCNN, DETR series) on vehicle images based on a
pre-trained backbone (e.g., ResNet, ViT). Some researchers also enhance
detection performance using pre-trained large foundation models.
However, we argue that these detectors may achieve only sub-optimal results, because the
large models they use are not specifically designed for vehicles. In addition,
their results rely heavily on visual features, and they seldom consider the
alignment between the vehicle's semantic information and visual
representations. In this work, we propose a new vehicle detection paradigm
based on a pre-trained foundation vehicle model (VehicleMAE) and a large
language model (T5), termed VFM-Det. It follows the region-proposal-based
detection framework, in which the features of each proposal are enhanced using
VehicleMAE. More importantly, we propose a new VAtt2Vec module that predicts
the vehicle semantic attributes of these proposals and transforms them into
feature vectors to enhance the vision features via contrastive learning.
Extensive experiments on three vehicle detection benchmark datasets
demonstrate the effectiveness of our vehicle detector. Specifically, our model
improves the baseline approach by $+5.1\%$ and $+6.2\%$ on the $AP_{0.5}$ and
$AP_{0.75}$ metrics, respectively, on the Cityscapes dataset. The source code of
this work will be released at https://github.com/Event-AHU/VFM-Det.
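The abstract describes VAtt2Vec as aligning attribute-derived feature vectors with proposal vision features via contrastive learning. The sketch below illustrates the general contrastive-alignment idea with an InfoNCE-style loss; all shapes, function names, and the temperature value are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of contrastive alignment between proposal vision features
# and attribute-derived embeddings, in the spirit of the VAtt2Vec idea.
# The InfoNCE-style loss and all dimensions here are assumptions for
# illustration; the paper's module may differ.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize rows to unit length so dot products become cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_alignment_loss(vision_feats, attr_feats, temperature=0.07):
    """InfoNCE-style loss: the i-th proposal's vision feature should match
    the i-th attribute embedding and repel the other pairs in the batch."""
    v = l2_normalize(vision_feats)               # (N, D)
    t = l2_normalize(attr_feats)                 # (N, D)
    logits = v @ t.T / temperature               # (N, N) similarity logits
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(v))                      # positives on the diagonal
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(0)
N, D = 4, 16
vision = rng.normal(size=(N, D))
# Perfectly aligned pairs should yield a lower loss than random pairs.
aligned_loss = contrastive_alignment_loss(vision, vision)
random_loss = contrastive_alignment_loss(vision, rng.normal(size=(N, D)))
print(aligned_loss < random_loss)
```

In this kind of setup, minimizing the loss pulls each vision feature toward its matching semantic embedding, which is one common way to inject text-side semantics into visual representations.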