A Benchmark for Vision-based Multi-UAV Multi-object Tracking

Hao Shen, Xiwen Yang, D. Lin, Jianduo Chai, Jiakai Huo, Xiaofeng Xing, Shaoming He
{"title":"A Benchmark for Vision-based Multi-UAV Multi-object Tracking","authors":"Hao Shen, Xiwen Yang, D. Lin, Jianduo Chai, Jiakai Huo, Xiaofeng Xing, Shaoming He","doi":"10.1109/MFI55806.2022.9913874","DOIUrl":null,"url":null,"abstract":"Vision-based multi-sensor multi-object tracking is a fundamental task in the applications of a swarm of Unmanned Aerial Vehicles (UAVs). The benchmark datasets are critical to the development of computer vision research since they can provide a fair and principled way to evaluate various approaches and promote the improvement of corresponding algorithms. In recent years, many benchmarks have been created for single-camera single-object tracking, single-camera multi-object detection, and single-camera multi-object tracking scenarios. However, up to the best of our knowledge, few benchmarks of multi-camera multi-object tracking have been provided. In this paper, we build a dataset for multi-UAV multi-object tracking tasks to fill the gap. Several cameras are placed in the VICON motion capture system to simulate the UAV team, and several toy cars are employed to represent ground targets. The first-perspective videos from the cameras, the motion states of the cameras, and the ground truth of the objects are recorded. We also propose a metric to evaluate the performance of the multi-UAV multi-object tracking task. The dataset and the code for algorithm evaluation are available at our GitHub (https://github.com/bitshenwenxiao/MUMO).","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MFI55806.2022.9913874","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Vision-based multi-sensor multi-object tracking is a fundamental task in applications of swarms of Unmanned Aerial Vehicles (UAVs). Benchmark datasets are critical to the development of computer vision research, since they provide a fair and principled way to evaluate different approaches and drive the improvement of the corresponding algorithms. In recent years, many benchmarks have been created for single-camera single-object tracking, single-camera multi-object detection, and single-camera multi-object tracking. However, to the best of our knowledge, few benchmarks for multi-camera multi-object tracking are available. In this paper, we build a dataset for multi-UAV multi-object tracking tasks to fill this gap. Several cameras are placed in a VICON motion capture system to simulate the UAV team, and several toy cars serve as ground targets. The first-person-view videos from the cameras, the motion states of the cameras, and the ground truth of the objects are recorded. We also propose a metric to evaluate the performance of the multi-UAV multi-object tracking task. The dataset and the code for algorithm evaluation are available on our GitHub (https://github.com/bitshenwenxiao/MUMO).
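The abstract mentions a proposed metric for multi-UAV multi-object tracking but does not define it here. As a rough point of reference, the sketch below computes a conventional single-view, MOTA-style score from per-frame ground-truth and predicted positions. The greedy association, the 1.0 m distance gate, and the per-frame dictionary layout are illustrative assumptions, not the dataset's format or the paper's metric, which are specified in the MUMO repository.

```python
# Minimal sketch of a single-view MOT evaluation (MOTA-style accounting).
# The data layout, distance gate, and greedy matching are assumptions made
# for illustration only; the paper's metric is defined in the MUMO repo.
import math

def greedy_match(gt, pred, gate=1.0):
    """Greedily pair ground-truth and predicted positions within a distance gate.

    gt, pred: dicts mapping object id -> (x, y) position for one frame.
    Returns a dict of gt_id -> pred_id matches.
    """
    pairs = sorted(
        ((math.dist(g, p), gi, pi) for gi, g in gt.items() for pi, p in pred.items()),
        key=lambda t: t[0],
    )
    matches, used_gt, used_pred = {}, set(), set()
    for d, gi, pi in pairs:
        if d <= gate and gi not in used_gt and pi not in used_pred:
            matches[gi] = pi
            used_gt.add(gi)
            used_pred.add(pi)
    return matches

def mota(gt_frames, pred_frames, gate=1.0):
    """MOTA = 1 - (misses + false positives + ID switches) / total gt objects."""
    misses = false_pos = switches = total_gt = 0
    last_match = {}  # gt_id -> pred_id remembered from earlier frames
    for gt, pred in zip(gt_frames, pred_frames):
        matches = greedy_match(gt, pred, gate)
        total_gt += len(gt)
        misses += len(gt) - len(matches)
        false_pos += len(pred) - len(matches)
        switches += sum(
            1 for gi, pi in matches.items() if gi in last_match and last_match[gi] != pi
        )
        last_match.update(matches)
    return 1.0 - (misses + false_pos + switches) / max(total_gt, 1)

if __name__ == "__main__":
    gt = [{1: (0.0, 0.0), 2: (5.0, 0.0)}, {1: (0.5, 0.0), 2: (5.5, 0.0)}]
    pred = [{"a": (0.1, 0.0), "b": (5.1, 0.0)}, {"a": (0.6, 0.0), "c": (5.4, 0.0)}]
    print(f"MOTA = {mota(gt, pred):.3f}")  # 0.750: one ID switch on object 2
```

A multi-UAV metric additionally has to reconcile tracks reported by several moving cameras for the same ground target, which is the gap the benchmark's evaluation code addresses.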
Latest articles in this journal
Regression with Ensemble of RANSAC in Camera-LiDAR Fusion for Road Boundary Detection and Modeling
Global-local Feature Aggregation for Event-based Object Detection on EventKITTI
Predicting Autonomous Vehicle Navigation Parameters via Image and Image-and-Point Cloud Fusion-based End-to-End Methods
Perception-aware Receding Horizon Path Planning for UAVs with LiDAR-based SLAM
PIPO: Policy Optimization with Permutation-Invariant Constraint for Distributed Multi-Robot Navigation