{"title":"Multi-Modal Object Tracking using Dynamic Performance Metrics","authors":"S. Denman, C. Fookes, S. Sridharan, D. Ryan","doi":"10.1109/AVSS.2010.16","DOIUrl":null,"url":null,"abstract":"Intelligent surveillance systems typically use a single visual spectrum modality for their input. These systems work well in controlled conditions, but often fail when lighting is poor, or environmental effects such as shadows, dust or smoke are present. Thermal spectrum imagery is not as susceptible to environmental effects; however, thermal imaging sensors are more sensitive to noise and they are only gray scale, making distinguishing between objects difficult. Several approaches to combining the visual and thermal modalities have been proposed; however, they are limited by assuming that both modalities are performing equally well. When one modality fails, existing approaches are unable to detect the drop in performance and disregard the underperforming modality. In this paper, a novel middle fusion approach for combining visual and thermal spectrum images for object tracking is proposed. Motion and object detection is performed on each modality and the object detection results for each modality are fused based on the current performance of each modality. Modality performance is determined by comparing the number of objects tracked by the system with the number detected by each mode, with a small allowance made for objects entering and exiting the scene. The tracking performance of the proposed fusion scheme is compared with the performance of the visual and thermal modes individually, and a baseline middle fusion scheme. Improvement in tracking performance using the proposed fusion approach is demonstrated. The proposed approach is also shown to be able to detect the failure of an individual modality and disregard its results, ensuring performance is not degraded in such situations.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AVSS.2010.16","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Intelligent surveillance systems typically use a single visual spectrum modality for their input. These systems work well in controlled conditions, but often fail when lighting is poor, or environmental effects such as shadows, dust or smoke are present. Thermal spectrum imagery is not as susceptible to environmental effects; however, thermal imaging sensors are more sensitive to noise and they are only gray scale, making distinguishing between objects difficult. Several approaches to combining the visual and thermal modalities have been proposed; however, they are limited by assuming that both modalities are performing equally well. When one modality fails, existing approaches are unable to detect the drop in performance and disregard the underperforming modality. In this paper, a novel middle fusion approach for combining visual and thermal spectrum images for object tracking is proposed. Motion and object detection is performed on each modality and the object detection results for each modality are fused based on the current performance of each modality. Modality performance is determined by comparing the number of objects tracked by the system with the number detected by each mode, with a small allowance made for objects entering and exiting the scene. The tracking performance of the proposed fusion scheme is compared with the performance of the visual and thermal modes individually, and a baseline middle fusion scheme. Improvement in tracking performance using the proposed fusion approach is demonstrated. The proposed approach is also shown to be able to detect the failure of an individual modality and disregard its results, ensuring performance is not degraded in such situations.
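The abstract gives only a high-level description of the performance metric, so the following is a minimal sketch of the idea, not the paper's actual formulation: each modality is scored by how closely its detection count matches the tracker's current object count (with a small allowance for objects entering or leaving the scene), and a modality whose score falls below a threshold is disregarded. All function names, the linear scoring formula, and the threshold value are illustrative assumptions.

```python
def modality_performance(num_tracked: int, num_detected: int, allowance: int = 1) -> float:
    """Score a modality in [0, 1] by comparing its detection count to the
    number of objects the tracker currently maintains. Count differences up
    to `allowance` are forgiven (objects entering/exiting the scene).
    NOTE: this linear penalty is an assumed stand-in for the paper's metric."""
    diff = max(abs(num_detected - num_tracked) - allowance, 0)
    return max(1.0 - diff / max(num_tracked, 1), 0.0)

def fuse_detections(visual_dets, thermal_dets, num_tracked,
                    fail_threshold=0.2, allowance=1):
    """Middle fusion sketch: tag each modality's detections with that
    modality's current performance weight; a modality scoring below
    `fail_threshold` is treated as failed and its results are disregarded."""
    w_visual = modality_performance(num_tracked, len(visual_dets), allowance)
    w_thermal = modality_performance(num_tracked, len(thermal_dets), allowance)
    fused = []
    if w_visual >= fail_threshold:
        fused.extend((det, w_visual) for det in visual_dets)
    if w_thermal >= fail_threshold:
        fused.extend((det, w_thermal) for det in thermal_dets)
    return fused
```

For example, if the tracker maintains two objects, the visual mode reports both but the thermal mode reports none (e.g. the sensor has saturated), the thermal score drops to zero and only the visual detections survive fusion, which mirrors the failure-detection behaviour claimed in the abstract.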