M. Bäuml, Keni Bernardin, Mika Fischer, H. K. Ekenel, R. Stiefelhagen
In this paper, we study the use of facial appearance features for the re-identification of persons using distributed camera networks in a realistic surveillance scenario. In contrast to features commonly used for person re-identification, such as whole-body appearance, facial features offer the advantage of remaining stable over much larger intervals of time. The challenge in using faces for such applications, apart from the low resolution of captured faces, is that their appearance across camera sightings is largely influenced by lighting and viewing pose. Here, a number of techniques to address these problems are presented and evaluated on a database of surveillance-type recordings. A system for online capture and interactive retrieval is presented that allows searching for sightings of particular persons in the video database. Evaluation results are presented on surveillance data recorded with four cameras over several days. A mean average precision of 0.60 was achieved for inter-camera retrieval using just a single track as the query set, and up to 0.86 after relevance feedback by an operator.
"Multi-pose Face Recognition for Person Retrieval in Camera Networks." 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Aug. 29, 2010. doi:10.1109/AVSS.2010.42.
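The reported mean average precision (mAP) summarizes ranked retrieval quality: for each query, precision is taken at every rank where a relevant sighting appears, those precisions are averaged, and the per-query averages are then averaged over all queries. A minimal sketch of that computation (the function names and the toy relevance lists below are illustrative, not from the paper):

```python
def average_precision(ranked_relevance):
    """AP for one query: mean of precision@k over the ranks k
    at which a relevant item appears. `ranked_relevance` is a
    list of 0/1 flags in ranked order."""
    hits = 0
    precisions = []
    for k, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(per_query_relevance):
    """mAP: average of the per-query APs."""
    aps = [average_precision(r) for r in per_query_relevance]
    return sum(aps) / len(aps)

# Toy example: two queries with relevant hits at ranks 1 and 3,
# and at rank 2, respectively.
print(mean_average_precision([[1, 0, 1], [0, 1]]))  # → 0.666…
```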
Martin Godec, C. Leistner, H. Bischof, Andreas Starzacher, B. Rinner
In this paper, we introduce a fully autonomous vehicle classification system that continuously learns from large amounts of unlabeled data. For that purpose, we propose a novel online co-training method based on visual and acoustic information. Our system needs neither complicated microphone arrays nor video calibration, and it automatically adapts to specific traffic scenes, yielding specialized detectors that are more accurate and more compact than general classifiers, which allows for lightweight usage in low-cost and portable embedded systems. Hence, we implemented our system on an off-the-shelf embedded platform. In the experimental part, we show that the proposed method is able to cover the desired task and outperforms single-cue systems. Furthermore, our co-training framework minimizes the labeling effort without degrading overall system performance.
"Audio-Visual Co-Training for Vehicle Classification." 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Aug. 29, 2010. doi:10.1109/AVSS.2010.31.
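Co-training, as used here, trains one classifier per modality and lets each view pseudo-label unlabeled samples for the other, so labeled data in one view bootstraps the other. The sketch below illustrates only the general scheme, with toy nearest-centroid classifiers on one-dimensional "visual" and "acoustic" features; the class names, confidence margin, and update schedule are assumptions for illustration, not the authors' implementation.

```python
class CentroidClassifier:
    """Toy one-feature classifier: predicts the class of the nearest centroid."""
    def fit(self, xs, ys):
        self.centroids = {}
        for label in set(ys):
            pts = [x for x, y in zip(xs, ys) if y == label]
            self.centroids[label] = sum(pts) / len(pts)
        return self

    def predict(self, x):
        return min(self.centroids, key=lambda l: abs(x - self.centroids[l]))

    def confidence(self, x):
        # Margin between the two nearest centroids; larger = more confident.
        d = sorted(abs(x - c) for c in self.centroids.values())
        return d[1] - d[0]

def co_train(xa, ya, xb, yb, ua, ub, rounds=2):
    """Each round, every view pseudo-labels its most confident unlabeled
    sample and hands that label to the *other* view's training set."""
    xa, ya, xb, yb = list(xa), list(ya), list(xb), list(yb)
    pool = list(zip(ua, ub))                # paired (visual, acoustic) samples
    clf_a = CentroidClassifier().fit(xa, ya)
    clf_b = CentroidClassifier().fit(xb, yb)
    for _ in range(rounds):
        for src, dst_x, dst_y, view in ((clf_a, xb, yb, 0), (clf_b, xa, ya, 1)):
            if not pool:
                break
            i = max(range(len(pool)), key=lambda j: src.confidence(pool[j][view]))
            pair = pool.pop(i)
            dst_x.append(pair[1 - view])    # feature in the other modality
            dst_y.append(src.predict(pair[view]))
        clf_a = CentroidClassifier().fit(xa, ya)
        clf_b = CentroidClassifier().fit(xb, yb)
    return clf_a, clf_b
```

With two labeled seeds per view (e.g. "car" near 1.0, "truck" near 9.0) and unlabeled pairs at 2.0 and 8.0, both classifiers absorb the pseudo-labeled samples and still separate the classes, which is the effect the paper exploits to cut labeling effort.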
Recently, local stereo matching has made large progress through the introduction of adaptive support weights. In this paper, we aim at eliminating the negative effects of occlusions by proposing an occlusion-based method to improve traditional support weights. The weights of occluded points are greatly reduced while computing matching costs, initial disparities, and final disparities. Experimental results on the Middlebury images demonstrate that our method is very effective in improving the disparities of points around occluded areas and depth discontinuities. According to the Middlebury benchmark, the proposed algorithm is now the top performer among local stereo methods. Moreover, this approach can easily be integrated into nearly all existing support-weight strategies.
Wei Wang, Caiming Zhang, Xia Hu, Weitao Li. "Occlusion-Aided Weights for Local Stereo Matching." 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Aug. 29, 2010. doi:10.1109/AVSS.2010.37.
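Adaptive support-weight aggregation (in the style of Yoon and Kweon) weights each pixel in the matching window by its color similarity and spatial proximity to the window center; the occlusion-aided idea is to additionally shrink the weights of points flagged as occluded so they contribute little to the matching cost. A rough sketch under simplifying assumptions (grayscale intensities, a precomputed hard occlusion mask, illustrative parameter values — none of these specifics are from the paper):

```python
import math

def adaptive_weight(img, p, q, gamma_c=10.0, gamma_g=7.0):
    """Support weight from intensity similarity and spatial proximity."""
    dc = abs(img[p[0]][p[1]] - img[q[0]][q[1]])   # intensity difference
    dg = math.hypot(p[0] - q[0], p[1] - q[1])     # Euclidean distance
    return math.exp(-dc / gamma_c - dg / gamma_g)

def aggregated_cost(img, raw_cost, occluded, p, d, radius=1, eps=0.05):
    """Weighted mean of per-pixel matching costs raw_cost[y][x][d] over a
    square window around p. Occluded neighbours keep only a small
    fraction `eps` of their weight, so they barely influence the cost."""
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            q = (p[0] + dy, p[1] + dx)
            if not (0 <= q[0] < len(img) and 0 <= q[1] < len(img[0])):
                continue
            w = adaptive_weight(img, p, q)
            if occluded[q[0]][q[1]]:
                w *= eps          # greatly reduce occluded points' influence
            num += w * raw_cost[q[0]][q[1]][d]
            den += w
    return num / den
```

Flagging a high-cost occluded neighbour pulls the aggregated cost toward the reliable (visible) support pixels, which is the mechanism the abstract credits for cleaner disparities near occlusions and depth discontinuities.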
This paper presents the results of the crowd image analysis challenge of the PETS 2010 workshop. The evaluation was carried out using a selection of the metrics developed in the Video Analysis and Content Extraction (VACE) program and the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The PETS 2010 evaluation was performed using new ground truth created from each independent two-dimensional view. In addition, the submissions to PETS 2009 and Winter-PETS 2009 were evaluated and included in the results. The evaluation highlights the detection and tracking performance of the authors' systems in terms of precision, accuracy, and robustness.
A. Ellis, J. Ferryman. "PETS2010 and PETS2009 Evaluation of Results Using Individual Ground Truthed Single Views." 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Aug. 29, 2010. doi:10.1109/AVSS.2010.89.
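The CLEAR consortium's tracking metrics referenced above reduce performance to two headline numbers: MOTA (accuracy) penalizes misses, false positives, and identity switches relative to the number of ground-truth objects, while MOTP (precision) averages the spatial alignment of matched object–hypothesis pairs. A minimal sketch of the standard definitions (the example counts are made up for illustration):

```python
def mota(misses, false_positives, id_switches, num_gt):
    """CLEAR MOT accuracy: 1 minus the total error count normalised by
    the number of ground-truth objects; can go negative for poor trackers."""
    return 1.0 - (misses + false_positives + id_switches) / num_gt

def motp(total_distance, num_matches):
    """CLEAR MOT precision: mean localisation distance (or overlap score)
    over all matched object-hypothesis pairs."""
    return total_distance / num_matches

# Illustrative run: 100 ground-truth objects, 10 misses, 5 false
# positives, 1 identity switch; 50 matches with total overlap 45.0.
print(mota(10, 5, 1, 100))   # → 0.84
print(motp(45.0, 50))        # → 0.9
```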