
Latest publications: 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance

Multi-pose Face Recognition for Person Retrieval in Camera Networks
M. Bäuml, Keni Bernardin, Mika Fischer, H. K. Ekenel, R. Stiefelhagen
In this paper, we study the use of facial appearance features for the re-identification of persons using distributed camera networks in a realistic surveillance scenario. In contrast to features commonly used for person re-identification, such as whole-body appearance, facial features offer the advantage of remaining stable over much larger intervals of time. The challenge in using faces for such applications, apart from low captured face resolutions, is that their appearance across camera sightings is largely influenced by lighting and viewing pose. Here, a number of techniques to address these problems are presented and evaluated on a database of surveillance-type recordings. A system for online capture and interactive retrieval is presented that allows searching for sightings of particular persons in the video database. Evaluation results are presented on surveillance data recorded with four cameras over several days. A mean average precision of 0.60 was achieved for inter-camera retrieval using just a single track as the query set, and up to 0.86 after relevance feedback by an operator.
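The retrieval quality above is reported as mean average precision (mAP). A minimal sketch of how that metric is computed over ranked retrieval results — toy data and function names are illustrative, not the authors' evaluation code, and it assumes every relevant track appears somewhere in the ranked list:

```python
def average_precision(ranked_hits):
    """AP for one query: ranked_hits is a list of booleans, True where the
    retrieved track belongs to the queried person. Assumes all relevant
    tracks appear in the ranked list."""
    hits, precisions = 0, []
    for rank, is_hit in enumerate(ranked_hits, start=1):
        if is_hit:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant rank
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(results_per_query):
    """mAP: the mean of per-query average precisions."""
    return sum(average_precision(r) for r in results_per_query) / len(results_per_query)

# Two toy queries: hits at ranks 1 and 3, then a hit at rank 2.
print(mean_average_precision([[True, False, True], [False, True]]))
```

A single query track as in the paper corresponds to one entry in `results_per_query`; operator relevance feedback would re-rank the list and recompute.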
{"title":"Multi-pose Face Recognition for Person Retrieval in Camera Networks","authors":"M. Bäuml, Keni Bernardin, Mika Fischer, H. K. Ekenel, R. Stiefelhagen","doi":"10.1109/AVSS.2010.42","DOIUrl":"https://doi.org/10.1109/AVSS.2010.42","url":null,"abstract":"In this paper, we study the use of facial appearancefeatures for the re-identification of persons using distributedcamera networks in a realistic surveillance scenario.In contrast to features commonly used for person reidentification,such as whole body appearance, facial featuresoffer the advantage of remaining stable over muchlarger intervals of time. The challenge in using faces forsuch applications, apart from low captured face resolutions,is that their appearance across camera sightings is largelyinfluenced by lighting and viewing pose. Here, a numberof techniques to address these problems are presented andevaluated on a database of surveillance-type recordings. Asystem for online capture and interactive retrieval is presentedthat allows to search for sightings of particular personsin the video database. Evaluation results are presentedon surveillance data recorded with four cameras over severaldays. A mean average precision of 0.60 was achievedfor inter-camera retrieval using just a single track as queryset, and up to 0.86 after relevance feedback by an operator.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123676336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 64
Audio-Visual Co-Training for Vehicle Classification
Martin Godec, C. Leistner, H. Bischof, Andreas Starzacher, B. Rinner
In this paper, we introduce a fully autonomous vehicle classification system that continuously learns from large amounts of unlabeled data. For that purpose, we propose a novel on-line co-training method based on visual and acoustic information. Our system does not need complicated microphone arrays or video calibration and automatically adapts to specific traffic scenes. These specialized detectors are more accurate and more compact than general classifiers, which allows for light-weight usage in low-cost and portable embedded systems. Hence, we implemented our system on an off-the-shelf embedded platform. In the experimental part, we show that the proposed method is able to cover the desired task and outperforms single-cue systems. Furthermore, our co-training framework minimizes the labeling effort without degrading the overall system performance.
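A hedged sketch of the two-view co-training idea described above: each view (visual, acoustic) trains its own classifier, and in every round each view pseudo-labels its most confident unlabeled samples for the shared pool. The nearest-centroid classifier, the margin-based confidence, and all names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class CentroidClassifier:
    """Toy stand-in for the per-view classifiers: nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_with_confidence(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        sd = np.sort(d, axis=1)
        conf = sd[:, 1] - sd[:, 0]  # margin between best and second-best centroid
        return self.classes_[idx], conf

def co_train(Xv, Xa, y_labeled, labeled_idx, unlabeled_idx, rounds=3, k=2):
    """Each round, every view labels its k most confident unlabeled samples;
    the pseudo-labels are added to a shared training pool."""
    pool_idx, pool_y = list(labeled_idx), list(y_labeled)
    unl = list(unlabeled_idx)
    for _ in range(rounds):
        if not unl:
            break
        for X in (Xv, Xa):  # alternate between the visual and acoustic view
            clf = CentroidClassifier().fit(X[pool_idx], np.array(pool_y))
            preds, conf = clf.predict_with_confidence(X[unl])
            best = np.argsort(-conf)[:k]
            for b in sorted(best, reverse=True):  # delete from the back
                pool_idx.append(unl[b])
                pool_y.append(preds[b])
                del unl[b]
            if not unl:
                break
    return pool_idx, pool_y
```

In the paper's setting the pool would keep growing online from traffic observations, which is what minimizes the manual labeling effort.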
{"title":"Audio-Visual Co-Training for Vehicle Classification","authors":"Martin Godec, C. Leistner, H. Bischof, Andreas Starzacher, B. Rinner","doi":"10.1109/AVSS.2010.31","DOIUrl":"https://doi.org/10.1109/AVSS.2010.31","url":null,"abstract":"In this paper, we introduce a fully autonomous vehicleclassification system that continuously learns from largeamounts of unlabeled data. For that purpose, we proposea novel on-line co-training method based on visual andacoustic information. Our system does not need complicatedmicrophone arrays or video calibration and automaticallyadapts to specific traffic scenes. These specialized detectorsare more accurate and more compact than generalclassifiers, which allows for light-weight usage in low-costand portable embedded systems. Hence, we implementedour system on an off-the-shelf embedded platform. In the experimentalpart, we show that the proposed method is ableto cover the desired task and outperforms single-cue systems.Furthermore, our co-training framework minimizesthe labeling effort without degrading the overall system performance.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116647617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 6
Occlusion-Aided Weights for Local Stereo Matching
Wei Wang, Caiming Zhang, Xia Hu, Weitao Li
Recently, local stereo matching has experienced large progress by the introduction of adaptive support-weights. In this paper, we aim at eliminating negative effects of occlusions by proposing an occlusion-based method to improve traditional support weights. Weights of occluded points are greatly reduced while computing matching costs, initial disparities and final disparities. Experimental results on the Middlebury images demonstrate that our method is very effective in improving disparities of points around occluded areas and depth discontinuities. According to the Middlebury benchmark, the proposed algorithm is now the top performer among local stereo methods. Moreover, this approach can be easily integrated into nearly all existing support weights strategies.
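A minimal sketch of the core idea — down-weighting pixels flagged as occluded during support-weight cost aggregation. It uses grayscale patches, a simple color-similarity weight, and weighted SAD; the exact weight form, the occlusion factor, and applying the mask only to the left view are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def support_weights(patch, center, gamma_c=10.0, occluded=None, occ_factor=0.1):
    """Color-similarity support weights for a square grayscale patch.
    Weights of pixels flagged as occluded are scaled down by occ_factor."""
    w = np.exp(-np.abs(patch - center) / gamma_c)
    if occluded is not None:
        w = np.where(occluded, occ_factor * w, w)
    return w

def aggregated_cost(left_patch, right_patch, occluded):
    """Weighted SAD between corresponding patches; the occlusion mask is
    applied to the left view's weights only (an illustrative choice)."""
    h, w_ = left_patch.shape
    cl = left_patch[h // 2, w_ // 2]           # center pixel, left view
    cr = right_patch[h // 2, w_ // 2]          # center pixel, right view
    w = (support_weights(left_patch, cl, occluded=occluded)
         * support_weights(right_patch, cr))
    raw = np.abs(left_patch - right_patch)     # per-pixel matching cost
    return (w * raw).sum() / w.sum()           # normalized aggregation
```

With a corrupted (occluded) corner pixel, masking it out lowers the aggregated cost, which is how occluded points stop dragging disparity estimates near depth discontinuities.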
{"title":"Occlusion-Aided Weights for Local Stereo Matching","authors":"Wei Wang, Caiming Zhang, Xia Hu, Weitao Li","doi":"10.1109/AVSS.2010.37","DOIUrl":"https://doi.org/10.1109/AVSS.2010.37","url":null,"abstract":"Recently, local stereo matching has experienced largeprogress by the introduction of adaptive support-weights. Inthis paper, we aim at eliminating negative effects of occlusionsby proposing an occlusion-based method to improvetraditional support weights. Weights of occluded points aregreatly reduced while computing matching costs, initial disparitiesand final disparities. Experimental results on the Middlebury images demonstratethat our method is very effective in improving disparitiesof points around occluded areas and depth discontinuities.According to the Middlebury benchmark, theproposed algorithm is now the top performer among localstereo methods. Moreover, this approach can be easily integratedinto nearly all existing support weights strategies.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126964496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
PETS2010 and PETS2009 Evaluation of Results Using Individual Ground Truthed Single Views
A. Ellis, J. Ferryman
This paper presents the results of the crowd image analysis challenge of the PETS2010 workshop. The evaluation was carried out using a selection of the metrics developed in the Video Analysis and Content Extraction (VACE) program and the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The PETS 2010 evaluation was performed using new ground truthing created from each independent two-dimensional view. In addition, the performance of the submissions to PETS 2009 and Winter-PETS 2009 was evaluated and included in the results. The evaluation highlights the detection and tracking performance of the authors' systems in areas such as precision, accuracy and robustness.
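One of the standard CLEAR tracking metrics used in such evaluations is MOTA (multiple object tracking accuracy), which folds the three error types into a single score. A minimal sketch, with illustrative counts rather than any figures from the paper:

```python
def mota(false_negatives, false_positives, id_switches, total_gt):
    """CLEAR MOT accuracy: 1 minus the ratio of all tracking errors
    (misses, false alarms, identity switches) to the total number of
    ground-truth object instances over the sequence."""
    return 1.0 - (false_negatives + false_positives + id_switches) / total_gt

# Toy sequence: 100 ground-truth instances, 10 misses, 5 false alarms, 2 switches.
print(mota(10, 5, 2, 100))
```

MOTA can go negative when a tracker produces more errors than there are ground-truth objects, which is why it is usually paired with a precision metric such as MOTP.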
{"title":"PETS2010 and PETS2009 Evaluation of Results Using Individual Ground Truthed Single Views","authors":"A. Ellis, J. Ferryman","doi":"10.1109/AVSS.2010.89","DOIUrl":"https://doi.org/10.1109/AVSS.2010.89","url":null,"abstract":"This paper presents the results of the crowd image analysis challenge of the PETS2010 workshop. The evaluation was carried out using a selection of the metrics developed in the Video Analysis and Content Extraction (VACE) program and the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The PETS 2010 evaluation was performed using new ground truthing create from each independant two dimensional view. In addition, the performance of the submissions to the PETS 2009 and Winter-PETS 2009 were evaluated and included in the results. The evaluation highlights the detection and tracking performance of the authors’ systems in areas such as precision, accuracy and robustness.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127415004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 69