
Latest publications: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance

An efficient method for detecting ghost and left objects in surveillance video
Pub Date : 2007-09-05 DOI: 10.1142/S021800140900765X
Sijun Lu, Jian Zhang, D. Feng
This paper proposes an efficient method for detecting ghost and left objects in surveillance video, which, if not identified, may lead to errors or wasted computation in background modeling and object tracking in surveillance systems. The method contains two main steps: the first detects stationary objects, narrowing the evaluation targets down to a small number of foreground blobs; the second discriminates each candidate as either a ghost or a left object. For the first step, we introduce a novel stationary object detection method based on continuous object tracking and shape matching. For the second step, we propose a fast and robust inpainting method that differentiates between ghost and left objects by constructing the real background from the candidate's corresponding regions in the input and background images. The effectiveness of our method has been validated by experiments over a variety of video sequences.
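The second step can be illustrated with a toy sketch: if the candidate blob's region in the current frame matches a background reconstructed from its surroundings, the blob is a ghost; otherwise a real object was left behind. The mean-of-surroundings estimate below is a deliberately crude stand-in for the paper's inpainting, and all values are hypothetical:

```python
import numpy as np

def classify_candidate(frame, background, mask, threshold=10.0):
    """Classify a stationary foreground blob as a ghost or a left object.

    A ghost exists only in the background model (the real object has gone),
    so inside the blob the current frame should match the true background;
    a left object differs from it.  The 'true' background inside the blob is
    approximated here by the mean of background pixels outside the mask, a
    crude stand-in for the paper's inpainting step.
    """
    estimate = background[~mask].mean()           # inpainted background estimate
    diff = np.abs(frame[mask] - estimate).mean()  # mismatch inside the blob
    return "ghost" if diff < threshold else "left object"

# toy 8x8 greyscale scene with a flat background of 100
background = np.full((8, 8), 100.0)
mask = np.zeros((8, 8), dtype=bool)
mask[3:6, 3:6] = True

ghost_frame = np.full((8, 8), 100.0)   # scene already matches the background
left_frame = background.copy()
left_frame[mask] = 200.0               # a real object remains in the blob
```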
Cited by: 26
Acoustic Doppler sonar for gait recognition
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425281
K. Kalgaonkar, B. Raj
A person's gait is a characteristic that might be employed to identify him or her automatically. Conventionally, automatic gait-based identification of subjects employs video and image processing to characterize gait. In this paper we present an Acoustic Doppler Sensor (ADS) based technique for the characterization of gait. The ADS is a very inexpensive sensor that can be built from off-the-shelf components for under US$20 at today's prices. We show that remarkably good gait recognition is possible with the ADS.
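The physics the sensor exploits can be shown in a few lines: limbs moving at different radial velocities shift a reflected tone by different amounts, producing a gait-specific spectral signature. The carrier frequency and limb speeds below are illustrative assumptions, not values from the paper:

```python
# Round-trip Doppler shift of a continuous tone reflected off a moving limb.
# The 40 kHz carrier and the limb speeds are illustrative, not taken from
# the paper.
C_SOUND = 343.0    # speed of sound in air, m/s
F_CARRIER = 40e3   # emitted ultrasonic tone, Hz

def doppler_shift(v):
    """Frequency shift (Hz) for a reflector at radial velocity v (m/s); the
    factor 2 accounts for the emitter -> limb -> receiver round trip."""
    return 2.0 * v * F_CARRIER / C_SOUND

# during a stride the torso moves at roughly 1 m/s while the swinging foot
# peaks near 3 m/s, so their reflections occupy distinct spectral bands
torso_shift = doppler_shift(1.0)
foot_shift = doppler_shift(3.0)
```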
Cited by: 77
Classifying and tracking multiple persons for proactive surveillance of mass transport systems
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425303
Suyu Kong, Conrad Sanderson, B. Lovell
We describe a pedestrian classification and tracking system that is able to track and label multiple people in an outdoor environment such as a railway station. The features selected for appearance modelling are circular colour histograms for the hue and conventional colour histograms for the saturation and value components. We combine blob matching with a particle filter for tracking and augment these algorithms with colour appearance models to track multiple people in the presence of occlusion. In the object classification stage, hierarchical chamfer matching combined with particle filtering is applied to classify commuters in the railway station into several classes. Classes of interest include normal commuters, commuters with backpacks, commuters with suitcases, and mothers with their children.
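The circular hue histogram used for appearance modelling differs from an ordinary histogram in that its first and last bins are adjacent on the colour wheel. A minimal sketch, with an illustrative bin count:

```python
import numpy as np

def circular_hue_histogram(hues_deg, n_bins=12):
    """Histogram of hue angles (degrees) treated as points on a circle.

    Hue is periodic, so bins are centred on multiples of the bin width and
    indices wrap modulo n_bins: 359 deg and 1 deg land in the same bin
    instead of at opposite ends of a linear histogram.
    """
    h = np.asarray(hues_deg, dtype=float) % 360.0
    width = 360.0 / n_bins
    bins = (((h + width / 2.0) // width) % n_bins).astype(int)
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    return hist / hist.sum()

# four reddish pixels straddling the 0/360 wrap point share bin 0;
# one green pixel (120 deg) falls in its own bin
hist = circular_hue_histogram([359.0, 1.0, 2.0, 358.0, 120.0])
```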
Cited by: 15
Scream and gunshot detection and localization for audio-surveillance systems
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425280
G. Valenzise, L. Gerosa, M. Tagliasacchi, F. Antonacci, A. Sarti
This paper describes an audio-based video surveillance system which automatically detects anomalous audio events in a public square, such as screams or gunshots, and localizes the position of the acoustic source so that a video camera can be steered accordingly. The system employs two parallel GMM classifiers for discriminating screams from noise and gunshots from noise, respectively. Each classifier is trained on different features, chosen from a set of both conventional and novel audio features. The location of the acoustic source that produced the sound event is estimated by computing the time differences of arrival of the signal at a microphone array and applying a linear-correction least-squares localization algorithm. Experimental results show that our system can detect events with a precision of 93% at a false rejection rate of 5% when the SNR is 10 dB, while the source direction can be estimated with a precision of one degree. A real-time implementation of the system is to be installed in a public square in Milan.
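The localization step rests on time differences of arrival at the microphone array. The sketch below recovers a delay from the peak of a cross-correlation for a two-microphone case and converts it to a far-field bearing; sample rate and spacing are illustrative assumptions, and the paper's linear-correction least-squares solver over a full array is not reproduced:

```python
import numpy as np

FS = 16000          # sample rate, Hz (illustrative)
C = 343.0           # speed of sound, m/s
MIC_SPACING = 0.5   # distance between the two microphones, m (illustrative)

def tdoa_bearing(x1, x2):
    """Estimate the time difference of arrival between two microphone
    channels from the peak of their cross-correlation, then convert it to
    a far-field bearing relative to the array broadside."""
    corr = np.correlate(x2, x1, mode="full")
    lag = int(np.argmax(corr)) - (len(x1) - 1)   # >0: mic 1 heard it first
    delay = lag / FS
    angle = np.degrees(np.arcsin(np.clip(C * delay / MIC_SPACING, -1.0, 1.0)))
    return delay, angle

# impulse reaching mic 1 twelve samples before mic 2
x1 = np.zeros(400); x1[100] = 1.0
x2 = np.zeros(400); x2[112] = 1.0
delay, angle = tdoa_bearing(x1, x2)
```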
Cited by: 366
The Intelligent vision sensor: Turning video into information
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425287
A. Lipton, John I. W. Clark, B. Thompson, Gary Myers, S. Titus, Zhong Zhang, P. L. Venetianer
Video analytics for security and surveillance applications is becoming commonplace. Advances in algorithm robustness and low-cost video platforms have allowed analytics to become an ingredient for many different devices ranging from cameras to encoders to routers to storage. As algorithms become more refined, the analytics paradigm shifts from a human-support model to an automation model. In this context, ObjectVideo, the leader in intelligent video, has created a new concept in video analytics devices - the intelligent vision sensor (IVS). This device consists of a video imager and lens combined with an onboard processor and communication channel. This low-cost device turns video imagery into actionable information that can be used in building automation and business intelligence applications. This paper describes the technical and market drivers that facilitate the creation and adoption of the IVS device as well as a specific case study involving an application for heating, ventilation, and air conditioning (HVAC) and lighting control.
Cited by: 15
ETISEO, performance evaluation for video surveillance systems
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425357
Anh-Tuan Nghiem, F. Brémond, M. Thonnat, V. Valentin
This paper presents the results of ETISEO, a performance evaluation project for video surveillance systems. Many other projects have already evaluated the performance of video surveillance systems, but mostly from an end-user point of view. ETISEO aims at studying the dependency between algorithms and video characteristics. First, we describe the ETISEO methodology, which consists of addressing each video processing problem separately. Second, we present the main evaluation metrics of ETISEO as well as their benefits, limitations and conditions of use. Finally, we discuss the contributions of ETISEO to the evaluation community.
Cited by: 158
A LQR spatiotemporal fusion technique for face profile collection in smart camera surveillance
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425338
Chung-Ching Chang, H. Aghajan
In this paper, we propose a joint face orientation estimation technique for face profile collection in smart camera networks. The system is composed of in-node coarse estimation and joint refined estimation between cameras. In-node signal processing algorithms are designed to be lightweight to reduce computation load, yielding coarse estimates which may be erroneous. The proposed model-based technique determines the orientation and the angular motion of the face using two features, namely the hair-face ratio and the head optical flow. These features yield an estimate of the face orientation and the angular velocity through least squares (LS) analysis. In the joint refined estimation step, a discrete-time linear dynamical model is defined. Spatiotemporal consistency between cameras is measured by a cost function, which is minimized through linear quadratic regulation (LQR) to yield a robust closed-loop feedback system that estimates the face orientation, angular motion, and relative angular difference to the face between cameras. Based on the face orientation estimates, a collection of face profiles is accumulated over time as the human subject moves around. The proposed technique does not require camera locations to be known a priori, and hence is applicable to vision networks deployed casually without localization.
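The LQR machinery referred to above reduces to solving a discrete Riccati equation for a feedback gain. A minimal sketch on a toy two-state orientation model; the matrices are assumptions for illustration, not the paper's fusion model:

```python
import numpy as np

# Toy head-orientation model: state = [angle, angular velocity], unit time
# step; A, B, Q, R are illustrative choices, not the paper's.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])   # penalise orientation error most
R = np.array([[1.0]])     # penalise large corrections

def dlqr(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by fixed-point
    iteration and return the optimal state-feedback gain K (u = -K x)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = dlqr(A, B, Q, R)
# a valid LQR gain places all closed-loop eigenvalues inside the unit circle
spectral_radius = max(abs(np.linalg.eigvals(A - B @ K)))
```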
Cited by: 4
Midground object detection in real world video scenes
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425364
B. Valentine, S. Apewokin, L. Wills, D. S. Wills, A. Gentile
Traditional video scene analysis depends on accurate background modeling to identify salient foreground objects. However, in many important surveillance applications, saliency is defined by the appearance of a new non-ephemeral object that is between the foreground and background. This midground realm is defined by a temporal window following the object's appearance; but it also depends on adaptive background modeling to allow detection with scene variations (e.g., occlusion, small illumination changes). The human visual system is ill-suited for midground detection. For example, when surveying a busy airline terminal, it is difficult (but important) to detect an unattended bag which appears in the scene. This paper introduces a midground detection technique which emphasizes computational and storage efficiency. The approach uses a new adaptive, pixel-level modeling technique derived from existing backgrounding methods. Experimental results demonstrate that this technique can accurately and efficiently identify midground objects in real-world scenes, including PETS2006 and AVSS2007 challenge datasets.
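The temporal-window idea can be sketched at pixel level: flag a pixel once it has deviated from the background and then stayed stable for a while, and absorb it into the background if it persists much longer. The thresholds and update rule below are illustrative, not the paper's adaptive model:

```python
import numpy as np

class MidgroundDetector:
    """Per-pixel sketch of midground detection: a pixel that deviates from
    the long-term background and has held its new value for between T_MIN
    and T_MAX frames is midground; beyond T_MAX it is absorbed into the
    background.  Thresholds and update rule are illustrative only."""

    T_MIN, T_MAX, TOL = 5, 50, 10.0

    def __init__(self, background):
        self.bg = background.astype(float)
        self.last = self.bg.copy()
        self.age = np.zeros(background.shape, dtype=int)

    def update(self, frame):
        frame = frame.astype(float)
        stable = np.abs(frame - self.last) < self.TOL   # unchanged since last frame
        self.age = np.where(stable, self.age + 1, 0)
        self.last = frame
        deviates = np.abs(frame - self.bg) > self.TOL
        midground = deviates & (self.age >= self.T_MIN) & (self.age < self.T_MAX)
        absorb = deviates & (self.age >= self.T_MAX)    # long-stable: now background
        self.bg[absorb] = frame[absorb]
        return midground

# a bag-like patch appears and stays put; it is flagged only after T_MIN frames
det = MidgroundDetector(np.zeros((4, 4)))
frame = np.zeros((4, 4)); frame[1:3, 1:3] = 100.0
for _ in range(5):
    flags = det.update(frame)
early_any = flags.any()          # still below T_MIN after five frames
flags = det.update(frame)        # sixth frame: age reaches T_MIN
```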
Cited by: 14
Multitarget association and tracking in 3-D space based on particle filter with joint multitarget probability density
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425374
Jinseok Lee, Byung Guk Kim, S. Cho, Sangjin Hong, W. Cho
This paper addresses the problem of 3-dimensional (3D) multitarget tracking using a particle filter with the joint multitarget probability density (JMPD) technique. The estimation allows nonlinear target motion with unlabeled measurement association, as well as non-Gaussian target state densities. In addition, we decompose the 3D formulation into multiple 2D particle filters that operate on 2D planes. Both the selection and the combining of the 2D particle filters for 3D tracking are presented and discussed. Finally, we analyze the tracking and association performance of the proposed approach, especially in cases of multitarget crossing and overlapping.
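A single-target bootstrap particle filter shows the predict/weight/resample cycle that underlies the approach; the JMPD machinery for multiple unlabeled targets and the 2D-plane decomposition are not reproduced here, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(measurements, n=2000, q=0.1, r=0.5):
    """Bootstrap particle filter for one 1-D constant-velocity target,
    state = (position, velocity): a minimal predict / weight / resample
    sketch, not the paper's JMPD formulation."""
    parts = np.zeros((n, 2))
    parts[:, 0] = rng.normal(measurements[0], 1.0, n)   # seed around first fix
    estimates = []
    for z in measurements:
        parts[:, 0] += parts[:, 1]                 # predict with each velocity
        parts += rng.normal(0.0, q, parts.shape)   # process noise
        w = np.exp(-0.5 * ((z - parts[:, 0]) / r) ** 2)  # Gaussian likelihood
        w /= w.sum()
        parts = parts[rng.choice(n, n, p=w)]       # resample by weight
        estimates.append(parts[:, 0].mean())
    return np.array(estimates)

truth = np.arange(20, dtype=float)               # target moves +1 per step
meas = truth + rng.normal(0.0, 0.5, truth.size)  # noisy position fixes
est = particle_filter(meas)
```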
Cited by: 2
Sphere detection and tracking for a space capturing operation
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425307
M. Kharbat, N. Aouf, A. Tsourdos, B. White
Capture mechanisms are used to transfer objects between two vehicles in space with no physical contact. A sphere (canister) detection and tracking method using an enhanced Hough transform technique and an H-infinity filter is proposed. The presented system aims to assist in the capture operation currently being investigated by the European Space Agency and other partners, and to be used in space missions as an alternative to docking or berthing operations. Test results show the robustness and reliability of the proposed method. They also demonstrate the low computational and memory complexity required.
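The Hough stage can be sketched as centre voting at a known radius, which is reasonable for a canister of fixed size; the enhancement described in the paper and the H-infinity tracking filter are omitted, and the geometry below is synthetic:

```python
import numpy as np

def hough_circle_centre(edge_points, radius, shape, n_angles=180):
    """Hough vote for a circle centre at a known radius: each edge point
    votes for every centre lying `radius` away from it, and the accumulator
    peak is the detected centre.  This sketches only the detection stage."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for x, y in edge_points:
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)   # unbuffered: repeated votes count
    return np.unravel_index(np.argmax(acc), shape)

# synthetic edge ring of radius 10 centred at (25, 30)
ring = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
points = [(25 + 10 * np.cos(a), 30 + 10 * np.sin(a)) for a in ring]
centre = hough_circle_centre(points, 10, (64, 64))
```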
Cited by: 6