Latest publications: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
Compact representation and probabilistic classification of human actions in videos
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425334
C. Colombo, Dario Comanducci, A. Bimbo
This paper addresses the problem of classifying human actions in a video sequence. A representation eigenspace approach based on the PCA algorithm is used to train the classifier according to an incremental learning scheme based on a "one action, one eigenspace" approach. Before dimensionality reduction, a high dimensional description of each frame of the video sequence is constructed, based on foreground blob analysis. Classification is performed by matching incrementally the reduced representation of the test image sequence against each of the learned ones, and accumulating matching scores according to a probabilistic framework, until a decision is obtained. Experimental results with real video sequences are presented and discussed.
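The "one action, one eigenspace" scheme described above can be sketched as follows: each action class gets its own PCA basis, and a test sequence accumulates per-frame evidence against every learned eigenspace until a decision is made. This is an illustrative sketch only — the function names, the softmax-over-errors scoring, and the log-evidence accumulation are assumptions, not the authors' implementation:

```python
import numpy as np

def fit_eigenspace(frames, k=3):
    """Build a low-dimensional PCA eigenspace for one action class."""
    X = np.asarray(frames, dtype=float)          # (n_frames, dim)
    mean = X.mean(axis=0)
    # SVD of the centred data yields the principal axes directly.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                          # top-k eigenvectors

def reconstruction_error(frame, mean, basis):
    """Distance between a frame and its projection onto the eigenspace."""
    centred = np.asarray(frame, dtype=float) - mean
    coeffs = basis @ centred
    return np.linalg.norm(centred - basis.T @ coeffs)

def classify_sequence(test_frames, models):
    """Accumulate per-frame matching evidence and pick the best action."""
    scores = {name: 0.0 for name in models}
    for frame in test_frames:
        errs = {name: reconstruction_error(frame, m, b)
                for name, (m, b) in models.items()}
        # Softmax over negative errors: a crude probabilistic score.
        vals = np.array(list(errs.values()))
        probs = np.exp(-vals) / np.exp(-vals).sum()
        for name, p in zip(errs, probs):
            scores[name] += np.log(p + 1e-12)    # log-evidence accumulation
    return max(scores, key=scores.get)
```

In the real system each frame vector would come from foreground blob analysis; here any fixed-length descriptor works.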
Citations: 6
Watershed algorithm for moving object extraction considering energy minimization by snakes
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425367
K. Imamura, Masaki Hiraoka, H. Hashimoto
MPEG-4, which is a video coding standard, supports object-based functionalities for high-efficiency coding. MPEG-7, a multimedia content description interface, handles object data in, for example, retrieval and/or editing systems. Therefore, extraction of semantic video objects is an indispensable tool that benefits these newly developed schemes. In the present paper, we propose a technique that extracts the shape of moving objects by combining snakes and the watershed algorithm. The proposed method comprises two steps. In the first step, snakes extract contours of moving objects as a result of the minimization of an energy function. In the second step, the conditional watershed algorithm extracts contours from a topographical surface that includes a new function term. This function term is introduced to improve the estimated contours by taking into account the boundaries of moving objects obtained by snakes. The efficiency of the proposed approach in moving object extraction is demonstrated through computer simulations.
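For reference, the watershed step can be illustrated as a generic marker-based priority flood over a height map (e.g. a gradient-magnitude surface). This sketch deliberately omits the paper's snake-derived energy term and conditional formulation; it is a textbook baseline, with illustrative names:

```python
import heapq

def watershed(height, markers):
    """Marker-based watershed by priority flooding of a 2-D height map.

    `markers` holds nonzero labels at seed pixels and 0 elsewhere;
    every pixel ends up with the label of the basin that floods it first.
    """
    rows, cols = len(height), len(height[0])
    labels = [row[:] for row in markers]
    heap = []
    for i in range(rows):
        for j in range(cols):
            if labels[i][j]:
                heapq.heappush(heap, (height[i][j], i, j))
    while heap:
        _, i, j = heapq.heappop(heap)            # lowest frontier pixel first
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and not labels[ni][nj]:
                labels[ni][nj] = labels[i][j]    # flood from the lower side
                heapq.heappush(heap, (height[ni][nj], ni, nj))
    return labels
```

Two markers separated by a ridge in the height map flood their own basins and meet at the ridge, which is where the extracted contour lies.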
Citations: 2
3-D model-based people detection & tracking
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425375
G. Garibotto
The paper describes a method for people detection and tracking from multi-camera views. The proposed approach is based on 3D models of the person shape, where motion tracking is carried out in 3D space with re-projection onto calibrated images to perform target validation according to a prediction-verification paradigm. Multiple cameras with partial overlap can be used to cover a much wider area. The examples referred to are based on the PETS 2006 video sequences and a database from the EU-ISCAPS demonstration environment.
Citations: 3
Sign language detection using 3D visual cues
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425350
J. Lichtenauer, G. T. Holt, E. Hendriks, M. Reinders
A 3D visual hand gesture recognition method is proposed that detects correctly performed signs from stereo camera input. Hand tracking is based on skin detection with an adaptive chrominance model to get high accuracy. Informative high level motion properties are extracted to simplify the classification task. Each example is mapped onto a fixed reference sign by Dynamic Time Warping, to get precise time correspondences. The classification is done by combining weak classifiers based on robust statistics. Each base classifier assumes a uniform distribution of a single feature, determined by winsorization on the noisy training set. The operating point of the classifier is determined by stretching the uniform distributions of the base classifiers instead of changing the threshold on the total posterior likelihood. In a cross validation with 120 signs performed by 70 different persons, 95% of the test signs were correctly detected at a false positive rate of 5%.
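The Dynamic Time Warping step — mapping each example onto a fixed reference sign to get precise time correspondences — follows the standard dynamic-programming recurrence. Below is a generic scalar-sequence sketch of that recurrence, not the authors' feature-level implementation:

```python
def dtw_distance(seq_a, seq_b, dist=lambda x, y: abs(x - y)):
    """Dynamic Time Warping cost between two sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch seq_b
                                 cost[i][j - 1],      # stretch seq_a
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]
```

Replacing `dist` with a vector distance turns this into a warp over per-frame motion features, as the abstract describes.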
Citations: 5
Using social effects to guide tracking in complex scenes
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425312
A. French, Asad Naeem, I. Dryden, T. Pridmore
This paper presents a new methodology for improving the tracking of multiple targets in complex scenes. The new method, Motion Parameter Sharing, incorporates social motion information into tracking predictions. This is achieved by allowing a tracker to share motion estimates within groups of targets which have previously been moving in a coordinated fashion. The method is intuitive and, as well as aiding the prediction estimates, allows the implicit formation of 'social groups' of targets as a side effect of the process. The underlying reasoning and method are presented, as well as a description of how the method fits into the framework of a typical Bayesian tracking system. This is followed by some preliminary results which suggest the method is more accurate and robust than algorithms which do not incorporate the social information available in multiple-target scenarios.
Citations: 15
Model-based human posture estimation for gesture analysis in an opportunistic fusion smart camera network
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425353
Chen Wu, H. Aghajan
In multi-camera networks rich visual data is provided both spatially and temporally. In this paper a method of human posture estimation is described, incorporating the concept of an opportunistic fusion framework that aims to employ manifold sources of visual information across space, time, and feature levels. One motivation for the proposed method is to reduce raw visual data in a single camera to elliptical parameterized segments for efficient communication between cameras. A 3D human body model is employed as the convergence point of spatiotemporal and feature fusion. It maintains both geometric parameters of the human posture and adaptively learned appearance attributes, all of which are updated along the three dimensions of the opportunistic fusion: space, time and features. At sufficient confidence levels, parameters of the 3D human body model are in turn used as feedback to aid subsequent in-node vision analysis. Color distribution registered in the model is used to initialize segmentation. Perceptually Organized Expectation Maximization (POEM) is then applied to refine color segments with observations from a single camera. Geometric configuration of the 3D skeleton is estimated by Particle Swarm Optimization (PSO).
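Particle Swarm Optimization, used in this work to estimate the 3D skeleton configuration, can be sketched in its textbook global-best form. The objective, bounds, and coefficient values below are generic placeholders standing in for the posture-model cost, not the paper's settings:

```python
import random

def pso(objective, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over a box with a basic global-best swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the posture setting, each particle would encode joint angles and the objective would score how well the projected 3D model matches the observed segments.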
Citations: 38
2D face pose normalisation using a 3D morphable model
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425285
J. Tena, Raymond S. Smith, M. Hamouz, J. Kittler, A. Hilton, J. Illingworth
The ever growing need for improved security, surveillance and identity protection calls for the creation of ever more reliable and robust face recognition technology that is scalable and can be deployed in all kinds of environments without compromising its effectiveness. In this paper we study the impact that pose correction has on the performance of 2D face recognition. To measure the effect, we use a state-of-the-art 2D recognition algorithm. The pose correction is performed by means of a 3D morphable model. Our results on the non-frontal XM2VTS database showed that pose correction can improve recognition rates by up to 30%.
Citations: 15
Experiments with patch-based object classification
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425294
R. Wijnhoven, P. D. With
We present and experiment with a patch-based algorithm for the purpose of object classification in video surveillance. A feature vector is calculated based on template matching of a large set of image patches within detected regions-of-interest (ROIs, also called blobs) of moving objects. Instead of matching raw image pixels, we use Gabor-filtered versions of the input image at several scales. We present results for a new typical video surveillance dataset containing over 9,000 object images. Additionally, we show results for the PETS 2001 dataset and another dataset from the literature. Because our algorithm is not invariant to object orientation, the set was split into four subsets with different orientations. We show the improvements that result from taking the object orientation into account. Using 50 or more training samples, our resulting detection rate is on average above 95%, which improves to 98% when orientation is considered. Because of the inherent scalability of the algorithm, an embedded system implementation is well within reach.
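The Gabor pre-filtering can be illustrated with a small hand-rolled filter bank: each kernel is a Gaussian-windowed cosine grating at a given scale and orientation. The kernel size, scale set, and mean-response pooling below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor kernel: a Gaussian window times a cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinates rotated by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(image, scales=(4, 8), orientations=4, size=9):
    """Mean absolute response of an image to a small Gabor filter bank."""
    feats = []
    h = w = size
    for wl in scales:
        for k in range(orientations):
            kern = gabor_kernel(size, wl, np.pi * k / orientations, wl / 2)
            # 'valid' 2-D correlation, written with plain loops for clarity
            resp = np.array([
                [(image[i:i + h, j:j + w] * kern).sum()
                 for j in range(image.shape[1] - w + 1)]
                for i in range(image.shape[0] - h + 1)])
            feats.append(np.abs(resp).mean())
    return np.array(feats)
```

An oriented texture responds most strongly to the kernel whose orientation and wavelength match it, which is what makes such responses useful as patch features.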
Citations: 13
Bottom-up/top-down coordination in a multiagent visual sensor network
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425292
Federico Castanedo, M. A. Patricio, Jesús García, J. M. Molina
In this paper an approach for multi-sensor coordination in a multiagent visual sensor network is presented. A belief-desire-intention model of multiagent systems is employed. In this multiagent system, the interactions between several surveillance-sensor agents and their respective fusion agent are discussed. The surveillance process is improved using a bottom-up/top-down coordination approach, in which a fusion agent controls the coordination process. In the bottom-up phase the information is sent to the fusion agent. In the top-down stage, on the other hand, feedback messages are sent to those surveillance-sensor agents whose tracking is inconsistent with the global fused tracking process. This feedback information allows the surveillance-sensor agent to correct its tracking process. Finally, preliminary experiments with the PETS 2006 database are presented.
Citations: 5
A DSP-based system for the detection of vehicles parked in prohibited areas
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425320
S. Boragno, B. Boghossian, J. Black, D. Makris, S. Velastín
In this paper, a system for automatic robust video surveillance is described, and in particular its application to the problem of locating vehicles that stop in prohibited areas is discussed. The structure of the video-processing software (alarm generation, operator interface and information storage) is outlined together with the hardware (Trimedia DSP boards and industrial computers), which constitutes an industrial-grade product. The emphasis of this paper is to demonstrate robust detection, and hence we show the results of a performance evaluation carried out with the UK's i-LIDS "Parked Vehicle" reference dataset.
Citations: 34