Bottom-up/top-down coordination in a multiagent visual sensor network
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425292
Federico Castanedo, M. A. Patricio, Jesús García, J. M. Molina
In this paper, an approach for multi-sensor coordination in a multiagent visual sensor network is presented. A belief-desire-intention model of multiagent systems is employed. In this multiagent system, the interactions between several surveillance-sensor agents and their respective fusion agent are discussed. The surveillance process is improved using a bottom-up/top-down coordination approach in which a fusion agent controls the coordination process. In the bottom-up phase, tracking information is sent to the fusion agent; in the top-down stage, feedback messages are sent to those surveillance-sensor agents whose local tracking is inconsistent with the globally fused track. This feedback allows the surveillance-sensor agents to correct their tracking processes. Finally, preliminary experiments with the PETS 2006 database are presented.
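The abstract outlines the coordination loop but not its implementation. A minimal sketch of one bottom-up/top-down cycle, assuming a simple averaging fusion rule and a distance threshold for flagging inconsistent trackers (both illustrative stand-ins, not the paper's BDI machinery), might look like this:

```python
import numpy as np

class SurveillanceSensorAgent:
    """Local tracker on one camera; names and the fusion rule are illustrative."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.local_track = None                  # e.g. (x, y) ground-plane estimate

    def update(self, measurement):
        # Stand-in for the agent's own tracking step.
        self.local_track = np.asarray(measurement, float)

    def report(self):
        # Bottom-up phase: send the local track to the fusion agent.
        return self.agent_id, self.local_track

    def apply_feedback(self, fused_track):
        # Top-down phase: re-initialise the local tracker from the fused track.
        self.local_track = fused_track.copy()

class FusionAgent:
    def __init__(self, sensors, threshold=1.0):
        self.sensors = sensors
        self.threshold = threshold               # max allowed deviation (assumed units)

    def cycle(self):
        reports = dict(s.report() for s in self.sensors)
        fused = np.mean(list(reports.values()), axis=0)   # toy fusion: plain average
        for s in self.sensors:
            if np.linalg.norm(reports[s.agent_id] - fused) > self.threshold:
                s.apply_feedback(fused)          # feedback only to inconsistent trackers
        return fused
```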
{"title":"Bottom-up/top-down coordination in a multiagent visual sensor network","authors":"Federico Castanedo, M. A. Patricio, Jesús García, J. M. Molina","doi":"10.1109/AVSS.2007.4425292","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425292","url":null,"abstract":"In this paper an approach for multi-sensor coordination in a multiagent visual sensor network is presented. A belief-desire-intention model of multiagent systems is employed. In this multiagent system, the interactions between several surveillance-sensor agents and their respective fusion agent are discussed. The surveillance process is improved using a bottom-up/top-down coordination approach, in which a fusion agent controls the coordination process. In the bottom-up phase the information is sent to the fusion agent. On the other hand, in the top-down stage, feedback messages are sent to those surveillance-sensor agents that are performing an inconsistency tracking process with regard to the global fused tracking process. This feedback information allows to the surveillance-sensor agent to correct its tracking process. Finally, preliminary experiments with the PETS 2006 database are presented.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121923280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A DSP-based system for the detection of vehicles parked in prohibited areas
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425320
S. Boragno, B. Boghossian, J. Black, D. Makris, S. Velastín
In this paper, a system for automatic, robust video surveillance is described, and in particular its application to the problem of locating vehicles that stop in prohibited areas is discussed. The structure of the video-processing software (alarm generation, operator interface and information storage) is outlined together with the hardware (Trimedia DSP boards and industrial computers), which together constitute an industrial-grade product. The emphasis of this paper is on demonstrating robust detection, and hence we present the results of a performance evaluation carried out on the UK's i-LIDS "Parked Vehicle" reference dataset.
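The paper's DSP pipeline is not detailed in the abstract; the following sketch only illustrates the core detection logic, flagging foreground that dwells inside a prohibited zone beyond a time limit. The MOG2 background model, the zone polygon and all thresholds are assumptions.

```python
import cv2
import numpy as np

def parked_vehicle_alarms(video_path, zone_polygon, max_stop_s=60, fps=25):
    # Returns the frame indices at which something has stayed inside the zone too long.
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=int(2 * max_stop_s * fps))
    dwell, zone, alarms, frame_idx = None, None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Slow learning rate keeps stopped vehicles in the foreground mask for a while.
        fg = (bg.apply(frame, learningRate=1e-4) == 255).astype(np.int32)
        if dwell is None:
            dwell = np.zeros(fg.shape, np.int32)
            zone = np.zeros(fg.shape, np.uint8)
            cv2.fillPoly(zone, [np.asarray(zone_polygon, np.int32)], 1)
        dwell = (dwell + 1) * fg                 # reset the counter where background returns
        if ((dwell * zone) > max_stop_s * fps).any():
            alarms.append(frame_idx)             # stationary too long inside the zone
        frame_idx += 1
    return alarms
```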
{"title":"A DSP-based system for the detection of vehicles parked in prohibited areas","authors":"S. Boragno, B. Boghossian, J. Black, D. Makris, S. Velastín","doi":"10.1109/AVSS.2007.4425320","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425320","url":null,"abstract":"In this paper, a system for automatic robust video surveillance is described and in particular its application to the problem of locating vehicles that stop in prohibited area is discussed. The structure of software for video processing (alarm generation, interface with the operator and information storage) is outlined together with the hardware (Trimedia DSP boards and industrial computers) which constitutes an industrial-grade product. The emphasis on this paper is to demonstrate robust detection and hence we show the results of a performance evaluation process carried out with the UK's i-LIDS \"Parked Vehicle \" reference dataset.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127813706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human activity recognition with action primitives
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425332
Zsolt L. Husz, A. Wallace, P. Green
This paper considers the link between tracking algorithms and high-level human behavioural analysis, introducing the action primitives model, which recovers symbolic labels from tracked limb configurations. The model consists of clusters of similar short-term actions (action primitives), formed automatically and then labelled by supervised learning. The model accommodates both short actions and longer activities, either periodic or aperiodic, and new labels can be added incrementally. We determine the effects of model parameters on the labelling of action primitives using ground truth derived from a motion capture system. We also present a representative example of a labelled video sequence.
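As an illustration of the idea (not the paper's procedure), short windows of tracked limb configurations can be clustered into primitives and each cluster given a label by majority vote over ground-truth frames:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch only: k-means clustering of limb-angle windows and majority-vote
# labelling are assumed stand-ins for the paper's primitive formation and learning.
def build_primitives(limb_angles, window=10, n_primitives=20):
    # limb_angles: (n_frames, n_joints) array of tracked limb configurations
    windows = np.lib.stride_tricks.sliding_window_view(limb_angles, window, axis=0)
    feats = windows.reshape(windows.shape[0], -1)      # one feature vector per window
    km = KMeans(n_clusters=n_primitives, n_init=10).fit(feats)
    return km, feats

def label_primitives(km, feats, frame_labels):
    # frame_labels: integer ground-truth action label per frame (e.g. from motion capture)
    y = np.asarray(frame_labels)[: feats.shape[0]]     # label of each window's first frame
    cluster_label = {}
    for c in range(km.n_clusters):
        members = y[km.labels_ == c]
        cluster_label[c] = np.bincount(members).argmax() if members.size else -1
    return cluster_label

def recognise(km, cluster_label, new_feats):
    # A new window inherits the label of its nearest action primitive.
    return [cluster_label[c] for c in km.predict(new_feats)]
```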
{"title":"Human activity recognition with action primitives","authors":"Zsolt L. Husz, A. Wallace, P. Green","doi":"10.1109/AVSS.2007.4425332","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425332","url":null,"abstract":"This paper considers the link between tracking algorithms and high-level human behavioural analysis, introducing the action primitives model that recovers symbolic labels from tracked limb configurations. The model consists of similar short-term actions, action primitives clusters, formed automatically and then labelled by supervised learning. The model allows both short actions and longer activities, either periodic or aperiodic. New labels are added incrementally. We determine the effects of model parameters on the labelling of action primitives using ground truth derived from a motion capture system. We also present a representative example of a labelled video sequence.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126605528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiple appearance models for face tracking in surveillance videos
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425341
Gurumurthy Swaminathan, V. Venkoparao, S. Bedros
Face tracking is a key component of automated video surveillance systems: it supports and enhances tasks such as face recognition and video indexing. Face tracking in surveillance scenarios is a challenging problem due to ambient illumination variations, face pose changes, occlusions, and background clutter. We present an algorithm for tracking faces in surveillance video based on a particle filter that uses multiple appearance models for a robust representation of the face. We propose a color-based appearance model complemented by an edge-based appearance model built from Difference of Gaussians (DoG) filters. We demonstrate that combined appearance models are more robust to face and scene variations than a single appearance model. For example, a color template appearance model handles pose variations well but deteriorates under illumination variations; conversely, an edge-based model is robust to illumination variations but fails under substantial pose changes. Hence, the combined model is more robust to pose and illumination changes than either model alone. We show how the algorithm performs on a real surveillance scenario where the face undergoes various pose and illumination changes. The algorithm runs in real time at 20 fps on a standard 3.0 GHz desktop PC.
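A minimal particle-filter sketch with a combined colour/edge likelihood is shown below. The product fusion of the two likelihoods, the Bhattacharyya histogram comparison and the DoG parameters are illustrative assumptions rather than the paper's exact appearance models.

```python
import numpy as np
import cv2

def color_likelihood(patch_hsv, ref_hist):
    # Hue-saturation histogram compared to the reference face histogram.
    hist = cv2.calcHist([patch_hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    hist = cv2.normalize(hist, None).flatten()
    return np.exp(-cv2.compareHist(ref_hist, hist, cv2.HISTCMP_BHATTACHARYYA))

def edge_likelihood(patch_gray, ref_edges):
    # Difference-of-Gaussians response compared to the reference edge template.
    dog = cv2.GaussianBlur(patch_gray, (3, 3), 1.0) - cv2.GaussianBlur(patch_gray, (9, 9), 3.0)
    dog = cv2.resize(dog, ref_edges.shape[::-1]).astype(np.float32)
    return np.exp(-np.mean((dog - ref_edges) ** 2) / 100.0)

def step(particles, weights, frame_hsv, frame_gray, ref_hist, ref_edges, patch_size=32):
    particles = particles + np.random.normal(0, 4, particles.shape)   # random-walk dynamics
    particles[:, 0] = np.clip(particles[:, 0], 0, frame_hsv.shape[1] - patch_size)
    particles[:, 1] = np.clip(particles[:, 1], 0, frame_hsv.shape[0] - patch_size)
    for i, (x, y) in enumerate(particles.astype(int)):
        patch = frame_hsv[y:y + patch_size, x:x + patch_size]
        gray = frame_gray[y:y + patch_size, x:x + patch_size].astype(np.float32)
        # Combined appearance model: product of colour and edge likelihoods.
        weights[i] = color_likelihood(patch, ref_hist) * edge_likelihood(gray, ref_edges)
    weights /= weights.sum() + 1e-12
    idx = np.random.choice(len(particles), len(particles), p=weights)  # resampling
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```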
{"title":"Multiple appearance models for face tracking in surveillance videos","authors":"Gurumurthy Swaminathan, V. Venkoparao, S. Bedros","doi":"10.1109/AVSS.2007.4425341","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425341","url":null,"abstract":"Face tracking is a key component for automated video surveillance systems. It supports and enhances tasks such as face recognition and video indexing. Face tracking in surveillance scenarios is a challenging problem due to ambient illumination variations, face pose changes, occlusions, and background clutter. We present an algorithm for tracking faces in surveillance video based on a particle filter mechanism using multiple appearance models for robust representation of the face. We propose color based appearance model complemented by an edge based appearance model using the Difference of Gaussian (DOG) filters. We demonstrate that combined appearance models are more robust in handling the face and scene variations than a single appearance model. For example, color template appearance model is better in handling pose variations but they deteriorate against illumination variations. Similarly, an edge based model is robust in handling illumination variations but they fail in handling substantial pose changes. Hence, a combined model is more robust in handling pose and illumination changes than either one of them by itself. We show how the algorithm performs on a real surveillance scenario where the face undergoes various pose and illumination changes. The algorithm runs in real-time at 20 fps on a standard 3.0 GHz desktop PC.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"279 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131658646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient particle filter for color-based tracking in complex scenes
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425306
J. M. D. Rincón, C. Orrite-Uruñuela, J. Jaraba
In this paper, we introduce an efficient method for particle selection when tracking objects in complex scenes. First, we improve the proposal distribution of the tracking algorithm by including the current observation, reducing the cost spent evaluating particles with very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state into several stages, which makes it possible to deal with high-dimensional states without excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a probability color density image (PDI) in which each pixel indicates its membership of the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(z|x) from the integral image of the PDI.
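The integral-image trick described above can be sketched as follows: build a per-pixel probability of belonging to the target colour model (the PDI), then score any rectangular particle region in constant time from the PDI's integral image. The hue-histogram colour model is an assumed stand-in for the paper's non-parametric estimator, and bounds checks are omitted.

```python
import numpy as np
import cv2

def probability_density_image(frame_hsv, target_pixels, bins=32):
    # Non-parametric colour model: normalised hue histogram of sampled target pixels.
    hist, _ = np.histogram(target_pixels[:, 0], bins=bins, range=(0, 180), density=True)
    hue = frame_hsv[:, :, 0]
    idx = np.clip((hue.astype(np.int32) * bins) // 180, 0, bins - 1)
    return hist[idx]                              # PDI: one membership value per pixel

def particle_likelihoods(pdi, particles, w, h):
    ii = cv2.integral(pdi.astype(np.float64))     # integral image, shape (H+1, W+1)
    scores = []
    for x, y in particles.astype(int):
        # Sum of the PDI over the particle's w-by-h box in O(1) per particle.
        s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
        scores.append(s / (w * h))                # mean target probability inside the box
    return np.asarray(scores)
```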
{"title":"An efficient particle filter for color-based tracking in complex scenes","authors":"J. M. D. Rincón, C. Orrite-Uruñuela, J. Jaraba","doi":"10.1109/AVSS.2007.4425306","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425306","url":null,"abstract":"In this paper, we introduce an efficient method for particle selection in tracking objects in complex scenes. First, we improve the proposal distribution function of the tracking algorithm, including current observation, reducing the cost of evaluating particles with a very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state in several stages. It enables to deal with high-dimensional states without an excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a probability color density image (PDI) where each pixel points its membership to the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(zx) using the integral image of the PDI.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133505254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial biometry by stimulating salient singularity masks
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425363
G. Lefebvre, Christophe Garcia
We present a novel approach to face recognition based on salient singularity descriptors. Automatic feature extraction is performed with a salient point detector, and the singularity information is selected by a SOM-based regional structuring. The spatial distribution of singularities is preserved in order to activate specific neuron maps, and the local salient signature stimuli reveal the individual's identity. The proposed method appears to be particularly robust to facial expressions and poses, as demonstrated in various experiments on well-known databases.
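A rough sketch of the pipeline the abstract outlines, detecting salient points, describing their neighbourhoods and organising the descriptors on a small self-organising map (SOM), is given below. The Shi-Tomasi detector, the raw-patch descriptor and the SOM parameters are all assumptions.

```python
import numpy as np
import cv2

def salient_descriptors(gray, n_points=100, patch=9):
    # Detect salient points and describe each by its normalised grey-level neighbourhood.
    pts = cv2.goodFeaturesToTrack(gray, n_points, qualityLevel=0.01, minDistance=5)
    descs, r = [], patch // 2
    for x, y in pts.reshape(-1, 2).astype(int):
        roi = gray[y - r:y + r + 1, x - r:x + r + 1]
        if roi.shape == (patch, patch):
            descs.append(roi.flatten() / 255.0)
    return np.asarray(descs)

def train_som(descs, grid=(6, 6), iters=2000, lr=0.5, sigma=1.5):
    # Minimal SOM: neurons on a 2-D grid, updated toward randomly drawn descriptors.
    rng = np.random.default_rng(0)
    w = rng.random((grid[0] * grid[1], descs.shape[1]))
    coords = np.indices(grid).reshape(2, -1).T            # neuron grid coordinates
    for t in range(iters):
        x = descs[rng.integers(len(descs))]
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))     # best-matching unit
        d = np.linalg.norm(coords - coords[bmu], axis=1)
        h = np.exp(-d ** 2 / (2 * sigma ** 2)) * lr * (1 - t / iters)
        w += h[:, None] * (x - w)                          # neighbourhood-weighted update
    return w, coords
```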
{"title":"Facial biometry by stimulating salient singularity masks","authors":"G. Lefebvre, Christophe Garcia","doi":"10.1109/AVSS.2007.4425363","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425363","url":null,"abstract":"We present a novel approach for face recognition based on salient singularity descriptors. The automatic feature extraction is performed thanks to a salient point detector, and the singularity information selection is performed by a SOM region-based structuring. The spatial singularity distribution is preserved in order to activate specific neuron maps and the local salient signature stimuli reveals the individual identity. This proposed method appears to be particularly robust to facial expressions and facial poses, as demonstrated in various experiments on well-known databases.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116042041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal deployment of cameras for video surveillance systems
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425342
F. Angella, Livier Reithler, Frédéric Gallesio
This article describes a new method for the optimal deployment of sensors in video-surveillance systems, taking into account realistic models of fixed and PTZ cameras as well as video analysis requirements. The approach relies on a spatial translation of constraints, a method for fast exploration of potential solutions, and hardware acceleration of the inter-visibility computation. This operational tool allows complex surveillance systems to be evaluated prior to installation, thanks to a precise simulation of their spatial coverage.
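Camera placement of this kind is often approached as a coverage problem. The sketch below greedily selects candidate camera poses that cover the most not-yet-covered cells of a 2D occupancy grid; the grid, the line-of-sight ray march and the field-of-view model are simplifying assumptions and do not reproduce the paper's constraint translation or hardware-accelerated visibility.

```python
import numpy as np

def visible(grid, cam, cell, max_range=30.0, fov=np.deg2rad(90)):
    # cam = (x, y, heading); cell = (x, y). Range, field-of-view and occlusion test.
    x, y, heading = cam
    d = np.subtract(cell, (x, y))
    if np.linalg.norm(d) > max_range:
        return False
    if abs((np.arctan2(d[1], d[0]) - heading + np.pi) % (2 * np.pi) - np.pi) > fov / 2:
        return False
    for t in np.linspace(0.0, 1.0, 50):            # coarse ray march for occlusion
        px, py = np.round(np.array((x, y)) + t * d).astype(int)
        if grid[py, px]:                            # 1 = wall / obstacle
            return False
    return True

def greedy_deploy(grid, candidates, n_cameras):
    # Repeatedly pick the candidate pose covering the most uncovered free cells.
    free = [(x, y) for y, x in zip(*np.where(grid == 0))]
    covered, chosen = set(), []
    for _ in range(n_cameras):
        best, best_gain = None, -1
        for cam in candidates:
            gain = sum(1 for c in free if c not in covered and visible(grid, cam, c))
            if gain > best_gain:
                best, best_gain = cam, gain
        chosen.append(best)
        covered |= {c for c in free if visible(grid, best, c)}
    return chosen
```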
{"title":"Optimal deployment of cameras for video surveillance systems","authors":"F. Angella, Livier Reithler, Frédéric Gallesio","doi":"10.1109/AVSS.2007.4425342","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425342","url":null,"abstract":"This article describes a new method which aims at the optimal deployment of sensors for video-surveillance systems, taking realistic models of fixed and PTZ cameras into account, as well as video analysis requirements. The approach relies on a spatial translation of constraints, a method for fast exploration of potential solutions and hardware acceleration of inter-visibility computation. This operational tool allows the evaluation of complex surveillance systems prior installation thanks to a precise simulation of their spatial coverage.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123965826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting hidden objects: Security imaging using millimetre-waves and terahertz
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425277
M. Kemp
There has been intense interest in the use of millimetre-wave and terahertz technology for the detection of concealed weapons, explosives and other threats. Radiation at these frequencies is safe, penetrates barriers and has wavelengths short enough to allow discrimination between objects. In addition, many solids, including explosives, have characteristic spectroscopic signatures at terahertz wavelengths which can be used to identify them. This paper reviews the progress made in recent years and identifies the achievements, challenges and prospects for these technologies in checkpoint people screening, stand-off detection of improvised explosive devices (IEDs) and suicide bombers, as well as more specialized screening tasks.
{"title":"Detecting hidden objects: Security imaging using millimetre-waves and terahertz","authors":"M. Kemp","doi":"10.1109/AVSS.2007.4425277","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425277","url":null,"abstract":"There has been intense interest in the use of millimetre wave and terahertz technology for the detection of concealed weapons, explosives and other threats. Radiation at these frequencies is safe, penetrates barriers and has short enough wavelengths to allow discrimination between objects. In addition, many solids including explosives have characteristic spectroscopic signatures at terahertz wavelengths which can be used to identify them. This paper reviews the progress which has been made in recent years and identifies the achievements, challenges and prospects for these technologies in checkpoint people screening, stand off detection of improvised explosive devices (lEDs) and suicide bombers as well as more specialized screening tasks.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130086897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People tracking across two distant self-calibrated cameras
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425343
R. Pflugfelder, H. Bischof
People tracking is of fundamental importance in multi-camera surveillance systems. In recent years, many approaches to multi-camera tracking have been proposed; most use image features, the geometric relation between the cameras, or both as a cue. Knowing the geometry is desirable for distant cameras, because geometry is not affected by, for example, drastic changes in object appearance or scene illumination. However, determining the camera geometry is cumbersome. This paper addresses the problem with two contributions. First, an approach is presented that calibrates two distant cameras automatically; we continue previous work and focus especially on the calibration of the extrinsic parameters, using point correspondences acquired by detecting points on top of people's heads. Second, qualitative experimental results on the PETS 2006 benchmark data show that the self-calibration is accurate enough for purely geometric tracking of people across distant cameras, where reliable features for appearance matching are hardly available.
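Given known intrinsics, extrinsic calibration from point correspondences can be sketched with the standard essential-matrix recipe below; this only illustrates the idea and is not the paper's exact self-calibration procedure.

```python
import numpy as np
import cv2

def relative_pose_from_heads(pts1, pts2, K1, K2):
    # pts1, pts2: (N, 2) image coordinates of matched head points in cameras 1 and 2;
    # K1, K2: 3x3 intrinsic matrices assumed to be known already.
    n1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2).astype(np.float64), K1, None)
    n2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2).astype(np.float64), K2, None)
    # Essential matrix on normalised coordinates, robust to mismatched head detections.
    E, inliers = cv2.findEssentialMat(n1, n2, np.eye(3), method=cv2.RANSAC, threshold=1e-3)
    _, R, t, _ = cv2.recoverPose(E, n1, n2, np.eye(3), mask=inliers)
    return R, t, inliers          # rotation and unit-norm translation from camera 1 to 2
```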
{"title":"People tracking across two distant self-calibrated cameras","authors":"R. Pflugfelder, H. Bischof","doi":"10.1109/AVSS.2007.4425343","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425343","url":null,"abstract":"People tracking is of fundamental importance in multi-camera surveillance systems. In recent years, many approaches for multi-camera tracking have been discussed. Most methods use either various image features or the geometric relation between the cameras or both as a cue. It is a desire to know the geometry for distant cameras, because geometry is not influenced by, for example, drastic changes in object appearance or in scene illumination. However, the determination of the camera geometry is cumbersome. The paper tries to solve this problem and contributes in two different ways. On the one hand, an approach is presented that calibrates two distant cameras automatically. We continue previous work and focus especially on the calibration of the extrinsic parameters. Point correspondences are used for this task which are acquired by detecting points on top of people's heads. On the other hand, qualitative experimental results with the PETS 2006 benchmark data show that the self-calibration is accurate enough for a solely geometric tracking of people across distant cameras. Reliable features for a matching are hardly available in such cases.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124125298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision based anti-collision system for rail track maintenance vehicles
Pub Date: 2007-09-05  DOI: 10.1109/AVSS.2007.4425305
F. Maire
Maintenance trains travel in convoy. In Australia, only the first train of the convoy pays attention to the track signalization; the other convoy vehicles simply follow the preceding vehicle. Because of human error, collisions can occur between the maintenance vehicles. Although an anti-collision system based on a laser distance meter is already in operation, the existing system has a limited range due to the curvature of the tracks. In this paper, we introduce a vision-based anti-collision system. The proposed system induces a 3D model of the track as a piecewise quadratic function, with continuity constraints on the function and its derivative. The geometric constraints of the rail tracks allow the system to be completely self-calibrating. Although road lane-marking detection algorithms perform well most of the time for rail detection, the metallic surface of a rail does not always behave like a road lane marking; we therefore had to develop new techniques to address the specific reflectance problems of rails.
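Fitting a piecewise quadratic with continuity of the function and its first derivative can be posed as an equality-constrained least-squares problem. The sketch below uses a KKT solve in 2D; the knot placement and the 2D formulation (the paper induces a 3D track model) are simplifying assumptions.

```python
import numpy as np

def fit_piecewise_quadratic(x, y, knots):
    # Fit y(x) = a_k x^2 + b_k x + c_k on each segment [knots[k], knots[k+1]], with the
    # value and first derivative matching at every interior knot.
    segs = len(knots) - 1
    A = np.zeros((len(x), 3 * segs))
    for i, xi in enumerate(x):
        k = min(np.searchsorted(knots, xi, side="right") - 1, segs - 1)
        A[i, 3 * k:3 * k + 3] = [xi ** 2, xi, 1.0]
    C = np.zeros((2 * (segs - 1), 3 * segs))      # C0 and C1 continuity at interior knots
    for j, xk in enumerate(knots[1:-1]):
        C[2 * j, 3 * j:3 * j + 6] = [xk ** 2, xk, 1, -xk ** 2, -xk, -1]
        C[2 * j + 1, 3 * j:3 * j + 6] = [2 * xk, 1, 0, -2 * xk, -1, 0]
    # KKT system for: minimise ||A p - y||^2  subject to  C p = 0
    n, m = 3 * segs, C.shape[0]
    kkt = np.block([[2 * A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ y, np.zeros(m)])
    p = np.linalg.solve(kkt, rhs)[:n]
    return p.reshape(segs, 3)                     # (a_k, b_k, c_k) per segment
```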
{"title":"Vision based anti-collision system for rail track maintenance vehicles","authors":"F. Maire","doi":"10.1109/AVSS.2007.4425305","DOIUrl":"https://doi.org/10.1109/AVSS.2007.4425305","url":null,"abstract":"Maintenance trains travel in convoy. In Australia, only the first train of the convoy pays attention to the track signalization (the other convoy vehicles simply follow the preceding vehicle). Because of human errors, collisions can happen between the maintenance vehicles. Although an anti-collision system based on a laser distance meter is already in operation, the existing system has a limited range due to the curvature of the tracks. In this paper, we introduce an anti-collision system based on vision. The proposed system induces a 3D model of the track as a piecewise quadratic function (with continuity constraints on the function and its derivative). The geometric constraints of the rail tracks allow the creation of a completely self-calibrating system. Although road lane marking detection algorithms perform well most of the time for rail detection, the metallic surface of a rail does not always behave like a road lane marking. Therefore we had to develop new techniques to address the specific problems of the reflectance of rails.","PeriodicalId":371050,"journal":{"name":"2007 IEEE Conference on Advanced Video and Signal Based Surveillance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128838180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}