2007 IEEE Conference on Advanced Video and Signal Based Surveillance — Latest Publications

Tracking of two acoustic sources in reverberant environments using a particle swarm optimizer
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425373
F. Antonacci, Davide Riva, A. Sarti, M. Tagliasacchi, S. Tubaro
In this paper we consider the problem of tracking multiple acoustic sources in reverberant environments. The solution that we propose is based on the combination of two techniques. A blind source separation (BSS) method known as TRINICON [5] is applied to the signals acquired by the microphone arrays. The TRINICON de-mixing filters are used to obtain the Time Differences of Arrival (TDOAs), which are related to the source location through a nonlinear function. A particle filter is then applied in order to localize the sources. Particles move according to swarm-like dynamics, which significantly reduces the number of particles required compared to a traditional particle filter. We discuss results for the case of two sources and four microphone pairs. In addition, we propose a method, based on detecting source inactivity, that overcomes the ambiguities which intrinsically arise when only two microphone pairs are used. Experimental results demonstrate that the average localization error on a variety of pseudo-random trajectories is around 40 cm when the T60 reverberation time is 0.6 s.
Cited by: 4
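The TDOA step described in the abstract can be illustrated with a generic cross-correlation estimator. This is a minimal sketch, not the paper's method (the authors derive TDOAs from the TRINICON de-mixing filters, which is far more robust under reverberation); the function name is hypothetical.

```python
import numpy as np

def estimate_tdoa(x1, x2, fs):
    """Estimate the time difference of arrival between two microphone
    signals by locating the peak of their cross-correlation.
    Returns the delay of x1 relative to x2, in seconds."""
    corr = np.correlate(x1, x2, mode="full")
    lag = np.argmax(corr) - (len(x2) - 1)  # lag in samples; positive if x1 lags x2
    return lag / fs
```

In anechoic conditions this peak-picking works well; with strong reverberation, spurious correlation peaks from reflections are exactly what motivates the BSS-based approach in the paper.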
A system for face detection and tracking in unconstrained environments
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425361
Augusto Destrero, F. Odone, A. Verri
We describe a trainable system for face detection and tracking. The structure of the system is based on multiple cues that discard non-face areas as soon as possible: we combine motion, skin, and face detection. The latter is the core of our system and consists of a hierarchy of small SVM classifiers built on the output of an automatic feature selection procedure. Our feature selection is entirely data-driven and allows us to obtain powerful descriptions from a relatively small set of data. Finally, Kalman tracking of the face region optimizes detection results over time. We present an experimental analysis of the face detection module and results obtained with the whole system on the specific task of counting people entering the scene.
Cited by: 7
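The Kalman-tracking step mentioned in the abstract can be sketched with a standard constant-velocity filter on one face coordinate. This is an illustrative sketch under assumed noise parameters, not the authors' implementation.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter for a single face coordinate
    (e.g. the x-centre of the detected face box). Returns the
    filtered positions. Noise levels q, r are illustrative."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                # only position is observed
    Q = q * np.eye(2)                         # process noise covariance
    R = np.array([[r]])                       # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])  # initial state
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new detection
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return out
```

In a full system the same filter would run on both box coordinates (and possibly scale), smoothing over missed detections.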
Automatic people detection and counting for athletic videos classification
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425349
C. Panagiotakis, E. Ramasso, G. Tziritas, M. Rombaut, D. Pellerin
We propose a general framework that focuses on automatic individual/multiple-people motion-shape analysis and on suitable feature extraction that can be used for action/activity recognition problems in real, dynamic and unconstrained environments. We have considered various athletic videos from a single uncalibrated, possibly moving camera in order to evaluate the robustness of the proposed method. We have used an easily extended hierarchical scheme in order to classify them into videos of individual and team sports. Robust, adaptive and independent of camera motion, the proposed features are combined within the Transferable Belief Model (TBM) framework, providing a two-level (frame and shot) video categorization. Experimental results of 97% individual/team sport categorization accuracy, obtained on a dataset of more than 250 videos of athletic meetings, indicate the good performance of the proposed scheme.
Cited by: 8
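The two-level (frame and shot) categorization can be sketched as follows. Note this substitutes simple majority voting for the paper's Transferable Belief Model fusion, purely as an illustration of the hierarchical structure; function names are hypothetical.

```python
from collections import Counter

def shot_label(frame_labels):
    """Fuse frame-level labels into one shot-level label by majority
    vote (a crude stand-in for the paper's TBM-based fusion)."""
    return Counter(frame_labels).most_common(1)[0][0]

def classify_video(shots):
    """Two-level scheme: decide individual vs. team per shot, then
    fuse the shot decisions into a video-level decision."""
    return shot_label([shot_label(frames) for frames in shots])
```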
Classification of gait types based on the duty-factor
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425330
P. Fihl, T. Moeslund
This paper deals with classification of human gait types based on the notion that different gait types are in fact different types of locomotion, i.e., running is not simply walking done faster. We present the duty-factor, a descriptor based on this notion. The duty-factor is independent of the speed of the human, the camera setup, etc., and hence a robust descriptor for gait classification. The duty-factor is basically a matter of measuring the ground support of the feet with respect to the stride. We estimate this by comparing the incoming silhouettes to a database of silhouettes with known ground support. Silhouettes are extracted using the codebook method and represented using shape contexts. The matching with database silhouettes is done using the Hungarian method. While manually estimated duty-factors show a clear classification, the presented system contains misclassifications due to silhouette noise and ambiguities in the database silhouettes.
Cited by: 11
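Given per-frame ground-contact estimates, the duty-factor itself is a simple ratio. A minimal sketch, assuming contact has already been recovered from the silhouette matching; the 0.5 threshold follows the biomechanical convention that walking keeps a foot grounded for more than half the stride, while running does not.

```python
def duty_factor(ground_contact):
    """Fraction of the stride during which the foot touches the ground.
    `ground_contact` is a per-frame boolean sequence over one stride."""
    return sum(ground_contact) / len(ground_contact)

def classify_gait(df, threshold=0.5):
    """Walking: ground contact for more than half the stride.
    Running: less than half. The exact threshold is illustrative."""
    return "walking" if df > threshold else "running"
```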
2D and 3D face localization for complex scenes
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425339
Ghassan O. Karame, A. Stergiou, N. Katsarakis, Panagiotis Papageorgiou, Aristodemos Pnevmatikakis
In this paper, we address face tracking of multiple people in complex 3D scenes, using multiple calibrated and synchronized far-field recordings. We localize faces in every camera view and associate them across the different views. To cope with the complexity of 2D face localization introduced by the multitude of people and unconstrained face poses, a combination of stochastic and deterministic trackers, detectors and a Gaussian mixture model for face validation is utilized. Faces of the same person seen from the different cameras are then associated by first finding all possible associations and then choosing the best option by means of a 3D stochastic tracker. The performance of the proposed system is evaluated and found to be improved compared to existing systems.
Cited by: 5
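A cross-view association can be scored geometrically: back-project each 2D face detection to a 3D viewing ray and measure how closely two rays meet. A minimal sketch of the ray-midpoint triangulation used for this kind of scoring (not the paper's full 3D stochastic tracker):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Closest-point triangulation of two 3-D viewing rays, each given
    by a camera centre c and a direction d. Returns the midpoint of
    the shortest segment between the rays and its length; a small
    residual distance supports associating the two detections."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters (s, t) minimising |c1 + s*d1 - (c2 + t*d2)|
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    p1, p2 = c1 + s * d1, c2 + t * d2
    return (p1 + p2) / 2, np.linalg.norm(p1 - p2)
```

Enumerating all cross-view pairs and keeping the lowest-residual assignment corresponds to the "find all possible associations, then choose the best" step in the abstract.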
Detection of temporarily static regions by processing video at different frame rates
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425316
F. Porikli
This paper presents an abandoned-item and illegally-parked-vehicle detection method for single static camera video surveillance applications. By processing the input video at different frame rates, two backgrounds are constructed: one short-term and one long-term. Each of these backgrounds is defined as a mixture of Gaussian models, which are adapted using online Bayesian updates. Two binary foreground maps are estimated by comparing the current frame with the backgrounds, and motion statistics are aggregated into a likelihood image by applying a set of heuristics to the foreground maps. The likelihood image is then used to differentiate between the pixels that belong to moving objects, temporarily static regions and scene background. Depending on the application, the temporarily static regions indicate abandoned items, illegally parked vehicles, objects removed from the scene, etc. The presented pixel-wise method does not require object tracking, so its performance is not bounded by the error-prone detection and correspondence tasks that usually fail for crowded scenes. It accurately segments objects even if they are fully occluded. It can also be effectively implemented on a parallel processing architecture.
Cited by: 72
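The dual-background idea can be sketched with two running averages at different learning rates standing in for the paper's per-pixel Gaussian mixtures: an object that stops moving is soon absorbed by the fast background but stays foreground against the slow one, which is exactly the "temporarily static" signature. The threshold and rates below are illustrative.

```python
import numpy as np

def update_bg(bg, frame, alpha):
    """Running-average background update; alpha is the learning rate
    (large alpha -> short-term background, small alpha -> long-term)."""
    return (1 - alpha) * bg + alpha * frame

def label_pixels(frame, bg_short, bg_long, tau=25.0):
    """Per-pixel labels from the two backgrounds: foreground against
    both -> moving; absorbed by the short-term background but still
    foreground against the long-term one -> temporarily static."""
    fg_short = np.abs(frame - bg_short) > tau
    fg_long = np.abs(frame - bg_long) > tau
    labels = np.full(frame.shape, "background", dtype=object)
    labels[fg_short & fg_long] = "moving"
    labels[~fg_short & fg_long] = "temporarily_static"
    return labels
```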
Adaptive summarisation of surveillance video sequences
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425369
Jian Li, S. G. Nikolov, C. Benton, N. Scott-Samuel
We describe our studies on summarising surveillance videos using optical flow information. The proposed method incorporates motion analysis into a video skimming scheme in which the playback speed is determined by the detectability of interesting motion behaviours according to prior information. A psycho-visual experiment was conducted to compare human performance and viewing strategy for summarised videos using standard video skimming techniques and a proposed motion-based adaptive summarisation technique.
Cited by: 9
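The core of adaptive skimming is a mapping from a per-segment motion-interest score to a playback speed. A minimal sketch with an assumed inverse-linear mapping (the paper derives detectability from optical flow and prior information; this function is only an illustration):

```python
def playback_speed(motion_score, max_speed=8.0):
    """Map a motion-interest score in [0, 1] to a playback speed:
    uninteresting segments are skimmed fast, segments with detectable
    interesting motion play near normal speed. The linear mapping and
    max_speed value are illustrative assumptions."""
    motion_score = min(max(motion_score, 0.0), 1.0)  # clamp to [0, 1]
    return max_speed - (max_speed - 1.0) * motion_score
```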
Single camera calibration for trajectory-based behavior analysis
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425301
N. Anjum, A. Cavallaro
Perspective deformations on the image plane make the analysis of object behaviors difficult in surveillance video. In this paper, we improve the results of trajectory-based scene analysis by using single camera calibration for perspective rectification. First, the ground-plane view is estimated from perspective images captured by a single camera. Next, unsupervised fuzzy clustering is applied to the transformed trajectories to group similar behaviors and to isolate outliers. We evaluate the proposed approach on real outdoor surveillance scenarios with standard datasets and show that perspective rectification improves the accuracy of the trajectory clustering results.
Cited by: 29
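Once calibration yields an image-to-ground-plane homography, rectifying a trajectory is a projective transform of each point. A minimal sketch, assuming the 3x3 homography H is already known:

```python
import numpy as np

def rectify_trajectory(points, H):
    """Map image-plane trajectory points (N x 2) to the ground plane
    with a 3x3 homography H obtained from single-camera calibration."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide by w to get Cartesian coords
```

Clustering the rectified (metric) trajectories instead of the raw image-plane ones removes the perspective distortion that would otherwise make identical behaviors at different depths look dissimilar.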
What are customers looking at?
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425345
Xiaoming Liu, N. Krahnstoever, Ting Yu, P. Tu
Computer vision approaches for retail applications can provide value far beyond the common domain of loss prevention. Gaining insight into the movement and behaviors of shoppers is of high interest for marketing, merchandising, store operations and data mining. Of particular interest is the process of purchase decision making. What catches a customer's attention? What products go unnoticed? What does a customer look at before making a final decision? Toward this goal we present a system that detects and tracks both the location and gaze of shoppers in retail environments. While networks of standard overhead store cameras are used for tracking the location of customers, small in-shelf cameras are used for estimating customer gaze. The presented system operates robustly in real time and can be deployed in a variety of retail applications.
Cited by: 52
Video analytics for retail
Pub Date : 2007-09-05 DOI: 10.1109/AVSS.2007.4425348
A. Senior, L. Brown, A. Hampapur, Chiao-Fe Shu, Y. Zhai, R. Feris, Ying-li Tian, S. Borger, Christopher R. Carlson
We describe a set of tools for retail analytics based on a combination of video understanding and transaction-log data. Tools are provided for loss prevention (returns fraud and cashier fraud), store operations (customer counting) and merchandising (display effectiveness). Results are presented on returns fraud and customer counting.
Cited by: 50