
Progress in Computer Vision and Image Analysis: Latest Publications

Progress in Computer Vision and Image Analysis
Pub Date : 2008-11-15 DOI: 10.1142/7003
H. Bunke, J. Villanueva, G. Sanchez
Medical Imaging, Texture Analysis, Image Segmentation, Motion, Deformable Models, Document Analysis, Pattern Analysis, Tracking, Object Recognition, Machine Intelligence, Machine Vision.
Citations: 2
A Novel Approach to Sparse Histogram Image Lossless Compression using JPEG2000
Pub Date : 2006-12-01 DOI: 10.1142/9789812834461_0023
M. Aguzzi, M. Albanesi
In this paper a novel approach to the compression of sparse histogram images is proposed. First, we define a sparsity index which gives hints on the relationship between the mathematical concept of matrix sparsity and the visual information of pixel distribution. We use this index to better understand the scope of our approach and its preferred field of applicability, and to evaluate its performance. We present two algorithms which modify one of the coding steps of the JPEG2000 standard for lossless image compression. A theoretical study of the gain relative to the standard is given. Experimental results on standard test images from the literature confirm the expectations, especially for highly sparse images.
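Since the abstract does not give the exact formula, a minimal Python sketch of one plausible sparsity index follows: the fraction of grey levels that never occur in the image histogram. The function `histogram_sparsity_index` and its definition are assumptions for illustration, not the paper's actual index.

```python
import numpy as np

def histogram_sparsity_index(image, levels=256):
    """Hypothetical sparsity index: fraction of grey levels that never occur.

    The paper's actual definition is not given in the abstract; this sketch
    simply measures how sparsely the histogram is populated, which captures
    the same intuition (few distinct pixel values -> highly sparse histogram).
    """
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    used = np.count_nonzero(hist)
    return 1.0 - used / float(levels)   # 0 = every level used, ~1 = very sparse

# Example: a synthetic image using only 4 grey levels is highly sparse.
img = np.random.choice([0, 64, 128, 255], size=(128, 128)).astype(np.uint8)
print(histogram_sparsity_index(img))    # close to 1.0
```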
Citations: 1
Architectural Scene Reconstruction from Single or Multiple Uncalibrated Images
Pub Date : 2006-12-01 DOI: 10.1142/9789812834461_0025
H. Lin, Syuan-Liang Chen, Jen-Hung Lin
In this paper we present a system for the reconstruction of 3D models of architectural scenes from single or multiple uncalibrated images. The partial 3D model of a building is recovered from a single image using geometric constraints such as parallelism and orthogonality, which are likely to be found in most architectural scenes. The approximate corner positions of a building are selected interactively by a user and then further refined automatically using the Hough transform. The relative depths of the corner points are calculated according to the perspective projection model. Partial 3D models recovered from different viewpoints are registered to a common coordinate system for integration. The 3D model registration process is carried out using a modified ICP (iterative closest point) algorithm, with the initial parameters provided by the geometric constraints of the building. The integrated 3D model is then fitted with piecewise planar surfaces to generate a more geometrically consistent model. The acquired images are finally mapped onto the surface of the reconstructed 3D model to create a photo-realistic model. A working system which allows a user to interactively build a 3D model of an architectural scene from single or multiple images has been proposed and implemented.
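The abstract only states that user-selected corners are refined with the Hough transform; a minimal sketch of one plausible refinement step follows, intersecting two detected edge lines given in Hough normal form. The function `intersect_hough_lines` is a hypothetical illustration, not the paper's implementation.

```python
import numpy as np

def intersect_hough_lines(line1, line2):
    """Intersect two lines given in Hough normal form (rho, theta).

    Each line satisfies x*cos(theta) + y*sin(theta) = rho.  Solving the
    2x2 linear system gives the corner where two building edges meet.
    This only sketches the refinement idea; the paper's system also
    restricts the search to a window around the user-selected corner.
    """
    rho1, theta1 = line1
    rho2, theta2 = line2
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    return np.linalg.solve(A, b)        # (x, y) of the refined corner

# Example: a vertical edge (theta = 0) and a horizontal edge (theta = pi/2).
print(intersect_hough_lines((100.0, 0.0), (50.0, np.pi / 2)))  # -> [100. 50.]
```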
Citations: 1
Ear Biometrics Based on Geometrical Feature Extraction
Pub Date : 2005-11-01 DOI: 10.1142/9789812834461_0018
M. Choraś
Biometric identification methods have proved to be very efficient, and more natural and easier for users than traditional methods of human identification. In fact, only biometric methods truly identify humans, rather than the keys and cards they possess or the passwords they should remember. The future of biometrics will surely lead to systems based on image analysis, as the data acquisition is very simple and requires only cameras, scanners or sensors. More importantly, such methods can be passive, which means that the user does not have to take an active part in the whole process or, in fact, may not even know that the identification process is taking place. There are many possible data sources for human identification systems, but physiological biometrics seem to have many advantages over methods based on human behaviour. The most interesting human anatomical parts for such passive, physiological biometric systems based on images acquired from cameras are the face and the ear. Both contain a large volume of unique features that allow many users to be distinctively identified, and they will surely be implemented in efficient biometric systems for many applications. The article introduces ear biometrics and presents its advantages over face biometrics in passive human identification systems. A geometrical method of feature extraction from human ear images for human identification is then presented.
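The abstract does not specify the geometric features used; the sketch below shows one generic geometric descriptor for a segmented ear contour (radial distances from the centroid), purely as an illustration. The function `radial_shape_signature` is an assumption, not the author's feature set.

```python
import numpy as np

def radial_shape_signature(contour, n_angles=36):
    """Generic geometric feature vector for a closed contour.

    contour: (N, 2) array of (x, y) boundary points of the segmented ear.
    Returns the centroid-to-boundary distance sampled in n_angles angular
    bins, normalised for scale invariance.  This is a standard shape
    signature used only for illustration; the paper defines its own set of
    geometric features.
    """
    c = contour.mean(axis=0)                       # contour centroid
    d = contour - c
    angles = np.arctan2(d[:, 1], d[:, 0])          # angle of each boundary point
    radii = np.linalg.norm(d, axis=1)
    bins = np.floor((angles + np.pi) / (2 * np.pi) * n_angles).astype(int)
    bins = np.clip(bins, 0, n_angles - 1)
    signature = np.zeros(n_angles)
    for b in range(n_angles):
        sel = radii[bins == b]
        signature[b] = sel.max() if sel.size else 0.0
    return signature / (signature.max() + 1e-12)   # scale-invariant descriptor

# Two ears can then be compared, e.g., by the Euclidean distance between signatures.
```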
Citations: 158
Detecting Human Heads with Their Orientations
Pub Date : 2005-11-01 DOI: 10.1142/9789812834461_0015
A. Sugimoto, Mitsuhiro Kimura, Takashi Matsuyama
We propose a two-step method for detecting human heads and their orientations. In the first step, the method employs an ellipse as the contour model of human-head appearances to deal with a wide variety of appearances. Our method then evaluates the ellipse to detect possible human heads. In the second step, our method focuses on features inside the ellipse, such as the eyes, the mouth or the cheeks, to model facial components. The method evaluates not only such components themselves but also their geometric configuration, to eliminate false positives from the first step and, at the same time, to estimate face orientations. Extensive experiments show that our method can correctly and stably detect human heads together with their orientations.
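A minimal sketch of the first step's idea follows: scoring an axis-aligned ellipse hypothesis by the gradient magnitude sampled along its contour. The function `ellipse_contour_score` and its scoring rule are assumptions for illustration; the paper's evaluation function may differ.

```python
import numpy as np

def ellipse_contour_score(grad_mag, cx, cy, a, b, n_samples=64):
    """Score an ellipse hypothesis against a gradient-magnitude map.

    grad_mag: 2D array of gradient magnitudes (e.g. from a Sobel filter).
    (cx, cy): ellipse centre, (a, b): semi-axes, axis-aligned for simplicity.
    Returns the mean gradient magnitude sampled along the ellipse contour;
    higher scores indicate a better head-contour fit.
    """
    t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    xs = np.clip((cx + a * np.cos(t)).astype(int), 0, grad_mag.shape[1] - 1)
    ys = np.clip((cy + b * np.sin(t)).astype(int), 0, grad_mag.shape[0] - 1)
    return grad_mag[ys, xs].mean()

# Candidate ellipses can be generated over a coarse grid of centres and sizes,
# and the best-scoring ones kept as possible head locations for the second step.
```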
Citations: 5
Prior Knowledge Based Motion Model Representation
Pub Date : 2005-11-01 DOI: 10.1142/9789812834461_0016
A. Sappa, Niki Aifanti, S. Malassiotis, M. Strintzis
This paper presents a new approach for modeling human walking from monocular image sequences. A kinematic model and a walking motion model are introduced in order to exploit prior knowledge. The proposed technique consists of two steps. Initially, an efficient feature point selection and tracking approach is used to compute the feature points' trajectories. Peaks and valleys of these trajectories are used to detect key frames, i.e. frames where both legs are in contact with the floor. Secondly, the motion models associated with each joint are locally tuned by using those key frames. Unlike previous approaches, this tuning process is not performed at every frame, which reduces CPU time. In addition, the movement's frequency is defined by the elapsed time between two consecutive key frames, which allows walking displacements at different speeds to be handled. Experimental results with different video sequences are presented.
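A minimal sketch of the key-frame idea follows: finding peaks and valleys of a tracked point's trajectory with a simple three-point test. The function `local_extrema` is an illustrative assumption, not the paper's exact detector.

```python
import numpy as np

def local_extrema(signal):
    """Indices of local peaks and valleys of a 1D feature-point trajectory.

    A simple three-point neighbour test is used here for illustration; in
    practice the trajectory would be smoothed first to suppress noise.
    """
    s = np.asarray(signal, dtype=float)
    interior = np.arange(1, len(s) - 1)
    peaks = interior[(s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])]
    valleys = interior[(s[1:-1] < s[:-2]) & (s[1:-1] < s[2:])]
    return peaks, valleys

# Example: frames where a foot's vertical coordinate reaches a valley are
# candidate "foot on the floor" events; frames where both feet are at valleys
# are key-frame candidates in the sense described in the abstract.
y_foot = np.array([5, 3, 1, 1.2, 3, 5, 4, 2, 0.9, 1.5, 3])
print(local_extrema(y_foot))   # (array([5]), array([2, 8]))
```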
Citations: 0
Area and Volume Restoration in Elastically Deformable Solids
Pub Date : 2005-11-01 DOI: 10.1142/9789812834461_0021
Micky Kelager, Anders Fleron, Kenny Erleben
This paper describes an improvement to a classical energy-based model for simulating elastically deformable solids. The classical model lacks the ability to prevent solids from collapsing under the influence of external forces, such as user interaction and collisions. A thorough explanation is given for the origins of the instabilities, and extensions to the physical model that solve these issues are proposed. Within the original framework of the classical model, a complete restoration of area and volume is introduced. The improved model is suitable for interactive simulation and can recover from volumetric collapse, in particular under large deformations.
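As an illustration of the restoration idea, the sketch below computes area-restoring forces for a single 2D triangle from the gradient of a quadratic area-deviation energy. The paper's 3D volume formulation is analogous but not reproduced here; the function `area_restoration_forces` and the stiffness `k` are assumptions.

```python
import numpy as np

def signed_area(p0, p1, p2):
    """Signed area of a 2D triangle (positive for counter-clockwise vertices)."""
    return 0.5 * ((p1[0] - p0[0]) * (p2[1] - p0[1]) -
                  (p1[1] - p0[1]) * (p2[0] - p0[0]))

def area_restoration_forces(p0, p1, p2, rest_area, k=1.0):
    """Forces that drive a deformed triangle back toward its rest area.

    The energy is E = 0.5 * k * (A - A0)**2 and the force on each vertex is
    -dE/dp, computed with the analytic gradient of the signed area.  This is
    a 2D illustration of the area/volume restoration idea, not the paper's
    exact formulation.
    """
    A = signed_area(p0, p1, p2)
    coeff = -k * (A - rest_area)
    grad_p0 = 0.5 * np.array([p1[1] - p2[1], p2[0] - p1[0]])
    grad_p1 = 0.5 * np.array([p2[1] - p0[1], p0[0] - p2[0]])
    grad_p2 = 0.5 * np.array([p0[1] - p1[1], p1[0] - p0[0]])
    return coeff * grad_p0, coeff * grad_p1, coeff * grad_p2

# A collapsed triangle (area near zero) receives forces that re-open it toward A0.
```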
Citations: 0
Combining Particle Filter and Population-Based Metaheuristics for Visual Articulated Motion Tracking
Pub Date : 2005-11-01 DOI: 10.1142/9789812834461_0017
J. Pantrigo, Ángel Sánchez, A. S. Montemayor, Kostas Gianikellis
Visual tracking of articulated motion is a complex task with high computational costs. Because articulated objects are usually represented as a set of linked limbs, tracking is performed with the support of a model. Model-based tracking allows object pose to be determined in an effortless way and occlusions to be handled. However, the use of articulated models generates a multidimensional state space and, therefore, the tracking becomes computationally very expensive or even infeasible. Due to the dynamic nature of the problem, sequential estimation algorithms such as particle filters are usually applied to visual tracking. Unfortunately, the particle filter fails in high-dimensional estimation problems such as articulated-object or multiple-object tracking. These problems are called dynamic optimization problems. Metaheuristics, which are high-level general strategies for designing heuristic procedures, have emerged for solving many real-world combinatorial problems as a way of exploring the problem search space efficiently and effectively. Path relinking (PR) and scatter search (SS) are evolutionary metaheuristics successfully applied to several hard optimization problems. The PRPF and SSPF algorithms hybridize the particle filter with these two population-based metaheuristic schemes, respectively. In this paper, we present and compare two different hybrid algorithms, called the Path Relinking Particle Filter (PRPF) and the Scatter Search Particle Filter (SSPF), applied to 2D human motion tracking. Experimental results show that the proposed algorithms increase the performance of standard particle filters.
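For reference, the sketch below implements one iteration of the standard bootstrap particle filter (predict, weight, resample) that both hybrids build on; the scatter-search and path-relinking stages described in the paper are omitted, and the helper names (`transition`, `likelihood`) are assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, transition, likelihood, observation, rng):
    """One bootstrap particle-filter iteration: predict, weight, resample.

    particles: (N, D) state hypotheses (e.g. joint-angle vectors of a model).
    transition(particles, rng): propagates particles with the dynamic model.
    likelihood(particles, observation): per-particle observation likelihoods.
    The hybrid PRPF/SSPF algorithms insert a population-based optimization
    stage between these steps; this sketch shows only the standard filter.
    """
    particles = transition(particles, rng)                  # predict
    weights = weights * likelihood(particles, observation)  # weight
    weights /= weights.sum()
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)                  # resample
    return particles[idx], np.full(n, 1.0 / n)

# Example with a 1D random-walk state and a Gaussian observation model.
rng = np.random.default_rng(0)
parts = rng.normal(0.0, 1.0, size=(200, 1))
w = np.full(200, 1.0 / 200)
trans = lambda p, r: p + r.normal(0.0, 0.1, size=p.shape)
lik = lambda p, z: np.exp(-0.5 * ((p[:, 0] - z) ** 2) / 0.25)
parts, w = particle_filter_step(parts, w, trans, lik, 0.8, rng)
```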
Citations: 27
Simultaneous and Causal Appearance Learning and Tracking
Pub Date : 2005-11-01 DOI: 10.1142/9789812834461_0013
J. Melenchón, Ignasi Iriondo Sanz, L. Meler
This article presents a novel way to simultaneously learn and track the appearance of a previously unseen face without intrusive techniques. The presented approach has a causal behaviour: no future frames are needed to process the current ones. The model used in the tracking process is refined with each input frame thanks to a new algorithm for the simultaneous and incremental computation of the singular value decomposition (SVD) and the mean of the data. Previously developed methods for the iterative computation of the SVD are taken into account, and an original way to extract the mean information from the reduced SVD of a matrix is also considered. Furthermore, the results are produced with linear computational cost and sublinear memory requirements with respect to the size of the data. Finally, experimental results are included, showing the tracking performance and some comparisons between the batch computation and our incremental computation of the SVD with mean information.
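A minimal numpy sketch of the underlying idea, the incremental update of a thin SVD when a new frame arrives, is shown below; the article's algorithm additionally maintains the data mean, which is not reproduced here, and the function `incremental_svd` is an illustrative assumption rather than the published algorithm.

```python
import numpy as np

def incremental_svd(U, s, new_col):
    """Update a thin SVD U * diag(s) * V^T when one new data column arrives.

    Only the left basis U and singular values s are kept, which is what an
    appearance model needs.  The article's method also keeps the running
    mean of the data up to date, which is omitted in this sketch.
    """
    p = U.T @ new_col                 # coefficients in the current subspace
    r = new_col - U @ p               # component orthogonal to the subspace
    r_norm = np.linalg.norm(r)
    k = len(s)
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(s)
    K[:k, k] = p
    K[k, k] = r_norm
    Uk, sk, _ = np.linalg.svd(K)
    j = r / r_norm if r_norm > 1e-12 else np.zeros_like(r)
    U_new = np.hstack([U, j[:, None]]) @ Uk
    return U_new, sk                  # optionally truncate back to k components

# Example: start from the SVD of a few frames, then fold in a new frame.
X = np.random.rand(64, 5)
U, s, _ = np.linalg.svd(X, full_matrices=False)
U, s = incremental_svd(U, s, np.random.rand(64))
```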
Citations: 4
A Comparison Framework for Walking Performances Using aSpaces
Pub Date : 2005-11-01 DOI: 10.1142/9789812834461_0014
Jordi Gonzàlez, Javier Varona, F. X. Roca, Juan José Villanueva
In this paper, we address the analysis of human actions by comparing different performances of the same action executed by different actors. Specifically, we present a comparison procedure applied to the walking action, but the scheme can be applied to other actions, such as bending or running. To achieve fair comparison results, we define a novel human body model based on joint angles, which maximizes the differences between human postures and, moreover, reflects the anatomical structure of human beings. Subsequently, a human action space, called aSpace, is built in order to represent each performance (i.e., each predefined sequence of postures) as a parametric manifold. The final human action representation is called p-action, and it is based on the most characteristic human body postures found during several walking performances. These postures are found automatically by means of a predefined distance function, and they are called key-frames. By using key-frames, we synchronize any performance with respect to the p-action. Furthermore, by considering an arc-length parameterization, independence from the speed at which performances are played is attained. As a result, the style of human walking can be successfully analysed by establishing the differences in the joints between female and male walkers.
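A minimal sketch of the arc-length idea follows: resampling a posture trajectory at equal increments of normalized arc length so that comparisons do not depend on playback speed. The function `resample_by_arc_length` is an assumption for illustration; the aSpace construction and key-frame synchronization are not reproduced.

```python
import numpy as np

def resample_by_arc_length(poses, n_samples=100):
    """Reparameterize a posture trajectory by normalized arc length.

    poses: (T, D) sequence of posture vectors (e.g. joint angles projected
    into an action space).  Resampling every performance at the same
    normalized arc-length positions makes the comparison independent of the
    speed at which the performance was played.
    """
    seg = np.linalg.norm(np.diff(poses, axis=0), axis=1)   # segment lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    arc /= arc[-1]                                          # normalize to [0, 1]
    targets = np.linspace(0.0, 1.0, n_samples)
    # Linear interpolation of each posture dimension at the target arc lengths.
    return np.stack([np.interp(targets, arc, poses[:, d])
                     for d in range(poses.shape[1])], axis=1)

# Two walking performances resampled this way can be compared frame by frame
# (e.g. with a mean joint-angle difference), regardless of their original speed.
```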
Citations: 3