
Proceedings IEEE International Workshop on Modelling People. MPeople'99: Latest Publications

Towards model-based capture of a person's shape, appearance and motion
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798344
A. Hilton
This paper introduces a model-based approach to capturing a person's shape, appearance and movement. A 3D animated model of a clothed person's whole-body shape and appearance is automatically constructed from a set of orthogonal-view colour images. The reconstructed model of a person is then used together with the least-squares inverse-kinematics framework of Bregler and Malik (1998) to capture simple 3D movements from a video image sequence.
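The least-squares inverse-kinematics step can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal example, assuming a planar two-link limb and `scipy.optimize.least_squares`, of recovering joint angles by minimizing the reprojection error between model points and observed image points.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical planar 2-link "limb": forward kinematics returns the 2D
# positions of the elbow and wrist for given joint angles and link lengths.
def forward_kinematics(angles, lengths=(0.3, 0.25)):
    a1, a2 = angles
    l1, l2 = lengths
    elbow = np.array([l1 * np.cos(a1), l1 * np.sin(a1)])
    wrist = elbow + np.array([l2 * np.cos(a1 + a2), l2 * np.sin(a1 + a2)])
    return np.vstack([elbow, wrist])

def residuals(angles, observed):
    # Difference between predicted and observed joint positions,
    # flattened for the least-squares solver.
    return (forward_kinematics(angles) - observed).ravel()

# Synthetic "observation": noisy measurements of elbow and wrist positions.
observed = forward_kinematics(np.array([0.6, -0.4])) + 0.005 * np.random.randn(2, 2)

# Solve for the joint angles that best explain the observation.
fit = least_squares(residuals, x0=np.zeros(2), args=(observed,))
print("estimated joint angles:", fit.x)
```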
Citations: 21
Real time tracking and modeling of faces: an EKF-based analysis by synthesis approach
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798346
Jacob Ström, T. Jebara, S. Basu, A. Pentland
A real-time system for tracking and modeling of faces using an analysis-by-synthesis approach is presented. A 3D face model is texture-mapped with a head-on view of the face. Feature points in the face texture are then selected based on image Hessians. The selected points of the rendered image are tracked in the incoming video using normalized correlation. The result is fed into an extended Kalman filter to recover camera geometry, head pose, and structure from motion. This information is used to rigidly move the face model to render the next image needed for tracking. Every point is tracked from the Kalman filter's estimated position. The variance of each measurement is estimated using a number of factors, including the residual error and the angle between the surface normal and the camera. The estimated head pose can be used to warp the face in the incoming video back to a frontal position, and parts of the image can then be subject to eigenspace coding for efficient transmission. The mouth texture is transmitted in this way using 50 bits per frame plus overhead from the person-specific eigenspace. The face tracking system runs at 30 Hz; coding the mouth texture slows it down to 12 Hz.
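As a rough illustration of the normalized-correlation tracking stage (a generic sketch, not the authors' code), the snippet below searches a small window of the incoming frame for the position whose patch has the highest normalized cross-correlation with a reference template; the array sizes and search radius are assumptions.

```python
import numpy as np

def normalized_correlation(patch, template):
    # Zero-mean, unit-norm correlation score in [-1, 1].
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def track_point(frame, template, prev_xy, radius=8):
    """Search a (2*radius+1)^2 window around the previous position for the
    best normalized-correlation match of the template."""
    h, w = template.shape
    best_score, best_xy = -1.0, prev_xy
    x0, y0 = prev_xy
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            patch = frame[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue
            score = normalized_correlation(patch, template)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score

# Toy usage: a template cut from a synthetic frame is located again nearby.
frame = np.random.rand(120, 160)
template = frame[50:58, 70:78].copy()
print(track_point(frame, template, prev_xy=(68, 48)))
```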
Citations: 86
Understanding purposeful human motion
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798342
C. Wren, Alex Pentland
Human motion can be understood on several levels. The most basic level is the notion that humans are collections of things that have predictable visual appearance. Next is the notion that humans exist in a physical universe; as a consequence, a large part of human motion can be modeled and predicted with the laws of physics. Finally, there is the notion that humans utilize muscles to actively shape purposeful motion. We employ a recursive framework for real-time, 3-D tracking of human motion that enables pixel-level, probabilistic processes to take advantage of the contextual knowledge encoded in the higher-level models, including models of dynamic constraints on human motion. We will show that models of purposeful action arise naturally from this framework and, further, that those models can be used to improve the perception of human motion. Results demonstrate both qualitative and quantitative gains in tracking performance.
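The idea that much of human motion is predictable from simple dynamics can be sketched, for example, with a constant-velocity state model inside a linear Kalman predict/update loop; this is our own illustrative stand-in, not the paper's recursive framework or its specific dynamic constraints.

```python
import numpy as np

dt = 1.0 / 30.0  # assumed frame interval

# State: [x, y, vx, vy]; constant-velocity dynamics as a stand-in for a
# physics-based prediction of where a body part will be in the next frame.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = 1e-3 * np.eye(4)           # process noise: unmodeled muscle/actuation forces
H = np.array([[1.0, 0, 0, 0],  # only the 2D position is observed
              [0, 1.0, 0, 0]])
R = 1e-2 * np.eye(2)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for t in range(5):
    x, P = predict(x, P)
    z = np.array([0.1 * t, 0.05 * t])  # synthetic measurements of a moving point
    x, P = update(x, P, z)
print("estimated state:", x)
```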
Citations: 25
Learning structured behaviour models using variable length Markov models
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798351
Aphrodite Galata, Neil Johnson, D. Hogg
In recent years there has been an increased interest in the modelling and recognition of human activities involving highly structured and semantically rich behaviour such as dance, aerobics, and sign language. A novel approach is presented for automatically acquiring stochastic models of the high-level structure of an activity without the assumption of any prior knowledge. The process involves temporal segmentation into plausible atomic behaviour components and the use of variable length Markov models for the efficient representation of behaviours. Experimental results are presented which demonstrate the generation of realistic sample behaviours and evaluate the performance of models for long-term temporal prediction.
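To make the variable length Markov model idea concrete, here is a minimal sketch (our own assumed data structure, not the authors' implementation): next-symbol probabilities are counted for contexts of varying length, and prediction backs off to the longest context actually observed.

```python
from collections import defaultdict

class VLMM:
    """Minimal variable-length Markov model over a symbol sequence.

    Counts next-symbol frequencies for every context up to max_order and
    predicts with the longest context that has been observed.
    """
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, sequence):
        for i in range(len(sequence)):
            for order in range(self.max_order + 1):
                if i - order < 0:
                    break
                context = tuple(sequence[i - order:i])
                self.counts[context][sequence[i]] += 1

    def predict(self, history):
        # Back off from the longest available context to the empty one.
        for order in range(min(self.max_order, len(history)), -1, -1):
            context = tuple(history[len(history) - order:])
            if context in self.counts:
                dist = self.counts[context]
                total = sum(dist.values())
                return {sym: c / total for sym, c in dist.items()}
        return {}

# Toy "behaviour" sequence of atomic components A, B, C, D.
model = VLMM(max_order=2)
model.fit(list("ABCABCABDABC"))
print(model.predict(list("AB")))  # mostly C, occasionally D
```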
Citations: 30
Tracking hybrid 2D-3D human models from multiple views
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798341
Eng-Jon Ong, S. Gong
A novel framework is proposed under which robust matching and tracking of a 3D skeleton model of a human body from multiple views can be performed. We propose a method for measuring the ambiguity of the 2D measurements provided by each view. The ambiguity measurement is then used for selecting the best view for the most accurate match and tracking. A hybrid 2D-3D representation is chosen for modelling human body poses. The hybrid model is learnt using hierarchical principal component analysis. The CONDENSATION algorithm is used to robustly track and match 3D skeleton models in individual views.
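The CONDENSATION step can be sketched generically as a resample-predict-weight loop over a particle set; the random-walk dynamics and Gaussian likelihood below are placeholder assumptions, not the paper's body-model specifics.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, observation,
                      process_noise=0.05, obs_noise=0.1):
    """One CONDENSATION iteration for a 1D state (placeholder model)."""
    n = len(particles)
    # 1. Resample in proportion to the previous weights.
    idx = rng.choice(n, size=n, p=weights)
    resampled = particles[idx]
    # 2. Predict by applying (here: random-walk) dynamics with noise.
    predicted = resampled + rng.normal(0.0, process_noise, size=n)
    # 3. Weight each particle by its likelihood under the new observation.
    new_weights = np.exp(-0.5 * ((observation - predicted) / obs_noise) ** 2)
    new_weights /= new_weights.sum()
    return predicted, new_weights

# Toy run: track a slowly drifting scalar "pose parameter".
particles = rng.normal(0.0, 1.0, size=500)
weights = np.full(500, 1.0 / 500)
for t in range(20):
    true_state = 0.05 * t
    observation = true_state + rng.normal(0.0, 0.1)
    particles, weights = condensation_step(particles, weights, observation)
print("estimate:", float(np.sum(particles * weights)))
```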
Citations: 31
An improved algorithm for reconstruction of the surface of the human body from 3D scanner data using local B-spline patches
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798343
I. Douros, L. Dekker, B. Buxton
There are an increasing number of applications that require the construction of computerised human body models. The work presented here is a follow-up to a previously presented surface reconstruction algorithm, which has been greatly improved by following a local approach. The advantages of the method are presented, along with an explanation of why its results are better than those of the previous algorithm. The result is a compound, multi-segment, and yet entirely smooth and watertight surface. Such a surface has strong potential for use in applications of a mainly medical nature, such as calculation of surface area and physical surface reconstruction for prosthetics manufacturing.
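To illustrate the local spline-patch idea in general terms (a sketch under our own assumptions, using SciPy's smoothing bivariate spline rather than the authors' formulation), one can fit a smooth patch to a noisy grid of scanner-like depth samples and resample it densely.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic stand-in for one local patch of scanner data: a noisy radius
# grid r(theta, z) over a small angular/vertical window of the body.
theta = np.linspace(0.0, 0.5, 20)          # angular samples (radians)
z = np.linspace(0.0, 0.3, 25)              # vertical samples (metres)
T, Z = np.meshgrid(theta, z, indexing="ij")
radius = 0.15 + 0.01 * np.sin(6 * T) + 0.005 * Z + 0.002 * np.random.randn(*T.shape)

# Fit a smoothing cubic B-spline patch to the noisy samples; s > 0 trades
# fidelity for smoothness across the patch.
patch = RectBivariateSpline(theta, z, radius, s=0.01)

# Evaluate the patch on a finer grid, e.g. for meshing or surface-area computation.
fine = patch(np.linspace(0.0, 0.5, 80), np.linspace(0.0, 0.3, 100))
print("smoothed patch grid:", fine.shape)
```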
Citations: 30
Automated body modeling from video sequences
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798345
Ralf Plänkers, Pascal. Fua, Ralf. Plaenkers, Pascal. Fua
Synthetic modeling of human bodies and the simulation of motion is a long-standing problem in animation, and much work is involved before a near-realistic performance can be achieved. At present, it takes an experienced designer a very long time to build a complete and realistic model that closely resembles a specific person. Our ultimate goal is to automate the process and to produce realistic animation models given a set of video sequences. In this paper we show that, given video sequences of a person moving in front of the camera, we can recover shape information and joint locations. Both are essential for instantiating a complete and realistic model that closely resembles a specific person; without knowledge of the positions of the articulations, a character cannot be animated. This is achieved with minimal human intervention. The recovered shape and motion parameters can be used to reconstruct the original movement or to allow other animation models to mimic the subject's actions.
Citations: 48
Modeling people's focus of attention
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798349
R. Stiefelhagen, Jie Yang, A. Waibel
In this paper, we present an approach to model focus of attention of participants in a meeting via hidden Markov models (HMM). We employ HMM to encode and track focus of attention, based on the participants' gaze information and knowledge of their positions. The positions of the participants are detected by face tracking in the view of a panoramic camera mounted on the meeting table. We use neural networks to estimate the participants' gaze from camera images. We discuss the implementation of the approach in detail, including system architecture, data collection, and evaluation. The system has achieved an accuracy rate of up to 93% in detecting focus of attention on test sequences taken from meetings. We have used focus of attention as an index in a multimedia meeting browser.
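The HMM-based tracking of focus of attention can be sketched with a tiny Viterbi decoder: hidden states are the possible focus targets, and observations are discretised gaze directions. The state set, probabilities, and discretisation below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

states = ["participant_A", "participant_B", "whiteboard"]      # assumed focus targets
observations = ["gaze_left", "gaze_center", "gaze_right"]      # assumed gaze bins

# Illustrative HMM parameters: focus tends to persist, and each target is
# most consistent with one gaze direction.
start = np.array([1 / 3, 1 / 3, 1 / 3])
trans = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
emit = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.2, 0.7],
                 [0.15, 0.7, 0.15]])

def viterbi(obs_indices):
    """Most likely focus-of-attention sequence for a gaze observation sequence."""
    n, T = len(states), len(obs_indices)
    logp = np.log(start) + np.log(emit[:, obs_indices[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans) + np.log(emit[:, obs_indices[t]])[None, :]
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [states[i] for i in reversed(path)]

gaze = [observations.index(o) for o in
        ["gaze_left", "gaze_left", "gaze_center", "gaze_right", "gaze_right"]]
print(viterbi(gaze))
```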
Citations: 8
Stereoscopic system for human body tracking in natural scenes
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798348
J. Amat, A. Casals, M. Frigola
Human body detection and tracking in a scene constitute a very active field of work due to their applicability to many areas, especially as a man-machine interface (MMI). The system presented aims to improve the reliability and efficiency of teleoperation. The system is applicable to teleoperated manipulation in civil settings such as big robots in shipyards, mines, and public works, or even cranes. Image segmentation is performed from movement detection. The recognition of moving bodies is verified by means of a simplified articulated cylindrical model, thus allowing the system to operate at a low computational cost.
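The movement-detection stage that seeds the segmentation can be sketched with simple frame differencing and thresholding; this is a generic technique, and the threshold and toy frames are our assumptions rather than the paper's method.

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=0.08):
    """Binary mask of pixels whose intensity changed by more than `threshold`.

    A crude stand-in for the movement-detection step that seeds the
    segmentation of moving bodies.
    """
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return diff > threshold

# Toy example: a bright 10x10 "person" moves a few pixels between frames.
prev_frame = np.zeros((120, 160))
frame = np.zeros((120, 160))
prev_frame[40:50, 60:70] = 1.0
frame[40:50, 64:74] = 1.0

mask = motion_mask(prev_frame, frame)
print("moving pixels:", int(mask.sum()))
```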
Citations: 22
Generating a population of animated faces from pictures
Pub Date: 1999-09-20 DOI: 10.1109/PEOPLE.1999.798347
Won-Sook Lee, Nadia Magnenat-Thalmann
This paper describes a simple and robust method for generating a photo-realistic animated face population in a virtual world. First we make a small set of 3D virtual faces using only photo data, with a method called virtual cloning. Then we use a very intuitive 3D-morphing system to generate a new population, which benefits from the 3D structure of the existing virtual faces. The virtual cloning method uses a set of orthogonal pictures of a person. This efficient method for reconstructing 3D heads suitable for animation starts with the extraction of feature points from the orthogonal picture sets. A previously constructed, animation-ready generic model is transformed to each individualized head based on the features extracted from the orthogonal pictures. Using projections of the 3D head, a 2D texture image is obtained for each individual reconstructed from pictures and is then fitted to the clone, a fully automated procedure resulting in 360-degree seamless texture mapping. We also introduce an extremely fast dynamic system for 3D morphing with 3D spatial interpolation and powerful 2D texture-image metamorphosis based on the triangulation inherited from the 3D structure of the virtual faces. The interface allows for real-time inspection and control of the morphing process.
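The 3D-morphing idea, interpolating between two head meshes that share the same topology, can be sketched as per-vertex linear interpolation (texture blending omitted); the mesh data and blend weights below are illustrative, not the authors' models.

```python
import numpy as np

def morph_vertices(face_a, face_b, alpha):
    """Linearly interpolate vertex positions of two meshes with identical
    topology; alpha=0 gives face_a, alpha=1 gives face_b."""
    return (1.0 - alpha) * face_a + alpha * face_b

# Toy "heads": two clouds of 500 vertices sharing the same vertex indexing.
rng = np.random.default_rng(1)
face_a = rng.normal(size=(500, 3))
face_b = face_a + rng.normal(scale=0.1, size=(500, 3))

# A small population of in-between faces, as a 3D-morphing system would produce.
population = [morph_vertices(face_a, face_b, a) for a in np.linspace(0.0, 1.0, 5)]
print(len(population), population[0].shape)
```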
Citations: 10