Tracking facial motion
Irfan Essa, Trevor Darrell, A. Pentland
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346257
We describe a computer system that allows real-time tracking of facial expressions. Sparse, fast visual measurements using 2D templates are used to observe the face of a subject. Rather than track features on the face, the distributed response of a set of templates is used to characterize a given facial region. These measurements are coupled via a linear interpolation method to states in a physically-based model of facial animation, which includes both skin and muscle dynamics. By integrating real-time 2D image processing with 3D models we obtain a system that is able to quickly track and interpret complex facial motions.
{"title":"Tracking facial motion","authors":"Irfan Essa, Trevor Darrell, A. Pentland","doi":"10.1109/MNRAO.1994.346257","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346257","url":null,"abstract":"We describe a computer system that allows real-time tracking of facial expressions. Sparse, fast visual measurements using 2D templates are used to observe the face of a subject. Rather than track features on the face, the distributed response of a set of templates is used to characterize a given facial region. These measurements ape coupled via a linear interpolation method to states in a physically-based model of facial animation, which includes both skin and muscle dynamics. By integrating real-time 2D image-processing with 3D models we obtain a system that is able to quickly track and interpret complex facial motions.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132087231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DigitEyes: vision-based hand tracking for human-computer interaction
James M. Rehg, T. Kanade
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346260
Computer sensing of hand and limb motion is an important problem for applications in human-computer interaction (HCI), virtual reality, and athletic performance measurement. Commercially available sensors are invasive and require the user to wear gloves or targets. We have developed a noninvasive vision-based hand tracking system, called DigitEyes. Employing a kinematic hand model, the DigitEyes system has demonstrated tracking performance at speeds of up to 10 Hz, using line and point features extracted from gray-scale images of unadorned, unmarked hands. We describe an application of our sensor to a 3D mouse user-interface problem.
{"title":"DigitEyes: vision-based hand tracking for human-computer interaction","authors":"James M. Rehg, T. Kanade","doi":"10.1109/MNRAO.1994.346260","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346260","url":null,"abstract":"Computer sensing of hand and limb motion is an important problem for applications in human-computer interaction (HCI), virtual reality, and athletic performance measurement. Commercially available sensors are invasive, and require the user to wear gloves or targets. We have developed a noninvasive vision-based hand tracking system, called DigitEyes. Employing a kinematic hand model, the DigitEyes system has demonstrated tracking performance at speeds of up to 10 Hz, using line and point features extracted from gray scale images of unadorned, unmarked hands. We describe an application of our sensor to a 3D mouse user-interface problem.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122819064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A general approach for determining 3D motion and structure of multiple objects from image trajectories
T. Y. Tian, M. Shah
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346247
We present a general approach for determining the 3D motion and structure of multiple objects undergoing arbitrary motions. We segment the scene based on 3D motion parameters. First, the general motion model is fitted to each single trajectory. For this nonlinear fitting, initial estimates are obtained by a linear multiple-motion SFM (structure from motion) algorithm using the first two frames. Next, trajectories are clustered into groups corresponding to different moving objects. In our approach, discontinuous trajectories resulting from occlusion are also allowed. Finally, multiple-trajectory fitting is applied to each trajectory group to improve the estimates further. Our simulation results show that the proposed method is robust.
{"title":"A general approach for determining 3D motion and structure of multiple objects from image trajectories","authors":"T. Y. Tian, M. Shah","doi":"10.1109/MNRAO.1994.346247","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346247","url":null,"abstract":"Presents a general approach to determine the 3D motion and structure of multiple objects undergoing arbitrary motions. We segment the scene based on 3D motion parameters. First, the general motion model is fitted to each single trajectory. For this nonlinear fitting, initial estimates are obtained by a linear multiple-motion SFM (structure from motion) algorithm using the first two frames. Next, trajectories are clustered into groups corresponding to different moving objects. In our approach, discontinuous trajectories, resulting from occlusion, are also allowed. Finally, multiple trajectory fitting is applied to each trajectory group to improve the estimates further. Our simulation results show that the proposed method is robust.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124163439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active motion-based segmentation of human body outlines
I. Kakadiaris, Dimitris N. Metaxas, R. Bajcsy
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346255
We present an integrated approach towards the segmentation and shape estimation of human body outlines. Initially, we assume that the human body consists of a single part, and we fit a deformable model to the given data using our physics-based shape and motion estimation framework. As an actor attains different postures, new protrusions emerge on the outline. We model these changes in the shape using a new representation scheme consisting of a parametric composition of deformable models. This representation allows us to identify the underlying human parts that gradually become visible, by monitoring the evolution of shape and motion parameters of the composed models. Based on these parameters, their joint locations are identified. The algorithm is applied iteratively over subsequent frames until all moving parts are identified. We demonstrate the technique in a series of experiments with very encouraging results.
{"title":"Active motion-based segmentation of human body outlines","authors":"I. Kakadiaris, Dimitris N. Metaxas, R. Bajcsy","doi":"10.1109/MNRAO.1994.346255","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346255","url":null,"abstract":"We present an integrated approach towards the segmentation and shape estimation of human body outlines. Initially, we assume that the human body consists of a single part, and we fit a deformable model to the given data using our physics-based shape and motion estimation framework. As an actor attains different postures, new protrusions emerge on the outline. We model these changes in the shape using a new representation scheme consisting of a parametric composition of deformable models. This representation allows us to identify the underlying human parts that gradually become visible, by monitoring the evolution of shape and motion parameters of the composed models. Based on these parameters, their joint locations are identified. The algorithm is applied iteratively over subsequent frames until all moving parts are identified. We demonstrate the technique in a series of experiments with very encouraging results.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129322443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards structure and motion estimation from dynamic silhouettes
T. Joshi, N. Ahuja, J. Ponce
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346240
We address the problem of estimating the structure and motion of a smooth curved object from its silhouettes observed over time by a trinocular imaging system. We first construct a model for the local structure along the silhouette for each frame in the temporal sequence. The local models are then integrated into a global surface description by estimating the motion between successive frames. The algorithm tracks certain surface and image features (parabolic points, silhouette inflections, and frontier points) which are used to bootstrap the motion estimation process. The whole silhouette is then used to refine the initial motion estimate. We have implemented the proposed approach and report preliminary results.
{"title":"Towards structure and motion estimation from dynamic silhouettes","authors":"T. Joshi, N. Ahuja, J. Ponce","doi":"10.1109/MNRAO.1994.346240","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346240","url":null,"abstract":"Addresses the problem of estimating the structure and motion of a smooth curved object from its silhouettes observed over time by a trinocular imagery. We first construct a model for the local structure along the silhouette for each frame in the temporal sequence. The local models are then integrated into a global surface description by estimating the motion between successive frames. The algorithm tracks certain surface and image features (parabolic points and silhouette inflections, frontier points) which are used to bootstrap the motion estimation process. The whole silhouette is then used to refine the initial motion estimate. We have implemented the proposed approach and report preliminary results.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130836469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human emotion recognition from motion using a radial basis function network architecture
Mark Rosenblum, Y. Yacoob, Larry Davis
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346256
A radial basis function network architecture is developed that learns the correlation of facial feature motion patterns and human emotions. We describe a hierarchical approach which at the highest level identifies emotions, at the mid level determines motion of facial features, and at the low level recovers motion directions. Individual emotion networks were trained to recognize the 'smile' and 'surprise' emotions. Each emotion network was trained by viewing a set of sequences of one emotion for many subjects. The trained neural network was then tested for retention, extrapolation and rejection ability. Success rates were about 88% for retention, 73% for extrapolation, and 79% for rejection.
{"title":"Human emotion recognition from motion using a radial basis function network architecture","authors":"Mark Rosenblum, Y. Yacoob, Larry Davis","doi":"10.1109/MNRAO.1994.346256","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346256","url":null,"abstract":"A radial basis function network architecture is developed that learns the correlation of facial feature motion patterns and human emotions. We describe a hierarchical approach which at the highest level identifies emotions, at the mid level determines motion of facial features, and at the low level recovers motion directions. Individual emotion networks were trained to recognize the 'smile' and 'surprise' emotions. Each emotion network was trained by viewing a set of sequences of one emotion for many subjects. The trained neural network was then tested for retention, extrapolation and rejection ability. Success rates were about 88% for retention, 73% for extrapolation, and 79% for rejection.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133450669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iterative estimation of non-rigid motion based on relative elasticity
Philip Smith, N. Nandhakumar
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346234
The vast majority of published research on motion has assumed that the imaged world moves in a rigid manner, even though this is a poor assumption for recovering the motion parameters of many naturally occurring objects, such as clouds, plants, and animals. Unfortunately, if the rigid-motion assumption is relaxed to allow deformation, the problem of estimating the motion becomes severely underconstrained. In this paper, we define a model of deformable motion based on the concept of an object's relative elasticity. We then use this novel concept to develop an iterative, linear technique to recover a description of the whole-body, as well as the sectional, motion of objects undergoing deformable transformations. The algorithm's ability to perform the stated task is then verified by experiment.
{"title":"Iterative estimation of non-rigid motion based on relative elasticity","authors":"Philip Smith, N. Nandhakumar","doi":"10.1109/MNRAO.1994.346234","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346234","url":null,"abstract":"The vast majority of published research in motion has assumed that the imaged world moves in a rigid manner, even though this is an ill-posed assumption for recovering the motion parameters of many naturally occurring objects, such as clouds, plants, and animals. Unfortunately, if the rigidity of motion assumption is relaxed to allow deformation of motion, the problem of estimating the motion becomes severely underconstrained. In this paper, we define a model of deformable motion based on the concept of an object's relative elasticity. We then use this novel concept to develop an iterative, linear technique to recover a description of the whole-body, as well as the sectional, motion of objects undergoing deformable transformations. The algorithm's ability to perform the stated task is then verified by experiment.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132587999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Segmenting independently moving, noisy points
D. Jacobs, C. Chennubhotla
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346249
There has been much work on using point features tracked through a video sequence to determine structure and motion. In many situations, to use this work, we must first isolate subsets of points that share a common motion. This is hard because we must distinguish between independent motions and apparent deviations from a single motion due to noise. We propose several methods of searching for point-sets with consistent 3D motions. We analyze the potential sensitivity of each method for detecting independent motions, and experiment with each method on a real image sequence.
{"title":"Segmenting independently moving, noisy points","authors":"D. Jacobs, C. Chennubhotla","doi":"10.1109/MNRAO.1994.346249","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346249","url":null,"abstract":"There has been much work on using point features tracked through a video sequence to determine structure and motion. In many situations, to use this work, we must first isolate subsets of points that share a common motion. This is hard because we must distinguish between independent motions and apparent deviations from a single motion due to noise. We propose several methods of searching for point-sets with consistent 3D motions. We analyze the potential sensitivity of each method for detecting independent motions, and experiment with each method on a real image sequence.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116592109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient method for contour tracking using active shape models
A. Baumberg, David C. Hogg
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346236
There has been considerable research interest recently in the areas of real-time contour tracking and active shape models. This paper demonstrates how dynamic filtering can be used in combination with a modal-based flexible shape model to track an articulated non-rigid body in motion. The results show the method being used to track the silhouette of a walking pedestrian in real time. The active shape model used was generated automatically from real image data and incorporates variability in shape due to orientation as well as object flexibility. A Kalman filter is used to control spatial scale for feature search over successive frames. Iterative refinement allows accurate contour localisation where feasible. The shape model incorporates knowledge of the likely shape of the contour and speeds up tracking by reducing the number of system parameters. A further increase in speed is obtained by filtering the shape parameters independently.
{"title":"An efficient method for contour tracking using active shape models","authors":"A. Baumberg, David C. Hogg","doi":"10.1109/MNRAO.1994.346236","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346236","url":null,"abstract":"There has been considerable research interest recently, in the areas of real time contour tracking and active shape models. This paper demonstrates how dynamic filtering can be used in combination with a modal-based flexible shape model to track an articulated non-rigid body in motion. The results show the method being used to track the silhouette of a walking pedestrian in real time. The active shape model used was generated automatically from real image data and incorporates variability in shape due to orientation as well as object flexibility. A Kalman filter is used to control spatial scale for feature search over successive frames. Iterative refinement allows accurate contour localisation where feasible. The shape model incorporates knowledge of the likely shape of the contour and speeds up tracking by reducing the number of system parameters. A further increase in speed is obtained by filtering the shape parameters independently.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124693884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Articulated and elastic non-rigid motion: a review
Jake K. Aggarwal, Qin Cai, W. Liao, B. Sabata
Pub Date: 1994-11-11  DOI: 10.1109/MNRAO.1994.346261
In general, the motion of physical objects is non-rigid. Most researchers have focused on the study of the motion and structure of rigid objects because of its simplicity and elegance. Recently, investigation of non-rigid structure and motion has drawn the attention of researchers from a wide spectrum of disciplines. Since the non-rigid motion class encompasses a huge domain, we restrict our overview to the motion analysis of articulated and elastic non-rigid objects. Numerous approaches that have been proposed to recover the 3D structure and motion of objects are studied. The discussion includes both 1) motion recovery without shape models and 2) model-based analysis, and covers a number of examples of real-world objects.
{"title":"Articulated and elastic non-rigid motion: a review","authors":"Jake K. Aggarwal, Qin Cai, W. Liao, B. Sabata","doi":"10.1109/MNRAO.1994.346261","DOIUrl":"https://doi.org/10.1109/MNRAO.1994.346261","url":null,"abstract":"Motion of physical objects is non-rigid, in general. Most researchers have focused on the study of the motion and structure of rigid objects because of its simplicity and elegance. Recently, investigation of non-rigid structure and motion transformation has drawn the attention of researchers from a wide spectrum of disciplines. Since the non-rigid motion class encompasses a huge domain, we restrict our overview to the motion analysis of articulated and elastic non-rigid objects. Numerous approaches that have been proposed to recover the 3D structure and motion of objects are studied. The discussion includes both: 1) motion recovery without shape models, and 2) model-based analysis, and covers a number of examples of real world objects.<<ETX>>","PeriodicalId":336218,"journal":{"name":"Proceedings of 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1994-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128620964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}