Classifying facial attributes using a 2-D Gabor wavelet representation and discriminant analysis
Pub Date: 2000-03-28 | DOI: 10.1109/AFGR.2000.840635
Michael J. Lyons, Julien Budynek, A. Plante, S. Akamatsu
A method for automatically classifying facial images is proposed. Faces are represented using elastic graphs labelled with 2D Gabor wavelet features. The system is trained from examples to classify faces on the basis of high-level attributes, such as sex, "race", and expression, using linear discriminant analysis (LDA). Use of the Gabor representation relaxes the requirement for precise normalization of the face: approximate registration of a facial graph is sufficient. LDA allows simple and rapid training from examples, as well as straightforward interpretation of the role of the input features in classification. The algorithm is tested on three different facial image datasets, one of which was acquired under relatively uncontrolled conditions, on tasks of sex, "race", and expression classification, and results of these tests are presented. The discriminant vectors may be interpreted in terms of the saliency of the input features for the different classification tasks, which we portray visually with feature saliency maps for node position as well as filter spatial frequency and orientation.
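A minimal sketch of the classification stage described in this abstract, assuming the Gabor-jet feature vectors have already been extracted at the registered graph nodes; the file names and label encoding are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical input: one row per face, each row the concatenated Gabor
# wavelet responses sampled at the (approximately registered) graph nodes.
X_train = np.load("gabor_jets_train.npy")  # shape (n_faces, n_features)
y_train = np.load("labels_train.npy")      # e.g. 0 = male, 1 = female

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# For a two-class task the single discriminant vector can be inspected for
# feature saliency: large-magnitude coefficients mark the node positions,
# spatial frequencies, and orientations that drive the decision, in the
# spirit of the paper's saliency maps.
saliency = np.abs(lda.coef_[0])
print("most salient feature index:", saliency.argmax())
```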
{"title":"Classifying facial attributes using a 2-D Gabor wavelet representation and discriminant analysis","authors":"Michael J. Lyons, Julien Budynek, A. Plante, S. Akamatsu","doi":"10.1109/AFGR.2000.840635","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840635","url":null,"abstract":"A method for automatically classifying facial images is proposed. Faces are represented using elastic graphs labelled with with 2D Gabor wavelet features. The system is trained from examples to classify faces on the basis of high-level attributes, such as sex, \"race\", and expression, using linear discriminant analysis (LDA). Use of the Gabor representation relaxes the requirement for precise normalization of the face: approximate registration of a facial graph is sufficient. LDA allows simple and rapid training from examples, as well as straightforward interpretation of the role of the input features for classification. The algorithm is tested on three different facial image datasets, one of which was acquired under relatively uncontrolled conditions, on tasks of sex, \"race\" and expression classification. Results of these tests are presented. The discriminant vectors may be interpreted in terms of the saliency of the input features for the different classification tasks, which we portray visually with feature saliency maps for node position as well as filter spatial frequency and orientation.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124976993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial tracking and animation using a 3D sensor
Pub Date: 2000-03-28 | DOI: 10.1109/AFGR.2000.840628
T. Fromherz, B. Takács, E. Hueso, Dimitris N. Metaxas, P. Stucki
Summary form only given. We describe a high-performance face tracking and animation solution used for low-cost production of TV commercials, films and special effects. The system combines a 3D sensor that captures full facial surfaces at arbitrary frame rates with our advanced facial tracking technology to produce highly accurate animation without the animator's intervention. We briefly review the state-of-the-art in facial tracking and animation and present experimental results proving the effectiveness of our method.
{"title":"Facial tracking and animation using a 3D sensor","authors":"T. Fromherz, B. Takács, E. Hueso, Dimitris N. Metaxas, P. Stucki","doi":"10.1109/AFGR.2000.840628","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840628","url":null,"abstract":"Summary form only given. We describe a high-performance face tracking and animation solution used for low-cost production of TV commercials, films and special effects. The system combines a 3D sensor that captures full facial surfaces at arbitrary frame rates with our advanced facial tracking technology to produce highly accurate animation without the animator's intervention. We briefly review the state-of-the-art in facial tracking and animation and present experimental results proving the effectiveness of our method.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133922336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video annotation for content-based retrieval using human behavior analysis and domain knowledge
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840653
H. Miyamori, S. Iisaku
This paper proposes automatic annotation of sports video for content-based retrieval. Conventional methods use position information of objects, such as locus, relative positions, and their transitions, as indices; they have the drawbacks that tracking errors caused by occlusion lead to recognition failures, and that representation by position information alone inherently limits the number of recognizable events. Our approach incorporates human behavior analysis and domain-specific knowledge into conventional methods, forming an integrated reasoning module for richer expressiveness of events and more robust recognition. Based on the proposed method, we implemented a content-based retrieval system that can identify several actions in real tennis video. We select court and net lines, player positions, ball positions, and player actions as indices. Court and net lines are extracted using a court model and Hough transforms. Player and ball positions are tracked by adaptive template matching with predictions designed to handle sudden changes of motion direction. Player actions are analyzed by 2D appearance-based matching using the transition of player silhouettes and a hidden Markov model. Results on two sets of tennis video are presented, demonstrating the performance and validity of our approach.
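As an illustration of one indexing step, a hedged sketch of court-line extraction with a Hough transform via OpenCV; the edge-detection and Hough thresholds are assumptions, and matching the detected segments against the court model is left out:

```python
import cv2
import numpy as np

frame = cv2.imread("tennis_frame.png")          # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# The probabilistic Hough transform returns candidate line segments; these
# would then be matched against the court model to label specific lines.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=60, maxLineGap=10)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("court_lines.png", frame)
```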
{"title":"Video annotation for content-based retrieval using human behavior analysis and domain knowledge","authors":"H. Miyamori, S. Iisaku","doi":"10.1109/AFGR.2000.840653","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840653","url":null,"abstract":"This paper proposes the automatic annotation of sports video for content-based retrieval. Conventional methods using position information of objects such as locus, relative positions, their transitions, etc., as indices, have drawbacks that tracking errors of a certain object due to occlusions causes recognition failures, and that representation by position information essentially has a limited number of recognizable events in the retrieval. Our approach incorporates human behavior analysis and specific domain knowledge with conventional methods, to develop an integrated reasoning module for richer expressiveness of events and robust recognition. Based on the proposed method, we implemented a content-based retrieval system which can identify several actions on real tennis video. We select court and net lines, players' positions, ball positions, and players' actions, as indices. Court and net lines are extracted using a court model and Hough transforms. Players and ball positions are tracked by adaptive template matching and particular predictions against sudden changes of motion direction. Players' actions are analyzed by 2D appearance-based matching using the transition of players' silhouettes and a hidden Markov model. The results using two sets of tennis video is presented, demonstrating the performance and the validity of our approach.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115119451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards model-based gesture recognition
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840668
G. Schmidt, D. House
We propose a new technique for gesture recognition that involves both physical and control models of gesture performance, and describe preliminary experiments done to validate the approach. The technique uses underlying dynamics and control models to augment a set of Kalman-filter-based recognizer modules, so that each filters the input data under the a priori assumption that one particular gesture is being performed. The recognized gesture is the one whose filter output most closely matches the output of an unaugmented Kalman filter. In our preliminary experiments, we considered gestures made with simple motions of the right arm, tracking only hand position. We modeled the path that the hand traverses while performing a gesture as a point mass moving through air. The control model for each specific gesture was simply an experimentally determined sequence of applied forces plus a proportional control based on spatial position. Our experiments showed that even with such a simple set of models we obtained results reasonably comparable to a carefully hand-constructed feature-based discriminator on a limited set of spatially distinct planar gestures.
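A schematic sketch of the recognizer structure, assuming linear dynamics: one Kalman filter per gesture, augmented with that gesture's control sequence, scored against an unaugmented reference filter. The matrices and control sequences are placeholders, not the authors' models:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R, u=None):
    # Standard predict/update; u is an optional control input (the
    # experimentally determined applied force for one gesture).
    x = F @ x + (u if u is not None else 0.0)
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def recognize(observations, gesture_controls, F, H, Q, R):
    # Run the reference filter plus one augmented filter per gesture, and
    # pick the gesture whose filtered track stays closest to the reference.
    n = F.shape[0]
    ref_x, ref_P = np.zeros(n), np.eye(n)
    states = {g: (np.zeros(n), np.eye(n)) for g in gesture_controls}
    errors = {g: 0.0 for g in gesture_controls}
    for t, z in enumerate(observations):
        ref_x, ref_P = kalman_step(ref_x, ref_P, z, F, H, Q, R)
        for g, controls in gesture_controls.items():
            u = controls[min(t, len(controls) - 1)]  # per-step force vector
            x, P = kalman_step(*states[g], z, F, H, Q, R, u=u)
            states[g] = (x, P)
            errors[g] += float(np.sum((x - ref_x) ** 2))
    return min(errors, key=errors.get)
```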
{"title":"Towards model-based gesture recognition","authors":"G. Schmidt, D. House","doi":"10.1109/AFGR.2000.840668","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840668","url":null,"abstract":"We propose a new technique for gesture recognition that involves both physical and control models of gesture performance, and describe preliminary experiments done to validate the approach. The technique incorporates underlying dynamics and control models are used to augment a set of Kalman-filter-based recognizer modules so that each filters the input data under the a priori assumption that one of the gestures is being performed. The recognized gesture is the filter output that most closely matches the output of an unaugmented Kalman filter. In our preliminary experiments, we treated gestures made with simple motions of the right arm, done while tracking only hand position. We modeled the path that the hand traverses while performing a gesture as a point-mass moving through air. The control model for each specific gesture was simply an experimentally determined sequence of applied forces plus a proportional control based on spatial position. Our experiments showed that even using such a simple set of models we were able to obtain results reasonably comparable with a carefully hand-constructed feature-based discriminator on a limited set of spatially-distinct planar gestures.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127214948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pose invariant face recognition
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840642
Fu Jie Huang, Tsuhan Chen, Zhi-Hua Zhou, HongJiang Zhang
We describe a novel neural network architecture that can recognize human faces over a range of views (from 30 degrees left to 30 degrees right of out-of-plane rotation). View-specific eigenface analysis is used as the front end of the system to extract features, and a neural network ensemble is used for recognition. Experimental results show that the recognition accuracy of our network ensemble is higher than that of conventional methods, such as using a single neural network to recognize faces of a specific view.
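A minimal sketch of the view-specific eigenface front end: one PCA per pose bin, with each image projected through the eigenspace of its (known or estimated) view. The dataset layout, bin spacing, and component count are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

view_bins = [-30, -15, 0, 15, 30]  # degrees of out-of-plane rotation

# Fit one eigenspace per view bin (hypothetical training files; each array
# holds flattened, aligned face images of that view).
eigenspaces = {v: PCA(n_components=40).fit(np.load(f"faces_view_{v}.npy"))
               for v in view_bins}

def extract_features(image_vec, view):
    # Project onto the eigenfaces of the matching view; the resulting
    # coefficients would feed the per-view members of the network ensemble.
    return eigenspaces[view].transform(image_vec.reshape(1, -1))[0]
```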
{"title":"Pose invariant face recognition","authors":"Fu Jie Huang, Tsuhan Chen, Zhi-Hua Zhou, HongJiang Zhang","doi":"10.1109/AFGR.2000.840642","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840642","url":null,"abstract":"We describe a novel neural network architecture, which can recognize human faces with any view in a certain viewing angle range (from left 30 degrees to right 30 degrees out of plane rotation). View-specific eigenface analysis is used as the front-end of the system to extract features, and the neural network ensemble is used for recognition. Experimental results show that the recognition accuracy of our network ensemble is higher than conventional methods such as using a single neural network to recognize faces of a specific view.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127484510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face detection using mixtures of linear subspaces
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840614
Ming-Hsuan Yang, N. Ahuja, D. Kriegman
We present two methods using mixtures of linear subspaces for face detection in gray-level images. One method uses a mixture of factor analyzers to concurrently perform clustering and, within each cluster, local dimensionality reduction. The parameters of the mixture model are estimated using an EM algorithm, and a face is detected if the probability of an input sample is above a predefined threshold. The other method uses Kohonen's self-organizing map for clustering, Fisher's linear discriminant to find the optimal projection for pattern classification, and a Gaussian distribution to model the class-conditional density function of the projected samples for each class. The parameters of the class-conditional density functions are maximum-likelihood estimates, and the decision rule is also based on maximum likelihood. A wide range of face images, including ones in different poses, with different expressions, and under different lighting conditions, is used as the training set to capture the variations of human faces. Our methods have been tested on three sets of 225 images which contain 871 faces. Experimental results on the first two datasets show that our methods perform as well as the best methods in the literature, yet have fewer false detections.
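A sketch of the first method's detection rule: score a candidate window by its likelihood under a mixture model of face appearance and threshold it. scikit-learn has no mixture of factor analyzers, so GaussianMixture (also fit by EM) stands in here; the paper's model additionally performs local dimensionality reduction within each component, and the threshold calibration below is an assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

faces = np.load("face_windows.npy")  # hypothetical flattened training windows
mix = GaussianMixture(n_components=5, covariance_type="diag").fit(faces)

# Crude calibration: accept anything at least as likely as the least likely
# training face (a real system would tune this on a validation set).
THRESHOLD = mix.score_samples(faces).min()

def is_face(window_vec):
    # score_samples returns the per-sample log-likelihood under the mixture.
    return mix.score_samples(window_vec.reshape(1, -1))[0] > THRESHOLD
```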
{"title":"Face detection using mixtures of linear subspaces","authors":"Ming-Hsuan Yang, N. Ahuja, D. Kriegman","doi":"10.1109/AFGR.2000.840614","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840614","url":null,"abstract":"We present two methods using mixtures of linear sub-spaces for face detection in gray level images. One method uses a mixture of factor analyzers to concurrently perform clustering and, within each cluster, perform local dimensionality reduction. The parameters of the mixture model are estimated using an EM algorithm. A face is detected if the probability of an input sample is above a predefined threshold. The other mixture of subspaces method uses Kohonen's self-organizing map for clustering and Fisher linear discriminant to find the optimal projection for pattern classification, and a Gaussian distribution to model the class-conditioned density function of the projected samples for each class. The parameters of the class-conditioned density functions are maximum likelihood estimates and the decision rule is also based on maximum likelihood. A wide range of face images including ones in different poses, with different expressions and under different lighting conditions are used as the training set to capture the variations of human faces. Our methods have been tested on three sets of 225 images which contain 871 faces. Experimental results on the first two datasets show that our methods perform as well as the best methods in the literature, yet have fewer false detects.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121879805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust facial feature localization by coupled features
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840604
M. Zobel, A. Gebhard, D. Paulus, Joachim Denzler, H. Niemann
We consider the problem of robust localization of faces and some of their facial features. The task arises, e.g., in the medical field in visual analysis of facial paresis. We detect faces and facial features by means of appropriate DCT coefficients, which we obtain by exploiting the coding capabilities of a JPEG hardware compressor. Besides an anthropometric localization approach, we focus on how spatial coupling of the facial features can be used to improve the robustness of localization. Because the presented approach is embedded in a completely probabilistic framework, it is not restricted to facial features and can be generalized to multipart objects of any kind; to this end, the notion of a "coupled structure" is introduced. Finally, the approach is applied to the problem of localizing facial features in DCT-coded images, and results from our experiments are shown.
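A sketch of the feature source, assuming direct computation: blockwise 8x8 DCT coefficients of the kind a JPEG coder produces. The paper reads them from a hardware compressor; here scipy computes them, and the choice of kept coefficients is an assumption (zigzag ordering omitted for brevity):

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(gray, block=8, keep=6):
    # Tile the image into 8x8 blocks and keep the first few coefficients of
    # each block as a local low-frequency appearance descriptor.
    h, w = gray.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = dctn(gray[y:y + block, x:x + block].astype(float),
                          norm="ortho")
            feats.append(coeffs.flatten()[:keep])
    return np.array(feats)  # shape (n_blocks, keep)
```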
{"title":"Robust facial feature localization by coupled features","authors":"M. Zobel, A. Gebhard, D. Paulus, Joachim Denzler, H. Niemann","doi":"10.1109/AFGR.2000.840604","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840604","url":null,"abstract":"We consider the problem of robust localization of faces and some of their facial features. The task arises, e.g., in the medical field of visual analysis of facial paresis. We detect faces and facial features by means of appropriate DCT coefficients that we obtain by neatly using the coding capabilities of a JPEG hardware compressor. Beside an anthropometric localization approach we focus on how spatial coupling of the facial features can be used to improve robustness of the localization. Because the presented approach is embedded in a completely probabilistic framework, it is not restricted to facial features, it can be generalized to multipart objects of any kind. Therefore the notion of a \"coupled structure\" is introduced. Finally, the approach is applied to the problem of localizing facial features in DCT-coded images and results from our experiments are shown.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"120 1-3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128690120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection and estimation of pointing gestures in dense disparity maps
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840676
N. Jojic, Thomas S. Huang, B. Brumitt, B. Meyers, Steve Harris
We describe a real-time system for detecting pointing gestures and estimating the direction of pointing using stereo cameras. Previously, similar systems were implemented using color-based blob trackers, which relied on effective skin color detection; this approach is sensitive to lighting changes and to the clothing worn by the user. In contrast, we use a stereo system that produces dense disparity maps in real time; disparity maps are considerably less sensitive to lighting changes. Our system subtracts the background, analyzes the foreground pixels to break the body into parts using a robust mixture model, and estimates the direction of pointing. We have tested the system on both coarse and fine pointing, by selecting targets in a room and by controlling a cursor on a wall screen, respectively.
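A schematic sketch of the first two stages, under stated assumptions: foreground pixels are those whose disparity departs from a background disparity map, and a plain Gaussian mixture stands in for the robust mixture model used to break the body into parts:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_body(disparity, background, min_delta=3.0, n_parts=3):
    # Foreground = pixels whose disparity differs from the background map.
    fg = np.argwhere(np.abs(disparity - background) > min_delta)
    if len(fg) < n_parts:
        return None
    # Cluster (row, col, disparity) of foreground pixels into part-like
    # blobs; the pointing direction would then be estimated from the blob
    # geometry (e.g. head and hand centroids).
    feats = np.column_stack([fg, disparity[fg[:, 0], fg[:, 1]]])
    return GaussianMixture(n_components=n_parts).fit(feats)
```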
{"title":"Detection and estimation of pointing gestures in dense disparity maps","authors":"N. Jojic, Thomas S. Huang, B. Brumitt, B. Meyers, Steve Harris","doi":"10.1109/AFGR.2000.840676","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840676","url":null,"abstract":"We describe a real-time system for detecting pointing gestures and estimating the direction of pointing using stereo cameras. Previously, similar systems were implemented using color-based blob trackers, which relied on effective skin color detection; this approach is sensitive to lighting changes and the clothing worn by the user. In contrast, we used a stereo system that produces dense disparity maps in real-time. Disparity maps are considerably less sensitive to lighting changes. Our system subtracts the background, analyzes the foreground pixels to break the body into parts using a robust mixture model, and estimates the direction of pointing. We have tested the system on both coarse and fine pointing by selecting the targets in a room and controlling the cursor on a wall screen, respectively.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128356829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Determining correspondences for statistical models of facial appearance
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840646
K. N. Walker, Tim Cootes, C. Taylor
In order to build a statistical model of facial appearance we require a set of images, each with a consistent set of landmarks. We address the problem of automatically placing a set of landmarks to define the correspondences across an image set. We can estimate correspondences between any pair of images by locating salient points on one and finding their corresponding position in the second. However, we wish to determine a globally consistent set of correspondences across all the images. We present an iterative scheme in which these pairwise correspondences are used to determine a global correspondence across the entire set. We show results on several training sets, and demonstrate that an appearance model trained on the correspondences is of higher quality than one built from hand-marked images.
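To illustrate the pairwise step only, a hedged sketch using ORB feature matching as a stand-in for salient-point correspondence (ORB postdates this paper; the authors' salient-point method differs). The iterative reconciliation into a globally consistent landmark set is not shown:

```python
import cv2

def pairwise_correspondences(img_a, img_b, max_matches=50):
    # Detect salient points in each image and match their descriptors.
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    # Return matched point pairs; the global scheme would reconcile many
    # such pairwise sets into one consistent set of landmarks.
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt)
            for m in matches[:max_matches]]
```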
{"title":"Determining correspondences for statistical models of facial appearance","authors":"K. N. Walker, Tim Cootes, C. Taylor","doi":"10.1109/AFGR.2000.840646","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840646","url":null,"abstract":"In order to build a statistical model of facial appearance we require a set of images, each with a consistent set of landmarks. We address the problem of automatically placing a set of landmarks to define the correspondences across an image set. We can estimate correspondences between any pair of images by locating salient points on one and finding their corresponding position in the second. However, we wish to determine a globally consistent set of correspondences across all the images. We present an iterative scheme in which these pairwise correspondences are used to determine a global correspondence across the entire set. We show results on several training sets, and demonstrate that an appearance model trained on the correspondences is of higher quality than one built from hand-marked images.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124180658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A probabilistic framework for rigid and non-rigid appearance based tracking and recognition
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840679
F. D. L. Torre, Y. Yacoob, L. Davis
This paper describes a unified probabilistic framework for appearance-based tracking of rigid and non-rigid objects. A spatio-temporally dependent shape-texture eigenspace and a mixture of diagonal Gaussians are learned in a hidden Markov model (HMM)-like structure to better constrain the model and for recognition purposes. Particle filtering is used to track the object while switching between different shape/texture models. This framework allows recognition and temporal segmentation of activities. Additionally, an automatic stochastic initialization is proposed, the number of states in the HMM is selected based on the Akaike information criterion, and a comparison with deterministic tracking for 2D models is discussed. Preliminary results on eye tracking, lip tracking, and temporal segmentation of mouth events are presented.
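A generic sketch of the particle-filtering core, assuming random-walk dynamics in the shape/texture coefficient space; the eigenspace, the likelihood, and the model-switching logic are placeholders, not the authors' formulation:

```python
import numpy as np

def particle_filter_step(particles, weights, observation, likelihood,
                         sigma=0.05):
    n, d = particles.shape
    # Resample particles in proportion to their current weights.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Diffuse: random-walk dynamics in the coefficient space.
    particles = particles + np.random.normal(0.0, sigma, size=(n, d))
    # Reweight each hypothesis by how well it explains the new image.
    weights = np.array([likelihood(p, observation) for p in particles])
    weights = weights / weights.sum()
    return particles, weights
```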
{"title":"A probabilistic framework for rigid and non-rigid appearance based tracking and recognition","authors":"F. D. L. Torre, Y. Yacoob, L. Davis","doi":"10.1109/AFGR.2000.840679","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840679","url":null,"abstract":"This paper describes an unified probabilistic framework for appearance-based tracking of rigid and non-rigid objects. A spatio-temporal dependent shape-texture eigenspace and mixture of diagonal Gaussians are learned in a hidden Markov model (HMM)-like structure to better constrain the model and for recognition purposes. Particle filtering is used to track the object while switching between different shape/texture models. This framework allows recognition and temporal segmentation of activities. Additionally an automatic stochastic initialization is proposed, the number of states in the HMM are selected based on the Akaike information criterion and comparison with deterministic tracking for 2D models is discussed. Preliminary results of eye tracking, lip tracking and temporal segmentation of mouth events are presented.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126485897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}