Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840678
Ying-li Tian, T. Kanade, J. Cohn
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions (e.g., happiness and anger). Such prototypic expressions, however, occur infrequently. Human emotions and intentions are communicated more often by changes in one or two discrete facial features. We develop an automatic system to analyze subtle changes in facial expressions based on both permanent (e.g., mouth, eye, and brow) and transient (e.g., furrows and wrinkles) facial features in a nearly frontal image sequence. Multi-state facial component models are proposed for tracking and modeling the different facial features. Based on these multi-state models, and without artificial enhancement, we detect and track the facial features, including the mouth, eyes, brows, and cheeks, together with their related wrinkles and facial furrows. Moreover, we recover detailed parametric descriptions of the facial features. With these features as inputs, 11 individual action units or action unit combinations are recognized by a neural network algorithm. A recognition rate of 96.7% is obtained. The recognition results indicate that our system can identify action units regardless of whether they occur singly or in combination.
Title: Recognizing lower face action units for facial expression analysis
Journal: Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)
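The final stage described in the abstract maps parametric feature descriptions to action units with a neural network. The sketch below illustrates only that stage, on made-up data: a one-hidden-layer network with sigmoid outputs is trained by full-batch gradient descent to predict 11 binary AU labels from 8 synthetic feature parameters. All dimensions, data, and hyperparameters are hypothetical; the paper's feature extraction is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's parametric feature descriptions:
# 8 feature parameters (lip height, furrow presence, ...) per frame.
n, d_in, d_hid, n_aus = 200, 8, 16, 11
X = rng.normal(size=(n, d_in))
W_true = rng.normal(size=(d_in, n_aus))
Y = (X @ W_true > 0.5).astype(float)          # multi-label AU targets

# One-hidden-layer network with sigmoid outputs, trained on binary
# cross-entropy by gradient descent.
W1 = rng.normal(scale=0.1, size=(d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.1, size=(d_hid, n_aus)); b2 = np.zeros(n_aus)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, sigmoid(H @ W2 + b2)

lr, losses = 1.0, []
for _ in range(1000):
    H, P = forward(X)
    losses.append(-np.mean(Y * np.log(P + 1e-9) + (1 - Y) * np.log(1 - P + 1e-9)))
    G = (P - Y) / n                           # gradient of BCE wrt output logits
    GH = (G @ W2.T) * (1 - H ** 2)            # backprop through tanh layer
    W2 -= lr * (H.T @ G);  b2 -= lr * G.sum(0)
    W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(0)

acc = ((forward(X)[1] > 0.5) == Y).mean()     # per-AU training accuracy
```

A sketch like this only shows the classification step; the paper's 96.7% figure depends on its real tracked features and training data.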
Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840681
Rómer Rosales, S. Sclaroff
A novel approach is presented for estimating human body posture and motion from a video sequence. Human pose is defined as the instantaneous image plane configuration of a single articulated body in terms of the position of a predetermined set of joints. First, statistical segmentation of the human bodies from the background is performed and low-level visual features are found given the segmented body shape. The goal is to be able to map these visual features to body configurations. Given a set of body motion sequences for training, a set of clusters is built in which each has statistically similar configurations. This unsupervised task is done using the expectation maximization algorithm. Then, for each of the clusters, a neural network is trained to build this mapping. Clustering body configurations improves the mapping accuracy. Given new visual features, a mapping from each cluster is performed providing a set of possible poses. From this set, the most likely pose is extracted given the learned probability distribution and the visual feature similarity between hypothesis and input. Performance of the system is characterized using a new set of known body postures, showing promising results.
Title: Learning and synthesizing human body motion and posture
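The pipeline above clusters body configurations with EM and then learns one feature-to-pose mapping per cluster. A minimal sketch of that structure, under heavy simplification: the data are synthetic 2D "features" and "poses", the per-cluster neural networks are replaced by responsibility-weighted linear least-squares maps, and the mixture is spherical. None of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 2D visual features X and 2D pose targets Y drawn from
# two regimes, each with its own linear feature-to-pose map.
n = 300
z = rng.integers(0, 2, n)
centers = np.array([[0.0, 0.0], [6.0, 6.0]])
X = centers[z] + rng.normal(size=(n, 2))
A = [np.array([[1.0, 0.5], [0.0, 1.0]]), np.array([[-1.0, 0.0], [0.3, -1.0]])]
Y = np.stack([A[zi] @ xi for zi, xi in zip(z, X)])

# EM for a spherical two-component Gaussian mixture over the features.
K = 2
mu = np.stack([X.min(0), X.max(0)])           # deterministic spread init
var = np.ones(K)
pi = np.ones(K) / K
for _ in range(50):
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)      # (n, K)
    logp = np.log(pi) - d2 / (2 * var) - np.log(var)    # log density up to const
    r = np.exp(logp - logp.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)                        # responsibilities (E step)
    Nk = r.sum(0)                                       # M step below
    mu = (r.T @ X) / Nk[:, None]
    var = (r * d2).sum(0) / (2 * Nk)
    pi = Nk / n

# One responsibility-weighted linear map per cluster (the paper trains a
# neural network per cluster; least squares keeps the sketch short).
maps = []
for k in range(K):
    Wk = r[:, k:k + 1]
    M = np.linalg.solve(X.T @ (Wk * X) + 1e-6 * np.eye(2), X.T @ (Wk * Y))
    maps.append(M)

def predict(x):
    """Map features through the most likely cluster's map."""
    d2 = ((x - mu) ** 2).sum(-1)
    k = int(np.argmax(np.log(pi) - d2 / (2 * var) - np.log(var)))
    return x @ maps[k]
```

The paper additionally scores all cluster hypotheses against the input before choosing; here the choice is by mixture likelihood alone.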
Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840651
B. Moghaddam, Ming-Hsuan Yang
Support vector machines (SVMs) are investigated for visual gender classification with low-resolution "thumbnail" faces (21-by-12 pixels) processed from 1755 images from the FERET face database. The performance of the SVM (3.4% error) is shown to be superior to traditional pattern classifiers (linear, quadratic, Fisher linear discriminant, nearest-neighbor) as well as more modern techniques such as radial basis function (RBF) classifiers and large ensemble-RBF networks. The SVM also outperformed human test subjects at the same task: in a perception study with 30 human test subjects, ranging in age from mid-20s to mid-40s, the average error rate was found to be 32% for the "thumbnails" and 6.7% with higher-resolution images. The difference in performance between low- and high-resolution tests with the SVM was only 1%, demonstrating robustness and relative scale invariance for visual classification.
Title: Gender classification with support vector machines
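To make the classification setup concrete: the paper trains SVMs on flattened 21x12 thumbnails. The sketch below uses synthetic stand-in "thumbnails" (noisy copies of two prototype images, not FERET data) and a linear SVM trained with the Pegasos stochastic sub-gradient method; the paper's best results used kernel SVMs, which this deliberately does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for 21x12-pixel thumbnails: two classes, each a noisy
# copy of its own prototype image, flattened to 252-dimensional vectors.
h, w, n = 21, 12, 400
proto = {1: rng.normal(size=h * w), -1: rng.normal(size=h * w)}
y = rng.choice([-1, 1], size=n)
X = np.stack([proto[int(c)] for c in y]) + rng.normal(scale=1.5, size=(n, h * w))

# Pegasos: stochastic sub-gradient descent on the linear SVM objective
#   (lam / 2) * ||w||^2 + mean hinge loss.
lam, T = 0.01, 2000
wvec = np.zeros(h * w)
for t in range(1, T + 1):
    i = rng.integers(n)
    eta = 1.0 / (lam * t)                      # standard Pegasos step size
    if y[i] * (X[i] @ wvec) < 1:               # margin violated: hinge active
        wvec = (1 - eta * lam) * wvec + eta * y[i] * X[i]
    else:
        wvec = (1 - eta * lam) * wvec
train_err = np.mean(np.sign(X @ wvec) != y)
```

On the well-separated synthetic classes the training error should be near zero; the paper's 3.4% is against real, much harder data.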
Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840630
M. Malciu, F. Prêteux
We present a generic and robust method for model-based global 3D head pose estimation in monocular and non-calibrated video sequences. The proposed method relies on a 3D/2D matching between 2D image features estimated throughout the sequence and 3D object features of a generic head model. Specifically, it combines motion and texture features in an iterative optimization procedure based on the downhill simplex algorithm. A proper initialization of the pose parameters, based on a block matching procedure, is performed at each frame in order to take into account large amplitude motions. For the same reason, we have developed a nonlinear optical flow-based interpolation algorithm for increasing the frame rate. Experiments demonstrate that this method is stable over extended sequences including large head motions, occlusions, various head postures and lighting variations. The estimation accuracy is related to the head model, as established by using an ellipsoidal model and an ad hoc synthesized model. The proposed method is general enough to be applied to other tracking applications.
Title: A robust model-based approach for 3D head tracking in video sequences
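The core of the method is iterative pose optimization with the downhill simplex algorithm, matching projected 3D model features to 2D image features. SciPy's "Nelder-Mead" method is exactly the downhill simplex algorithm, so a toy version can be sketched as below; the head model, the projection, the four-parameter pose, and the initialization are all hypothetical stand-ins (the paper combines motion and texture terms and uses block matching for initialization).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Hypothetical rigid head model: a sparse cloud of 3D landmark points.
model = rng.normal(scale=5.0, size=(12, 3))

def rot(yaw, pitch):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return Ry @ Rx

def project(params):
    """Perspective projection of the posed model (focal length 50)."""
    yaw, pitch, tx, ty = params
    P = model @ rot(yaw, pitch).T + np.array([tx, ty, 50.0])
    return 50.0 * P[:, :2] / P[:, 2:3]

true = np.array([0.3, -0.2, 1.0, 0.5])
obs = project(true)                    # stands in for tracked 2D image features

def cost(p):
    # Sum of squared 3D/2D reprojection errors.
    return np.sum((project(p) - obs) ** 2)

init = np.zeros(4)                     # coarse init (block matching in the paper)
sim = np.vstack([init, init + 0.2 * np.eye(4)])    # explicit initial simplex
res = minimize(cost, init, method="Nelder-Mead",
               options={"initial_simplex": sim, "xatol": 1e-9,
                        "fatol": 1e-12, "maxiter": 5000, "maxfev": 5000})
```

With a reasonable initialization the simplex search recovers the pose; the paper's point is that such initialization (and frame-rate interpolation) is what keeps it stable under large motions.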
Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840683
Changbo Hu, Q. Yu, Yi Li, Songde Ma
In this paper, we present an approach to extracting a parametric 2D human model for the purpose of estimating human posture and recognizing human activity. This task is done in two steps. In the first step, a human silhouette is extracted from a complex background under a fixed camera through a statistical method. By this method, we can reconstruct the background dynamically and obtain the moving silhouette. In the second step, a genetic algorithm is used to match the silhouette of the human body to a model in parametric shape space. In order to reduce the search dimension, a layer method is proposed to take advantage of the human model. Additionally, we apply a structure-oriented Kalman filter to estimate the motion of body parts, so the initial population and parameter values in the GA can be well constrained. Experiments on real video sequences show that our method can extract the human model robustly and accurately.
Title: Extraction of parametric human model for posture recognition using genetic algorithm
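The second step, matching a parametric shape to an extracted silhouette with a GA, can be illustrated on a toy problem: fit a four-parameter ellipse (a stand-in for the paper's layered body model) to a binary silhouette by maximizing overlap. The GA operators below (tournament selection, blend crossover, Gaussian mutation, elitism) are generic textbook choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

# Binary "silhouette" to fit: an ellipse standing in for a body-part mask.
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]

def ellipse_mask(cx, cy, a, b):
    return ((xx - cx) / a) ** 2 + ((yy - cy) / b) ** 2 <= 1.0

target = ellipse_mask(40, 30, 12, 20)

def fitness(p):
    """Intersection-over-union of candidate mask and silhouette."""
    m = ellipse_mask(*p)
    inter = np.logical_and(m, target).sum()
    union = np.logical_or(m, target).sum()
    return inter / union

# Generational GA over parameters (cx, cy, a, b).
pop = rng.uniform([8, 8, 4, 4], [56, 56, 28, 28], size=(40, 4))
for _ in range(40):
    fit = np.array([fitness(p) for p in pop])
    new = [pop[fit.argmax()].copy()]                   # elitism
    while len(new) < len(pop):
        parents = []
        for _ in range(2):                             # tournament selection
            i, j = rng.integers(len(pop), size=2)
            parents.append(pop[i] if fit[i] > fit[j] else pop[j])
        mix = rng.uniform(size=4)
        child = mix * parents[0] + (1 - mix) * parents[1]   # blend crossover
        child += rng.normal(scale=1.0, size=4)              # mutation
        new.append(np.maximum(child, 2.0))                  # keep axes positive
    pop = np.array(new)

best = max(pop, key=fitness)
```

The paper's layer method and Kalman prediction would shrink the search space that this brute-force population has to cover.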
Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840641
Rui Liao, S. Li
An automatic face recognition system based on multiple facial features is described. Each facial feature is represented by a Gabor-based complex vector and is localized by an automatic facial feature detection scheme. Two face recognition approaches, named two-layer nearest neighbor (TLNN) and modular nearest feature line (MNFL), respectively, are proposed. Both TLNN and MNFL are based on the multiple facial features detected in each image, and their superiority in face recognition is demonstrated.
Title: Face recognition based on multiple facial features
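The representation here, a complex Gabor vector per facial feature point, can be sketched compactly. The code below builds a small Gabor filter bank, samples magnitude responses ("jets") at fixed hypothetical feature points, and matches a probe by plain nearest neighbor over the concatenated vectors; the TLNN and MNFL decision rules themselves are not reproduced, and the images are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(5)

def gabor(ksize, theta, lam):
    """Complex Gabor kernel: Gaussian envelope times a complex carrier."""
    r = np.arange(ksize) - ksize // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * (ksize / 4) ** 2))
    return env * np.exp(2j * np.pi * xr / lam)

bank = [gabor(9, t, l)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for l in (4.0, 8.0)]

def jet(img, px, py):
    """Gabor-magnitude vector at one facial feature point."""
    k = 4                                       # half kernel size
    patch = img[py - k:py + k + 1, px - k:px + k + 1]
    return np.array([np.abs((patch * g).sum()) for g in bank])

def describe(img, points):
    return np.concatenate([jet(img, x, y) for x, y in points])

# Synthetic "gallery": 5 identities with fixed (hypothetical) feature points.
points = [(10, 10), (22, 10), (16, 20), (16, 26)]
gallery = [rng.uniform(size=(32, 32)) for _ in range(5)]
descs = [describe(im, points) for im in gallery]

# Probe: identity 2 plus mild noise, matched by nearest neighbor.
probe = gallery[2] + rng.normal(scale=0.05, size=(32, 32))
d = describe(probe, points)
match = int(np.argmin([np.linalg.norm(d - g) for g in descs]))
```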
Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840649
R. Gross, Jie Yang, A. Waibel
We investigate the recognition of human faces in a meeting room. The major challenges of identifying human faces in this environment include low quality of input images, poor illumination, unrestricted head poses, continuously changing facial expressions, and occlusion. To address these problems, we propose a novel algorithm, dynamic space warping (DSW). The basic idea of the algorithm is to combine local features under certain spatial constraints. We compare DSW with the eigenface approach on data collected from various meetings. We tested both frontal and profile face images, as well as images with two stages of occlusion. The experimental results indicate that the DSW approach outperforms the eigenface approach in both cases.
Title: Face recognition in a meeting room
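The abstract does not give enough detail to reproduce DSW itself, but the eigenface baseline it is compared against is standard and easy to sketch: PCA of the training images, then nearest neighbor in the eigenspace. The gallery below is synthetic (noisy copies of per-identity prototype vectors), not meeting-room data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic gallery: 6 identities x 4 images, 24x24 pixels, each image a
# noisy copy of its identity's prototype.
ids, per_id, d = 6, 4, 24 * 24
protos = rng.normal(size=(ids, d))
X = np.repeat(protos, per_id, axis=0) + rng.normal(scale=0.3, size=(ids * per_id, d))
labels = np.repeat(np.arange(ids), per_id)

# Eigenfaces: principal components of the centred training images via SVD.
mean = X.mean(0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
E = Vt[:10]                       # top 10 eigenfaces
proj = (X - mean) @ E.T           # gallery coefficients in eigenspace

def identify(img):
    """Nearest-neighbour identity in eigenface coefficient space."""
    c = (img - mean) @ E.T
    return labels[np.argmin(np.linalg.norm(proj - c, axis=1))]

probe = protos[3] + rng.normal(scale=0.3, size=d)
```

The paper's finding is that this holistic baseline degrades under occlusion and pose change, which is what motivates combining local features instead.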
Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840627
H. Hongo, M. Yasumoto, Y. Niwa, M. Ohya, Kazuhiko Yamamoto
We propose a multi-camera system that can track multiple human faces and hands, and focus on face and hand gestures for recognition. Our current system consists of four cameras. Two fixed cameras are used as a stereo system to estimate face and hand positions. The stereo camera detects faces and hands using the skin color method we propose; the distances of the targets are then estimated. Next, to track multiple targets, we estimate the positions and sizes of targets between consecutive frames. The other two cameras track targets such as faces and hands. If a target is not an appropriate size for recognition, the tracking cameras acquire a zoomed image of it. Since our system has two tracking cameras, it can track two targets at the same time. To recognize faces and hand gestures, we propose four directional features combined with linear discriminant analysis. Using our system, we experimented on human position estimation, multiple face tracking, and face and hand gesture recognition. These experiments showed that our system could estimate human position with the stereo camera and track multiple targets by using target positions and sizes, even when people overlapped each other. In addition, our system could recognize faces and hand gestures using the four directional features.
Title: Focus of attention for face and hand gesture recognition using multiple cameras
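The detection front end rests on skin-color classification. A common way to do this, sketched below on a synthetic frame, is to threshold in intensity-normalized chromaticity (r, g) space, which discounts brightness changes; the specific thresholds and the test image are illustrative only, not the paper's calibrated values.

```python
import numpy as np

# Synthetic frame: bluish background plus a "face" region filled with a
# skin-like colour.
H, W = 48, 64
img = np.zeros((H, W, 3))
img[..., 2] = 0.8                            # bluish background
img[10:30, 20:40] = (0.75, 0.45, 0.35)       # skin-toned patch

# Intensity-normalised chromaticity: r = R/(R+G+B), g = G/(R+G+B).
s = img.sum(-1) + 1e-9
r, g = img[..., 0] / s, img[..., 1] / s
mask = (r > 0.4) & (r < 0.6) & (g > 0.25) & (g < 0.35)   # illustrative gates

# Localize the detection by its bounding box (a simple stand-in for the
# stereo system's face/hand candidate extraction).
ys, xs = np.nonzero(mask)
box = (ys.min(), ys.max(), xs.min(), xs.max())
```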
Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840639
Tim Cootes, G. V. Wheeler, K. N. Walker, C. Taylor
We demonstrate that a small number of 2D statistical models are sufficient to capture the shape and appearance of a face from any viewpoint (full profile to fronto-parallel). Each model is linear and can be matched rapidly to new images using the active appearance model algorithm. We show how such a set of models can be used to estimate head pose, to track faces through large angles of head rotation, and to synthesize faces from unseen viewpoints.
Title: View-based active appearance models
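The claim that "each model is linear" refers to the standard statistical shape/appearance construction: mean plus principal modes of variation. The sketch below builds just the shape half of such a model (a point distribution model) from synthetic landmark contours with two hypothetical deformation modes; the AAM search algorithm and the appearance (texture) model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Training shapes: a base contour of 20 landmarks deformed along two
# hypothetical modes, flattened to 40-vectors (x1, y1, x2, y2, ...).
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
base = np.c_[np.cos(t), 1.5 * np.sin(t)].ravel()
mode1 = np.c_[np.zeros_like(t), np.sin(t)].ravel()      # e.g. "mouth opening"
mode2 = np.c_[np.cos(2 * t), np.zeros_like(t)].ravel()  # e.g. "face width"
coeffs = rng.normal(size=(100, 2))
shapes = (base + coeffs @ np.stack([mode1, mode2])
          + rng.normal(scale=0.01, size=(100, 40)))

# Linear statistical shape model: mean plus principal modes (PCA via SVD).
mean = shapes.mean(0)
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
P = Vt[:2]                                   # two retained modes

def fit(shape):
    """Project a shape into the model subspace and reconstruct it."""
    b = (shape - mean) @ P.T                 # mode parameters
    return mean + b @ P

err = np.abs(fit(shapes[0]) - shapes[0]).max()
```

Any shape the model has seen is reconstructed almost exactly from just two mode parameters, which is what makes matching such models to new images fast.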
Pub Date: 2000-03-26
DOI: 10.1109/AFGR.2000.840617
Karl Schwerdt, J. Crowley
We discuss a new robust tracking technique applied to histograms of intensity-normalized color. This technique supports a video codec based on orthonormal basis coding. Orthonormal basis coding can be very efficient when the images to be coded have been normalized in size and position. However, an imprecise tracking procedure can have a negative impact on the efficiency and reconstruction quality of this technique, since it may increase the size of the required basis space. The face tracking procedure described in this paper has certain advantages over conventional color-histogram tracking techniques, such as greater stability, higher precision, and less jitter. In addition, features of the tracked object, such as its mean and variance, are mathematically describable.
Title: Robust face tracking using color
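A common realization of color-histogram tracking, sketched below on a single synthetic frame, is histogram backprojection followed by mean-shift of a search window toward the weighted centroid; per frame this would simply repeat. The single-channel "hue" image, bin count, and window size are assumptions for the sketch, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic frame: noisy background with a distinctly coloured square object.
H, W = 60, 80
hue = rng.uniform(0, 0.5, size=(H, W))          # background "hue" channel
obj_y, obj_x = 35, 55
hue[obj_y:obj_y + 10, obj_x:obj_x + 10] = rng.uniform(0.8, 1.0, size=(10, 10))

# Target model: histogram of the object's hue values.
bins = 16
target_hist, _ = np.histogram(hue[obj_y:obj_y + 10, obj_x:obj_x + 10],
                              bins=bins, range=(0, 1), density=True)

# Backprojection: each pixel weighted by the likelihood of its hue under
# the target model.
weights = target_hist[np.clip((hue * bins).astype(int), 0, bins - 1)]

# Mean shift: move a search window to the weighted centroid until it settles.
cy, cx = 32.0, 48.0                             # initial window centre
for _ in range(20):
    ys = np.clip(np.arange(int(cy) - 12, int(cy) + 13), 0, H - 1)
    xs = np.clip(np.arange(int(cx) - 12, int(cx) + 13), 0, W - 1)
    win = weights[np.ix_(ys, xs)]
    if win.sum() == 0:
        break
    cy = (win.sum(1) * ys).sum() / win.sum()
    cx = (win.sum(0) * xs).sum() / win.sum()
```

The window's weighted mean and spread fall straight out of the backprojection, which echoes the abstract's point that the tracked object's mean and variance are mathematically describable.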